2308.12842
Text Similarity from Image Contents using Statistical and Semantic Analysis Techniques
Plagiarism detection is one of the most researched areas in the Natural Language Processing (NLP) community. A good plagiarism detector covers all the NLP methods, including semantics, named entities, and paraphrases, and produces detailed plagiarism reports. Detecting cross-lingual plagiarism requires deep knowledge of various advanced methods and algorithms to perform effective text similarity checking. Nowadays plagiarists are also becoming more sophisticated at hiding their identity to avoid being caught: they evade detection with techniques such as paraphrasing, synonym replacement, mismatched citations, and translating from one language to another. Image Content Plagiarism Detection (ICPD) has gained importance, utilizing advanced image content processing to identify instances of plagiarism and ensure the integrity of image content. The issue of plagiarism extends beyond textual content, as images such as figures, graphs, and tables can also be plagiarized. However, image content plagiarism detection remains a largely unaddressed challenge. Therefore, there is a critical need to develop methods and systems for detecting plagiarism in image content. In this paper, a system has been implemented to detect plagiarism from the contents of images such as figures, graphs, and tables. Alongside statistical algorithms such as Jaccard and cosine similarity, introducing semantic algorithms such as LSA, BERT, and WordNet yielded more efficient and accurate plagiarism detection.
Sagar Kulkarni, Sharvari Govilkar, Dhiraj Amin
2023-08-24T15:06:04Z
http://arxiv.org/abs/2308.12842v1
# Text Similarity From Image Contents Using Statistical and Semantic Analysis Techniques

###### Abstract

Plagiarism detection is one of the most researched areas in the Natural Language Processing (NLP) community. A good plagiarism detector covers all the NLP methods, including semantics, named entities, and paraphrases, and produces detailed plagiarism reports. Detecting cross-lingual plagiarism requires deep knowledge of various advanced methods and algorithms to perform effective text similarity checking. Nowadays plagiarists are also becoming more sophisticated at hiding their identity to avoid being caught: they evade detection with techniques such as paraphrasing, synonym replacement, mismatched citations, and translating from one language to another. Image Content Plagiarism Detection (ICPD) has gained importance, utilizing advanced image content processing to identify instances of plagiarism and ensure the integrity of image content. The issue of plagiarism extends beyond textual content, as images such as figures, graphs, and tables can also be plagiarized. However, image content plagiarism detection remains a largely unaddressed challenge. Therefore, there is a critical need to develop methods and systems for detecting plagiarism in image content. In this paper, a system has been implemented to detect plagiarism from the contents of images such as figures, graphs, and tables. Alongside statistical algorithms such as Jaccard and cosine similarity, introducing semantic algorithms such as LSA, BERT, and WordNet yielded more efficient and accurate plagiarism detection.

Plagiarism, Image Contents Plagiarism, Detection, NER, Text similarity, Jaccard, Cosine, LSA, BERT, WordNet

## 1 Introduction

Plagiarism is a major problem in academia and research, and it can affect every department of the education sector. Plagiarism is defined as copying someone's work and presenting it as one's own. It has become an important issue in education and technology because the wide availability of electronic devices and the internet makes it easy for students, authors, and even academics to access any piece of information and embed it in their own work without proper citation. The objectives and importance of plagiarism detection are multifaceted. Firstly, plagiarism detection acts as a deterrent, discouraging individuals from engaging in plagiarism by creating awareness of the consequences and potential repercussions. By fostering a culture of originality and integrity, it upholds the values of academic and professional communities. Secondly, plagiarism detection serves as a proactive mechanism to identify and address instances of plagiarism before they tarnish the reputation of individuals or institutions. By promptly identifying and addressing plagiarism cases, it helps maintain the credibility and trustworthiness of scholarly and creative works. Additionally, plagiarism detection supports the development of a scholarly and creative ecosystem based on trust, accountability, and respect for intellectual contributions. It encourages proper citation and attribution practices, thereby giving due credit to the original authors and creators. Furthermore, plagiarism detection ensures fair competition among researchers, writers, and creators, as it prevents the undue advantage gained by those who engage in plagiarism. Not only text but also images such as figures, graphs, tables, and flowcharts can be plagiarized.
If the author has not credited the original author from whom the image was copied, the image is said to be plagiarized.

## 2 Literature Review

Most prior research detects similarity between texts with statistical approaches, using bag-of-words and dictionary-based methods, or performs plagiarism detection for textual contents with semantic algorithms. We have previously studied and implemented statistics-based approaches for textual similarity identification and concluded that statistical approaches alone are not sufficient; algorithms are needed that examine the textual contents and find semantic relations to the contents of other documents in order to determine plagiarism accurately. Part of the image plagiarism detection work has already been implemented and presented in a previous seminar. The prior system could detect plagiarism only when an image was compared against another single image. The complete system must compare the suspicious image contents with all the images stored in the database; comparing a single suspicious image against the whole collection is a convenient and accurate way to check whether the image is plagiarized. The proposed system in [1] classifies the image and then extracts the text data from it. Image classification is done using a Convolutional Neural Network, which classifies images based on feature extraction, and text extraction is done using the Python-Tesseract OCR package. Text extraction from images is also very important to our research for detecting plagiarism in images. The authors in [2] developed a text extraction model for image documents, combining two powerful methods, Connected Component and Edge Based; to enhance the performance and accuracy of text extraction, the authors used an integrated simulation tool. Many research papers perform textual plagiarism detection, but checking plagiarism in text alone is not sufficient. There are other areas (such as tables, figures, graphs, images, and citations) where plagiarism can happen. Through our rigorous literature survey, we have not found research that also detects plagiarism in non-textual data. In one of our own previous papers [3], we attempted image-to-text conversion through an API and text similarity detection with statistical methods. Over the last six months we realized that the API we were using could only accept images stored on a cloud server, whereas for our research it is essential that the system accept input images stored on a local disk. Another important consideration is that the system must be trained on a large number of images so that it can recognize a suspicious, plagiarized image. The authors in [4] attempted to detect text in natural-scene images. They note that no single system accepts images containing different languages and interprets them to extract the text. It is therefore very important, when designing any method, to consider the language or script of the text embedded in the images. There is no one-stop solution for recognizing the scripts of all languages with a single method.
Deep learning and machine learning approaches can be useful for this task. Another approach, using fuzzy rule-based decisions and an artificial neural network to detect characters and numbers, is presented in [5]. That system detects characters and numbers present in images using pure image processing techniques and is designed for the handwritten textual parts of images. In [6], the authors showed that Microsoft Vision took much less time to recognize text from images: Microsoft was 37% faster than Google and 3.15 times faster than Amazon. However, Microsoft had no language filter on its text extraction; it kept incorrectly identifying words and characters from non-English languages, which severely hurt its accuracy score, and its text extraction needs considerably more manual evaluation and tweaking to ensure that results are scored accurately. In our research we plan to detect image plagiarism where images contain textual information in English script. Google is good for accuracy, Amazon is good for cost, and Microsoft is good for speed. For text processing the three do not differ greatly, so we selected the Microsoft API as it is considerably faster than the other two. Microsoft Vision also offers free API use for 5,000 requests per month, which is not the case with the others. A demonstration of using the Google Cloud API to extract text from images is given in [7]. Google's computer vision service provides a good platform to detect the text contained in an image, and the accuracy of the system is acceptable. When we tried to adopt it in our project, however, we found that the Google computer vision API requires a subscription with credit card details, and after a few free trials it is charged automatically. We therefore decided that the same results could be obtained from the Microsoft Vision API, so we configured the system with it and observed its accuracy. The Microsoft Vision API performs best when images are in a properly readable format; if an image is skewed or its text is blurred, the API gives improper output. The intention of this literature review was to find the most suitable approach to perform image-to-text conversion and detect plagiarism within images. There are many image processing approaches that can extract text from images, such as edge-based and component-based methods, but pursuing them would have pulled the main objective of our research toward image processing. We were therefore looking for a simpler and better way to get textual contents from images, and found that an API can provide satisfactorily accurate text contents. After the literature survey we decided to use the Microsoft Vision API to extract textual information from the images. The authors in [8] proposed a system that classifies images and then extracts the text data from them. Image classification is done using a Convolutional Neural Network, which classifies images based on feature extraction, and text extraction is done using the Python-Tesseract OCR package. A user interface is required so that users can upload image files into the system. Another approach to extracting text from images is presented in [9].
Text extraction from image documents has been done using a combination of two powerful methods, Connected Component and Edge Based; finally, the extracted and recognized words are converted to speech for use by visually impaired people. That system is largely a pure image processing system. In [10], it was observed that with the Google Cloud Vision API the output is quite satisfactory when images are clear and without noise, but once noise is added the API is unable to detect the images or even the text within them; it fails to recognize text when the noise level reaches 10% or above. Thus, the selection of the Microsoft API is still worth adopting in our research.

## 3 System Overview

There is a high chance that a plagiarist will take advantage of loopholes and copy existing images, drawings, or graphs verbatim into their own research articles. Image plagiarism is a serious issue and needs to be addressed, as it is as important as text plagiarism detection. The experimentation involves a diverse dataset containing various image formats, such as jpg, jpeg, png, and bmp, for image content plagiarism detection.

**Conversion of Image to Text.** The system extracts the textual contents from images using the Microsoft API. The extracted text is used for similarity identification against the text contents of the other images in the corpus. The intention is to check the contents of images or tables against the contents of the suspicious images or tables, not their shapes.

**Input Documents.** A suspicious English text document is given as input to the system. The system compares the similarity of this document with all the reference documents and produces a plagiarism report.

**Pre-processing and NLP Operations.** The system is extended and implemented so that it reads all corpus images and extracts all the textual data from them into separate files.

**Text Similarity Identification.** When a suspicious image is given as input, the system extracts the text contents from it and, with the help of several advanced algorithms, compares the text with the corpus contents.

**Percentage of Plagiarized Contents.** The percentage of similarity depends on the amount of text content copied into suspicious images from the original corpus images.

**Plagiarism Detection from Images/Graphs/Tables.** Plagiarism detection within image contents, drawings, or tables is an important area that needs to be taken just as seriously. There is a high possibility that a plagiarist copies images or drawings from existing articles while changing their orientation, so it is necessary to check plagiarism in the non-textual parts of articles. Our system is upgraded to check plagiarism by comparing a suspicious image with all corpus images at once. This image plagiarism system is designed with two methods: cosine similarity and a vectored TF-IDF similarity method. The system creates a vocabulary of all the words present in the contents of all the images; this vocabulary is used to compare the textual content of suspicious images with the corpus collection. The vectored TF-IDF method creates a vector for each image's content, which is extracted and saved as a separate document in the corpus.
When a suspicious image is given as input, Microsoft Vision is used to extract its textual content; a TF-IDF vector is then created for that text and compared with all the vectors of the corpus collection documents.

## 4 Implementation

When we compare images one to one, plagiarism may not be found, but if we compare the suspicious image with all images in the corpus collection, the results can be notably different. Plagiarism detection within table contents and charts/graphs has also been checked, as requested by subject matter experts. As described above, the system is designed with two methods, cosine similarity and vectored TF-IDF: a shared vocabulary of all words in the corpus image contents is used for the cosine comparison, while the vectored TF-IDF method builds a vector for each image's extracted content and compares the suspicious image's TF-IDF vector against all corpus vectors. The sample output of the plagiarism percentage found for an image using vectored TF-IDF is shown in Figure 3.

Figure 1: Plagiarism detection for Images and Tables.

Table 1 lists the scores we tested against various images; the plagiarism percentages using the cosine and vectored TF-IDF methods are given there. Table 2 shows the plagiarism results when the suspicious image is entirely novel: the input image is not plagiarized at all, yet we still check it against all images in the corpus, and both methods show a low percentage, since there is essentially no plagiarized content in the input image. After implementing the two methods, cosine and vectored TF-IDF, the system was enhanced by introducing semantic algorithms such as LSA, BERT, and WordNet to obtain more accurate plagiarism detection results. To perform semantic analysis the system has to preprocess the input image file. Preprocessing steps, including tokenization, stop word analysis, lemmatization, NER, and reference removal, are applied. Given an input image file, the system compares the extracted text with trained image contents to detect plagiarism. Table 3 shows the plagiarism percentages for sample image files.
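As an illustration of the OCR-plus-vectored-TF-IDF comparison just described, here is a minimal sketch, not the authors' exact implementation: it substitutes the open-source Python-Tesseract package (mentioned in the literature review) for the Microsoft Vision API, and the file names are hypothetical.

```python
# Minimal sketch: OCR each image, then compare a suspicious image's text
# against the whole corpus with TF-IDF vectors and cosine similarity.
import pytesseract
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_text(image_path: str) -> str:
    """OCR an image (figure, graph, or table) into plain text."""
    return pytesseract.image_to_string(Image.open(image_path))

# Hypothetical file names: text previously extracted from reference images.
corpus_texts = [extract_text(p) for p in ["corpus_fig1.png", "corpus_tab1.png"]]
suspicious_text = extract_text("suspicious.png")

# One shared vocabulary over corpus + suspicious text, then compare the
# suspicious vector against every corpus vector in a single pass.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus_texts + [suspicious_text])
scores = cosine_similarity(tfidf[-1], tfidf[:-1])[0]

print(f"Plagiarism % (best corpus match): {100 * scores.max():.2f}")
```

In the actual system the OCR step would call Microsoft Vision instead, but the vectorize-then-compare logic is the same.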
\begin{table} \begin{tabular}{|c|c|c|} \hline **Input** & **Plagiarism \% using** & **Plagiarism \% using** \\ & **Cosine Method** & **Vectored TF-IDF Method** \\ \hline Drawing 1 & 41.59 & 72.00 \\ \hline Drawing 2 & 32.40 & 54.57 \\ \hline Drawing 3 & 30.84 & 49.71 \\ \hline Drawing 4 & 32.40 & 54.57 \\ \hline Table 1 & 45.97 & 79.86 \\ \hline Table 2 & 45.12 & 79.14 \\ \hline Table 3 & 40.46 & 69.57 \\ \hline Table 4 & 36.36 & 65.57 \\ \hline Graph 1 & 42.22 & 75.29 \\ \hline Graph 2 & 41.87 & 74.00 \\ \hline Graph 3 & 40.60 & 73.43 \\ \hline Graph 4 & 40.74 & 73.86 \\ \hline \end{tabular} \end{table} Table 1: Plagiarism \% of sample test images/drawings

\begin{table} \begin{tabular}{|c|c|c|} \hline **Input** & **Plagiarism \% using** & **Plagiarism \% using** \\ & **Cosine Method** & **Vectored TF-IDF Method** \\ \hline Drawing 1 & 1.92 & 0.67 \\ \hline Drawing 2 & 1.62 & 2.27 \\ \hline Table 1 & 3.76 & 1.32 \\ \hline Table 2 & 7.39 & 1.48 \\ \hline Graph 1 & 7.52 & 1.50 \\ \hline Graph 2 & 5.44 & 1.09 \\ \hline \end{tabular} \end{table} Table 2: Result analysis of plagiarism check for dissimilar Images/Graphs/Tables

Named Entity Recognition (NER) is a vital technique in NLP, with applications in information retrieval and plagiarism detection. NER identifies and classifies named entities such as person names, organization names, locations, and dates in text data. In plagiarism detection, NER is particularly useful for detecting disguised plagiarism, where content is modified; by leveraging NER, researchers can accurately identify disguised plagiarism by comparing named entities across documents. In the analysis of the sample input image, it becomes evident that excluding named entities gives satisfactory plagiarism detection results. Despite the possibility of a slightly higher detected plagiarism percentage when named entities are excluded, it is still advisable to adopt this approach because it effectively removes potential false positives. Table 3 presents sample results for all the algorithms utilized, giving the plagiarism percentage with and without named entity (NER) inclusion; this allows a comprehensive understanding of the extent of plagiarism detected in figures, tables, and graphs. The system has been tested with multiple image files, such as figures, flowcharts, graphs, and tables, to check how efficiently the algorithms detect plagiarism accurately. This plagiarism check is performed after removing named entities, as it has already been shown that excluding named entities gives more accurate and efficient plagiarism detection. Table 4 compares the plagiarism percentages for figure/flowchart images given by the algorithms when named entities are excluded.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Input** & **Jaccard** & **Cosine** & **LSA** & **BERT** & **WordNet** \\ \hline Figure1 & 33.07 & 18.2 & 49.89 & 22.47 & 25.83 \\ \hline Figure2 & 27.24 & 16.52 & 27.98 & 24.79 & 21.04 \\ \hline Figure3 & 21.36 & 14.02 & 37.46 & 19.91 & 22.96 \\ \hline Figure4 & 36.89 & 18.76 & 23.42 & 21.13 & 26.2 \\ \hline Figure5 & 25.24 & 15.34 & 18.26 & 18.44 & 18.8 \\ \hline \end{tabular} \end{table} Table 4: Plagiarism Percentage with Exclusion of Named Entities for Sample Input Figures

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Sr.No. & Algorithm & Plagiarism \% when Named & Plagiarism \% when Named \\ & & Entities Included & Entities Excluded \\ \hline 1 & Jaccard & 31.99 & 25.24 \\ \hline 2 & Cosine & 17.35 & 15.34 \\ \hline 3 & LSA & 27.84 & 18.26 \\ \hline 4 & BERT & 18.81 & 18.44 \\ \hline 5 & WordNet & 14.82 & 18.8 \\ \hline \end{tabular} \end{table} Table 3: Image Content Plagiarism Detection: Analysis of Sample Input

Table 5 shows the plagiarism percentages obtained for the other image types, graphs and tables. Observing the outcomes of each algorithm, it can be clearly seen that the semantic algorithms outperform the statistical ones in detecting plagiarism accurately. The overall system demonstrates plagiarism detection from images such as figures, graphs, and tables. Throughout, introducing semantic algorithms improved the detection of plagiarized content in images, drawings, and tables. LSA and BERT have shown superior performance in graph plagiarism detection, especially when named entities are excluded. Despite a slightly higher plagiarism percentage in some cases, excluding named entities helps reduce false positives.

## 5 Conclusion

This research covers image content plagiarism detection, in which multiple algorithms such as vectored TF-IDF, Jaccard, cosine, LSA, BERT, and WordNet were implemented to compare plagiarism detection results. The emphasis was on semantic analysis and resolving named entities. The system works with diverse image formats such as .jpg, .png, and .bmp.

Figure 4: Visualizing Plagiarism Percentage for Sample Input Figures with Named Entities Excluded

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Input** & **Jaccard** & **Cosine** & **LSA** & **BERT** & **WordNet** \\ \hline Table1 & 77.67 & 27.58 & 25.2 & 29.9 & 17.23 \\ \hline Table2 & 42.8 & 20.71 & 12.23 & 26.47 & 20.67 \\ \hline Table3 & 46.6 & 21.19 & 13.69 & 27.68 & 25.86 \\ \hline Table4 & 11.07 & 33.04 & 17.24 & 29.94 & 21.04 \\ \hline Table5 & 44.66 & 20.73 & 13.72 & 30.51 & 25.86 \\ \hline Graph1 & 27.18 & 15.96 & 27.4 & 40.91 & 17.23 \\ \hline Graph2 & 23.35 & 15.29 & 27.11 & 40.74 & 17.23 \\ \hline Graph3 & 23.35 & 15.29 & 13.39 & 33.83 & 17.23 \\ \hline Graph4 & 33.07 & 18.2 & 15.81 & 31.18 & 17.23 \\ \hline Graph5 & 28.36 & 12.09 & 18.65 & 27.21 & 25.86 \\ \hline \end{tabular} \end{table} Table 5: Plagiarism Percentage with Inclusion of Named Entities for Sample Input Tables
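To make the named-entity exclusion step concrete, here is a minimal sketch; it is our illustration, assuming spaCy with its small English model rather than whatever NER component the authors used, and it pairs the exclusion with a simple Jaccard score.

```python
# Strip named entities (persons, organizations, dates, ...) before
# computing a token-level Jaccard similarity between two texts.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed

def strip_named_entities(text: str) -> str:
    doc = nlp(text)
    return " ".join(tok.text for tok in doc if tok.ent_type_ == "")

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

s = strip_named_entities("John Smith measured accuracy in May 2023.")
t = strip_named_entities("Jane Doe measured accuracy in June 2022.")
print(f"Jaccard % without entities: {100 * jaccard(s, t):.2f}")
```

Dropping the entity tokens makes the two sentences nearly identical, which mirrors the disguised-plagiarism scenario discussed above.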
2306.13028
Transferable Curricula through Difficulty Conditioned Generators
Advancements in reinforcement learning (RL) have demonstrated superhuman performance in complex tasks such as Starcraft, Go, and Chess. However, knowledge transfer from artificial "experts" to humans remains a significant challenge. A promising avenue for such transfer is the use of curricula. Recent methods in curriculum generation focus on training RL agents efficiently, yet they rely on surrogate measures to track student progress and are not suited for training robots in the real world (or, more ambitiously, humans). In this paper, we introduce a method named Parameterized Environment Response Model (PERM) that shows promising results in training RL agents in parameterized environments. Inspired by Item Response Theory, PERM seeks to model the difficulty of environments and the ability of RL agents directly. Given that RL agents and humans are trained more efficiently within the "zone of proximal development", our method generates a curriculum by matching the difficulty of an environment to the current ability of the student. In addition, PERM can be trained offline and does not employ non-stationary measures of student ability, making it suitable for transfer between students. We demonstrate PERM's ability to represent the environment parameter space, and training RL agents with PERM produces strong performance in deterministic environments. Lastly, we show that our method is transferable between students without any sacrifice in training quality.
Sidney Tio, Pradeep Varakantham
2023-06-22T16:45:45Z
http://arxiv.org/abs/2306.13028v1
# Transferable Curricula through Difficulty Conditioned Generators

###### Abstract

Advancements in reinforcement learning (RL) have demonstrated superhuman performance in complex tasks such as Starcraft, Go, and Chess. However, knowledge transfer from artificial "experts" to humans remains a significant challenge. A promising avenue for such transfer is the use of curricula. Recent methods in curriculum generation focus on training RL agents efficiently, yet they rely on surrogate measures to track student progress and are not suited for training robots in the real world (or, more ambitiously, humans). In this paper, we introduce a method named _Parameterized Environment Response Model_ (PERM) that shows promising results in training RL agents in parameterized environments. Inspired by Item Response Theory, PERM seeks to model the difficulty of environments and the ability of RL agents directly. Given that RL agents and humans are trained more efficiently within the "zone of proximal development", our method generates a curriculum by matching the difficulty of an environment to the current ability of the student. In addition, PERM can be trained offline and does not employ non-stationary measures of student ability, making it suitable for transfer between students. We demonstrate PERM's ability to represent the environment parameter space, and training RL agents with PERM produces strong performance in deterministic environments. Lastly, we show that our method is transferable between students without any sacrifice in training quality.

## 1 Introduction

Consider the teaching of calculus. We know that there is a logical progression in terms of the knowledge required before mastery of calculus can be achieved: knowledge of algebra is required, and before that, knowledge of arithmetic. While established progressions for optimal learning exist in education, they often require extensive human experience and investment in curriculum design. Conversely, in modern video games, mastery requires hours of playthroughs and deliberate learning with no clear pathways to progression. In both cases a coach or teacher, usually an expert, is required to design such a curriculum for optimal learning. Scaffolding this curriculum can be tedious and, in some cases, intractable. More importantly, it requires deep and nuanced knowledge of the subject matter, which may not always be accessible. The past decade has seen an explosion of Reinforcement Learning (RL, [12]) methods that achieve superhuman performance in complex tasks such as DOTA2, Starcraft, Go, and Chess [1], [13], [14]. Given the state-of-the-art RL methods, we propose to explore methods that exploit expert-level RL agents for knowledge transfer to humans and help shortcut the learning process. One possible avenue for such transfer is the use of curricula. Recent methods in curriculum generation explore designing curricula through _Unsupervised Environment Design_ (UED, [1]). UED formalizes the problem of finding adaptive curricula in a teacher-student paradigm, whereby a teacher finds useful environments that optimize student learning while treating the student's performance as feedback. While prior work in UED (e.g.
[15], [16], [17]) has trained high-performing RL students in the respective environments, these methods rely on surrogate objectives to track student progress, or co-learn with another RL agent ([1], [18], [15]), both of which are impractical for transfer between students (artificial or real-world students alike). For transfer between students, we require methods that do not use additional RL students, or that can directly track a student's learning progress. In this work, we introduce Item Response Theory (IRT, [1]) as a possible solution to this problem. IRT was developed as a mathematical framework to reason jointly about a student's ability and the questions they respond to. Ubiquitous in the field of standardized testing, it is largely used in the design, analysis, and scoring of tests, questionnaires ([1], [1], [1]), and instruments that measure ability, attitudes, or other latent variables. IRT allows educators to quantify the "difficulty" of a given test item by modelling the relationship between a test taker's response to the item and the test taker's overall ability. In the context of UED, IRT thus provides a useful framework for understanding the difficulty of a parameterized environment with respect to the ability of the student, which we aim to maximize. Our current work proposes a new algorithm, called _Parameterized Environment Response Model_, or PERM. PERM applies IRT to the UED setting and generates curricula by matching environments to the ability of the student. Since we do not use an RL-based teacher or regret as a feedback mechanism, our method is transferable across students, whether artificial or human. Our main contributions are as follows:

1. We propose PERM, a novel framework to assess student ability and the difficulty of parameterized environments.
2. PERM produces curricula by generating environments that match the ability of the student.
3. We investigate PERM's capabilities in modelling the training process with latent representations of difficulty and ability.
4. We compare agents trained with PERM against other UED methods in parameterized environments.

## 2 Related Work

### Item Response Theory

In psychology and education, IRT is used to model interactions between a test taker's ability and a certain characteristic of a question, usually its difficulty. The goal is to gauge a student's ability based on their responses to items of varying difficulty. IRT has many forms, but for the purposes of this paper we focus on the most standard form, the 1-Parameter Logistic (1PL) IRT, also known as the Rasch model [12], and a continuous variant. The Rasch model is given in Eq. 1, \[p(r_{i,j}=1|a_{i},d_{j})=\frac{1}{1+\exp^{-(a_{i}-d_{j})}} \tag{1}\] where \(r_{i,j}\) is the response by the \(i\)-th person, with ability measure \(a_{i}\), to the \(j\)-th item, with difficulty measure \(d_{j}\). The graph of the 1PL IRT can be seen in Figure 1(a). We see that the Rasch model is a logistic function; therefore, the probability that a student answers the item correctly is a function of the difference between student ability \(a_{i}\) and item difficulty \(d_{j}\). In RL settings, interactions between agent and environment are summarized by the cumulative reward achieved.
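As a concrete illustration of Eq. 1 (our sketch, not from the paper), the Rasch probability is just a logistic function of the ability-difficulty gap:

```python
# 1PL (Rasch) model of Eq. 1: probability of a correct response as a
# logistic function of ability minus difficulty.
import math

def rasch_p_correct(ability: float, difficulty: float) -> float:
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

print(rasch_p_correct(0.0, 0.0))   # matched ability/difficulty -> 0.5
print(rasch_p_correct(1.0, -1.0))  # easy item for an able student -> ~0.88
```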
For us to adopt IRT for the environment-training scenario, we replace the logistic function with a normal ogive model [13], i.e., the cumulative distribution of a standard normal distribution. Eq. 1 then becomes: \[p(Z\leq r_{i,j}|a_{i},d_{j})=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{a_{i}-d_{j}}\exp\{-\frac{u^{2}}{2}\}du \tag{2}\] To our knowledge, IRT has not been used to train RL agents, nor as a method for transferring knowledge from an RL agent to another student. While earlier works have used different methods to perform inference for IRT, a recent method, VIBO [23], introduces a variational inference approach to estimating IRT. More critically, formulating IRT as a variational inference problem allows us to exploit the learned representation to generate new items. We discuss our modifications to VIBO in Section 3.

### Zone of Proximal Development

Prior work in UED discusses the zone of proximal development [20], loosely defined as problems faced by the student that are neither too easy (such that there is no learning value for the student) nor too difficult (such that they are impossible for the student). PAIRED [10] features an adversarial teacher whose task is to generate environments that maximize the regret between the protagonist student and an antagonist agent. To apply this method to human training, a human-RL pairing would be necessary, but the difference in learning rates and required experience could create bottlenecks for the human student (i.e., Moravec's Paradox [15]). PLR [11] and its newer variants ([13], [14]) maintain a store of previously seen levels and prioritize replaying levels where the average Generalized Advantage Estimate (GAE, [12]) is large. The use of GAE requires access to the value function of the student, a feature that is not operationalized for human subjects. In summary, teacher-student curriculum generation approaches have predominantly focused on the zone of proximal development but have relied on surrogate objectives to operationalize it, without directly measuring difficulty or student ability. These surrogate objectives are often non-stationary and not easily transferable between students. Moreover, the resulting curricula may not adequately accommodate large changes in student ability, which is a critical limitation for human subjects.

Figure 1: Graphical representation of IRT and PERM. \(\lambda\), \(a\), \(d\), \(r\) represent environment parameters, ability, difficulty, and response respectively. White nodes depict latent variables, while tan colored nodes represent observable variables.

### Task Sequencing

Related to UED are previous works on Task Generation and Sequencing for Curriculum Learning, which aim to generate and assign tasks to a student agent in a principled order so as to optimize performance on a final objective task [15]. Most of the literature on task generation focuses on modifying the student agent's MDP to generate different tasks (e.g., [13], [15], [14]). For example, _promising initialization_ [15] modifies the set of initial states and generates a curriculum by initializing agents in states close to high rewards. On the other hand, _action simplification_ [15] prunes the action set of the student agent to reduce the likelihood of making mistakes. In contrast to task generation, the UED framework investigates domains where there is no explicit representation of tasks.
Here, the student agent must learn to maximize rewards across a variety of environment parameters in open-ended domains, without a target "final task" to learn. In the UED framework, the teacher algorithm only influences the environment parameters \(\lambda\), while the other features of the student's MDP remain relatively consistent across training. While prior works in task sequencing generate different tasks by directly modifying the student agent's MDP, we leave curriculum generation for such domains to future work.

## 3 Method

In this section we introduce a novel algorithm to train students, combining the training process with an IRT-based environment generator that acts as a teacher. The teacher's goal is to train a robust student agent that performs well across different environment parameters, while the student's goal is to maximize its reward in a given environment. Unlike previous UED methods, which rely on proxy estimates of difficulty and environment feasibility (e.g., regret [1]), we propose to estimate difficulty directly by formulating the training process as a student-item response process and modelling it with IRT. Given a teacher model that can estimate both the ability of the student and the difficulty of an item, we can present a sequence of environments that lie within the student's zone of proximal development. By providing environments within the zone, we are unlikely to generate infeasible environments that are impossible or too difficult for the current student, while also avoiding trivial environments that provide little to no learning value. As our method does not rely on non-stationary objectives such as regret, we can train PERM offline and transfer knowledge to any student agent, including human students. Lastly, because our method relies only on environment parameters and student responses, it works in virtually any environment without requiring expert knowledge. We show in later sections that PERM serves as a good exploration mechanism for understanding the parameter-response relation of any environment. PERM can be separated into two components: (i) learning latent representations of ability and difficulty; (ii) generating a curriculum during agent training. The procedure is summarized in Algorithm 1.

### Preliminaries

We draw parallels from UED to IRT by characterizing each environment parameter \(\lambda\) as an item to which the student agent, with policy \(\pi_{t}\), 'responds' by interacting and maximizing its own reward \(r\). Specifically, each student interaction with the parameterized environment yields a tuple \((\pi_{t},\lambda_{t},r_{t})\), where \(\pi_{t}\) represents the student policy at the \(t\)-th interaction, which achieves reward \(r_{t}\) during its interaction with the environment parameterized by \(\lambda_{t}\). We then use a history of such interactions to learn latent representations of student ability \(a\in\mathbb{R}^{n}\) and item difficulty \(d\in\mathbb{R}^{n}\), where \(a\propto r\) and \(d\propto\frac{1}{r}\).

Figure 2: Analysis of PERM's reconstruction capabilities on LunarLander. Blue and orange plots show ability and difficulty estimates against the actual rewards achieved by the agent; the latent variables learned by PERM correspond to actual reward accordingly. Green plots visualize the real environment parameters against the parameters recovered by PERM, showing that PERM is able to reconstruct the environment parameters from difficulty. Similar results are obtained in BipedalWalker, as seen in Figure 3.
In this formulation, the policies \(\pi_{t}\) at different timesteps are treated as students independent of each other.

### Learning Latent Representations of Ability and Difficulty

Following Wu et al.'s Variational Item Response Theory (VIBO, [2020]), we use a variational inference formulation [Kingma and Welling, 2013] to learn a latent representation of any student interaction with the environment. More critically, VIBO proposes amortization over the item and student spaces, which allows it to scale from discrete observations of items to a continuous parameter space such as that of UED. From here, we drop the subscripts on \(a\), \(d\), and \(r\) to indicate our move away from discretized items and students.

``` 1:Input: Environment \(E\), Environment parameters \(\lambda\), Student Agent \(\pi\) 2:Parameter: \(k\) episode frequency before update 3:Output: Trained Student Agent, Trained PERM 4: 5:Let \(t=0\), \(\lambda_{0}\sim\text{Uniform}(\lambda)\) 6:while not converged do 7:for\(k\) episodes do 8: Collect Reward \(r_{t}\) from agent \(\pi\) playthrough of \(E(\lambda_{t})\). 9: Estimate current ability \(\mu_{a_{t}},\sigma_{a_{t}}\) by computing \(q_{\phi}(a|d,r,\lambda)\) 10: Sample current ability \(a_{t}\sim N(\mu_{a_{t}},\sigma_{a_{t}})\) 11: Get next difficulty \(d_{t+1}\gets a_{t}\) 12: Generate next parameters \(\lambda_{t+1}\gets p_{\theta}(\lambda|d_{t+1})\) 13:\(t\gets t+1\) 14:endfor 15: Update PERM with \(\mathcal{L}_{PERM}\) 16: RL Update on Student Agent 17:endwhile 18:return trained student agent \(\pi\), trained PERM ``` **Algorithm 1** Curriculum Generation for RL Agents with PERM

We state and prove the revised PERM objective, based on variational inference, in the following theorem. We use notation consistent with the variational inference literature, and refer the motivated reader to [Kingma and Welling, 2013] for further reading.

**Theorem 1**.: _Let \(a\) be the ability of any student, and \(d\) the difficulty of any environment parameterized by \(\lambda\). Let \(r\) be the continuous response of the student in the environment.
If we define the PERM objective as_ \[\mathcal{L}_{PERM}\triangleq\mathcal{L}_{recon_{r}}+\mathcal{L}_{recon_{\lambda}}+\mathcal{L}_{A}+\mathcal{L}_{D} \tag{3}\] _where_ \[\mathcal{L}_{recon_{r}} =\mathbb{E}_{q_{\phi}(a,d|r,\lambda)}[\log p_{\theta}(r|a,d)]\] \[\mathcal{L}_{recon_{\lambda}} =\mathbb{E}_{q_{\phi}(a,d|r,\lambda)}[\log p_{\theta}(\lambda|d)]\] \[\mathcal{L}_{A} =\mathbb{E}_{q_{\phi}(a,d|r,\lambda)}\Big[\log\frac{p(a)}{q_{\phi}(a|d,r,\lambda)}\Big]=-\mathbb{E}_{q_{\phi}(d|r,\lambda)}\big[D_{KL}(q_{\phi}(a|d,r,\lambda)\,\|\,p(a))\big]\] \[\mathcal{L}_{D} =\mathbb{E}_{q_{\phi}(a,d|r,\lambda)}\Big[\log\frac{p(d)}{q_{\phi}(d|r,\lambda)}\Big]=-D_{KL}(q_{\phi}(d|r,\lambda)\,\|\,p(d)) \tag{4}\] _and assume the joint posterior factorizes as follows:_ \[q_{\phi}(a,d|r,\lambda)=q_{\phi}(a|d,r,\lambda)q_{\phi}(d|r,\lambda) \tag{5}\] _then \(\log p(r)+\log p(\lambda)\geq\mathcal{L}_{PERM}\): \(\mathcal{L}_{PERM}\) is a lower bound of the log marginal probability of a response \(r\)._

Proof.: Expand the marginal and apply Jensen's inequality: \[\log p_{\theta}(r)+\log p_{\theta}(\lambda)\geq\mathbb{E}_{q_{\phi}(a,d|r,\lambda)}\Big[\log\frac{p_{\theta}(r,a,d,\lambda)}{q_{\phi}(a,d|r,\lambda)}\Big]=\mathbb{E}_{q_{\phi}(a,d|r,\lambda)}[\log p_{\theta}(r|a,d)]+\mathbb{E}_{q_{\phi}(a,d|r,\lambda)}[\log p_{\theta}(\lambda|d)]+\mathbb{E}_{q_{\phi}(a,d|r,\lambda)}\Big[\log\frac{p(a)}{q_{\phi}(a|d,r,\lambda)}\Big]+\mathbb{E}_{q_{\phi}(a,d|r,\lambda)}\Big[\log\frac{p(d)}{q_{\phi}(d|r,\lambda)}\Big]=\mathcal{L}_{recon_{r}}+\mathcal{L}_{recon_{\lambda}}+\mathcal{L}_{A}+\mathcal{L}_{D}\] Since \(\mathcal{L}_{PERM}=\mathcal{L}_{recon_{r}}+\mathcal{L}_{recon_{\lambda}}+\mathcal{L}_{A}+\mathcal{L}_{D}\), we have shown that \(\mathcal{L}_{PERM}\) is a lower bound on \(\log p_{\theta}(r)+\log p_{\theta}(\lambda)\). For easy reparameterization, all distributions \(q_{\phi}(\cdot|\cdot)\) are defined as normal distributions with diagonal covariance.

### Generating Environments for Curricula

Our method makes a core assumption that optimal learning takes place when the difficulty of the environment matches the ability of the student. In the continuous response model given in Eq. 2, we see that when ability and difficulty are matched (i.e., \(a_{i}=d_{j}\)), the probability that the student achieves a normalized average score of at most \(r_{i,j}=0\) is 0.5. This is a useful property for operationalizing the zone of proximal development, as the model estimates an equal probability of the student overperforming or underperforming. Training is initialized by uniformly sampling across the range of environment parameters. After each interaction between the student and the environment, PERM estimates the ability \(a_{t}\) of the student given the episodic reward and the parameters of the environment. PERM then generates the parameters of the next environment, \(\lambda_{t+1}\sim p_{\theta}(\lambda|d_{t+1})\), where \(d_{t+1}=a_{t}\).

## 4 Experiments

In our experiments, we seek to answer the following research questions (RQ): **RQ1**: How well does PERM represent the environment parameter space with ability and difficulty measures? **RQ2**: How do RL agents trained by PERM compare to other UED baselines? We compare two variants of PERM, PERM-Online and PERM-Offline, with the following baselines: \(\text{PLR}^{\perp}\) (Robust Prioritized Level Replay, [10]), PAIRED [1], and Domain Randomization (DR, [12]).
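Before turning to the results, the following is a minimal, self-contained sketch of the teacher step of Algorithm 1; `encode_ability` and `decode_params` are hypothetical stand-ins for the learned networks \(q_{\phi}(a|d,r,\lambda)\) and \(p_{\theta}(\lambda|d)\), so this shows only the control flow, not the learned model.

```python
# Sketch of the curriculum step: estimate ability from the latest reward,
# set the next difficulty equal to it (so Eq. 2 gives p = 0.5), and decode
# that difficulty back into environment parameters.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def encode_ability(reward, params):
    # Hypothetical posterior q_phi(a|d,r,lambda): mean tracks reward.
    return reward, 0.1  # (mu_a, sigma_a)

def decode_params(difficulty):
    # Hypothetical decoder p_theta(lambda|d): harder -> larger parameters.
    return np.array([difficulty, difficulty / 2.0])

params = rng.uniform(-1.0, 1.0, size=2)       # lambda_0 ~ Uniform
for t in range(3):
    reward = float(np.tanh(rng.normal()))     # stand-in episode return
    mu_a, sigma_a = encode_ability(reward, params)
    ability = rng.normal(mu_a, sigma_a)       # a_t ~ N(mu_a, sigma_a)
    difficulty = ability                      # d_{t+1} <- a_t
    params = decode_params(difficulty)        # lambda_{t+1} ~ p(lambda|d)
    print(t, norm.cdf(ability - difficulty))  # matched => always 0.5
```

With matched ability and difficulty, the normal-ogive model of Eq. 2 assigns probability 0.5 to the student scoring below its normalized average, which is exactly the balance point the curriculum targets.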
PERM-Online is randomly initialized and trained concurrently with the student agent, as described in Algorithm 1; PERM-Offline is trained separately from the student agent and remains fixed throughout student training. PERM-Offline is used to investigate performance in an offline setting, similar to how we propose to use PERM for human training. For all experiments, we train a student PPO agent [15] in OpenAI Gym's _LunarLander_ and _BipedalWalker_ [1]. We first evaluate PERM's effectiveness in representing the parameter space of both OpenAI environments. Specifically, we evaluate how the latent variables, ability \(a\) and difficulty \(d\), correlate with the rewards obtained in each interaction, as well as PERM's capability of generating environment parameters. We then provide a proof of concept of PERM's curriculum generation in the _LunarLander_ environment, which has only two environment parameters to tune. Lastly, we scale to the more complex _BipedalWalker_ environment, which has eight environment parameters, and compare the performance of the trained agent against other methods using the same evaluation environment parameters as Parker-Holder et al. [2022].

### Analyzing PERM's Representation of Environment Parameters

We begin by investigating PERM's capabilities in representing and generating the environment parameters. To establish PERM's suitability for curriculum generation, PERM needs to demonstrate the following: (i) the latent representations, ability \(a\) and difficulty \(d\), need to conform to the proposed relationships with the response \(r\) (i.e., \(a\propto r\) and \(d\propto\frac{1}{r}\)); (ii) given input environment parameters \(\lambda\) and response \(r\), the reconstructed environment parameters \(\lambda^{\prime}\) and response \(r^{\prime}\) need to match their inputs. For both analyses, we rely on correlation metrics and mean squared error (MSE) to establish PERM's capabilities. We first train PERM by collecting agent-environment interactions from training a PPO agent under a DR framework until convergence. We then train an offline version of PERM using a subset of the data collected and Equation 3, and use the remaining data as a holdout set to evaluate PERM's performance.

\begin{table} \begin{tabular}{l r r r} \hline Env & Response MSE & \(\lambda\) MSE & R-Squared \\ \hline LunarLander & \(7.8\times 10^{-5}\) & 0.001 & 1.00 \\ BipedalWalker & \(2.5\times 10^{-4}\) & 0.001 & 0.986 \\ \hline \end{tabular} \end{table} Table 1: Analysis of PERM's recovery capabilities. PERM is able to reconstruct the response and environment parameters with great accuracy. R-squared is obtained by regressing ability and difficulty on response.

Figure 3: Analysis of PERM's reconstruction capabilities on BipedalWalker. Blue and orange plots show ability and difficulty estimates against the actual rewards achieved by the agent; the latent variables learned by PERM correspond to actual reward accordingly. Green plots visualize the real environment parameters against the parameters recovered by PERM, showing that PERM is able to reconstruct the environment parameters from difficulty. Values presented are normalized.

Figure 4: Agents trained by PERM-Online and PERM-Offline outperform other methods on LunarLander in both training and evaluation environments. Top: performance on LunarLander during training; Middle: performance on selected LunarLander evaluation environments; Bottom: performance on BipedalWalker evaluation environments.
The results are visualized in Figure 2 and Figure 3, and summary statistics are provided in Table 1. As we see in both plots, the latent representations \(a\) (blue) and \(d\) (orange) largely correlates with our expectations of its respective relationships with the response variable \(r\). When both ability and difficulty are regressed against the response variable, we achieve a R-squared of 1.00 and 0.986 for LunarLander and BipedalWalker respectively, indicating that both latent representations are perfect predictors of reward achieved by an agent in a given parameterized environment. Turning to PERM's capability in generating environment parameters (Figure 2 & 3, green), we see that PERM achieves near perfect recovery of all environment parameters on the test set, as indicated by the MSE between input parameters and recovered parameters. Taking the strong results of PERM in recovering environment parameters from the latent variables, we proceed to generate curricula to train RL Agents. ### Training RL Agents with PERM #### 4.2.1 LunarLander We next apply PERM's environment generation capabilities to train an agent in LunarLander. In this domain, student agents control the engine of a rocket-like structure and is tasked to land the vehicle safely. Before each episode, teacher algorithms determine the gravity and wind power present in a given playthrough, which directly effects the difficulty of landing the vehicle safely. We train student agents for \(1e6\) environment timesteps, and periodically evaluate the agent on test environments. The parameters for the test environments are randomly generated and fixed across all evaluations, and are provided in the Appendix. We report the training and evaluation results in Figure 4 top and middle plots respectively. As we see, student agents trained with PERM achieves stronger performance over all other methods, both during training and evaluation environments. More importantly, we note that despite training PERM-Offline on a different student, the RL agent under PERM-Offline still maintains its training effectiveness over other methods. We note that despite a reasonably strong performance of an agent trained under DR, DR has a greater possibility of generating environments that are out of the student's range of ability. We observe that episode lengths for students trained under DR are shorter (mean of 244 timesteps vs 311 timesteps for PERM), indicating a larger proportion of levels where the student agent immediately fails. PERM, by providing environments that are constantly within the student's capabilities, is more sample efficient than DR. #### 4.2.2 BipedalWalker Finally, we evaluate PERM in the modified BipedalWalker from Parker-Holder et al. (2022). In this domain, student agents are required to control a bipedal vehicle and navigate across a terrain. The teacher agent is tasked to select the range of level properties in the terrain, such as the minimum and maximum size of a pit. The environment is then generated by uniformly sampling from the parameters. We train agents for about 3 billion environment steps, and periodically evaluate the agents for about 30 episodes per evaluation environment. The evaluation results are provided in Figure 4, bottom. In the BipedalWalker's evaluation environments, student agent trained by PERM produced mixed results, notably achieving comparable performance to \(\text{PLR}^{\perp}\) in the StumpHeight and PitGap environment, and comparable performance to PAIRED in others. 
Since BipedalWalker environment properties are sampled from the environment parameters generated by the teacher, it is likely that the buffer-based \(\text{PLR}^{\perp}\), which tracks the seeds of environments, had a superior effect in training the student agents. PERM, on the other hand, is trained to generate only the ranges of the environment properties, which results in non-deterministic environment generation even for the same set of parameters.

## 5 Conclusion and Future Work

We have introduced PERM, a new method that characterizes the agent-environment interaction as a student-item response paradigm. Inspired by Item Response Theory, we provide a method to directly assess the ability of a student agent and the difficulty associated with the parameters of a simulated environment. We generate curricula by evaluating the ability of the student agent and then generating environments that match that ability. Since PERM does not rely on non-stationary measures of ability such as regret, it allows us to predict ability and difficulty directly across different students. Hence, our approach is transferable and able to adapt to the learning trajectories of different students; in principle, PERM could be used to train humans in similarly parameterized environments. We have demonstrated that PERM produces strong representations of parameterized environments and is a suitable approach for generating environment parameters with desired difficulties. Finally, we trained RL agents with PERM in our selected environments and found that our method outperformed the others in the deterministic environment, LunarLander. Most recently, Zhuang et al. (2022) proposed using an IRT-based model for Computerized Adaptive Testing (CAT) on humans, with some success. The objective of CAT is to accurately predict a student's responses to a set of future questions based on her responses to prior questions. We look forward to deploying PERM or other IRT-based models in real-world training settings, and we hope that our results inspire research into methods that can train both humans and RL agents effectively.

## Acknowledgements

This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-PhD/2022-01-025).
2305.12666
Fast energy decay for wave equation with a monotone potential and an effective damping
We consider the total energy decay of the Cauchy problem for wave equations with a potential and an effective damping. We treat it in the whole one-dimensional Euclidean space. Fast energy decay is established with the help of the potential. The proofs of the main results rely on a multiplier method and modified techniques adopted from [8].
Xiaoyan Li, Ryo Ikehata
2023-05-22T03:12:13Z
http://arxiv.org/abs/2305.12666v1
# Fast energy decay for wave equation with a monotone potential and an effective damping

###### Abstract

We consider the total energy decay of the Cauchy problem for wave equations with a potential and an effective damping. We treat it in the whole one-dimensional Euclidean space \(\mathbf{R}\). Fast energy decay like \(E(t)=O(t^{-2})\) is established with the help of the potential. The proofs of the main results rely on a multiplier method and modified techniques adopted from [8].

Footnote: 2000 Mathematics Subject Classification. Primary 35L05; Secondary 35B40, 35B45.

## 1 Introduction

We consider the Cauchy problem for the wave equation with a general potential and a space-dependent damping in the one-dimensional Euclidean space: \[\begin{cases}u_{tt}(t,x)-u_{xx}(t,x)+V(x)u(t,x)+a(x)u_{t}(t,x)=0,\ \ (t,x)\in(0,\infty)\times\mathbf{R},\\ u(0,x)=u_{0}(x),\ \ x\in\mathbf{R},\\ u_{t}(0,x)=u_{1}(x),\ \ x\in\mathbf{R},\end{cases} \tag{1.1}\] where the initial data \(u_{0}(x)\) and \(u_{1}(x)\) are chosen from \(H^{1}(\mathbf{R})\) and \(L^{2}(\mathbf{R})\), respectively. Both the initial values and the solutions take real values. In addition, we assume that there exists \(R>0\) such that \[\operatorname{supp}u_{0}\cup\operatorname{supp}u_{1}\subset B_{R}:=\{x:|x|\leq R\}. \tag{1.2}\] Throughout this paper, we define \[u_{t}=\frac{\partial u}{\partial t},\quad u_{tt}=\frac{\partial^{2}u}{\partial t^{2}},\quad u_{x}=\frac{\partial u}{\partial x},\quad u_{xx}=\frac{\partial^{2}u}{\partial x^{2}}.\] For convenience, we denote the usual \(L^{q}(\mathbf{R})\)-norm (\(q=2,\infty\)) by \(\|\cdot\|_{q}\); in particular, the \(L^{2}(\mathbf{R})\)-norm is denoted by \(\|\cdot\|\). The total energy \(E(t)\) of the solution \(u(t,x)\) to problem (1.1) is defined by \[E(t)=\frac{1}{2}(\|u_{t}(t,\cdot)\|^{2}+\|u_{x}(t,\cdot)\|^{2}+\|\sqrt{V(\cdot)}u(t,\cdot)\|^{2}). \tag{1.3}\] Furthermore, we define the inner product of \(L^{2}({\bf R})\) by \[(u,v)=\int_{\bf R}u(x)v(x)dx.\] The energy decay of the wave equation with local damping has been studied by many scholars. Let us start with the initial-boundary value problem \[\begin{cases}u_{tt}(t,x)-\Delta u(t,x)+a(x)u_{t}(t,x)=0,\ \ (t,x)\in(0,\infty)\times\Omega,\\ u(0,x)=u_{0}(x),\ u_{t}(0,x)=u_{1}(x),\ \ x\in\Omega,\\ u(t,x)=0,\ \ x\in\partial\Omega,\end{cases} \tag{1.4}\] where \(\Omega={\bf R}^{n}\setminus\bar{\cal O}\subset{\bf R}^{n}\) is a smooth exterior domain. For the case of effective damping near infinity, that is, when the damper \(a(x)\) behaves as \[a(x)\geq 0\ \ \mbox{in}\ \ \Omega,\ \ \ \ \ a(x)\geq\varepsilon_{1}>0\ \ \mbox{for}\ |x|>\mbox{constant},\] Nakao [21] and Ikehata [6] provided decay estimates for the total energy. In particular, in [6], by the special multiplier method developed in [11], faster energy decay such as \(E(t)=O(t^{-2})\) was derived for (1.4) under a star-shaped obstacle \({\cal O}\) (\(\bar{\cal O}:={\bf R}^{n}\setminus\Omega\)) and weighted initial data conditions. Later, more precise results were obtained in [1] under the so-called GCC (Geometric Control Condition) assumption and without assuming a star-shaped obstacle \({\cal O}\). The authors in [1] showed that \[E(t)=O(t^{-\min\{\frac{3n}{4},1+\frac{n}{2}\}})\] for \(n\geq 2\). A generalization of [6] was also obtained in [3] by removing a geometric condition assumed on the obstacles.
Since the aforementioned results rely on the Poincare inequality and/or the Hardy inequality, only exterior domain and/or high-dimensional cases can be treated. To overcome these obstacles, a novel method was developed in [10], which employs a potential term to compensate for the lack of the Poincare and Hardy inequalities in the whole one-dimensional Euclidean space. This idea has its origin in [7]. For the whole high-dimensional Euclidean space \({\bf R}^{n}(n\geq 3)\), conclusions similar to those of [6] about the total energy can be obtained. In addition, there are many interesting results about effective damping near infinity. For example, in [37], the exponential decay of the total energy for the Klein-Gordon type wave equation was obtained by a weighted energy method. The diffusion phenomena of the wave equation with (asymptotically) periodic and/or constant damping were investigated in [24], [14] and [22]. For a time-space dependent damper \(a(t,x)\) including a constant damping, the decay properties of the energy and the \(L^{2}\)-norm of the solution were studied in [19], [17] and [18]. Another generalization of [6] in a noncompact Riemannian manifold was also considered by [36]. For the case of degenerating damping near infinity, that is, \[a(x)\geq 0\ \ \mbox{in}\ \ \Omega,\ \ \ \ \ a(x)\to 0\ \ \mbox{for}\ \ |x|\to\infty,\] there are also many results for (1.4) with \(\Omega={\bf R}^{n}\). The most commonly chosen form of \(a(x)\) is given by \[\frac{a_{1}}{(1+|x|)^{\alpha}}\leq a(x)\leq\frac{a_{2}}{(1+|x|)^{\alpha}},\ \ \ \ (\alpha>0,\ \ x\in{\bf R}^{n}). \tag{1.5}\] When \(\alpha=1\), that is, for the so-called critical damping, the authors in [13] showed that if \(1<a_{1}<n\), then \(E(t)=O(t^{-a_{1}})\); while if \(a_{1}\geq n\), it holds that \(E(t)=O(t^{-n+\delta})\) with \(\delta>0\) small enough. The authors in [13] derived these results under the compact support condition on the initial data. It should be mentioned that very recently, an interesting result was introduced by Sobajima [28] in the case of \(\alpha=1\) (the critical damping case), which completely removes the compactness of the support of the initial data assumed in [13], and furthermore achieves applications to nonlinear problems of (1.4). In [15], the exponential decay of the total energy of problem (1.4) was obtained for critical damping when the initial data are taken in some special form. When \(\alpha\in[0,1)\), (1.5) is called sub-critical damping. In this case, it was proved in [33] that the energy of solutions to problem (1.4) decays at a polynomial rate \(t^{-(n-\alpha)/(2-\alpha)-1+\delta}\) for small enough \(\delta>0\) (see also [25, 26] for higher order energy decay). It should be mentioned that in the case of \(n=1\), we see \(\frac{n-\alpha}{2-\alpha}+1<2\); the decay rate 2 is a key number in our paper. In [35], the large time behavior of the solutions and of the total energy of the wave equation with effective damping and absorbing nonlinearity was studied in depth in \({\bf R}^{n}\) with \(n\geq 1\) for some weighted initial data. When \(\alpha>1\), we call (1.5) super-critical damping. In this case, the total energy is generally non-decaying, as was pointed out by Mochizuki [18], but it can be seen from [2] that the local energy does indeed decay with some rate.
Moreover, there are many results about so-called diffusion phenomena for an effective damping near infinity, that is, the solution to (1.4) with \(\Omega={\bf R}^{n}\) is approximated by a solution to the corresponding parabolic problem ([32]). We refer the interested readers to [29, 30, 31, 27, 23, 34] for additional topics on the diffusion phenomenon. In this paper, "effective" damping is taken to mean the case where energy decay always occurs, and therefore \(\alpha\in[0,1]\). Synthesizing the above research, it seems that few results, such as \(E(t)=O(t^{-2})\), have been established for \(n=1\). It should be mentioned that the energy decay \(E(t)=O(t^{-2})\) for \(n=1\) was obtained in [9] for problem (1.4). However, only the half space case, that is, \(\partial\Omega=(0,\infty)\) in (1.4), was treated. So, whether one can obtain fast energy decay like \(E(t)=O(t^{-2})\) in the whole one-dimensional Euclidean space seems to be an open problem. In this paper, with the help of the potential term \(V(x)\), we derive the fast energy decay for an effective damping \(a(x)\) near infinity by a special multiplier method. Basically, one-dimensional Cauchy problems seem to be difficult because there are few useful tools compared with the higher-dimensional case. In this paper, one breakthrough is presented. In order to derive the main results, the following hypotheses are imposed on \(a(x)\) and \(V(x)\). \(({\bf A.1})\)\(a\in C({\bf R})\), and there exist two positive constants \(a_{1}\) and \(a_{2}\) such that \[\frac{a_{1}}{(1+|x|)^{\alpha}}\leq a(x)\leq\frac{a_{2}}{(1+|x|)^{\alpha}},\ \ x\in{\bf R},\] where \(0\leq\alpha\leq 1\). \(({\bf A.2})\)\(V\in C^{1}({\bf R})\) is a bounded function that satisfies \[V(x)>0,\ \ xV_{x}(x)\leq 0,\ \ \ x\in{\bf R}.\] With the above preparations, our main results are stated as follows. First, we state the fast energy decay property of the total energy in the sub-critical damping case \(\alpha<1\). **Theorem 1.1**: _Assume \(({\bf A.1})\) with \(0\leq\alpha<1\) and \(({\bf A.2})\). If the initial data \([u_{0},u_{1}]\in H^{1}({\bf R})\times L^{2}({\bf R})\) satisfies (1.2), there exists a unique weak solution \(u\in C([0,\infty);H^{1}({\bf R}))\cap C^{1}([0,\infty);L^{2}({\bf R}))\) to problem (1.1) satisfying \(u(t,x)=0\) for \(|x|>R+t\)\((t\geq 0)\), and_ \[E(t)=O(t^{-2})\ \ \ (t\to\infty).\] Next, we consider the critical damping case \(\alpha=1\). This case is rather complicated. **Theorem 1.2**: _Assume \(({\bf A.1})\) with \(\alpha=1\) and \(({\bf A.2})\). If the initial data \([u_{0},u_{1}]\in H^{1}({\bf R})\times L^{2}({\bf R})\) satisfies (1.2), there exists a unique weak solution \(u\in C([0,\infty);H^{1}({\bf R}))\cap C^{1}([0,\infty);L^{2}({\bf R}))\) to problem (1.1) satisfying \(u(t,x)=0\) for \(|x|>R+t\)\((t\geq 0)\), and the following properties hold._ \((1)\) _If \(0<a_{1}\leq 2\), it holds that_ \[E(t)=O(t^{-a_{1}+\delta})\ \ \ (t\to\infty) \tag{1.6}\] _with small enough \(\delta>0\)._ \((2)\) _If \(a_{1}>2\), it holds that_ \[E(t)=O(t^{-2})\ \ \ (t\to\infty). \tag{1.7}\] **Example.** Let \(V_{0}>0\). One can take \(V(x):=V_{0}e^{-x^{2}}\), \(V(x):=V_{0}(1+x^{2})^{-\frac{\mu}{2}}\)\((\mu>0)\) and \(V(x):=V_{0}\) as examples. The last one corresponds to the so-called Klein-Gordon equation (cf. [37]). **Remark 1.1**: As long as we properly check the unique existence of the weak solution, our result holds formally for the case \(\alpha<0\). We note that the coefficient \(a(x)\) of the damping term is spatially unbounded in the negative \(\alpha\) case (cf. [12]).
**Remark 1.2**: In the case of \(a(x)=a_{1}>0\) and \(V(x)=0\), from [17] one knows that the total energy decays as \(E(t)=O(t^{-\frac{3}{2}})\) (\(t\to\infty\)). When we compare Theorem 1.1 with Matsumura's estimate, one gets a faster decay rate, namely \(E(t)=O(t^{-2})\) (\(t\to\infty\)). The influence of the potential \(V(x)\) is strongly effective in our theory even in the case of a weakly effective potential (a rapidly decaying potential) such as \(V(x):=V_{0}e^{-x^{2}}\). It is still open to obtain the faster decay \(E(t)=O(t^{-2})\) (\(t\to\infty\)) in the case of \(V(x)=0\). **Remark 1.3**: The number 2 found in Theorem 1.2 concerning \(a_{1}\) seems to be a threshold which divides the decay property into the two parts (1.6) and (1.7). (1.6) may express a wave-like property of the solution, and (1.7) may imply a diffusive aspect of the solution. These two types of properties of the solution are closely related to those of [25], which treated the higher-dimensional case. From the observation in [9, Remark 1.2], the critical number 2 on the coefficient \(a_{1}\) seems reasonable. **Remark 1.4**: In the previous studies [4, 16], a model similar to (1.1) was studied from the viewpoint of the critical exponent of the power of the nonlinearity; however, the authors in [4, 16] did not treat the one-dimensional case. Although we are dealing with a linear problem, it could be a milestone when dealing with future one-dimensional nonlinear problems. **Remark 1.5**: We derive our results assuming condition (1.2); the case without condition (1.2) remains open so far. The remainder of this paper is organized as follows. In Sections 2 and 3 we give the proofs of Theorems 1.1 and 1.2 by modifying a method which has its origin in [8]. ## 2 Proof of main results In this section, we prove our main results. Since the unique existence of the weak solution \(u(t,x)\) satisfying the finite speed of propagation property for problem (1.1) is a standard argument (cf. [5]), it suffices to obtain only the desired decay estimates in each of Theorems 1.1 and 1.2. The argument developed in [8, 9] is useful again. The next energy identity will play a crucial role in our argument. **Lemma 2.1**: _For the solution \(u(t,x)\) to Cauchy problem (1.1), it holds that_ \[\frac{1}{2}\frac{d}{dt}G(t)+\frac{1}{2}\int_{\mathbf{R}}F_{1}(t,x)|u_{t}(t,x)|^{2}dx+\frac{1}{2}\int_{\mathbf{R}}F_{2}(t,x)|u_{x}(t,x)|^{2}dx\] \[+\frac{1}{2}\int_{\mathbf{R}}F_{3}(t,x)|u(t,x)|^{2}dx+\int_{\mathbf{R}}F_{4}(t,x)u_{x}(t,x)u_{t}(t,x)dx\] \[+\frac{1}{2}\int_{\mathbf{R}}\frac{\partial}{\partial x}K(t,x)dx=0, \tag{2.1}\] _where_ \[G(t)= \int_{\mathbf{R}}f(t)\big{(}|u_{t}(t,x)|^{2}+|u_{x}(t,x)|^{2}+V(x)|u(t,x)|^{2}\big{)}+2g(t)u(t,x)u_{t}(t,x)\] \[+\big{(}g(t)a(x)-g_{t}(t)\big{)}|u(t,x)|^{2}+2h(t,x)u_{t}(t,x)u_{x}(t,x)dx,\] \[F_{1}(t,x)=2f(t)a(x)-f_{t}(t)-2g(t)+h_{x}(t,x),\] \[F_{2}(t,x)=2g(t)-f_{t}(t)+h_{x}(t,x),\] \[F_{3}(t,x)=g_{tt}(t)-g_{t}(t)a(x)-V(x)f_{t}(t)+2V(x)g(t)-V_{x}(x)h(t,x)-V(x)h_{x}(t,x),\] \[F_{4}(t,x)=h(t,x)a(x)-h_{t}(t,x),\] \[K(t,x)=-2f(t)u_{t}(t,x)u_{x}(t,x)-2g(t)u(t,x)u_{x}(t,x)-h(t,x)|u_{t}(t,x)|^{2}\] \[\qquad-h(t,x)|u_{x}(t,x)|^{2}+V(x)h(t,x)|u(t,x)|^{2}.\] _Here, \(f(t)\), \(g(t)\) and \(h(t,x)\) are all smooth functions, which will be determined later on._ Proof.: The proof can be done for a smooth solution \(u(t,x)\) by density. The argument developed in [20, 21] is helpful. To make the proof clearer, we divide the computation into four steps.
_Step 1._ Multiplying both sides of (1.1) by \(f(t)u_{t}(t,x)\) yields \[0= fu_{t}u_{tt}-fu_{t}u_{xx}+fVuu_{t}+fa|u_{t}|^{2}\] \[= \frac{1}{2}f\frac{\partial}{\partial t}|u_{t}|^{2}-f\frac{\partial}{\partial x}(u_{t}u_{x})+\frac{1}{2}\frac{\partial}{\partial t}(f|u_{x}|^{2})-\frac{1}{2}f_{t}|u_{x}|^{2}+\frac{1}{2}fV\frac{\partial}{\partial t}|u|^{2}+fa|u_{t}|^{2}\] \[= \frac{1}{2}\frac{\partial}{\partial t}(f|u_{t}|^{2})-\frac{1}{2}f_{t}|u_{t}|^{2}-\frac{\partial}{\partial x}(fu_{t}u_{x})+\frac{1}{2}\frac{\partial}{\partial t}(f|u_{x}|^{2})-\frac{1}{2}f_{t}|u_{x}|^{2}\] \[+\frac{1}{2}\frac{\partial}{\partial t}(fV|u|^{2})-\frac{1}{2}Vf_{t}|u|^{2}+fa|u_{t}|^{2}\] \[= \frac{1}{2}\frac{\partial}{\partial t}(f|u_{t}|^{2}+f|u_{x}|^{2}+fV|u|^{2})+(fa-\frac{1}{2}f_{t})|u_{t}|^{2}\] \[-\frac{1}{2}f_{t}|u_{x}|^{2}-\frac{1}{2}f_{t}V|u|^{2}-\frac{\partial}{\partial x}(fu_{t}u_{x}). \tag{2.2}\] _Step 2._ Multiplying both sides of (1.1) by \(g(t)u(t,x)\), we obtain \[0= guu_{tt}-guu_{xx}+gV|u|^{2}+gauu_{t}\] \[= g\frac{\partial}{\partial t}(uu_{t})-g|u_{t}|^{2}-g\frac{\partial}{\partial x}(uu_{x})+g|u_{x}|^{2}+gV|u|^{2}+\frac{1}{2}ga\frac{\partial}{\partial t}|u|^{2}\] \[= \frac{\partial}{\partial t}(guu_{t})-\frac{1}{2}\frac{\partial}{\partial t}(g_{t}|u|^{2})+\frac{1}{2}g_{tt}|u|^{2}-g|u_{t}|^{2}-\frac{\partial}{\partial x}(guu_{x})\] \[+g|u_{x}|^{2}+gV|u|^{2}+\frac{1}{2}\frac{\partial}{\partial t}(ga|u|^{2})-\frac{1}{2}g_{t}a|u|^{2}\] \[= \frac{1}{2}\frac{\partial}{\partial t}(2guu_{t}-g_{t}|u|^{2}+ga|u|^{2})-g|u_{t}|^{2}+g|u_{x}|^{2}\] \[+(\frac{1}{2}g_{tt}+gV-\frac{1}{2}g_{t}a)|u|^{2}-\frac{\partial}{\partial x}(guu_{x}). \tag{2.3}\] _Step 3._ Multiplying both sides of (1.1) by \(h(t,x)u_{x}(t,x)\), we have \[0= hu_{x}u_{tt}-hu_{x}u_{xx}+hVu_{x}u+hau_{x}u_{t}\] \[= h\frac{\partial}{\partial t}(u_{x}u_{t})-hu_{xt}u_{t}-\frac{1}{2}h\frac{\partial}{\partial x}|u_{x}|^{2}+\frac{1}{2}hV\frac{\partial}{\partial x}|u|^{2}+hau_{x}u_{t}\] \[= \frac{\partial}{\partial t}(hu_{x}u_{t})-h_{t}u_{x}u_{t}-\frac{1}{2}\frac{\partial}{\partial x}(h|u_{t}|^{2})+\frac{1}{2}h_{x}|u_{t}|^{2}-\frac{1}{2}\frac{\partial}{\partial x}(h|u_{x}|^{2})\] \[+\frac{1}{2}h_{x}|u_{x}|^{2}+\frac{1}{2}\frac{\partial}{\partial x}(hV|u|^{2})-\frac{1}{2}(h_{x}V+hV_{x})|u|^{2}+hau_{x}u_{t}\] \[= \frac{\partial}{\partial t}(hu_{x}u_{t})+\frac{1}{2}h_{x}|u_{t}|^{2}+\frac{1}{2}h_{x}|u_{x}|^{2}-\frac{1}{2}(h_{x}V+hV_{x})|u|^{2}\] \[+\frac{1}{2}\frac{\partial}{\partial x}(-h|u_{t}|^{2}-h|u_{x}|^{2}+hV|u|^{2})+(ha-h_{t})u_{x}u_{t}. \tag{2.4}\] _Step 4._ Adding the identities (2.2)-(2.4) together and integrating over \({\bf R}\), we arrive at the desired identity (2.1). \(\Box\) Due to the compactness of the support of the initial data and the finite speed of propagation property of waves, the solution to problem (1.1) vanishes for large \(|x|\gg 1\). Therefore, we have \[\int_{\bf R}\frac{\partial}{\partial x}K(t,x)dx=K(t,+\infty)-K(t,-\infty)=0. \tag{2.5}\] Meanwhile, it follows from the Young inequality that \[\int_{\bf R}F_{4}(t,x)u_{x}(t,x)u_{t}(t,x)dx\geq -\frac{k}{2}\int_{\bf R}|h(t,x)|a(x)|u_{t}(t,x)|^{2}dx-\frac{1}{2k}\int_{\bf R}|h(t,x)|a(x)|u_{x}(t,x)|^{2}dx\] \[-\frac{1}{2}\int_{\bf R}|h_{t}(t,x)||u_{t}(t,x)|^{2}dx-\frac{1}{2}\int_{\bf R}|h_{t}(t,x)||u_{x}(t,x)|^{2}dx. \tag{2.6}\]
Substituting (2.5) and (2.6) into (2.1), we get \[\frac{d}{dt}G(t)+\int_{\mathbf{R}}K_{1}(t,x)|u_{t}(t,x)|^{2}dx+\int_{\mathbf{R}}K_{2}(t,x)|u_{x}(t,x)|^{2}dx+\int_{\mathbf{R}}F_{3}(t,x)|u(t,x)|^{2}dx\leq 0, \tag{2.7}\] where \[K_{1}(t,x)=F_{1}(t,x)-k|h(t,x)|a(x)-|h_{t}(t,x)|,\] \[K_{2}(t,x)=F_{2}(t,x)-\frac{1}{k}|h(t,x)|a(x)-|h_{t}(t,x)|.\] Next, we specify the expressions of \(f(t)\), \(g(t)\) and \(h(t,x)\) as \[f(t)=\varepsilon_{1}(1+t)^{2},\hskip 14.226378ptg(t)=\varepsilon_{2}(1+t),\hskip 14.226378pth(t,x)=\varepsilon_{3}(1+t)x\phi(x),\] where \[\phi(x)=\left\{\begin{array}{ll}1,&|x|\leq 1,\\ \frac{1}{|x|},&|x|\geq 1,\end{array}\right.\] and \(\varepsilon_{1}\), \(\varepsilon_{2}\) and \(\varepsilon_{3}\) are some positive constants, which are determined later on. Note that the function \(\phi(x)\) is Lipschitz continuous on \(\mathbf{R}\). With the above preparations, we provide the estimates for \(K_{1}(t,x)\) and \(K_{2}(t,x)\) in the next lemma. It should be noted that the three positive constants \(\varepsilon_{1}\), \(\varepsilon_{2}\) and \(\varepsilon_{3}\) play important roles in the proofs of the following lemmas. **Lemma 2.2**: _Suppose \(a_{1}>2\) for \(\alpha=1\) and \(a_{1}>0\) for \(0\leq\alpha<1\) in (1.5). If all parameters \(\varepsilon_{1}\), \(\varepsilon_{2}\) and \(\varepsilon_{3}\) are well-chosen, for \(t>t_{0}\gg 1\) it holds that_ \[\mbox{(i)}\ K_{1}(t,x)\geq 0,\hskip 14.226378ptx\in\mathbf{R},\hskip 56.905512pt\mbox{(ii)}\ K_{2}(t,x)\geq 0,\hskip 14.226378ptx\in\mathbf{R}.\] _Proof._ We first divide the integral region into the two parts \(|x|\leq 1\) and \(|x|>1\), and then check (i) and (ii). (i) For the case of \(|x|\leq 1\), we have \[K_{1}(t,x)= 2f(t)a(x)-f_{t}(t)-2g(t)+h_{x}(t,x)-k|h(t,x)|a(x)-|h_{t}(t,x)|\] \[= 2\varepsilon_{1}(1+t)^{2}a(x)-2\varepsilon_{1}(1+t)-2\varepsilon_{2}(1+t)+\varepsilon_{3}(1+t)-k\varepsilon_{3}(1+t)|x|a(x)-\varepsilon_{3}|x|\] \[\geq C_{\alpha}\varepsilon_{1}a_{1}(1+t)^{2}-2\varepsilon_{1}(1+t)-2\varepsilon_{2}(1+t)+\varepsilon_{3}(1+t)-k\varepsilon_{3}(1+t)a(x)-\varepsilon_{3}\] \[= (1+t)^{2}\big{\{}C_{\alpha}\varepsilon_{1}a_{1}-\frac{2\varepsilon_{1}}{1+t}-\frac{2\varepsilon_{2}}{1+t}+\frac{\varepsilon_{3}}{1+t}-\frac{k\varepsilon_{3}\|a\|_{\infty}}{1+t}-\frac{\varepsilon_{3}}{(1+t)^{2}}\big{\}} \tag{2.8}\] with some \(\alpha\)-dependent constant \(C_{\alpha}>0\), where we have just used assumption **(A.1)**. For the case of \(|x|>1\), assumption **(A.1)** and the finite speed of propagation property of the solution lead to \[K_{1}(t,x)= 2f(t)a(x)-f_{t}(t)-2g(t)+h_{x}(t,x)-k|h(t,x)|a(x)-|h_{t}(t,x)|\] \[= 2\varepsilon_{1}(1+t)^{2}a(x)-2\varepsilon_{1}(1+t)-2\varepsilon_{2}(1+t)-k\varepsilon_{3}(1+t)a(x)-\varepsilon_{3}\] \[\geq 2\varepsilon_{1}(1+t)^{2}\frac{a_{1}}{(1+|x|)^{\alpha}}-2\varepsilon_{1}(1+t)-2\varepsilon_{2}(1+t)-k\varepsilon_{3}(1+t)\frac{a_{2}}{(1+|x|)^{\alpha}}-\varepsilon_{3}\] \[= \frac{(1+t)^{2}}{(1+|x|)^{\alpha}}\big{\{}2\varepsilon_{1}a_{1}-\frac{2\varepsilon_{1}(1+|x|)^{\alpha}}{1+t}-\frac{2\varepsilon_{2}(1+|x|)^{\alpha}}{1+t}-\frac{k\varepsilon_{3}a_{2}}{1+t}-\frac{\varepsilon_{3}(1+|x|)^{\alpha}}{(1+t)^{2}}\big{\}}\] \[\geq \frac{(1+t)^{2}}{(1+|x|)^{\alpha}}\big{\{}2\varepsilon_{1}a_{1}-\frac{2\varepsilon_{1}(1+R+t)^{\alpha}}{1+t}-\frac{2\varepsilon_{2}(1+R+t)^{\alpha}}{1+t}\] \[-\frac{k\varepsilon_{3}a_{2}}{1+t}-\frac{\varepsilon_{3}(1+R+t)^{\alpha}}{(1+t)^{2}}\big{\}} \tag{2.9}\] for large \(t\gg 1\).
(ii) For the case of \(|x|\leq 1\), \(K_{2}(t,x)\) satisfies \[K_{2}(t,x)= 2g(t)-f_{t}(t)+h_{x}(t,x)-\frac{1}{k}|h(t,x)|a(x)-|h_{t}(t,x)|\] \[\geq 2\varepsilon_{2}(1+t)-2\varepsilon_{1}(1+t)+\varepsilon_{3}(1+t)-\frac{1}{k}\varepsilon_{3}(1+t)a(x)-\varepsilon_{3}\] \[= (1+t)\big{\{}2\varepsilon_{2}-2\varepsilon_{1}+\varepsilon_{3}-\frac{\varepsilon_{3}\|a\|_{\infty}}{k}-\frac{\varepsilon_{3}}{1+t}\big{\}}. \tag{2.10}\] For the case of \(|x|>1\), we have \[K_{2}(t,x)= 2g(t)-f_{t}(t)+h_{x}(t,x)-\frac{1}{k}|h(t,x)|a(x)-|h_{t}(t,x)|\] \[= 2\varepsilon_{2}(1+t)-2\varepsilon_{1}(1+t)-\frac{1}{k}\varepsilon_{3}(1+t)a(x)-\varepsilon_{3}\] \[\geq (1+t)\big{\{}2\varepsilon_{2}-2\varepsilon_{1}-\frac{\varepsilon_{3}\|a\|_{\infty}}{k}-\frac{\varepsilon_{3}}{1+t}\big{\}}. \tag{2.11}\] To guarantee the positivity of \(K_{1}(t,x)\) and \(K_{2}(t,x)\), our next task is to choose reasonable positive constants \(k,\ \varepsilon_{1},\ \varepsilon_{2}\) and \(\varepsilon_{3}\), which depend on the value of \(\alpha\in[0,1]\). In this connection, it is important to notice the fact that \[\lim_{t\to\infty}\frac{(1+R+t)^{\alpha}}{1+t}=0\quad(\alpha<1),\quad\lim_{t\to\infty}\frac{1+R+t}{1+t}=1.\] Case \(0\leq\alpha<1\). For large \(t\geq t_{0}\gg 1\), the following conditions are needed: \[C_{\alpha}\varepsilon_{1}a_{1}>0, \tag{2.12}\] \[2\varepsilon_{2}-2\varepsilon_{1}+\varepsilon_{3}-\frac{\varepsilon_{3}\|a\|_{\infty}}{k}>0, \tag{2.13}\] \[2\varepsilon_{2}-2\varepsilon_{1}-\frac{\varepsilon_{3}\|a\|_{\infty}}{k}>0. \tag{2.14}\] In fact, it is sufficient to choose \(\varepsilon_{1}\), \(\varepsilon_{2}\) and a large \(k>0\) satisfying \[\begin{cases}\varepsilon_{2}>\varepsilon_{1}>0,\\ k>\frac{\varepsilon_{3}\|a\|_{\infty}}{2(\varepsilon_{2}-\varepsilon_{1})}.\end{cases}\] In this case \(\varepsilon_{3}>0\) can be chosen arbitrarily. Case \(\alpha=1\). Additional conditions are needed: \[C_{\alpha}\varepsilon_{1}a_{1}>0, \tag{2.15}\] \[2a_{1}\varepsilon_{1}-2\varepsilon_{1}-2\varepsilon_{2}>0, \tag{2.16}\] \[2\varepsilon_{2}-2\varepsilon_{1}+\varepsilon_{3}-\frac{\varepsilon_{3}\|a\|_{\infty}}{k}>0, \tag{2.17}\] \[2\varepsilon_{2}-2\varepsilon_{1}-\frac{\varepsilon_{3}\|a\|_{\infty}}{k}>0. \tag{2.18}\] To guarantee (2.17) and (2.18), it is necessary to set \[\begin{cases}\varepsilon_{2}>\varepsilon_{1}>0,\\ k>\frac{\varepsilon_{3}\|a\|_{\infty}}{2(\varepsilon_{2}-\varepsilon_{1})}.\end{cases}\] Under this situation, to realize (2.16), that is, \[(a_{1}-1)\varepsilon_{1}>\varepsilon_{2}>\varepsilon_{1}, \tag{2.19}\] we must choose \(a_{1}>2\) in the case of \(\alpha=1\). In this case \(\varepsilon_{3}>0\) can also be chosen arbitrarily. We look at the bound for \(F_{3}(t,x)\) in the next lemma. \(\Box\) **Lemma 2.3**: _Suppose \(a_{1}>2\) for \(\alpha=1\) and \(a_{1}>0\) for \(0\leq\alpha<1\). If the parameters \(\varepsilon_{1}\), \(\varepsilon_{2}\) and \(\varepsilon_{3}\) are well-chosen, for \(t>t_{0}\gg 1\) it holds that_ \[-F_{3}(t,x)\leq Ca(x),\quad x\in{\bf R},\] _where \(C>0\) is a generic constant._ _Proof._ By assumption **(A.2)**, we have for a.e. \(x\in{\bf R}\) \[-F_{3}(t,x)= -g_{tt}(t)+g_{t}(t)a(x)+V(x)f_{t}(t)-2V(x)g(t)+V_{x}(x)h(t,x)+V(x)h_{x}(t,x)\] \[\leq \varepsilon_{2}a(x)+2\varepsilon_{1}V(x)(1+t)-2\varepsilon_{2}V(x)(1+t)+\varepsilon_{3}V_{x}(x)x(1+t)\phi(x)+\varepsilon_{3}V(x)(1+t)\] \[\leq \varepsilon_{2}a(x)+V(x)(1+t)(2\varepsilon_{1}-2\varepsilon_{2}+\varepsilon_{3}). \tag{2.20}\]
Let us choose the parameters such that \(2\varepsilon_{1}-2\varepsilon_{2}+\varepsilon_{3}\leq 0\), that is, \(\varepsilon_{3}\leq 2\varepsilon_{2}-2\varepsilon_{1}\), to get the desired estimate. Here, it should be noted that we must additionally choose \(2\varepsilon_{2}>2\varepsilon_{1}\). Moreover, in the case of \(\alpha=1\), by considering (2.19) we must simultaneously choose \[(a_{1}-1)\varepsilon_{1}>\varepsilon_{2}>\varepsilon_{1}.\] Consequently, \(a_{1}>2\) is again required. \(\Box\) _Proof of Lemmas 2.2 and 2.3 completed._ By summarizing the proofs of Lemmas 2.2 and 2.3, let us choose unified parameters \(\varepsilon_{j}>0\) (\(j=1,2,3\)) and \(k\gg 1\) for each \(0\leq\alpha\leq 1\). (1) _In the case \(0\leq\alpha<1\), we select \(\mu>0\) such that_ \[\varepsilon_{1}=\mu,\quad\varepsilon_{2}=3\mu,\quad\varepsilon_{3}=\frac{1}{2}\mu,\quad k>\frac{\|a\|_{\infty}}{8}.\] (2) _In the case \(\alpha=1\), we select \(\lambda=a_{1}-2>0\) and \(\mu>0\) such that_ \[\varepsilon_{1}=\mu,\quad\varepsilon_{2}=(1+\frac{\lambda}{2})\mu,\quad\varepsilon_{3}=\frac{1}{4}\lambda\mu,\quad k>\frac{\|a\|_{\infty}}{4}.\] _With the above parameters, the statements of Lemmas 2.2 and 2.3 are true. \(\Box\)_ Thanks to Lemmas 2.2 and 2.3, (2.7) can be rewritten as \[\frac{d}{dt}G(t)\leq C\int_{\bf R}a(x)|u(t,x)|^{2}dx\quad(t\geq t_{0}\gg 1).\] Integrating it over \([t_{0},t]\) yields \[G(t)\leq G(t_{0})+C\int_{t_{0}}^{t}\int_{\bf R}a(x)|u(s,x)|^{2}dxds. \tag{2.21}\] We can estimate \(G(t_{0})\) by the Cauchy-Schwarz inequality: \[G(t_{0})\leq 2\varepsilon_{1}(1+t_{0})^{2}E(t_{0})+2\varepsilon_{2}(1+t_{0})\|u(t_{0},\cdot)\|\|u_{t}(t_{0},\cdot)\|+\varepsilon_{2}\|a(x)\|_{\infty}(1+t_{0})\|u(t_{0},\cdot)\|^{2}\] \[+2\varepsilon_{3}(1+t_{0})\|u_{t}(t_{0},\cdot)\|\|u_{x}(t_{0},\cdot)\|:=C_{t_{0}}>0. \tag{2.22}\] It follows from (2.21) and the definition of \(G(t)\) that \[f(t)E(t)+g(t)(u(t,\cdot),u_{t}(t,\cdot))+(h(t,\cdot)u_{t}(t,\cdot),u_{x}(t,\cdot))\] \[\leq \frac{C_{t_{0}}}{2}+\frac{1}{2}\int_{\mathbf{R}}(g_{t}(t)-g(t)a(x))|u(t,x)|^{2}dx+\frac{C}{2}\int_{t_{0}}^{t}\int_{\mathbf{R}}a(x)|u(s,x)|^{2}dxds\] \[\leq C_{t_{0}}+\varepsilon_{2}\|u(t,\cdot)\|^{2}+C\int_{t_{0}}^{t}\int_{\mathbf{R}}a(x)|u(s,x)|^{2}dxds. \tag{2.23}\] Next, we need a crucial lemma to obtain the main results, which can be derived as in Ikehata [7]. The role of the potential \(V(x)\) is crucial here. We only write down its statement without proof. **Lemma 2.4**_Under the assumptions of Theorems 1.1 and 1.2 on the initial data, for the corresponding weak solution \(u(t,x)\) to problem (1.1)-(1.2), it holds that_ \[\|u(t,\cdot)\|^{2}+\int_{0}^{t}\int_{\mathbf{R}}a(x)|u(s,x)|^{2}dxds\leq C\left(\|u_{0}\|^{2}+\int_{\mathbf{R}}\frac{|u_{1}(x)+a(x)u_{0}(x)|^{2}}{V(x)}dx\right)=:CJ_{0}^{2},\ \ \ \ t\geq 0,\] _where \(C>0\) is a generic constant._ **Lemma 2.5**_Under the assumptions of Theorems 1.1 and 1.2, for \(t\geq t_{0}\gg 1\) it holds that_ \[f(t)E(t)+(h(t,\cdot)u_{x}(t,\cdot),u_{t}(t,\cdot))\geq\frac{1}{2}f(t)E(t).\] _Proof_. By the Young inequality, we have \[f(t)E(t)+(h(t,\cdot)u_{x}(t,\cdot),u_{t}(t,\cdot))\] \[\geq \frac{1}{2}\int_{\mathbf{R}}f(t)(|u_{t}(t,x)|^{2}+|u_{x}(t,x)|^{2}+V(x)|u(t,x)|^{2})dx-\frac{1}{2}\int_{\mathbf{R}}|h(t,x)|(|u_{t}(t,x)|^{2}+|u_{x}(t,x)|^{2})dx\] \[\geq \frac{1}{2}\int_{\mathbf{R}}(f(t)-|h(t,x)|)(|u_{t}(t,x)|^{2}+|u_{x}(t,x)|^{2})dx+\frac{1}{4}\int_{\mathbf{R}}f(t)V(x)|u(t,x)|^{2}dx. \tag{2.24}\]
By choosing \(t\geq t_{0}\gg 1\) large enough such that \[\frac{\varepsilon_{3}}{1+t}\leq\frac{\varepsilon_{1}}{2},\] we see \[f(t)-|h(t,x)|\geq(1+t)^{2}(\varepsilon_{1}-\frac{\varepsilon_{3}}{1+t})\geq\frac{\varepsilon_{1}}{2}(1+t)^{2}=\frac{1}{2}f(t). \tag{2.25}\] Substituting (2.25) into (2.24), we obtain the desired result. \(\Box\) Finally, let us prove the main results with the tools prepared above. _Proof of Theorem 1.1._ Under the assumptions of Theorem 1.1, we can easily check \[\int_{\mathbf{R}}\frac{|u_{1}(x)+a(x)u_{0}(x)|^{2}}{V(x)}dx<\infty.\] It follows from (2.23) and Lemmas 2.4 and 2.5 that \[f(t)E(t)\leq 2g(t)\|u(t,\cdot)\|\|u_{t}(t,\cdot)\|+C+CJ_{0}^{2}\] \[\leq Cg(t)\sqrt{E(t)}J_{0}+C+CJ_{0}^{2}\] \[\leq \frac{CJ_{0}g(t)}{\sqrt{f(t)}}\sqrt{f(t)E(t)}+C+CJ_{0}^{2}\] \[\leq \frac{C^{2}}{2}J_{0}^{2}\frac{g(t)^{2}}{f(t)}+\frac{1}{2}f(t)E(t)+C+CJ_{0}^{2} \tag{2.26}\] with some generic constant \(C>0\). Therefore, \[f(t)E(t)\leq C^{2}J_{0}^{2}\frac{g(t)^{2}}{f(t)}+C+CJ_{0}^{2},\quad t\geq t_{0},\] that is, \[E(t)\leq C^{2}J_{0}^{2}\frac{g(t)^{2}}{f(t)^{2}}+\frac{C}{f(t)}+\frac{CJ_{0}^{2}}{f(t)},\quad t\geq t_{0},\] which completes the proof, since \(g(t)^{2}/f(t)^{2}=O(t^{-2})\) and \(1/f(t)=O(t^{-2})\). \(\Box\) _Proof of the case (2) of Theorem 1.2._ By arguments similar to those for Theorem 1.1, we get (1.7) of Theorem 1.2 under the assumption \(a_{1}>2\). \(\Box\) ## 3 Proof of the case (1) of Theorem 1.2 In this section, we prove the case (1) of Theorem 1.2. The proof is basically similar to the previous part, so we only sketch the main points. This part starts with new definitions of \(f(t)\), \(g(t)\) and \(h(t,x)\): \[f(t)=\varepsilon_{1}(1+t)^{\theta},\quad\ g(t)=\varepsilon_{2}(1+t)^{\theta-1},\quad\ h(t,x)=\varepsilon_{3}(1+t)^{\theta-1}x\phi(x),\] where \(\theta>0\) will be fixed later on. Under these new definitions, we need to check Lemmas 2.2 and 2.3 with \(0<a_{1}\leq 2\) and \(\alpha=1\) for large \(t>0\). **Check of Lemma 2.2**: If \(|x|\leq 1\), \(K_{1}(t,x)\) satisfies \[K_{1}(t,x)= 2f(t)a(x)-f_{t}(t)-2g(t)+h_{x}(t,x)-k|h(t,x)|a(x)-|h_{t}(t,x)|\] \[= 2\varepsilon_{1}(1+t)^{\theta}a(x)-\varepsilon_{1}\theta(1+t)^{\theta-1}-2\varepsilon_{2}(1+t)^{\theta-1}+\varepsilon_{3}(1+t)^{\theta-1}\] \[-k\varepsilon_{3}(1+t)^{\theta-1}|x|a(x)-\varepsilon_{3}(\theta-1)(1+t)^{\theta-2}|x|\] \[\geq (1+t)^{\theta}\big{\{}\varepsilon_{1}a_{1}C_{\alpha}-\frac{\theta\varepsilon_{1}}{1+t}-\frac{2\varepsilon_{2}}{1+t}+\frac{\varepsilon_{3}}{1+t}-\frac{k\varepsilon_{3}\|a\|_{\infty}}{1+t}-\frac{\varepsilon_{3}(\theta-1)}{(1+t)^{2}}\big{\}} \tag{3.1}\] with some constant \(C_{\alpha}>0\). So, one can realize the positivity of \(K_{1}(t,x)\) in \(|x|\leq 1\) by taking \(t>0\) large, for any \(\theta>0\).
If \(|x|\geq 1\), the finite speed of propagation property of the solution yields \[K_{1}(t,x)= 2f(t)a(x)-f_{t}(t)-2g(t)+h_{x}(t,x)-k|h(t,x)|a(x)-|h_{t}(t,x)|\] \[= 2\varepsilon_{1}(1+t)^{\theta}a(x)-\theta\varepsilon_{1}(1+t)^{ \theta-1}-2\varepsilon_{2}(1+t)^{\theta-1}-k\varepsilon_{3}(1+t)^{\theta-1}a( x)-\varepsilon_{3}(\theta-1)(1+t)^{\theta-2}\] \[\geq \frac{(1+t)^{\theta}}{(1+|x|)^{\alpha}}\big{\{}2\varepsilon_{1}a _{1}-\frac{\theta\varepsilon_{1}(1+|x|)^{\alpha}}{1+t}-\frac{2\varepsilon_{2} (1+|x|)^{\alpha}}{1+t}-\frac{k\varepsilon_{3}a_{2}}{1+t}-\frac{\varepsilon_{ 3}(\theta-1)(1+|x|)^{\alpha}}{(1+t)^{2}}\big{\}}\] \[\geq \frac{(1+t)^{\theta}}{(1+|x|)^{\alpha}}\big{\{}2\varepsilon_{1}a _{1}-\frac{\theta\varepsilon_{1}(1+R+t)^{\alpha}}{1+t}-\frac{2\varepsilon_{2} (1+R+t)^{\alpha}}{1+t}\] \[-\frac{k\varepsilon_{3}a_{2}}{1+t}-\frac{\varepsilon_{3}(\theta- 1)(1+R+t)^{\alpha}}{(1+t)^{2}}\big{\}} \tag{3.2}\] with \(\alpha=1\). For \(K_{2}(t,x)\), in the case of \(|x|\leq 1\) it holds that \[K_{2}(t,x)= 2g(t)-f_{t}(t)+h_{x}(t,x)-\frac{1}{k}|h(t,x)|a(x)-|h_{t}(t,x)|\] \[\geq 2\varepsilon_{2}(1+t)^{\theta-1}-\theta\varepsilon_{1}(1+t)^{ \theta-1}+\varepsilon_{3}(1+t)^{\theta-1}-\frac{1}{k}\varepsilon_{3}(1+t)^{ \theta-1}a(x)-\varepsilon_{3}(\theta-1)(1+t)^{\theta-2}\] \[\geq (1+t)^{\theta-1}\big{\{}2\varepsilon_{2}-\theta\varepsilon_{1}+ \varepsilon_{3}-\frac{\varepsilon_{3}\|a\|_{\infty}}{k}-\frac{\varepsilon_{3} (\theta-1)}{1+t}\big{\}}. \tag{3.3}\] For \(K_{2}(t,x)\), in the case of \(|x|\geq 1\) it holds that \[K_{2}(t,x)= 2g(t)-f_{t}(t)+h_{x}(t,x)-\frac{1}{k}|h(t,x)|a(x)-|h_{t}(t,x)|\] \[\geq 2\varepsilon_{2}(1+t)^{\theta-1}-\theta\varepsilon_{1}(1+t)^{ \theta-1}-\frac{1}{k}\varepsilon_{3}(1+t)^{\theta-1}a(x)-\varepsilon_{3}( \theta-1)(1+t)^{\theta-2}\] \[\geq (1+t)^{\theta-1}\big{\{}2\varepsilon_{2}-\theta\varepsilon_{1}- \frac{\varepsilon_{3}\|a\|_{\infty}}{k}-\frac{\varepsilon_{3}(\theta-1)}{1+t} \big{\}}. \tag{3.4}\] To guarantee the positivity of \(K_{1}(t,x)\) and \(K_{2}(t,x)\), for large \(t\geq t_{0}\gg 1\), the following conditions must be imposed: \[C_{\alpha}\varepsilon_{1}a_{1}>0,\] \[2a_{1}\varepsilon_{1}-\varepsilon_{1}\theta-2\varepsilon_{2}>0,\] \[2\varepsilon_{2}-\theta\varepsilon_{1}+\varepsilon_{3}(1-\frac{ \|a\|_{\infty}}{k})>0,\] \[2\varepsilon_{2}-\theta\varepsilon_{1}-\frac{\varepsilon_{3}\|a \|_{\infty}}{k}>0,\] that is, \[k\geq\frac{\varepsilon_{3}\|a\|_{\infty}}{2\varepsilon_{2}-\theta\varepsilon_ {1}},\ \ k>\|a\|_{\infty},\ \ \frac{2a_{1}-\theta}{2}\varepsilon_{1}>\varepsilon_{2}>\frac{\theta}{2} \varepsilon_{1}, \tag{3.5}\] which implies \[\frac{2a_{1}-\theta}{2}>\frac{\theta}{2}\ \ \Rightarrow\ \ \theta<a_{1}. \tag{3.6}\] \(\Box\) **Check of Lemma 2.3**: It follows from assumption **(A.2)** that \[-F_{3}(t,x)= -g_{tt}(t)+g_{t}(t)a(x)+V(x)f_{t}(t)-2V(x)g(t)+V_{x}(x)h(t,x)+V(x) h_{x}(t,x)\] \[\leq -\varepsilon_{2}(\theta-1)(\theta-2)(1+t)^{\theta-3}+\varepsilon_ {2}a(x)(\theta-1)(1+t)^{\theta-2}+\theta\varepsilon_{1}V(x)(1+t)^{\theta-1}\] \[-2\varepsilon_{2}V(x)(1+t)^{\theta-1}+\varepsilon_{3}V_{x}(x)x(1 +t)^{\theta-1}\phi(x)+\varepsilon_{3}V(x)(1+t)^{\theta-1}\] \[\leq \varepsilon_{2}a(x)(1+t)^{\theta-2}\big{\{}\frac{1+R+t}{a_{1}(1+ t)}|(\theta-1)(\theta-2)|+|\theta-1|\big{\}}\] \[+V(x)(1+t)^{\theta-1}\big{\{}\theta\varepsilon_{1}-2\varepsilon_ {2}+\varepsilon_{3}\big{\}}. \tag{3.7}\] Let \(\theta\varepsilon_{1}-2\varepsilon_{2}+\varepsilon_{3}\leq 0\), that is, \(\varepsilon_{3}\leq 2\varepsilon_{2}-\theta\varepsilon_{1}\). 
At this stage, it must hold that \(2\varepsilon_{2}>\theta\varepsilon_{1}\) to guarantee the positivity of \(\varepsilon_{3}\). Observing (3.5), it is necessary to choose \[\frac{2a_{1}-\theta}{2}\varepsilon_{1}>\varepsilon_{2}>\frac{\theta\varepsilon_{1}}{2}\ \ \Rightarrow\ \ \theta<a_{1}. \tag{3.8}\] Note that from the assumption \(0<a_{1}\leq 2\) and (3.8) we find that \(\theta<2\). Therefore, by (3.7), Lemma 2.3 can be obtained when \(t\gg 1\). (3.6) and (3.8) imply \[\theta<a_{1}.\] Therefore, we can choose \(\theta=a_{1}-\delta\) for any \(\delta>0\). \(\Box\) From the above discussion, in order to see that Lemmas 2.2 and 2.3 hold true for \(\alpha=1\) and \(0<a_{1}\leq 2\), it suffices to choose \(\varepsilon_{1}\), \(\varepsilon_{2}\), \(\varepsilon_{3}\) and \(k\) as follows: \[\varepsilon_{1}=\mu,\ \ \ \varepsilon_{2}=\frac{a_{1}\mu}{2},\ \ \ \varepsilon_{3}=\frac{\gamma\mu}{2},\ \ \ k>\max\big{\{}\frac{\varepsilon_{3}\|a\|_{\infty}}{2\varepsilon_{2}-\theta\varepsilon_{1}},\ \|a\|_{\infty}\big{\}}=\|a\|_{\infty},\] where \(\gamma=a_{1}-\theta>0\) and \(\mu>0\). On the other hand, for Lemma 2.5, we need to change (2.25) slightly to \[f(t)-|h(t,x)|\geq(1+t)^{\theta}(\varepsilon_{1}-\frac{\varepsilon_{3}}{1+t})\geq\frac{\varepsilon_{1}}{2}(1+t)^{\theta}=\frac{1}{2}f(t).\] The rest of the proof is similar to the parts previously done. By proceeding with similar arguments, we obtain the result of the case (1) of Theorem 1.2 for \(0<a_{1}\leq 2\) and \(\alpha=1\). _Acknowledgement._ This paper was written during Xiaoyan Li's stay as an overseas researcher at Hiroshima University from 12 December, 2022 to 11 December, 2023 under Ikehata's supervision as a host researcher. The work of the first author (Xiaoyan Li) was financially supported in part by the Chinese Scholarship Council (Grant No. 202206160071). The work of the second author (Ryo Ikehata) was supported in part by Grant-in-Aid for Scientific Research (C) 20K03682 of JSPS.
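The decay rate in Theorem 1.1 can also be probed numerically. The sketch below (an illustration added here, not part of the paper) integrates problem (1.1) with a leapfrog finite-difference scheme, taking the admissible example choices \(a(x)=(1+|x|)^{-1/2}\) (so \(\alpha=1/2\)), \(V(x)=e^{-x^{2}}\), compactly supported data, and a spatial domain wide enough that, by finite propagation speed, the boundary is never reached; the grid, time step, and sampling are arbitrary demonstration values. One expects the fitted late-time slope of \(\log E(t)\) versus \(\log t\) to sit near \(-2\).

```python
import numpy as np

# Grid wide enough that, by finite propagation speed (= 1), the support of u
# (initially inside |x| <= 2) never reaches the boundary for t <= T.
L, T, dx = 220.0, 200.0, 0.05
x = np.arange(-L, L + dx, dx)
dt = 0.4 * dx                        # comfortably CFL-stable time step

a = 1.0 / np.sqrt(1.0 + np.abs(x))   # damping with alpha = 1/2 (sub-critical)
V = np.exp(-x**2)                    # rapidly decaying positive potential

u0 = np.where(np.abs(x) < 2.0, np.cos(np.pi * x / 4.0) ** 2, 0.0)
u1 = np.zeros_like(x)

def lap(u):
    out = np.zeros_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return out

# Leapfrog with the damping term treated implicitly:
# (1 + a dt/2) u^{n+1} = 2 u^n - (1 - a dt/2) u^{n-1} + dt^2 (u_xx - V u).
um = u0.copy()
u = u0 + dt * u1 + 0.5 * dt**2 * (lap(u0) - V * u0 - a * u1)
ts, Es = [], []
for n in range(1, int(T / dt)):
    up = (2.0 * u - (1.0 - 0.5 * a * dt) * um
          + dt**2 * (lap(u) - V * u)) / (1.0 + 0.5 * a * dt)
    if n % 500 == 0:                          # sample the energy E(t) of (1.3)
        ut = (up - um) / (2.0 * dt)           # centered u_t at time n*dt
        ux = np.gradient(u, dx)
        Es.append(0.5 * np.sum(ut**2 + ux**2 + V * u**2) * dx)
        ts.append(n * dt)
    um, u = u, up

# Late-time fit of log E against log t; Theorem 1.1 predicts a slope near -2.
slope = np.polyfit(np.log(ts[5:]), np.log(Es[5:]), 1)[0]
print(f"fitted energy decay exponent: {slope:.2f}")
```

The implicit treatment of the damping term keeps the update stable without restricting the time step beyond the usual CFL condition; sharper agreement with the theorem requires longer runs and finer grids.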
2301.11840
Features of the Domain Boundaries of a Highly Anisotropic (S = 1) Antiferromagnet near the Transition to the Quantum Paramagnet Phase
It is shown that the structure of antiphase domain boundaries in the antiferromagnetic (AFM) phase of a highly anisotropic magnet with S = 1 on a two-dimensional square lattice depends greatly on single-ion anisotropy parameter D. Computer modeling on large square lattices illustrates the changes in the boundary structure from the quantum paramagnet (QP) to the XY phase, including the intermediate QP-XY phase at fairly small variations in positive D.
V. V. Konev, V. A. Ulitko, D. N. Yasinskaya, Y. D. Panov, A. S. Moskvin
2023-01-27T16:45:54Z
http://arxiv.org/abs/2301.11840v1
# Features of the Domain Boundaries of a Highly Anisotropic (\(S=1\)) Antiferromagnet near the Transition to the Quantum Paramagnet Phase ###### Abstract It is shown that the structure of antiphase domain boundaries in the antiferromagnetic (AFM) phase of a highly anisotropic magnet with \(S=1\) on a two-dimensional square lattice depends greatly on single-ion anisotropy parameter \(D\). Computer modeling on large square lattices illustrates the changes in the boundary structure from the quantum paramagnet (QP) to the \(XY\) phase, including the intermediate QP-\(XY\) phase, at fairly small variations in positive \(D\). ## Introduction In contrast to quantum magnets with \(S=1/2\) spin, systems with \(S=1\) spin are characterized by a more complex Hamiltonian, single-ion anisotropy, biquadratic inter-center interactions, and totally new phase states of the quantum paramagnet (QP) type, corresponding to an easy-plane phase in the classical approach. The interest in these systems is due to both highly anisotropic magnets based on Ni\({}^{2+}\) (\(S=1\)) (e.g., Y\({}_{2}\)BaNiO\({}_{5}\) [YBNO], Ni(C\({}_{2}\)H\({}_{8}\)N\({}_{2}\))\({}_{2}\)NO\({}_{2}\)(ClO\({}_{4}\)) [NENP]) [1] and the so-called pseudo-spin systems of the semi-hard-core boson type with constraints on filling lattice sites \(n=(0,1,2)\), or mixed-valence ion systems of the triplet type: Cu\({}^{(1+,\ 2+,\ 3+)}\) in cuprates La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) and Bi\({}^{(3+,4+,5+)}\) in bismuthates [2, 3]. In all cases, the phase diagrams of spin or pseudo-spin systems with \(S=1\) are considerably richer than those of similar systems with \(S=1/2\) quantum (pseudo)spin, due primarily to the emergence in the Hamiltonian of terms of the single-ion anisotropy and biquadratic interaction types, and of new phases of the quantum paramagnet and spin-nematic types. ## Model Let us consider a model cuprate, a \(2D\) system of Cu centers in a CuO\({}_{2}\) plane that can be in three different valence charge states: Cu\({}^{(1+,\ 2+,\ 3+)}\). We associate this charge triplet with the three states of an \(S=1\) pseudo-spin as Cu\({}^{1+}\rightarrow\) M\({}_{S}=-1\), Cu\({}^{2+}\rightarrow\) M\({}_{S}=0\), Cu\({}^{3+}\rightarrow\) M\({}_{S}=1\), and use the familiar ways of describing spin systems. The spin algebra of systems with \(S=1\) (M\({}_{S}=0\), \(\pm 1\)) includes eight independent nontrivial (three dipole and five quadrupole) operators: \(S_{z}\); \(S_{\pm}=\mp(S_{x}\pm iS_{y})/\sqrt{2}\); \(S_{z}^{2}\); \(T_{\pm}=\{S_{z},S_{\pm}\}=S_{z}S_{\pm}+S_{\pm}S_{z}\); and \(S_{\pm}^{2}\). The raising/lowering operators \(S_{\pm}\) and \(T_{\pm}\) change the (pseudo)spin projection by \(\pm 1\), but in different ways: \(\left\langle 0|S_{\pm}|\mp 1\right\rangle=\left\langle\pm 1|S_{\pm}|0\right\rangle=\mp 1\), and \(\left\langle 0|T_{\pm}|\mp 1\right\rangle=-\left\langle\pm 1|T_{\pm}|0\right\rangle=+1\). The raising/lowering operators \(S_{\pm}^{2}\) describe transitions \(\left|-1\right\rangle\rightarrow\left|+1\right\rangle\); i.e., they generate on a site either a hole (\(S_{+}^{2}\)) or an electron (\(S_{-}^{2}\)) pair that is a composite local boson with the kinematic constraint \((S_{\pm}^{2})^{2}=0\), emphasizing its nature as a hard-core boson.
The local (on-site) nondiagonal \(XY\) order parameter \(\left\langle S_{\pm}^{2}\right\rangle\), which is actually a local superconducting order parameter, is nonzero only when the site hosts a quantum superposition of the states \(\left|-1\right\rangle\) and \(\left|+1\right\rangle\). We write the effective Hamiltonian that commutes with the \(z\)-component of the total spin \(n=\frac{1}{N}\sum_{i}S_{iz}\), and thus conserves the system's magnetization, as the sum of potential and kinetic energies, \(H=H_{\rm pot}+H_{\rm kin}\), with \[H_{\rm pot}=D\sum_{i}S_{iz}^{2}+J\sum_{\langle ij\rangle}S_{iz}S_{jz}, \tag{1}\] where the first term (i.e., the single-ion anisotropy) describes the density-density correlation effects on the sites, while the second term describes inter-site interactions (correlations) of the density-density type; the kinetic part \(H_{\rm kin}\) describes the inter-site transfer, with transfer integral \(t\), of the composite bosons created by \(S_{\pm}^{2}\). Below, we consider only the interactions between nearest neighbors \(\langle ij\rangle\) with a positive (antiferromagnetic) sign of the inter-center correlation parameter \(J\). Depending on the relationship between the parameters of Hamiltonian (1) and the magnetization (\(n\)), the system ground state corresponds either to the homogeneous phase of the quantum paramagnet type with \(\left\langle S_{z}\right\rangle=\left\langle S_{z}^{2}\right\rangle=0\), which is attained at high positive values of parameter \(D\) (a large-\(D\) phase); or to the antiferromagnetic (AFM) phase along the \(z\)-axis; or to the \(XY\) phase with a nonzero order parameter \(\left\langle S_{\pm}^{2}\right\rangle\). ## Results and Discussion We used an NVidia graphical processing unit for the Monte Carlo modeling of the antiferromagnet phase transition of a highly anisotropic magnet with \(S=1\) in the two-sublattice approximation on a square lattice of \(256\times 256\) sites with periodic boundary conditions, at the selected parameters \(t=1\), \(J=0.75\), \(n=0.04\), which ensured a ground state of the antiferromagnetic ordering type in a rather wide range of variations of single-ion anisotropy parameter \(D\). At \(D=-5\), a stripe domain structure formed during rapid thermalization (annealing). At low temperatures, a strongly pronounced filamentary \(XY\) phase emerged at the center of the antiphase domain boundaries of the AFM phase, characterized primarily by a nonzero modulus of the local \(XY\) order parameter. Upon an increase in the two-ion biquadratic parameter \(t\), the domain boundary gradually broadened and the volume of the \(XY\) state grew, up to the total displacement of the AFM phase and the transition to an inhomogeneous \(XY\) state. It is interesting that both the AFM phase and the \(XY\) structure of the domain boundary proved to be stable against variations in local correlation parameter \(D\) over a wide range, up to \(D\sim 1.0\). Upon further growth of local correlations, however, the domain boundary structure reorganized radically. The evolution of the antiphase domain boundary upon an increase in parameter \(D\) is shown in Fig. 1. As \(D\) grows gradually, the regular structure of the filamentary \(XY\) phase on the edges of the antiphase domain boundary is broken, while the QP phase emerges and grows to completely displace the filamentary \(XY\) phase at \(D\sim 1.2\), accelerating the boundary transition to QP.
With further growth of local correlations \(D>1.5\), the domain boundary broadens and gradually displaces the AFM order. In other words, the AFM \(\rightarrow\) QP phase transition (to the large-\(D\) phase) occurs with an increase in the local correlation parameter, due to the expansion of the domain boundaries. It is noteworthy that the QP phase nucleates on the edges of the domain boundary due to the smaller difference between the energies of the QP and \(XY\) phases there (Fig. 2). In other words, the emergence of the QP phase on the edges is energetically more advantageous than at the center. In Fig. 2, we can see that the difference between the energies of the phases in the domain and at the center of the domain boundary is much smaller when the QP phase emerges at the center of the domain boundary (at \(D=1.2\)) than with the \(XY\) phase (\(D=-5\)). Upon the further growth of \(D\), the AFM phase becomes metastable in the domains, and the QP phase becomes stable at the center of the domain boundary. The study of temperature effects shows that when the temperature in the domain walls of the AFM phase rises at \(D=1.0\), the system moves from the \(XY\) phase to the QP phase and then to a disordered paramagnetic state. During subsequent cooling to very low temperatures \(T=0.0001\), however, only the QP structure of the domain boundaries is restored; i.e., a temperature hysteresis is observed in the structure of the boundaries.

Figure 1: Averaged distributions across the domain boundary of the local order parameters of the AFM, \(XY\), and QP phases, marked by solid, dash-and-dot, and dashed lines, respectively, on the two sublattices \(A\) and \(B\) (the top and bottom parts of the figure, respectively) at different values of parameter \(D\): (a) \(-5.0\), (b) 1.0, (c) 1.1, and (d) 1.2. The values along the horizontal axis are presented in terms of the lattice constant.

## Conclusions We studied the effect single-ion anisotropy parameter \(D\) has on the structure of the domain boundaries of the antiferromagnetic phase. Using numerical Monte Carlo modeling on large square lattices with rapid annealing, we observed the formation of a stripe domain structure, in whose antiphase domain boundaries a filamentary \(XY\) phase formed stably over a wide interval of \(D\) variations, up to positive \(D\sim 1\). Upon further growth of local correlations, however, the \(XY\) phase was broken, and a filamentary QP phase formed in the boundaries separating the domains with antiferromagnetic ordering. Our modeling of temperature effects indicated there was a temperature hysteresis in the structure of the boundaries. ## Funding This work was supported by Program 211 of the Government of the Russian Federation, project no. 02.A03.21.0006; and by the RF Ministry of Science and Higher Education, project nos. 2277 and 5719.
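For readers who want a feel for the kind of simulation described above, here is a deliberately simplified sketch (added here; it is not the authors' code). The authors simulate a quantum \(S=1\) pseudo-spin system with transverse terms on a GPU in the two-sublattice approximation; the toy below keeps only the diagonal part of Hamiltonian (1), i.e., the classical Blume-Capel model with \(S_{iz}\in\{-1,0,+1\}\), and runs a standard Metropolis update. It still shows the essential competition: increasing \(D\) converts the AFM (staggered) order into a dense \(S_{z}=0\), QP-like state, with the classical crossover near \(D\approx 2J\) at low temperature. Lattice size, temperature, and sweep counts are arbitrary demonstration values.

```python
import numpy as np

rng = np.random.default_rng(1)

def sweep(S, D, J, T):
    """One Metropolis sweep of the classical Blume-Capel model
    H = D * sum_i S_i^2 + J * sum_<ij> S_i S_j with S_i in {-1, 0, +1}
    and J > 0 (antiferromagnetic), on a periodic square lattice."""
    L = S.shape[0]
    for _ in range(S.size):
        i, j = rng.integers(L, size=2)
        s_new = int(rng.integers(-1, 2))          # propose -1, 0 or +1
        nn = (S[(i + 1) % L, j] + S[(i - 1) % L, j]
              + S[i, (j + 1) % L] + S[i, (j - 1) % L])
        dE = D * (s_new**2 - S[i, j]**2) + J * nn * (s_new - S[i, j])
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            S[i, j] = s_new

L, J, T = 32, 0.75, 0.2
stagger = (-1.0) ** np.add.outer(np.arange(L), np.arange(L))
for D in (-5.0, 0.5, 1.0, 1.5, 2.5):
    S = rng.integers(-1, 2, size=(L, L))
    for _ in range(500):
        sweep(S, D, J, T)
    m_stag = abs(np.mean(stagger * S))    # AFM (staggered) order parameter
    f0 = np.mean(S == 0)                  # fraction of S_z = 0, QP-like sites
    print(f"D = {D:5.1f}:  |m_stag| = {m_stag:.2f}   f(S_z=0) = {f0:.2f}")
```

On such short runs the staggered magnetization is typically reduced by residual domain walls, incidentally the very objects studied in the paper; quantitative work requires careful annealing schedules, as the authors emphasize.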
2304.13889
Polarization control of RABBITT in noble gas atoms
The mutual angle formed by the non-collinear polarization axes of two laser pulses is used to control two-photon XUV+IR ionization of noble gas atoms in the process of reconstruction of attosecond bursts by beating of two-photon transitions (RABBITT). The magnitude and the phase of this beating can be controlled very efficiently by the mutual polarization angle. The mechanism of this control can be understood within the lowest order perturbation theory and the soft photon approximation. We offer a very sensitive test on the polarization control of the angular dependent RABBITT process which validates our numerical simulations. We apply this test to the recent theoretical and experimental results of polarization controlled RABBITT on hydrogen and helium by Boll et al., Phys. Rev. A 107, 043113 (2023) and heavier noble gases by Jiang et al., Nature Comms. 13, 5072 (2022).
Anatoli S. Kheifets, Zhongtao Xu
2023-04-27T00:27:35Z
http://arxiv.org/abs/2304.13889v1
# Polarization control of RABBITT in noble gas atoms ###### Abstract The mutual angle formed by the non-collinear polarization axes of two laser pulses is used to control two-photon XUV+IR ionization of noble gas atoms in the process of reconstruction of attosecond bursts by beating of two-photon transitions (RABBITT). The magnitude and the phase of this beating can be controlled very efficiently by the mutual polarization angle. The mechanism of this control can be understood within the lowest order perturbation theory and the soft photon approximation. We offer a very sensitive test on the polarization control of the angle-dependent RABBITT process which validates our numerical simulations. We apply this test to the recent theoretical and experimental results of polarization controlled RABBITT on hydrogen and helium by Boll _et al._, Phys. Rev. A 107, 043113 (2023) and heavier noble gases by Jiang _et al._, Nature Comms. 13, 5072 (2022). ## 1 Introduction Two-color two-photon extreme ultraviolet and infrared (XUV+IR) photoionization has been applied recently for studying ultrafast electron dynamics on the attosecond time scale. Reconstruction of attosecond bursts by beating of two-photon transitions (RABBITT) (Paul _et al._ 2001, Mairesse _et al._ 2003) is one practical realization of this technique. In RABBITT, two collinearly polarized XUV and IR laser pulses with a variable delay are used to ionize the target atom and to steer the emitted photoelectrons. The two-photon ionization yield oscillates with twice the IR photon frequency as the XUV/IR pulse delay varies. The phase of this oscillation encodes the timing of the XUV ionization (Veniard _et al._ 1996, Dahlstrom _et al._ 2013). Both the phase and magnitude of the RABBITT oscillation depend sensitively on the photoelectron escape angle relative to the common polarization axis of the XUV and IR pulses (Heuser _et al._ 2016, Ivanov & Kheifets 2017, Bray _et al._ 2018). An additional control of two-color photoionization can be gained by relaxing the IR polarization direction and allowing its rotation relative to the XUV polarization axis (O'Keeffe _et al._ 2004, Meyer _et al._ 2008, Meyer _et al._ 2010, Leitner _et al._ 2015, Boll _et al._ 2020). Recently, such polarization control was implemented in RABBITT. Jiang _et al._ (2022) demonstrated the so-called "atomic partial wave meter", where non-collinear partial waves with magnetic projections \(M\neq 0\) increase their presence gradually as the mutual polarization axis angle grows. Boll _et al._ (2023) demonstrated the appearance of an additional set of angular nodes of the RABBITT amplitude in \(s\)-electron targets (H and He).
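To make the central observable concrete: in a RABBITT scan, a sideband yield recorded versus the XUV/IR delay \(\tau\) oscillates as \(S(\tau)=A+B\cos(2\omega\tau-\phi)\), and the "magnitude and the phase of this beating" referred to in the abstract are just \(B\) and \(\phi\). The sketch below (added here, not from the paper) extracts both from a synthetic delay scan by linear least squares; the IR frequency, delay grid, and noise level are invented demonstration values.

```python
import numpy as np

rng = np.random.default_rng(7)

omega = 0.057                      # IR photon energy (~1.55 eV) in atomic units
tau = np.linspace(0.0, 330.0, 64)  # XUV/IR delays in atomic units (~8 fs span)

# Synthetic sideband yield: S(tau) = A + B*cos(2*omega*tau - phi) + noise.
A0, B0, phi0 = 1.0, 0.35, 0.8
y = A0 + B0 * np.cos(2 * omega * tau - phi0) + 0.02 * rng.standard_normal(tau.size)

# B*cos(2wt - phi) = c*cos(2wt) + s*sin(2wt) with c = B*cos(phi), s = B*sin(phi),
# so a linear least-squares fit on the basis [1, cos, sin] recovers B and phi.
M = np.column_stack([np.ones_like(tau),
                     np.cos(2 * omega * tau),
                     np.sin(2 * omega * tau)])
(A, c, s), *_ = np.linalg.lstsq(M, y, rcond=None)
B, phi = np.hypot(c, s), np.arctan2(s, c)
print(f"recovered B = {B:.3f} (true {B0}), phi = {phi:.3f} rad (true {phi0})")
```

In an angle-resolved, polarization-controlled measurement one would repeat such a fit for every photoelectron emission angle and every mutual polarization angle, building up exactly the angular maps of magnitude and phase discussed above.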
2306.14747
Shear viscosity expression for a graphene system in relaxation time approximation
We have gone through the detailed microscopic calculation of the shear viscosity of a 2-dimensional graphene system in the relaxation time approximation-based kinetic theory framework. After getting its final expressions, we compared it with the shear viscosity expressions of other possible 2-dimensional as well as 3-dimensional nonrelativistic and ultra-relativistic fluid systems. The aim of the comparison is to reveal how their different one-body dispersion relations affect their many-body fluid properties like shear viscosity and the viscosity to entropy density ratio. It also aims to reveal the 3-dimension to 2-dimension transformation of their mathematical structures. We have numerically explored the differences in their order of magnitude and dependence on the thermodynamical parameters, temperature and chemical potential. Marking two thermodynamical domains, Dirac fluid and Fermi liquid, for a 2-dimensional graphene system, we have noticed that the shear viscosity and entropy density, as well as their ratio, decrease toward saturated values when one goes from the Fermi liquid to the Dirac fluid domain. When one shifts from the milli-electron-volt scales of temperature and chemical potential of condensed matter physics to the mega-electron-volt scales of high energy physics, the same results may be expected for the hot quark matter case, where the transition from the neutron star to early universe domains may be considered as a Fermi liquid to Dirac fluid transition.
Cho Win Aung, Thandar Zaw Win, Gaurav Khandal, Sabyasachi Ghosh
2023-06-26T15:03:34Z
http://arxiv.org/abs/2306.14747v3
# Shear viscosity expression for a graphene system in relaxation time approximation ###### Abstract We have gone through the detailed microscopic calculation of the shear viscosity of a 2-dimensional graphene system in the relaxation time approximation-based kinetic theory framework. After getting its final expressions, we compared it with the shear viscosity expressions of other possible 2-dimensional as well as 3-dimensional non-relativistic and ultra-relativistic fluid systems. The aim of the comparison is to reveal how their different one-body dispersion relations affect their many-body fluid properties like shear viscosity and the viscosity to entropy density ratio. It also aims to reveal the 3-dimension to 2-dimension transformation of their mathematical structures. We have numerically explored the differences in their order of magnitude and dependence on the thermodynamical parameters, temperature and chemical potential. Marking two thermodynamical domains, Dirac fluid and Fermi liquid, for a 2-dimensional graphene system, we have noticed that the shear viscosity and entropy density, as well as their ratio, decrease towards saturated values when one goes from the Fermi liquid to the Dirac fluid domain. When one shifts from the milli-electron-volt scales of temperature and chemical potential of condensed matter physics to the mega-electron-volt scales of high energy physics, the same results may be expected for the hot quark matter case, where the transition from the neutron star to early universe domains may be considered as a Fermi liquid to Dirac fluid transition. ## I Introduction It is known that the mean free path of charge carriers in a metal is generally temperature dependent. Scattering between electrons and lattice imperfections (or "disorder") normally dominates at low temperatures, while electron-phonon scattering dominates at high temperatures. Besides these two scattering mechanisms, another possibility is electron-electron scattering, which is generally less effective in many conventional metals. However, the opposite situation is possible in some specific systems under specific conditions, where one can apply electron hydrodynamic (eHD) theory. For a long time, condensed matter physicists were not aware of such an opposite phase in materials and therefore paid little attention to the possibility of hydrodynamic behavior of electrons. After the experimental observations of eHD in Refs. [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18], the situation has changed drastically in recent years. See Refs. [19; 20; 21] for recent reviews. Graphene [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14] is identified as the best-known material where electron hydrodynamics can be observed. Apart from these recently discovered hydrodynamic properties of its electrons, graphene was already quite famous for the massless nature of its carriers, concluded from the proportional relation between their energy and momentum. Due to this proportional relation between energy and momentum, electron motion in graphene is not Galilean-invariant. On the other hand, a relativistic description of the electrons cannot be expected either, because their velocity (\(v_{g}\approx 10^{6}\) m/s) is not very close to the speed of light (\(c\approx 3\times 10^{8}\) m/s). Hence, we cannot claim the Lorentz-invariant property of electron motion.
This opens up an "unconventional" hydrodynamics [20], as neither non-relativistic hydrodynamics (NRHD) nor relativistic hydrodynamics (RHD) is applicable. We may call this "unconventional" hydrodynamics graphene hydrodynamics (GHD), noting that the graphene (G) case has a unique dispersion or energy-momentum relation, different from the non-relativistic (NR) and relativistic (R) or ultra-relativistic (UR) cases. Now, whenever fluid dynamics or hydrodynamics comes into the picture, a dissipation coefficient like the shear viscosity of that fluid becomes a very important quantity, one that does not appear at all in most metals or other condensed matter systems. The present work is aimed at the microscopic calculation of the shear viscosity of this electron fluid in a graphene system, which may be called, in short, graphene fluid (GF). When one microscopically calculates the expression of the shear viscosity of GF, it will be different from its standard expression for a non-relativistic fluid (NRF) as well as for a relativistic fluid (RF) or ultra-relativistic fluid (URF). So far, to the best of our knowledge, an experimental measurement of this shear viscosity coefficient for GF is missing, although the experimental community [1; 9] has observed the Poiseuille flow pattern of electrons in graphene, which indirectly reflects the existence of a non-zero viscosity. From the theoretical side, we find only Refs. [22; 23], where microscopic expressions of the shear viscosity have been addressed. In this context, one can find a long list of Refs. [24; 25; 26; 27; 28; 29; 30; 31; 32; 33] (and references therein) for microscopic estimations of the shear viscosity for relativistic quark and hadronic matter, expected in high energy heavy ion collision experiments. Broadly, two classes of frameworks, (1) the kinetic theory approach with relaxation time approximation (RTA) [24; 25; 26; 27; 28; 29; 30] and (2) the Kubo framework [31; 32; 33], are adopted by the heavy ion physics community. Both frameworks have a similar structure in their final expressions for the shear viscosity coefficient, with two main components. One carries the interaction information, called the relaxation time, and the remaining part may be called the thermodynamic phase-space of the shear viscosity coefficient, which is a function of temperature and chemical potential. If we analyze the shear viscosity expression of graphene from Ref. [22], then we can also identify these two components. The present work zooms in on this structure via a systematic calculation of the shear viscosity of GF in the RTA method, compared with the corresponding structures for NRF and URF. Here, one of our aims is to compare the thermodynamic phase-space component of the shear viscosity coefficient for these three cases: G, NR, and UR. After knowing the lower bound conjecture of shear viscosity to entropy density (\(\eta/s\)) as \(\hbar/(4\pi k_{B})\) or \(1/(4\pi)\) (in natural units) [34], scientific communities are curious to know which strongly coupled systems are close to that bound. Experimentally, RF, like the quark and hadronic matter produced in high energy heavy ion collision experiments, and NRF, like cold atom systems [35], are identified as such strongly coupled systems. On the other hand, GF may also belong to that category according to the theoretical prediction of Ref. [22], which is considered as a reference point for tuning our results.
So, the present article does not intend to add any new content on strongly coupled properties; rather, its main goal is to find the differences among GF, NRF and URF in terms of the expressions and estimations of the shear viscosity. We believe that this was missing in the literature and is very important to address. The article is organized as follows. In the next section, Sec. (II), the RTA calculations of the shear viscosity \(\eta\) and entropy density \(s\) of GF for the 2D case are addressed in detail, mentioning the other cases like 3D-G, 3D-NR, 3D-UR, 2D-NR and 2D-UR. In Sec. (III), the comparative results of \(\eta\), \(s\) and \(\eta/s\) for the different cases are discussed. At the end, our findings are summarised in Sec. (IV) with some conclusive bullet points. ## II Formalism Let us start our formalism from the energy-momentum tensor (\(T^{\mu\nu}\)), as practised for an RF like quark and hadronic matter. Here, we will go for the GF calculation, so the reader should be careful at some particular steps, where it differs from the RF case. Showing these differences is one of the core agendas of the present article, although the reader can find similarities between most of the steps of the GHD of GF and the RHD of RF. The \(T^{\mu\nu}\) has two parts: the ideal part \(T_{0}^{\mu\nu}\), related to the knowledge of thermodynamics, and the dissipative part \(T_{D}^{\mu\nu}\), related to the different dissipation processes. So, \[T^{\mu\nu} = T_{0}^{\mu\nu}+T_{D}^{\mu\nu}. \tag{1}\] In this dynamic picture of the fluid, the ideal energy-momentum tensor and electron number flow can be expressed in macroscopic form as \[T_{0}^{\mu\nu} = \epsilon\frac{u^{\mu}u^{\nu}}{v_{g}^{2}}-P\left(g^{\mu\nu}-\frac{u^{\mu}u^{\nu}}{v_{g}^{2}}\right)\,\] \[N_{0}^{\mu} = n\frac{u^{\mu}}{v_{g}}\, \tag{2}\] in terms of the building blocks: energy density \(\epsilon\), pressure \(P\), number density \(n\), fluid (element) velocity \(u^{\mu}\) and metric tensor \(g^{\mu\nu}\). Here, the four-velocity \(u^{\mu}=\gamma_{g}(v_{g},\vec{u})\) for GHD is designed by following the four-velocity structure \(u^{\mu}=\gamma(c,\vec{u})\) of RHD, as done in Ref. [20]. One can notice that the speed of light \(c\) in RHD is replaced by the graphene Fermi velocity \(v_{g}\) in GHD. So, the Lorentz factor \(\gamma=1/\sqrt{1-u^{2}/c^{2}}\) of RHD is also converted into \(\gamma_{g}=1/\sqrt{1-u^{2}/v_{g}^{2}}\) in GHD. In the static limit (\(\vec{u}\to 0\)), the four-velocity \(u^{\mu}=\gamma_{g}(v_{g},\vec{u})\to u^{\mu}=\gamma_{g}(v_{g},0)\) and \(\gamma_{g}=1/\sqrt{1-u^{2}/v_{g}^{2}}\to 1\). So, Eq. (2) provides a static electron number flow \(N_{0}^{\mu}\equiv n\), and a static energy-momentum tensor, \[T_{0}^{\mu\nu}\equiv\begin{pmatrix}\epsilon&0&0&0\\ 0&P&0&0\\ 0&0&P&0\\ 0&0&0&P\end{pmatrix}\, \tag{3}\] which reflects the standard static fluid aspect like Pascal's law. The macroscopic quantities \(T_{0}^{\mu\nu}\) and \(N_{0}^{\mu}\) can be expressed in terms of the microscopic quantities, the four-momentum (\(p^{\mu}\)) and four-velocity (\(v^{\mu}\)) of the electrons, as \[T_{0}^{\mu\nu}=N_{s}\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}p^{\mu}v^{\nu}f_{0}\, \tag{4}\] and \[N_{0}^{\mu}=N_{s}\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}v^{\mu}f_{0}\, \tag{5}\] where \(N_{s}=2\) is the spin degeneracy factor of the electron and \(f_{0}\) is its Fermi-Dirac (FD) distribution function \(f_{0}=1/\{\exp\left(\beta(E-\mu)\right)+1\}\). Here, \(\beta=1/(k_{B}T)\) and \(\mu\) are the thermodynamic parameter and the chemical potential of the system, respectively.
From these microscopic expressions of the energy-momentum tensor and electron current, given in Eqs. (4) and (5), we can write the energy density \(\epsilon\), pressure \(P\), and number density \(n\) for the 2D graphene (G) case, which is addressed briefly in the next subsection. We follow natural units \(\hbar=c=k_{B}=1\) during the calculation.

### Entropy density in two-dimensional Graphene

For graphene, the dispersion relation is given by \[E=pv_{g}. \tag{6}\] The total number of fermions at any value of temperature is given by \[N=\int_{0}^{\infty}D\left(E\right)dEf_{0}\, \tag{7}\] where \(D\left(E\right)dE\) is the number of energy states in the energy range \(E\) to \(E+dE\). After plugging the value of \(D\left(E\right)dE\) (see Appendix A) and \(f_{0}\) into Eq. (7), the total number of electrons in graphene is \[N=N_{s}\frac{2\pi a}{\left(2\pi\right)^{2}v_{g}^{2}}\int_{0}^{\infty}\frac{E} {A^{-1}e^{\beta E}+1}dE\,\] where \(A=\exp\left(\beta\mu\right)\) is the fugacity of the system. After converting this integral into the Fermi integral function (see Appendix B), we get the expression of the number density as \[n_{g}^{2D}=\frac{N}{a}=\frac{N_{s}}{2\pi v_{g}^{2}}f_{2}\left(A\right)T^{2}. \tag{8}\] Now, from Eq. (4), the energy density for the (2D) graphene system is \[\epsilon_{g}^{2D}=T_{0}^{00}=N_{s}\int\frac{d^{2}p}{\left(2\pi\right)^{2}} \left(E\right)f_{0}. \tag{9}\] After using the graphene dispersion relation and plugging in the value of \(f_{0}\), we get \[\epsilon_{g}^{2D}=\frac{1}{\pi v_{g}^{2}}\int_{0}^{\infty}\frac{E^{2}}{A^{-1}e ^{\beta E}+1}dE \tag{10}\] and, after rewriting this integral in terms of the Fermi integral function, the final expression of the energy density is given by \[\epsilon_{g}^{2D}=\frac{N_{s}}{\pi v_{g}^{2}}f_{3}\left(A\right)T^{3}. \tag{11}\] Now, again from Eq. (4), the pressure can be expressed as \[P_{g}^{2D}=T_{0}^{11}=N_{s}\int\frac{d^{2}p}{\left(2\pi\right)^{2}}\left(\frac {E}{2}\right)f_{0}\, \tag{12}\] since, after angular averaging in 2D, \(p_{x}v_{x}\approx\frac{|\vec{p}|}{\sqrt{2}}\frac{|\vec{v}|}{\sqrt{2}}=\frac{E}{2}\). After solving this expression in the same way as the energy density, we get \[P_{g}^{2D}=\frac{N_{s}}{2\pi v_{g}^{2}}f_{3}\left(A\right)T^{3}. \tag{13}\] In terms of the number density, energy density, and pressure, we can write the entropy density from the Euler thermodynamic relation: \[s=\frac{S}{V}=\frac{\epsilon+P-\mu n}{T}. \tag{14}\] After substituting the values of the energy density (\(\epsilon_{g}^{2D}\)), pressure (\(P_{g}^{2D}\)), and number density (\(n_{g}^{2D}\)) into Eq. (14), we get \[s_{g}^{2D}=\frac{N_{s}}{2\pi v_{g}^{2}}T^{2}\Big{[}3f_{3}\left(A\right)-\frac{ \mu}{T}f_{2}\left(A\right)\Big{]}. \tag{15}\]

### Shear Viscosity in two-dimensional Graphene

Next, let us come to the dissipative part \(T_{D}^{\mu\nu}\), where only the shear stress tensor \(\pi^{\mu\nu}\) will be considered for calculating the shear viscosity coefficient (\(\eta\)). The macroscopic relation \(\pi_{\mu\nu}=\eta\mathcal{U}_{\mu\nu}\) defines the shear viscosity \(\eta\) as basically a proportionality constant between the shear stress \(\pi^{\mu\nu}\) and the velocity gradient \(\mathcal{U}_{\mu\nu}=\frac{1}{2}(\partial_{\mu}u_{\nu}+\partial_{\nu}u_{\mu})\). Usually, a greek index like \(\mu\equiv(0,i)\) takes the value \(0\) for the temporal component and \(i=1,2,3\) for the spatial components of a 3D system, but here, for the 2D system, we will consider \(\mu\equiv(0,i=1,2)\), because the z-component \(i=3\) will not be considered.
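Before turning to the microscopic evaluation of the dissipative part, the ideal-part results above, Eqs. (8), (11), (13), and (15), can be checked numerically. Below is a minimal sketch (an illustration, not part of the original work) assuming Python with the mpmath library; it uses the identity \(f_{\nu}(A)=-\mathrm{Li}_{\nu}(-A)\), which follows from the definition of the Fermi integral in Appendix B, and cross-checks Eq. (15) against the Euler relation, Eq. (14). The parameter values are illustrative.

```python
# Numerical sketch of the 2D graphene thermodynamics, Eqs. (8), (11), (13)-(15).
# Assumptions (not from the paper): natural units (hbar = c = k_B = 1), with T
# and mu in eV and v_g dimensionless; the mpmath library provides the polylog.
import mpmath as mp

def fermi_integral(nu, A):
    """f_nu(A) = (1/Gamma(nu)) int_0^inf x^(nu-1)/(A^-1 e^x + 1) dx = -Li_nu(-A)."""
    return -mp.polylog(nu, -A)

def graphene_2d(T, mu, v_g, N_s=2):
    A = mp.exp(mu / T)  # fugacity
    n = N_s / (2 * mp.pi * v_g**2) * fermi_integral(2, A) * T**2    # Eq. (8)
    eps = N_s / (mp.pi * v_g**2) * fermi_integral(3, A) * T**3      # Eq. (11)
    P = N_s / (2 * mp.pi * v_g**2) * fermi_integral(3, A) * T**3    # Eq. (13)
    s = (eps + P - mu * n) / T                                      # Eq. (14)
    # Eq. (15), written directly, as a cross-check of the Euler relation:
    s_direct = N_s / (2 * mp.pi * v_g**2) * T**2 * (
        3 * fermi_integral(3, A) - (mu / T) * fermi_integral(2, A))
    assert mp.almosteq(s, s_direct)
    return n, eps, P, s

# Example: T = 20 meV, mu = 0.1 eV, v_g = 0.006 (values of this order appear later)
print(graphene_2d(T=mp.mpf("0.020"), mu=mp.mpf("0.1"), v_g=mp.mpf("0.006")))
```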
The microscopic theory describes the shear stress tensor in terms of the particle velocity \(v\) and momentum \(p\) as \[\pi_{\mu\nu}=N_{s}\int\frac{d^{2}\vec{p}}{(2\pi)^{2}}p_{\mu}v_{\nu}\delta f_{ \eta}\, \tag{16}\] where we assume that the equilibrium distribution function \(f_{0}\) acquires a small deviation \(\delta f\), which can be considered as a first-order Taylor-series deviation from the equilibrium distribution function, proportional to \(\partial f_{0}/\partial E\): \[\delta f=\phi_{\eta}\frac{\partial f_{0}}{\partial E}=\mathcal{A}^{\mu\nu} \mathcal{U}_{\mu\nu}\frac{\partial f_{0}}{\partial E}\. \tag{17}\] Considering the relation \(v_{\nu}=(E/p^{2})p_{\nu}\), the macroscopic relation \(\pi_{\mu\nu}=\eta\mathcal{U}_{\mu\nu}\) and the microscopic Eq. (16) can be connected as \[\pi_{\mu\nu}=\eta\ \mathcal{U}_{\mu\nu}=N_{s}\int\frac{d^{2}\vec{p}}{(2\pi)^{2 }}\left(\frac{E}{p^{2}}\right)p_{\mu}p_{\nu}\mathcal{A}^{\alpha\beta}\mathcal{ U}_{\alpha\beta}\frac{\partial f_{0}}{\partial E}. \tag{18}\] The four-momentum of an electron can be defined as \(p^{\mu}=(E/v_{g},\vec{p})\) in this unconventional notation. Considering the energy as the static limit of \(p^{\nu}u_{\nu}\), we can write the FD distribution as \[f_{0}=\frac{1}{\exp\Big{(}\frac{p^{\nu}u_{\nu}-\mu(x)}{T(x)}\Big{)}+1}. \tag{19}\] Here, we have to adopt the local thermalization concept, where the thermodynamical quantities \(T(x)\) and \(\mu(x)\), as well as the fluid velocity \(u^{\mu}(x)\), are assumed to be functions of \(x\equiv x^{\mu}=(x^{0},x^{i})\). To find the unknown coefficient \(\mathcal{A}^{\alpha\beta}\), we will use the Boltzmann transport equation (BTE) \[\frac{\partial f}{\partial t}+v^{\mu}\frac{\partial f}{\partial x^{\mu}}+F^{ \mu}\frac{\partial f}{\partial p^{\mu}}=\left(\frac{\partial f}{\partial t} \right)_{Col}\, \tag{20}\] where \(\left(\frac{\partial f}{\partial t}\right)_{Col}\) is the collision term, which drives an out-of-equilibrium system back toward equilibrium. \(F^{\mu}\) represents all external forces, and \(v^{\mu}\) is the velocity of the fluid particles. Using the velocity expression in terms of \(E\) and \(p\) for graphene, \(v^{\mu}=(\frac{E}{p^{2}})p^{\mu}\), we get \[\left(\frac{E}{p^{2}}\right)p^{\mu}\partial_{\mu}f=\left(\frac{\partial f}{ \partial t}\right)_{Col}\, \tag{21}\] where we ignore \(\frac{\partial f}{\partial t}\) and \(F^{\mu}\frac{\partial f}{\partial p^{\mu}}\), as they do not contribute to the shear dissipation. Using the relaxation time approximation (RTA) method, the collision term can be written as \[\left(\frac{\partial f}{\partial t}\right)_{Col}=-\frac{\delta f}{\tau_{c}}\, \tag{22}\] where \(\tau_{c}\) is the relaxation time. Putting \(f\approx f_{0}\) on the left-hand side (lhs) of the BTE, \[\left(\frac{E}{p^{2}}\right)p^{\mu}\partial_{\mu}f_{0}=-\frac{\delta f}{\tau_ {c}}. \tag{23}\] Using Eq. (19), the lhs of Eq. (23) can be expanded as \[\left(\frac{E}{p^{2}}\right)p^{\mu}\partial_{\mu}f_{0} = -f_{0}(1-f_{0})\Big{[}\left(\frac{E}{p^{2}}\right)\frac{p^{\mu}p^ {\nu}}{T}\partial_{\mu}u_{\nu}(x)\Big{]}\, \tag{24}\] \[= -f_{0}(1-f_{0})\left(\frac{E}{p^{2}}\right)\frac{p^{\mu}p^{\nu}}{ 2T}(\partial_{\mu}u_{\nu}+\partial_{\nu}u_{\mu})\,\] \[= -f_{0}(1-f_{0})\left(\frac{E}{p^{2}}\right)\frac{p^{\mu}p^{\nu}}{ T}\mathcal{U}_{\mu\nu}\,\] and the right-hand side (rhs) of Eq. (23) can be written as \[-\frac{\delta f}{\tau_{c}} = \frac{f_{0}(1-f_{0})}{T}\frac{1}{\tau_{c}}\mathcal{A}^{\mu\nu} \mathcal{U}_{\mu\nu}. \tag{25}\] So, equating the lhs and rhs of Eq. (23), we get \(\mathcal{A}^{\mu\nu}=-\frac{E}{p^{2}}p^{\mu}p^{\nu}\tau_{c}\).
Restricting the temporal+spatial indices to purely spatial ones, we can write the shear stress tensor as \[\pi_{ij} = \eta\mathcal{U}_{ij}\, \tag{26}\] \[= N_{s}\int\frac{d^{2}\vec{p}}{(2\pi)^{2}}\left(\frac{E}{p^{2}} \right)^{2}\tau_{c}(p_{i}p_{j}p_{k}p_{l})\mathcal{U}^{kl}\beta f_{0}(1-f_{0})\, \tag{27}\] \[= \frac{N_{s}}{8}\int\frac{d^{2}\vec{p}}{(2\pi)^{2}}E^{2}\tau_{c} \beta f_{0}(1-f_{0})\mathcal{U}_{ij}\, \tag{28}\] where we used \(<p_{i}p_{j}p_{k}p_{l}>=\frac{\vec{p}^{4}}{8}\big{(}\delta_{ij}\delta_{kl}+ \delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\big{)}\) (see Appendix C), along with the two identities \[\beta f_{0}\left(1-f_{0}\right)=-\frac{\partial f_{0}}{\partial E}\, \tag{29}\] and \[\left(-\frac{\partial f_{0}}{\partial E}\right)=\beta\frac{e^{\beta(E-\mu)}} {\big{(}e^{\beta(E-\mu)}+1\big{)}^{2}}=\frac{\partial}{\partial\mu}\left( \frac{1}{e^{\beta(E-\mu)}+1}\right). \tag{30}\] Finally, the expression for the shear viscosity is \[\eta=\frac{N_{s}}{8}\int\frac{d^{2}\vec{p}}{(2\pi)^{2}}E^{2}\tau_{c}\beta f_{ 0}(1-f_{0}). \tag{31}\] Using Eqs. (29) and (30) and converting the momentum terms into energy via the dispersion relation (6), Eq. (31) becomes \[\eta=\frac{N_{s}}{16\pi v_{g}^{2}}\tau_{c}\frac{\partial}{\partial\mu}\int_{0 }^{\infty}\frac{E^{3}}{A^{-1}e^{\beta E}+1}dE. \tag{32}\] After carrying out this integration using the identity of the Fermi integral function, the expression of the shear viscosity for 2D graphene (using subscript and superscript notation to distinguish the expressions for the different systems) is \[\eta_{g}^{2D}=\frac{3N_{s}}{8\pi v_{g}^{2}}\tau_{c}f_{3}\left(A\right)T^{3}. \tag{33}\] Now, taking the ratio of the shear viscosity (33) and the entropy density (15), we get \[\frac{\eta_{g}^{2D}}{s_{g}^{2D}}=\frac{3}{4}\tau_{c}\Bigg{[}3f_{3}\left(A \right)-\frac{\mu}{T}f_{2}\left(A\right)\Bigg{]}^{-1}f_{3}\left(A\right)T. \tag{34}\] Following a similar calculation, the expressions of the entropy density, shear viscosity, and their ratio for a two-dimensional non-relativistic electron fluid (i.e., \(E=p^{2}/(2m)\)) are given by \[s_{NR}^{2D} = \frac{N_{s}mT}{2\pi}\Bigg{[}2f_{2}\left(A\right)-\frac{\mu}{T}f_ {1}\left(A\right)\Bigg{]}\, \tag{35}\] \[\eta_{NR}^{2D} = \frac{N_{s}m}{8\pi}\tau_{c}f_{2}\left(A\right)T^{2}\, \tag{36}\] \[\frac{\eta_{NR}^{2D}}{s_{NR}^{2D}} = \frac{1}{4}\tau_{c}\Bigg{[}2f_{2}\left(A\right)-\frac{\mu}{T}f_{ 1}\left(A\right)\Bigg{]}^{-1}f_{2}\left(A\right)T. \tag{37}\] Most of the fluids or liquids (e.g., water) used in our daily life follow non-relativistic fluid dynamics, their constituent particles obeying the \(E=p^{2}/(2m)\) dispersion relation. However, for the purpose of comparison, we may assume a hypothetical 2D NR system showing fluid behavior, which may be difficult to find in the real world. By this comparison (detailed in the results section), our aim is to encourage the scientific community to use the expressions of the G-case, given in Eqs. (33) and (34), instead of those of the NR-case, given in Eqs. (36) and (37), when describing eHD in a graphene system. If we consider graphene as a 3-dimensional (3D) system following the dispersion relation \(E=pv_{g}\), then it may again be a hypothetical example, but a good one for comparison purposes.
Modifying our above calculation with the replacements \(\int d^{2}p\rightarrow\int d^{3}p\) and \(p_{x}v_{x}\approx\frac{pv_{g}}{3}=\frac{E}{3}\), we get the expressions of the entropy density, shear viscosity, and their ratio as \[s_{g}^{3D} =\frac{N_{s}T^{3}}{\pi^{2}v_{g}^{3}}\Bigg{[}4f_{4}\left(A\right) -\frac{\mu}{T}f_{3}\left(A\right)\Bigg{]}\, \tag{38}\] \[\eta_{g}^{3D} =\frac{4N_{s}}{5\pi^{2}v_{g}^{3}}\tau_{c}f_{4}\left(A\right)T^{4 }\, \tag{39}\] \[\frac{\eta_{g}^{3D}}{s_{g}^{3D}} =\frac{4}{5}\tau_{c}\Bigg{[}4f_{4}\left(A\right)-\frac{\mu}{T}f_{ 3}\left(A\right)\Bigg{]}^{-1}f_{4}\left(A\right)T. \tag{40}\] Now consider a 3-dimensional non-relativistic (3D NR) system of fermions. Applying the same methodology to this system, we get all the expressions of the entropy density, shear viscosity, and the ratio of shear viscosity to entropy density, \[s_{NR}^{3D} =N_{s}\left(\frac{m}{2\pi}\right)^{\frac{3}{2}}T^{\frac{1}{2}} \Bigg{[}\frac{5}{2}f_{\frac{5}{2}}\left(A\right)-\frac{\mu}{T}f_{\frac{3}{2}} \left(A\right)\Bigg{]}\, \tag{41}\] \[\eta_{NR}^{3D} =\frac{N_{s}}{4}\left(\frac{m}{2\pi}\right)^{\frac{3}{2}}\tau_{c} f_{\frac{5}{2}}\left(A\right)T^{\frac{5}{2}}\, \tag{42}\] \[\frac{\eta_{NR}^{3D}}{s_{NR}^{3D}} =\frac{1}{4}\tau_{c}\Bigg{[}\frac{5}{2}f_{\frac{5}{2}}\left(A \right)-\frac{\mu}{T}f_{\frac{3}{2}}\left(A\right)\Bigg{]}^{-1}f_{\frac{5}{2} }\left(A\right)T. \tag{43}\] This 3D NR system, showing fluid behavior, applies to most of the fluids or liquids (e.g., water) used in our daily life. One could consider the above shear viscosity, entropy density, and their ratio for water molecules, which obey the NR dispersion relation \(E=p^{2}/(2m)\) with an effective mass \(m\), but that would not be a good example to compare with the corresponding expressions for eHD in the graphene case. So, we can again consider a hypothetical example: the 3D eHD NR case. We can also compare the above expressions for 2D and 3D eHD of the G and NR cases with those for the ultra-relativistic (UR) case. A good example is hot QGP, where RHD is applicable. According to the latest understanding [36], RHD is quite successful in describing QGP phenomenology. Again, to make our comparison on an equal footing, we will consider the hypothetical case of a 2D or 3D UR electron fluid. If the Fermi velocity of electrons in graphene, \(v_{g}\), is replaced by the factor 1 (as \(c=1\) in natural units), then all expressions of the G-case convert into the corresponding expressions of the UR case.

## III Results

After addressing the final expressions of \(\eta\), \(s\), and their ratio for the different systems, namely 2D and 3D NRF, GF, and URF, in the formalism section, here we discuss their numerical estimations through different graphs. Let us first come to the entropy density results. In the early universe scenario, a hot quark-gluon plasma (QGP) state around temperature \(T=400\) MeV or \(T=700\) MeV and zero quark chemical potential (\(\mu=0\)) is expected just a few microseconds after the big bang. Due to the very high temperature of the medium, the average momenta of the constituent particles become so large that we can ignore their mass terms, and the system can be considered a UR case. The UR case is well known from the photon gas or black body radiation example, where the internal energy density or intensity (they are related to each other) follows the \(T^{4}\) law, popularly known as the Stefan-Boltzmann (SB) law. QGP thermodynamics at high temperatures reaches this SB limit.
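Before examining the graphs, the \(\eta/s\) ratios of Eqs. (34), (37), (40), and (43) can be evaluated directly. Below is a minimal sketch (an illustration assuming Python with mpmath, reusing the polylog identity for the Fermi integral noted in Appendix B). Setting \(\tau_{c}T=1\), the printed numbers are \(\eta/s\) in units of \(\tau_{c}T\); note that \(v_{g}\) cancels in the ratio, so the G and UR cases coincide.

```python
# Sketch: eta/s of Eqs. (34), (37), (40), (43) as a function of x = mu/T.
# tau_c * T is set to 1, so the outputs are eta/s in units of tau_c * T.
import mpmath as mp

def f(nu, x):
    """Fermi integral f_nu(A) with A = exp(mu/T); real part guards the continuation."""
    return -mp.re(mp.polylog(nu, -mp.exp(x)))

ratios = {
    "2D G/UR, Eq. (34)": lambda x: mp.mpf(3) / 4 * f(3, x) / (3 * f(3, x) - x * f(2, x)),
    "2D NR,   Eq. (37)": lambda x: mp.mpf(1) / 4 * f(2, x) / (2 * f(2, x) - x * f(1, x)),
    "3D G/UR, Eq. (40)": lambda x: mp.mpf(4) / 5 * f(4, x) / (4 * f(4, x) - x * f(3, x)),
    "3D NR,   Eq. (43)": lambda x: mp.mpf(1) / 4 * f(2.5, x)
        / (2.5 * f(2.5, x) - x * f(1.5, x)),
}
for name, r in ratios.items():
    print(name, " mu/T=0:", mp.nstr(r(0), 4), " mu/T=10:", mp.nstr(r(10), 4))
# The mu/T = 0 (Dirac fluid) values recover the simple ratios 1/4, 1/8, 1/5,
# and 1/10 of tau_c*T quoted below in Eqs. (50)-(53).
```

The Fermi-liquid side (large \(\mu/T\)) gives larger values than the Dirac-fluid side, consistent with the trend discussed below.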
Eq. (38) can be converted to the UR case by replacing \(v_{g}=c=1\), and by putting \(\mu=0\) we get the SB-limit expression for the 3D case, \[s_{SB}=\frac{N_{s}T^{3}}{\pi^{2}}\Bigg{[}4\times\Big{\{}\frac{7}{8}\zeta_{4} \Big{\}}\Bigg{]}\, \tag{44}\] where the Fermi integral function converts into the Riemann zeta function, \(f_{4}=\frac{7}{8}\zeta_{4}\), for the \(\mu=0\) condition, following the general relation \(f_{n}=\Big{(}1-\frac{1}{2^{n-1}}\Big{)}\zeta_{n}\). For the QGP case, the quark degeneracy factor has to be put into \(N_{s}\), and the gluon contribution must be added separately. Since two-flavor quarks have a degeneracy factor of 24 and gluons have a degeneracy factor of 16, the massless QGP entropy density, or SB limit of QGP, will be \[s_{SB}^{QGP}=\frac{24T^{3}}{\pi^{2}}\Big{[}4\times\Big{\{}\frac{7}{8}\zeta_{4} \Big{\}}\Big{]}+\frac{16T^{3}}{\pi^{2}}\Big{[}4\times\zeta_{4}\Big{]}. \tag{45}\] When we plan to compare the graphene entropy density with this SB limit, we have to keep in mind that the temperature range of QGP (a few hundred mega electron volts (MeV), equivalent to \(10^{12}\) K) is much larger than the temperature range of the graphene system (1-23 milli electron volts (meV), equivalent to 15-300 K). Fig. (1) nicely illustrates these two domains. It is basically a \(T\) vs. \(\mu\) plot on a log scale, covering broad bands of the \(T\) and \(\mu\) ranges. We mark the condensed matter physics (CMP) domain, covering \(T\approx 1-23\) meV and \(\mu\approx 0-10\) eV. We know that the Fermi energy of metals remains within the range \(\mu=2-10\) eV, which is marked in yellow. Unlike in a metal, the Fermi energy of a graphene system can be changed via doping methods, and its \(\mu/T\ll 1\) and \(\mu/T\gg 1\) domains are called the Dirac fluid (DF) or Dirac liquid (DL) and the Fermi liquid (FL) domains, respectively, marked by arrows in Fig. (1). In analogy with the DF and FL domains for electrons, we may call the early-universe QGP the DF domain of quarks, and the quark matter expected in the core of a neutron star the FL domain of quarks. A rectangular domain within \(T=1-400\) MeV and \(\mu=0-1000\) MeV is marked as the high energy physics (HEP) domain for quarks. The reader can easily visualize the gap between the CMP and HEP domains. After realizing the scale gap in the \(T\)-\(\mu\) plane between the URF of quarks and the GF of electrons, one should understand that we must consider a hypothetical electron URF to make an equal-footing comparison. Within the temperature (\(T=0-0.023\) eV) and chemical potential (\(\mu=0-10\) eV) ranges, the entropy density of the URF, \[s_{UR}^{3D}=\frac{N_{s}T^{3}}{\pi^{2}}\Bigg{[}4f_{4}\left(A\right)-\frac{\mu} {T}f_{3}\left(A\right)\Bigg{]}\, \tag{46}\] has to be plotted with a normalization by \(s_{SB}\), given in Eq. (44). This normalized estimation is sketched by the blue dotted line in the left panel of Fig. (2), which shows that \(s_{UR}^{3D}\approx s_{SB}\) in the domain \(\mu/T\ll 1\), as expected.

Figure 1: Location of the condensed matter physics (CMP) domain and the high energy physics (HEP) domain in the \(T\)-\(\mu\) diagram.

Interestingly, we notice that the main \(\mu/T\) dependence of the entropy density appears beyond \(\mu/T=1\). The reader can see that the terms with the Fermi integral function are the main source of the \(\mu/T\) dependence. Next, using Eq. (38), the graphene entropy density for the Fermi velocity \(v_{g}=0.006\) is plotted (red solid line).
Ref. [37] provides knowledge of a broad range of Fermi velocities, \(v_{g}=1-3\times 10^{6}\) m/s or \(v_{g}=0.003-0.01\) (in natural units), in graphene systems. As the charge carrier density or \(\mu\) decreases, \(v_{g}\) increases and approaches the Dirac fluid (DF) or strongly coupled electron-electron domain. We have considered the in-between constant value \(v_{g}=0.006\). We can understand that the \(\mu/T\) dependences of the entropy density for URF and GF are the same, but \(GF\gg URF\) due to the \(1/v_{g}^{3}\approx 5\times 10^{6}\) factor. Next, we use Eq. (41) to plot the entropy density of the NRF (green dashed line) in the left panel of Fig. (2). The reader can understand that the different trend of the \(\mu/T\) dependence for the NRF comes from the term \(\left[\frac{5}{2}f_{\frac{5}{2}}\left(A\right)-\frac{\mu}{T}f_{\frac{3}{2}} \left(A\right)\right]\). A similar trend can be noticed for the 2D case, with the same ranking URF \(\ll\) GF \(\ll\) NRF. Only in the transition from 3D to 2D are their orders of magnitude shifted toward lower values.

Figure 2: The ratio of the entropy density in different domains to \(s_{SB}\) vs. \(\mu/T\) (a) in the 3D case and (b) in the 2D case.

Next, let us come to the shear viscosity results. Here also, we can expect an SB-limit-type simple expression for the UR case at \(\mu=0\): \[\eta_{SB}=\frac{4N_{s}}{5\pi^{2}}\tau_{c}\frac{7}{8}\zeta_{4}T^{4}\, \tag{47}\] from the general \(\eta(T,\mu)\) expression for the URF: \[\eta_{UR}^{3D}=\frac{4N_{s}}{5\pi^{2}}\tau_{c}f_{4}\left(A\right)T^{4}\, \tag{48}\] obtained by putting \(v_{g}=c=1\) in Eq. (39). For the massless QGP at \(\mu=0\), replacing the degeneracy factors of quarks and gluons in \(N_{s}\), we get \[\eta_{SB}^{QGP}=24\Big{[}\frac{4}{5\pi^{2}}\tau_{c}\times\Big{\{}\frac{7}{8} \zeta_{4}\Big{\}}T^{4}\Big{]}+16\Big{[}\frac{4}{5\pi^{2}}\tau_{c}\times\zeta_ {4}T^{4}\Big{]}. \tag{49}\] Again, this QGP is a realistic example of a URF, but for comparison we have to consider an electron URF. When we plan to compare the shear viscosity of URF, GF, and NRF, we should use Eqs. (48), (39), and (42), and for the SB limit we use Eq. (47). Similarly to the entropy density normalized by its SB limit in Fig. (2), we have plotted the shear viscosity normalized by its SB limit in Fig. (3), where the 3D and 2D estimations are plotted in the left and right panels, respectively. The shear viscosity expression carries two kinds of information. One is the relaxation time \(\tau_{c}\), and the other is the remaining thermodynamic phase-space part as a function of \(T\) and \(\mu\). During the normalization, the \(\tau_{c}\) information cancels, and we only see the thermodynamic phase-space part of the shear viscosity. Interestingly, it follows a trend similar to other thermodynamical quantities like the entropy density, showing two types of \(\mu/T\) dependence in the domains \(\mu/T\ll 1\) and \(\mu/T\gg 1\), which are commonly assigned to DF and FL. Now, let us come to the shear viscosity to entropy density ratio \(\eta/s\), which is a more important quantity than \(\eta\) alone to measure the fluidity of the system. In the DF domain, the extreme situation (mathematically) is \(\mu\to 0\). In this limit, \(\eta\) and \(s\) carry quite similar terms, so when we take their ratio, we get very simplified expressions: \[\frac{\eta}{s} = \frac{\tau_{c}T}{5}\text{ for 3D URF/GF }, \tag{50}\] \[= \frac{\tau_{c}T}{10}\text{ for 3D NRF }, \tag{51}\] \[= \frac{\tau_{c}T}{4}\text{ for 2D URF/GF }, \tag{52}\] \[= \frac{\tau_{c}T}{8}\text{ for 2D NRF }. \tag{53}\]
From the string theory based calculation [34], it was conjectured that \(\eta/s\) has a lower bound, well known as the KSS bound, which gives the inequality \(\frac{\eta}{s}\geq\frac{\hbar}{k_{B}}\frac{1}{4\pi}=\frac{1}{4\pi}\) (in natural units). Though classically one may expect \(\tau_{c}\to 0\Rightarrow\frac{\eta}{s}\to 0\), quantum mechanically the relaxation or scattering time \(\tau_{c}\), or the mean free path \(\lambda_{c}\approx v\tau_{c}\), cannot be lower than the de Broglie time or wavelength scale. This simple quantum mechanical concept also suggests a lower bound of \(\frac{\eta}{s}\), sometimes called a quantum lower bound. By imposing this bound, \(\frac{\eta}{s}=\frac{1}{4\pi}\), we can get rough expressions for the lower bound of \(\tau_{c}\) as \[\tau_{c} = \frac{5}{4\pi T}\text{ for 3D URF/GF }, \tag{54}\] \[= \frac{10}{4\pi T}\text{ for 3D NRF }, \tag{55}\] \[= \frac{4}{4\pi T}\text{ for 2D URF/GF }, \tag{56}\] \[= \frac{8}{4\pi T}\text{ for 2D NRF }. \tag{57}\] This KSS bound conjecture [34] has made the scientific community curious to find fluids whose \(\eta/s\) is close to this bound. In other words, if we write \(\eta/s=n/(4\pi)\), where \(n\geq 1\)[38], then fluids with \(n=1-5\) may be considered as those special fluids and may be called nearly perfect or close-to-perfect fluids. Empirically, QGP is the evidence of such a perfect fluid (\(n\approx 1-2\)) in the relativistic domain, while a close-to-perfect-fluid (\(n\approx 5\)) example for the NR case is the cold atom systems [35].

Figure 3: The ratio of the shear viscosity of the electron flow in different domains to \(\eta_{SB}\) vs. \(\mu/T\) (a) in the 3D case and (b) in the 2D case.

Figure 4: Shear viscosity to entropy density ratio (a) vs. \(T\) for \(\mu=0\) (undoped graphene) and (b) vs. \(\mu/T\).

According to Eqs. (54) and (55), we can expect gross values of the relaxation time for QGP and cold atom systems as \(\tau_{c}\approx\frac{5}{4\pi T}\)-\(\frac{10}{4\pi T}\) and \(\tau_{c}\approx\frac{50}{4\pi T}\), respectively. Similarly, according to the theoretical prediction of Ref. [22], GF may also belong to this close-to-perfect-fluid category. So far, to the best of our knowledge, no experimental measurement of an \(\eta/s\) vs. \(T\) plot is available, so the theoretical plot of \(\eta/s\) vs. \(T\) in Ref. [22] is considered as our reference to guess or tune the order of magnitude of \(\tau_{c}\approx\frac{n}{\pi T}\). From the left panel of Fig. (4), we can get the guidance that \(\tau_{c}\approx\frac{n}{\pi T}\) with \(n=3\)-\(5\) can cover the order of magnitude of \(\eta/s\) in the temperature range \(T=35\)-\(150\) K, predicted by Muller et al. [22]. Considering an average value \(\tau_{c}\approx\frac{4}{\pi T}\), we have plotted \(\eta/s\) of the 2D GF or URF (red solid line) and the 2D NRF (green dashed line) against the \(\mu/T\) axis in the right panel of Fig. (4). We notice that \(\eta/s\) in the DF domain becomes lower than in the FL domain, mainly because of the thermodynamical phase-space part of \(\eta/s\). In terms of the Fermi integral function, this part for GF can be identified as \(\left[3f_{3}\left(A\right)-\frac{\mu}{T}f_{2}\left(A\right)\right]^{-1}\!f_{ 3}\left(A\right)\) from Eq. (34). We have included the NRF case for reference, although a 2D NRF for electrons may only be possible in a hypothetical situation. So the present study indicates that a drop of the \(\eta/s\) values, saturating towards constant values, may be found during the transition from the FL to the DF domain in the graphene system.
We have, however, the limitation of having considered \(\tau_{c}\propto 1/T\), which may change in an actual microscopic calculation of \(\tau_{c}\); the trend of \(\eta/s\) may then also change. This demands more theoretical studies on these \(\eta/s\) estimations, as well as an explicit measurement of this quantity from the experimental side.

## IV Summary and conclusion

We can summarize our investigation in the following steps. First, we introduce a brief macroscopic description of the electron fluid in graphene; then we focus on its microscopic description. Our central quantity is the energy-momentum tensor, whose ideal part represents the energy density and pressure in the static-limit picture of fluid dynamics. Using those thermodynamical quantities, our destination from the ideal part of the energy-momentum tensor is the entropy density, which is used to normalize the shear viscosity. From the dissipative part of the energy-momentum tensor, the shear viscosity coefficients of the electron fluid are calculated based on the kinetic theory approach with the relaxation time approximation. Temperature and chemical potential dependent general expressions of the shear viscosity, the entropy density, and the shear viscosity to entropy density ratio have been calculated and plotted for the different cases of electron fluid, namely the non-relativistic, graphene, and ultra-relativistic cases. For completeness of the comparison, we considered both 3D and 2D systems. Analyzing the results of the different cases, we obtain a comparative understanding and conclusions, which are addressed briefly in bullet points:

* The \(\mu/T\) dependence of the shear viscosity \(\eta\) as well as the entropy density \(s\) for URF and GF is exactly similar, but a little different from that of the NRF.
* We notice a huge difference among URF, GF, and NRF in terms of the orders of magnitude of \(\eta\) and \(s\), with the ranking URF \(\ll\) GF \(\ll\) NRF.
* In the transition from 3D to 2D, the orders of magnitude of \(\eta\) and \(s\) shift towards lower values.
* When we go from the Fermi liquid (\(\mu/T\gg 1\)) to the Dirac liquid (\(\mu/T\ll 1\)) domain, the values of \(\eta\), \(s\), and the \(\eta/s\) ratio decrease towards saturated values.
* An interesting ranking for \(\eta/s\) emerges: URF = GF \(\geq\) NRF.

The present comparative study on the microscopic calculation of the shear viscosity may be considered as a good documentation of the master formulas for the different cases, from 3D URF, GF, and NRF to 2D URF, GF, and NRF. In the future, it may be useful for estimations in actual graphene systems, where one should go for a first-principles or model-dependent calculation of the relaxation time. Also, one should deal with an electron-hole plasma with the appropriate degeneracy factor in the Dirac fluid domain for an actual graphene system, but the present work sticks to the electron description only, in order to compare the estimations for the different dispersion relations. Our immediate future plan is to concentrate on this actual graphene phenomenology of the viscous aspects.

###### Acknowledgements.

This work was partly (CWA and TZW) supported by the Doctoral Fellowship in India (DIA) program of the Ministry of Education, Government of India. The authors thank the other members of the eHD club: Sesha P. Vempati, Ashutosh Dwibedi, Narayan Prasad, Bharat Kukkar, and Subhalaxmi Das Nayak.

## Appendix A Density of States

The density of states is nothing but the total number of energy states per unit energy interval.
If the total number of energy states in the energy range \(E\) to \(E+dE\) is \(D\left(E\right)dE\), then the density of states will be \[g\left(E\right)=\frac{D\left(E\right)dE}{dE}. \tag{A1}\] The number of energy states for the different cases can be expressed as follows. **Case 1.** For 3D graphene, \[D\left(E\right)dE=N_{s}\frac{4\pi V}{h^{3}v_{g}^{3}}E^{2}dE\, \tag{A2}\] **Case 2.** For the 3D non-relativistic case, \[D\left(E\right)dE=N_{s}2\pi V\left(\frac{2m}{h^{2}}\right)^{\frac{3}{2}}\sqrt{ E}dE\, \tag{A3}\] **Case 3.** For 2D graphene, \[D\left(E\right)dE=N_{s}\frac{2\pi S}{h^{2}v_{g}^{2}}E\,dE\, \tag{A4}\] **Case 4.** For the 2D non-relativistic case, \[D\left(E\right)dE=N_{s}\frac{2\pi S}{h^{2}}m\,dE. \tag{A5}\] In the above expressions, \(V\) and \(S\) represent the volume and area in position space, respectively.

## Appendix B Fermi-Dirac Integral

We have the integral form \[f_{\nu}(A)=\frac{1}{\Gamma(\nu)}\int_{0}^{\infty}\frac{x^{\nu-1}}{A^{-1}e^{x} +1}dx\, \tag{B1}\] where \(f_{\nu}(A)\) is known as the Fermi-Dirac integral and \(x=\beta E\). The corresponding integral over energy can be written in terms of \(x\) as \[\int_{0}^{\infty}\frac{E^{\nu-1}}{A^{-1}e^{\beta E}+1}dE= \frac{1}{\beta^{\nu}}\int_{0}^{\infty}\frac{x^{\nu-1}}{A^{-1}e^{x} +1}dx=\frac{1}{\beta^{\nu}}\Gamma(\nu)f_{\nu}(A). \tag{B2}\]

## Appendix C Average Angular Integral in 2D

We have the integral form \[\int p_{i}p_{j}p_{k}p_{l}\,d^{2}p=\int p\,dp\int p_{i}p_{j}p_{k}p_{l}\,d\theta\.\] Since \[\vec{p}=p\hat{n}\] where \[\hat{n}=\cos\theta\,\hat{i}+\sin\theta\,\hat{j}\,\] \[p_{i}=\vec{p}\cdot\hat{e}_{i}=p\left(\hat{n}\cdot\hat{e}_{i}\right)=pn_{i}\,\] the integral becomes \[\int p_{i}p_{j}p_{k}p_{l}\,d^{2}p=\int p\,dp\,p^{4}\int n_{i}n_{j}n_{k}n_{l}\,d\theta\.\] Now, we have to calculate \[\int n_{i}n_{j}n_{k}n_{l}\,d\theta=?\] **Case 1.** The above integral becomes \[\int n_{1}^{2}n_{2}^{2}\,d\theta =\int_{0}^{2\pi}\cos^{2}\theta\sin^{2}\theta\,d\theta\,\] \[=4\int_{0}^{\frac{\pi}{2}}\cos^{2}\theta\sin^{2}\theta\,d\theta\.\] Now, using the Beta function, we know that \[B\left(u,v\right) =2\int_{0}^{\frac{\pi}{2}}\left(\cos\theta\right)^{2u-1}\left( \sin\theta\right)^{2v-1}\,d\theta\, \tag{C1}\] \[\implies B\left(u,v\right) =\frac{\Gamma u\,\Gamma v}{\Gamma\left(u+v\right)}. \tag{C2}\] Applying this, we get \[\int n_{1}^{2}n_{2}^{2}\,d\theta=\frac{2\pi}{8}. \tag{C3}\] **Case 2.** \[\int n_{1}^{3}n_{2}\,d\theta =\int n_{1}n_{2}^{3}\,d\theta\,\] \[=\int_{0}^{2\pi}\cos^{3}\theta\sin\theta\,d\theta=0\.\] **Case 3.** \[\int n_{1}^{4}\,d\theta =\int n_{2}^{4}\,d\theta\,\] \[=\int_{0}^{2\pi}\sin^{4}\theta\,d\theta\,\] \[=2\,B\left(\frac{5}{2},\frac{1}{2}\right)=\frac{3\pi}{4}\.\] The above integral can thus be written as \[\int n_{1}^{4}\,d\theta=\int n_{2}^{4}\,d\theta=\frac{2\pi}{8}\times 3\.\] Collecting the cases, the angular integral can be expressed as \[\int n_{i}n_{j}n_{k}n_{l}\,d\theta=\frac{2\pi}{8}\left(\delta_{ij}\delta_{kl} +\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\, \tag{C4}\] so that \[\int p_{i}p_{j}p_{k}p_{l}\,d^{2}p=\int 2\pi\,p\,dp\,\frac{p^{4}}{8}\left(\delta_{ij}\delta_ {kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\.\] The averaged final expression is therefore \[<p_{i}p_{j}p_{k}p_{l}>=\frac{p^{4}}{8}\left(\delta_{ij}\delta_{kl}+\delta_{ik} \delta_{jl}+\delta_{il}\delta_{jk}\right). \tag{C5}\]
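The \(\delta\)-combination identity above is easy to verify numerically. Below is a quick sketch (an illustration assuming Python with numpy, not part of the original appendix) that checks all 16 index combinations by direct quadrature.

```python
# Numerical check of the 2D angular average of Appendix C:
# int_0^{2pi} n_i n_j n_k n_l dtheta = (2*pi/8)(d_ij d_kl + d_ik d_jl + d_il d_jk).
import itertools
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
dtheta = theta[1] - theta[0]
n = np.array([np.cos(theta), np.sin(theta)])   # components n_1, n_2 of the unit vector

def delta(a, b):
    return 1.0 if a == b else 0.0

for i, j, k, l in itertools.product(range(2), repeat=4):
    # left Riemann sum over one full period (very accurate for periodic integrands)
    numeric = np.sum((n[i] * n[j] * n[k] * n[l])[:-1]) * dtheta
    analytic = (2.0 * np.pi / 8.0) * (
        delta(i, j) * delta(k, l) + delta(i, k) * delta(j, l) + delta(i, l) * delta(j, k))
    assert abs(numeric - analytic) < 1e-9, (i, j, k, l)
print("Appendix C identity verified for all 16 index combinations.")
```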
2301.10586
Transition from wave turbulence to acousticlike shock-wave regime
We report on the experimental observation of a transition from a dispersive wave turbulence regime to a nondispersive regime involving shock waves on the surface of a fluid. We use a magnetic fluid in a canal subjected to an external horizontal magnetic field to tune the dispersivity of the system. For a low magnetic field, gravity-capillary wave turbulence is observed, whereas for a high enough field, random steep coherent structures arise which are found to be shock waves. These shock waves create singularities in the second-order difference of the surface elevation, leading to an $\omega^{-4}$ frequency power spectrum. This spectrum is also found to be controlled by the number and amplitude of the shocks and is well captured by a model based on a random Dirac-$\delta$ distribution (Kuznetsov-like spectrum). Finally, the shock-amplitude statistics exhibits a power-law distribution with an exponent close to the predictions of the one-dimensional random-forced Burgers equation. This shock-wave regime, discovered here for surface waves, thus paves the way to better explore their properties.
Guillaume Ricard, Eric Falcon
2023-01-25T13:44:32Z
http://arxiv.org/abs/2301.10586v1
# Transition from wave turbulence to acousticlike shock-wave regime ###### Abstract We report on the experimental observation of a transition from a dispersive wave turbulence regime to a nondispersive regime involving shock waves on the surface of a fluid. We use a magnetic fluid in a canal subjected to an external horizontal magnetic field to tune the dispersivity of the system. For a low magnetic field, gravity-capillary wave turbulence is observed, whereas for a high enough field, random steep coherent structures arise which are found to be shock waves. These shock waves create singularities in the second-order difference of the surface elevation, leading to an \(\omega^{-4}\) frequency power spectrum. This spectrum is also found to be controlled by the number and amplitude of the shocks and is well captured by a model based on a random Dirac-\(\delta\) distribution (Kuznetsov-like spectrum). Finally, the shock-amplitude statistics exhibits a power-law distribution with an exponent close to the predictions of the one-dimensional random-forced Burgers equation. This shock-wave regime, discovered here for surface waves, thus paves the way to better explore their properties. ## I Introduction Wave turbulence is a statistical state in which numerous random weakly nonlinear waves interact with each other. This phenomenon is described by the weak-wave turbulence theory (WTT), which predicts a power-law cascade of the wave energy spectrum from large to small scales [1; 2; 3]. This out-of-equilibrium stationary state occurs in various domains with different scales, such as ocean surface waves, plasma waves, hydroelastic waves, elastic waves on a plate, internal or inertial waves in rotating or stratified fluids, and optical waves [2]. Despite its success in predicting analytically the wave spectrum, WTT requires many assumptions (e.g., infinite system, weak nonlinearity, constant energy flux, timescale separation, and dispersive waves), which can be difficult to satisfy experimentally. Although wave turbulence has been assessed in different experimental systems [4; 5; 6; 7; 8; 9], it is of paramount interest to know the validity domain of the theory in experiments regarding its assumptions. For example, finite-size effects are beginning to be considered theoretically [10; 11] and experimentally [12; 13; 14; 15] for hydrodynamic surface waves. Finite-amplitude effects have also been tackled to address the existence of a transition from weak to strong wave turbulence [2]. In comparison, few studies have investigated whether or not wave turbulence exists in a nondispersive wave system. In this case, waves of different frequencies travel with the same phase velocity and thus cannot transfer energy between each other by resonant interactions [2]. This leads to the breaking of a main assumption of WTT, and coherent structures such as solitons or shocks are thus expected due to cumulative effects of the nonlinearity [16; 17]. This has been the source of a long-standing debate about whether acoustic waves should be considered as a random set of shocks (leading to the Kadomtsev-Petviashvili spectrum) [18] or if WTT is applicable for their description [19]. Indeed, three-dimensional acoustic WTT could be theoretically possible because the large range of possible wave directions in three dimensions acts as an effective dispersion [2; 16; 17; 19], although yet unsupported by a rigorous proof [20].
Conversely, WTT is not applicable for two-dimensional (2D) nondispersive acoustic waves, but it can be regularized by weakly dispersive effects, leading to predictions for 2D weakly dispersive acoustic wave turbulence [20]. Weakly dispersive wave turbulence also occurs theoretically or numerically for Alfven waves in plasma [21], gravitational waves in the early universe [22], and elastic waves on a stretched membrane [7]. Experimentally, a weakly dispersive wave regime can be obtained on the surface of a magnetic fluid subjected to an external horizontal magnetic field. The latter modifies the dispersion relationship of surface waves, adding a nondispersive term that is tunable experimentally [23]. In this case, dispersive wave turbulence is evidenced experimentally in two dimensions because of the anisotropic dispersion relation, nondispersivity occurring only in the magnetic field direction [24; 25]. Another method to experimentally control the wave dispersion is to decrease the fluid depth of gravity-capillary wave turbulence from a deep regime to a shallow one [26; 27]. This deep-to-shallow transition leads to a less steep gravity wave spectrum, the formation of a depth-dependent hump in the capillary spectrum (as an analog of a bottleneck effect) for a weak forcing [26], and the formation of coherent structures such as solitons when the forcing is strong enough [27; 28]. Here, we use a one-dimensional (1D) canal filled with a magnetic fluid subjected to an external horizontal magnetic field to tune the dispersivity of the wave system within a deep-water regime. At a low magnetic field, the classical quasi-1D dispersive gravity-capillary wave turbulence is observed [29], whereas at a high enough field a nondispersive regime is reached. In the latter, we observe the emergence of random shock waves, which keep their shape over time, with a very steep profile close to the one derived from the 1D Burgers equation [30], although not reaching a fully vertical front. They are characterized by a discontinuity that leads to a Dirac-\(\delta\) singularity in the second-order difference of their amplitude. We show that these shock waves are coherent structures rich in the frequency domain, which carry energy over the canal. They thus become the main mechanism building the wave energy spectrum. Indeed, we find that the energy spectrum of these shocks agrees with a model of a Kuznetsov-like spectrum of second-order singularities [31]. The shock-wave statistics are also reported and show that their probability distribution is close to the one of a diluted gas of shocks driven by the 1D random-forced Burgers equation [32; 33; 34; 35; 36]. A phase diagram of the wave turbulence and shock-wave regimes is also reported as a function of the control parameters. The energy transfer driven by the shock waves is thus fundamentally different from the local one occurring in wave turbulence by nonlinear wave resonant interactions. The article is organized as follows. We first present in Sec. II some theoretical background (dispersion relationship, magnetic steepening, and energy spectrum predictions). Section III presents the experimental setup. Section IV shows the experimental results on the wave energy spectrum (using spatiotemporal, time-frequency, and frequency analyses), the energy flux, and the timescales. Section V focuses on the nondispersive regime, emphasizing the presence of dissipative coherent structures such as shock waves, and their statistics.
Section VI presents the model used to predict the shock-wave spectrum and the conditions for an experimental agreement. We summarize in Sec. VII.

## II Theoretical background

### Dispersion relation

The dispersion relation of one-dimensional linear deep-water inviscid gravity-capillary waves reads \(\omega^{2}=gk+(\gamma/\rho)k^{3}\), with \(\omega=2\pi f\) the angular frequency, \(k\) the wave number, \(g\) the acceleration of gravity, \(\gamma\) the surface tension, and \(\rho\) the density of the liquid [37]. For a magnetic liquid subjected to a horizontal magnetic induction \(B\) (collinear with the wave propagation), an additional nondispersive term, i.e., an acousticlike term in \(\omega\sim k\), has to be taken into account, whose strength is controlled by \(B\). The corresponding dispersion relation then reads [23; 38] \[\omega^{2}=gk+\frac{\gamma}{\rho}k^{3}+v_{A}^{2}(B)k^{2}, \tag{1}\] where \(v_{A}^{2}=\frac{\mu_{0}M^{2}}{\rho(1+\mu/\mu_{0})}\) is the characteristic nondispersive velocity, the analog of the Alfven wave velocity in plasmas [39], \(M(B)\) is the magnetization within the liquid, depending on the applied magnetic field induction \(B\), \(\mu_{0}=4\pi\times 10^{-7}\) Tm/A is the magnetic permeability of a vacuum, and \(\mu=\mu_{0}(1+\frac{\partial M}{\partial B})\) is the liquid permeability [23]. Note that \(B\) should not be confused with the external magnetic field, \(H=B/\mu_{0}-M\), even if \(B\) will hereafter be referred to as the magnetic field. The dispersion relation can be rewritten as \[\omega=v_{A}(B)k\sqrt{1+\alpha k^{-1}+\beta k}, \tag{2}\] with \(\alpha=g/v_{A}^{2}\) and \(\beta=\gamma/(\rho v_{A}^{2})\). A nondispersive regime \(\omega\sim k\) is obtained if the gravity and capillary terms are much smaller than the magnetic one, i.e., if \[\alpha k^{-1}<1/C\quad\text{and}\quad\beta k<1/C\, \tag{3}\] where \(C\) is a chosen constant quantifying the ratio between the magnetic term and the gravity or capillary one. Using the dispersion law of Eq. (1), we plot in Fig. 1 the theoretical diagram of the predominance of the gravity, capillary, and magnetic regimes [24] as a function of the parameter \(C\). Within our ranges of experimental parameters used afterward, we can reach \(C\sim 20\), i.e., a magnetic term larger than 20 times each of the other two. This is possible because of the use of a ferrofluid with a high magnetic susceptibility and a relatively low viscosity (see below).

### Magnetic wave steepening

It is worth noting that, in the dispersion law of Eq. (1), the magnetic term comes from the spatiotemporal fluctuations of the magnetic field generated at the liquid-gas wavy interface to satisfy the magnetic boundary conditions at the interface [23]. The magnetic fluctuations \(h\) at the interface in the direction \(Ox\) of the constant horizontal field \(H\) follow from a calculation similar to the one performed in [23] for a vertical magnetic field and read \(h_{1}=h_{2}=M\eta k/(1+\mu/\mu_{0})\), where the indices 1 and 2 refer to the magnetic liquid and the gas, respectively. The larger the surface perturbation, the larger the fluctuation of the magnetic field. With the typical values used here (\(B\approx 760\) G, \(\mu_{0}M\approx 340\) G, \(\mu/\mu_{0}\sim 1.05\), \(k\approx 500\) m\({}^{-1}\), and \(\eta\approx\pm 1\) mm), the magnetic induction fluctuations \(b_{1}=\mu h_{1}\) and \(b_{2}=\mu_{0}h_{2}\) are about \(\pm 80\) G, that is to say, about \(\pm 10\%\) of the applied value.
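As a cross-check of these orders of magnitude, the quasi-nondispersive frequency window implied by Eqs. (1) and (3) can be evaluated directly. Below is a minimal sketch (an illustration assuming Python with numpy; the parameter values are of the order quoted in Sec. III):

```python
# Sketch of the dispersion relation, Eq. (1), and the nondispersive window of
# Eq. (3), i.e., where the magnetic term exceeds gravity and capillarity by C.
import numpy as np

g, rho, gamma = 9.81, 1400.0, 34e-3      # m/s^2, kg/m^3, N/m (ferrofluid values)
v_A = 0.51                                # magnetic (Alfven-like) velocity, m/s

def omega(k):
    """Gravity-capillary-magnetic dispersion relation, Eq. (1)."""
    return np.sqrt(g * k + (gamma / rho) * k**3 + v_A**2 * k**2)

C = 10.0                                  # dominance factor of the magnetic term
k = np.logspace(0, 4, 2000)               # wave numbers, rad/m
alpha, beta = g / v_A**2, gamma / (rho * v_A**2)
mask = (alpha / k < 1.0 / C) & (beta * k < 1.0 / C)
f = omega(k) / (2.0 * np.pi)
print("quasi-nondispersive band for C=10: f in [%.0f, %.0f] Hz"
      % (f[mask].min(), f[mask].max()))   # roughly 30-90 Hz, cf. Sec. IV
```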
We can thus infer the magnetic force \(F_{m}\) acting on the fluid (per unit volume) in the \(x\) direction as \(F_{m}=\mu_{0}M\frac{\partial h}{\partial x}=\rho v_{A}^{2}k^{2}\eta\). \(F_{m}\) acts more at the extrema of a wave than at its base (\(\eta=0\)) and thus leads to a steepening of the wave and to a difference of the fluid velocity along the wave height. This mechanism is the source of the appearance of shock waves, as it is for the Burgers shock waves [30] (see Sec. IV.2). Note that no experimental comparison is performed here to check the above theoretical predictions for the field fluctuations \(h\); this estimate is only used to explain qualitatively the physical process of the shock-wave formation observed below. Note also that a magnetic stress, called the Maxwell stress, occurs at the interface of a magnetic fluid [23]. For a horizontal magnetic field, this stress \(s_{n}=-\frac{1}{2}\mu_{0}H^{2}\), normal to the surface, tends to flatten the surface wave, acting as a stabilizer. This higher-order effect will not be visible here but might appear at higher \(v_{A}\), although not achievable experimentally.

### Energy spectra

Wave turbulence arises from the interaction of weakly nonlinear waves and is described by the weak turbulence theory [1; 2]. The latter predicts that the wave energy spectrum follows a power-law cascade in scale (frequency or wave number) only for a system involving a single term in its dispersion relation \(\omega(k)\). For example, in one dimension, pure gravity waves dominated by five-wave resonant interactions are predicted to have a power spectrum of the surface elevation \(\eta\) as \(S_{\eta}\sim\omega^{-17/4}\)[40]. It has also been observed experimentally that 1D capillary waves are dominated by five-wave resonant interactions and follow a power spectrum in \(S_{\eta}\sim\omega^{-31/12}\)[29]. For a 1D gravity-capillary system (with no magnetic field), these two asymptotic spectra are thus expected: the pure gravity spectrum for large enough scales (\(f\lesssim 5\) Hz) and the pure capillary one for small enough scales (\(f\gtrsim 50\) Hz) [9]. However, the finite size of our experimental system and the nonvanishing viscosity of the fluid used here lead us to work at intermediate frequency scales, and thus with an entanglement of the gravity and capillary effects [9]. Indeed, for a 1D gravity-capillary system, we previously reported experimentally a power-law spectrum in \(S_{\eta}\sim\omega^{-3.3\pm 0.2}\) in the intermediate-scale range, as a result of the occurrence of three-wave interactions [29] [see also the purple curve in Fig. 7(a)]. Coherent structures are more likely to appear in one dimension than in higher dimensions [28]. For instance, transitions from wave turbulence to solitonic regimes have been predicted theoretically [28] and observed experimentally [27] for 1D gravity waves in shallow water, coherent structures such as Korteweg-de Vries solitons occurring as a result of the weak dispersion. For 1D deep-water gravity waves, other types of solitons, e.g., Peregrine solitons or envelope solitons, were observed experimentally [41; 42], but are not expected in our study.

Figure 1: Theoretical diagram of the predominance of the gravity, capillary, and magnetic regimes. Here, \(C\) quantifies how much larger the magnetic term is than the other two. The experimental ranges are \(f<100\) Hz and \(v_{A}<0.55\) m/s (i.e., \(B<760\) G). The white vertical dashed line corresponds to the run at \(v_{A}=0.51\) m/s.

Nevertheless, since our system is
nondispersive at high \(v_{A}\), other coherent structures could arise, such as singularities [28]. Singularities can be defined as local discontinuities of order \(n\) in the wave field, i.e., leading to a Dirac-\(\delta\) distribution in the \(n\)th-order derivative of the wave field \(\partial^{n}\eta\). As discontinuities contain energy at all frequency scales [31; 43], these coherent structures would lead to a spectrum driven only by their geometry, i.e., the order of the discontinuity. Since the power spectrum of a Dirac-\(\delta\) distribution occurring in \(\partial^{n}\eta\) is a white noise, i.e., \(S_{\partial^{n}\eta}\sim\) const, one thus has, by integration, the spectrum of \(\eta\) in \(S_{\eta}\sim\omega^{-2n}\). For discontinuities of the first order, \(n=1\) (e.g., shock waves in the Burgers equation), an acoustic spectrum in \(S_{\eta}\sim\omega^{-2}\) is thus expected, i.e., the Kadomtsev-Petviashvili spectrum [44; 18; 45]. If the discontinuities are of second order, \(n=2\), e.g., sharp-crested waves, or shock waves not reaching a fully vertical front, one thus expects to obtain a spectrum in \(S_{\eta}\sim\omega^{-4}\) (or Kuznetsov-like spectrum) [31].

## III Experimental setup

Experiments were performed in a canal made of polytetrafluoroethylene, i.e., Teflon, to decrease the wetting, with a length \(L=15\) cm and a width \(L_{y}=2\) cm (see Fig. 2). This hydrophobic canal is filled up to a depth \(d=2\) cm with a ferrofluid (see below). A shaker linked to a wave maker is located at one end to inject energy in a narrow random frequency bandwidth \(f_{0}\pm\Delta F\), with \(f_{0}=8.5\) Hz and \(\Delta F=2.5\) Hz. Since \(L\gg L_{y}\), waves propagate only in the longitudinal (\(Ox\)) direction and are thus considered to be quasi-1D [29]. The whole setup is located between two vertical coils in Helmholtz configuration, 25 cm in internal diameter, generating a horizontal magnetic field (\(B\in[0,800]\) G) homogeneous over the liquid surface. Two measurement methods of the surface elevation are used: a single-point measurement and laser sheet profilometry (LSP). The temporal variations of the surface elevation \(\eta(t)\) are measured at a single point using a homemade capacitive wire gauge (0.22 mm in diameter and 10 \(\upmu\)m vertical resolution) [4] with a 2 kHz sampling frequency, leading to a resolved frequency up to 1 kHz and to a discretization time \(dt=0.5\) ms. A space- and time-resolved wave-field measurement \(\eta(x,t)\) is achieved by the LSP method. A camera (Basler, 200 frames/s) is located above the canal and the wave field is illuminated over 8 cm with a laser sheet at an angle of \(\alpha=45^{\circ}\) with respect to the horizontal (see Fig. 2). The horizontal shift \(\Delta y(x,t)\) of the laser sheet along \(Oy\) detected by the camera is hence directly linked to the surface elevation by \(\eta(x,t)=\Delta y(x,t)/\tan\left(\alpha\right)=\Delta y(x,t)\)[46]. The horizontal and vertical resolutions of the LSP are 43 \(\upmu\)m. The wave elevation is monitored with both measurements for \(\mathcal{T}=15\) min. We use a Ferrotec PBG400 ferrofluid. This black-brown opaque ferrofluid offers high magnetization, high colloidal stability, and superparamagnetic properties. It is a water-based (with polyethylene glycol) suspension synthesized with 7.9% by volume of ferromagnetic particles (Fe\({}_{3}\)O\({}_{4}\) iron oxide, 10 nm in diameter).
The properties of the liquid are density \(\rho=1400\) kg/m\({}^{3}\), surface tension \(\gamma=34\) mN/m, kinematic viscosity \(\nu=2.86\times 10^{-6}\) m\({}^{2}\)/s, magnetic saturation \(M_{sat}=440\) G, and initial susceptibility \(\chi_{i}=3.28\). Note that \(M_{sat}=\lim\limits_{B\to\infty}M\) and \(\chi_{i}=\frac{\partial M}{\partial B}|_{B=0}\) are obtained from the magnetization curve \(M(B)\) provided by Ferrotec. Here, \(M(B)\) is also used to compute the characteristic velocity \(v_{A}(B)\) used in Eq. (1) (see Appendix A). The high sensitivity of the ferrofluid to magnetic effects, together with its relatively low viscosity, is crucial to reach a significant inertial range experimentally (see Fig. 1).

Figure 2: Experimental setup. A pair of Helmholtz coils generates a horizontal homogeneous magnetic field \(B\) on the ferrofluid surface. Random waves are driven by a wave maker linked to a shaker at one end of the canal. The wave elevation \(\eta(t)\) is measured at a single point using a capacitive wire gauge, and resolved in space and time \(\eta(x,t)\) with laser sheet profilometry using a camera and a laser sheet illuminating a horizontal line of the free surface.

To quantify the nonlinearities, we measure the wave steepness as \(\epsilon\equiv\sigma k_{m}\), where \(\sigma\) is the standard deviation of the surface elevation signal, computed as \(\sqrt{\overline{\eta(t)^{2}}}\) or \(\sqrt{\int_{L}\eta(x,t)^{2}dx/L}\) (the overline denotes a time average), and \(k_{m}\) is the wave number at which the wave spectrum is maximum (typically the forcing scale) [47; 13]. We keep \(\epsilon\simeq 0.07\) to validate the weak nonlinearity assumption of WTT.

## IV Experimental results

### Spatiotemporal spectral analysis

From the LSP measurements, applying to the surface elevation \(\eta(x,t)\) a double space and time Fourier transform \(\widehat{\eta}(k,\omega)\), we compute the spatiotemporal power spectrum \(S_{\eta}(k,\omega)=|\widehat{\eta}(k,\omega)|^{2}/(\mathcal{T}L)\). Note that the signal \(\eta(x,t)\) has been extended in length using its spatial symmetry to reach symmetric boundary conditions before computing \(S_{\eta}(k,\omega)\). A Hanning windowing (hanning Matlab function) has also been performed to improve the quality of the spectrum. The space-time power spectra \(S_{\eta}(k,\omega)\) are shown in Fig. 3 for different applied magnetic fields \(B\), that is, for different \(v_{A}\). In Fig. 3(a), \(v_{A}=0\) m/s, meaning that the wave field is driven only by gravity and capillary effects. In this case, the wave energy is found to cascade toward small scales and is concentrated around the gravity-capillary dispersion relation (white solid line). This is a clear indication of the presence of wave turbulence, as previously reported in Ref. [29]. A spectral broadening \(\delta_{\omega}\) of the wave energy around this dispersion relation is also observed due to nonlinearities [29] and is estimated. When the magnetic field is increased [Figs. 3(b) and 3(c)], the energy still cascades following the dispersion relation, but is now influenced by the magnetic effects, which significantly lower the spectrum [see solid lines in Figs. 3(b) and 3(c)]. For \(v_{A}=0.47\) m/s, the nondispersive term in Eq. (1) is at least ten times larger than the dispersive ones in the range of interest (\(20<f<100\) Hz), as quantified in Fig. 1. As a consequence of this quasinondispersive dispersion relation, the wave energy is then found to be concentrated around a straight line of slope close to \(1/v_{A}\), as shown in Fig. 3(c).
We thus evidence a transition from a dispersive gravity-capillary wave field to a nondispersive magnetic wave field where all waves travel at a constant velocity \(v_{A}\). The operator thus controls the dispersivity of the system via the parameter \(v_{A}(B)\). Note that a slight mismatch between the theoretical dispersion relation and the experimental data occurs at large \(v_{A}\). This might be due to the inhomogeneous magnetic fluctuations appearing along the wave height, as explained in Sec. II.2. The fluctuations of the field, involving fluctuations of \(v_{A}\), explain the mismatch but are not quantified in the present study.

Figure 3: Power spectrum \(S_{\eta}(k,\omega)\) of the wave elevation for (a) \(v_{A}=0\), (b) \(v_{A}=0.3\), and (c) \(v_{A}=0.47\) m/s. The constant wave steepness is \(\epsilon\simeq 0.07\). The solid line shows the theoretical dispersion relation \(\omega(k)\) of Eq. (1) in the (a) dispersive, (b) intermediate, and (c) nondispersive cases. In the latter case, the slope of the straight line is \(1/v_{A}\). The dashed line shows the spread dispersion relation \(\omega(k)\pm\delta_{\omega}\) with \(\delta_{\omega}=30\) Hz. The white rectangle shows the fixed frequency forcing range between 6 and 11 Hz. The color bar is on a logarithmic scale.

Note also that for \(v_{A}=0.47\) m/s a weaker branch of energy appears at the top of Fig. 3(c). Although the maximum visible frequency in the spectrum is \(f_{e}/2=100\) Hz, i.e., half the sampling frequency, energy at higher frequencies, \(f>f_{e}/2\), i.e., \(k/(2\pi)>198\) m\({}^{-1}\) for \(v_{A}=0.47\) m/s, can nevertheless be seen due to the spectrum aliasing effect. Despite viscous effects acting from about 100 Hz, the energy occurring at higher frequencies is a consequence of singularities, which give energy to all frequencies (see below). Note also that no other coherent structure, such as bound waves, appears in Fig. 3.

### Surface elevation signals and time-frequency analysis

Typical temporal signals of the surface elevation \(\eta(t)\) (black lines) and of its first-order difference \(\delta\eta(t)=\eta(t+dt)-\eta(t)\) (red lines) are shown in Fig. 4(a) for the dispersive case (\(v_{A}=0\) m/s) and in Fig. 4(c) for the nondispersive case (\(v_{A}=0.51\) m/s). We also compute the corresponding wavelet transforms (using the continuous 1D wavelet transform Matlab function) [48] to obtain a time-frequency analysis of the energy spectra, as plotted in Figs. 4(b) and 4(d) (see Appendix B for longer signals). The wavelet transform is preferred to a short-time Fourier transform, e.g., a spectrogram, which has issues with the frequency-time resolution trade-off. For the dispersive case (\(v_{A}=0\) m/s), no coherent structure appears in the temporal evolution of the surface elevation, its first-order difference \(\delta\eta\) remaining close to 0. For the nondispersive case (\(v_{A}=0.51\) m/s), the typical wave height is found to increase, whereas peaks occur in the first-order difference, corresponding to discontinuities in \(\eta(t)\). As discussed in Sec. II.2, a concentration of the magnetic field lines occurs at the crests and troughs of the wavy interface to satisfy the magnetic boundary conditions at the interface [23], leading to a stronger magnetic field, and so to a stronger value of \(v_{A}\), at the wave crest than at its base. Thus, for a given wave, \(v_{A}\) depends on the vertical coordinate \(z\), with \(\partial v_{A}/\partial z>0\).
Since the wave crest is faster than its base, it ends up creating a discontinuity, i.e., a singularity, called hereafter a shock wave. Shock waves are also visible in the wavelet spectrum [Fig. 4(d)], where energy is present at all frequencies, even beyond the viscous scale of the order of 100 Hz. Although subjected to dissipation during their propagation, shock waves are thus coherent structures rich in the frequency domain. Note that the Maxwell stress, which should decrease the wave height in the magnetic field direction [23; 25], is not observed here. This higher-order effect could occur at higher \(v_{A}\), not achievable in our experimental parameter range (see Appendix C).

Figure 4: (a) Typical temporal evolution of the surface elevation \(\eta(t)\) (black line) and its first-order difference \(\delta\eta(t)/dt\) (red line) and (b) the corresponding time-frequency spectrum of \(\eta(t)\) obtained by a wavelet transform, for the dispersive case (\(v_{A}=0\) m/s). (c) and (d) Same as in (a) and (b) but for the nondispersive case (\(v_{A}=0.51\) m/s). Here \(\epsilon\simeq 0.07\) in the two cases.

A typical shock-wave signal \(\eta(t)\) and its first- and second-order differences, \(\delta\eta(t)=\eta(t+dt)-\eta(t)\) and \(\delta^{(2)}\eta(t)=\eta(t+2dt)-2\eta(t+dt)+\eta(t)\), respectively, are plotted in Fig. 5 for the nondispersive case (\(v_{A}=0.51\) m/s). We checked that this localized singularity keeps its shape and travels along the canal at constant velocity with no breaking (see Appendix D for the displacement of a single shock along the canal). The discontinuity in \(\eta(t)\) displayed in Fig. 5 corresponds to a rather long peak in its first-order difference \(\delta\eta\) and to a very thin peak in its second-order difference \(\delta^{(2)}\eta\). This thin peak is close to a Dirac peak, supporting the claim that the singularity observed here is of second order. It is worth noting that the nondispersive shock waves observed here do not exhibit a fully vertical front. This observation is emphasized in the inset of Fig. 5, where only the experimental discrete data of the shock wave are plotted. A jump is visible in the signal, corresponding to a second-order discontinuity of \(\eta(t)\). Although a fully vertical shock cannot be measured with a single-point gauge, the spatiotemporal measurement of the shock-wave shape confirms that the latter does not reach a fully vertical front (see Appendix D). The shocks observed therefore differ from classical shock waves driven by the 1D Burgers equation, which display singularities of the first order (a Dirac-\(\delta\) distribution in their first-order difference) [30]. Even if their amount is small, dispersive effects might prevent the formation of a vertical-front shock wave, and it is difficult to say whether higher \(v_{A}\) values would lead to a vertical front, since the Maxwell stress would then occur, flattening the waves. Note that each singularity in the system can be removed by numerical postprocessing, leading, as expected, to a smoothing of the signal around the discontinuity (see dashed lines in Fig. 5). To compare with the typical shape of our coherent structures (Fig. 5), we solve numerically the 1D Burgers equation [30] \[\frac{\partial\eta}{\partial t}+A\eta\frac{\partial\eta}{\partial x}=\nu\frac{ \partial^{2}\eta}{\partial x^{2}}, \tag{4}\] with \(A=v_{A}/d\) (\(v_{A}=0.5\) m/s and \(d=2\) cm) a constant chosen for dimensional homogeneity and \(\nu=2.86\times 10^{-6}\) m\({}^{2}\)/s the kinematic viscosity of the liquid.
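As an aside, a minimal sketch of such a computation is given below (an illustration, not the authors' code): it advances a nondimensional version of Eq. (4) (the prefactor set to 1) with a Crank-Nicolson step for the viscous term, solved with a Thomas tridiagonal algorithm, and an explicit upwind step for the nonlinear term. The grid, time step, and the deliberately exaggerated viscosity (so that the shock width stays resolvable on a coarse grid) are illustrative assumptions; the scheme actually used by the authors is described next.

```python
# Sketch: 1D Burgers equation u_t + u u_x = nu u_xx on [0, 2*pi], u = 0 at ends,
# Crank-Nicolson for diffusion (Thomas solve) + explicit upwind convection.
import numpy as np

N, L_x, nu = 1024, 2.0 * np.pi, 5e-3          # nu is exaggerated for resolvability
dx = L_x / (N - 1)
x = np.linspace(0.0, L_x, N)
u = np.sin(x)                                  # initial condition, as in the text
dt = 0.4 * dx                                  # CFL-limited by max|u| ~ 1

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    m = len(d)
    cp, dp = np.zeros(m), np.zeros(m)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, m):
        w = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / w
        dp[i] = (d[i] - a[i] * dp[i - 1]) / w
    out = np.zeros(m)
    out[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        out[i] = dp[i] - cp[i] * out[i + 1]
    return out

r = nu * dt / dx**2
n_int = N - 2                                  # interior nodes; u = 0 at both ends
a = np.full(n_int, -r / 2); b = np.full(n_int, 1.0 + r); c = np.full(n_int, -r / 2)

t, t_end = 0.0, 1.2                            # breaking time of u0 = sin(x) is t = 1
while t < t_end:
    ui = u[1:-1]
    # upwind discretization of -u u_x on the interior nodes
    conv = -ui * np.where(ui > 0, (ui - u[:-2]) / dx, (u[2:] - ui) / dx)
    rhs = (r / 2) * u[:-2] + (1.0 - r) * ui + (r / 2) * u[2:] + dt * conv
    u[1:-1] = thomas(a, b, c, rhs)
    t += dt
print("max front steepness |du/dx| =", float(np.abs(np.diff(u) / dx).max()))
```

With these settings the front steepens at mid-domain until the (physical plus numerical) viscosity limits it, qualitatively reproducing the behavior of Fig. 6(a).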
We use an implicit Crank-Nicolson scheme [49] and a Thomas algorithm [50] with the initial condition \(\eta(x,t=0)=\sin(x)\). The numerical grid is resolved with 1024 points. The results are plotted for different times in Fig. 6(a). As expected, a steepening of the wavefront appears before dissipation slightly decreases the amplitude of the shock. This kind of vertical shock would lead to breaking experimentally. No fully vertical front appears experimentally, but rather a shape close to the one obtained just before the Burgers shock, which is conserved over time [see Figs. 5 and 6(b)]. Moreover, for both the numerical and experimental results, this shock-wave shape exhibits a long peak in the first-order difference \(\delta\eta\) and a short peak (similar to a Dirac one) in the second-order difference \(\delta^{(2)}\eta\). To sum up, as a consequence of the magnetic effects, this nondispersive system generates coherent structures that are close to Burgers shock waves, with a slightly less steep front (less than 0.1%) and a self-similar shape that is conserved over time. It is worth noting that even if strong similarities occur between the numerical results of the Burgers equation and the experimental results found here, e.g., the presence of shock waves and the nondispersive character of the system, no rigorous analytical link is established in the present study. The link is only qualitative (see Sec. IV.3 for power spectra and Sec. V for probability density functions of the surface elevation) but provides some interesting insights that deserve further theoretical work.

Figure 5: Enlargement of a typical shock wave \(\eta(t)\) (black solid line), its first-order difference \(\delta\eta(t)/dt\) (red solid line) and its second-order one \(\delta^{(2)}\eta(t)/dt^{2}\) (blue solid line) in the nondispersive case (\(v_{A}=0.51\) m/s). The value of \(\delta^{(2)}\eta/dt^{2}\) is divided by 500 to observe it on the same vertical scale as \(\delta\eta/dt\). Dashed lines show the same but when the singularity is removed by numerical postprocessing, thus smoothing the signals. The black arrow shows the direction of wave-front propagation. The purple arrow shows the nonlinear timescale of a shock wave, \(\tau_{nl}^{S}\) (see Sec. IV.5). The inset shows the enlargement of the shock wave with only experimental discrete data to evidence the jump at the second-order discontinuity.

### Experimental wave energy spectra

The frequency power spectrum \(S_{\eta}(\omega)\equiv|\widehat{\eta}(\omega)|^{2}/\mathcal{T}\) is now computed from the single-point measurement of the surface elevation \(\eta(t)\) using its temporal Fourier transform \(\widehat{\eta}(\omega)\). \(S_{\eta}(\omega)\) is shown in Fig. 7(a) for different dispersion strengths, i.e., different \(v_{A}\). For the dispersive case (\(v_{A}=0\) m/s), the wave spectrum follows a power-law cascade characteristic of wave turbulence, although over a rather small inertial range (bottom blue curve). This frequency range (between 20 and 70 Hz) corresponds to the entanglement of gravity and capillary effects, whereas no pure capillary wave turbulence is observed here due to viscous effects (\(f\gtrsim 70\) Hz). Note that the exponent of this frequency power law, \(S_{\eta}(\omega)\sim\omega^{-3.0\pm 0.3}\), is close to what was obtained with a low-viscosity fluid, e.g., mercury with \(S_{\eta}(\omega)\sim\omega^{-3.3\pm 0.3}\), within a similar gravity-capillary frequency range [see purple curve in Fig. 7(a)] [29].
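Such a spectrum estimate can be reproduced from a sampled gauge signal in a few lines; a minimal sketch follows (the sampling rate, record length, segmenting, and windowing choices are our assumptions).

```python
import numpy as np

# Sketch: frequency power spectrum S_eta(f) = |eta_hat(f)|^2 / T from a
# single-point elevation signal, averaged over segments to reduce the
# estimator variance. The sampling rate (consistent with f_e/2 = 100 Hz
# above), record length, and windowing are assumptions.
f_e = 200.0                               # sampling frequency (Hz)
T = 600.0                                 # record duration (s), an assumption
eta = np.random.randn(int(T * f_e))       # stand-in for the measured eta(t)

n_seg = 64
segments = np.array_split(eta, n_seg)
m = min(len(s) for s in segments)
spectra = []
for s in segments:
    s = s[:m] - s[:m].mean()              # remove the mean elevation
    win = np.hanning(m)                   # taper to limit spectral leakage
    eta_hat = np.fft.rfft(s * win)
    spectra.append(np.abs(eta_hat) ** 2 / (m / f_e))
S_eta = np.mean(spectra, axis=0)
f = np.fft.rfftfreq(m, d=1.0 / f_e)

# Power-law fit over the gravity-capillary inertial range (20-70 Hz at v_A = 0):
band = (f > 20.0) & (f < 70.0)
slope = np.polyfit(np.log(f[band]), np.log(S_eta[band]), 1)[0]
```

With a real elevation record in place of the random stand-in, the fitted slope over the 20-70 Hz band is the spectral exponent compared above with \(-3.0\pm 0.3\).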
For quasi-nondispersive cases (high enough \(v_{A}\)), two phenomena are visible in the power spectra. The first one is the emergence of a well-defined series of local peaks. These peaks are found to be separated by a frequency gap \(\Delta f\) which is nearly constant for a single value of \(v_{A}\). The frequency gap is averaged for each spectrum and plotted against \(v_{A}\) in Fig. 7(b). Two sets of measurements corresponding to two different forcings are plotted and are well fitted linearly by \(\langle\Delta f\rangle_{f}=v_{A}/L^{\prime}\), i.e., \(\langle\Delta\omega\rangle_{f}=v_{A}(2\pi/L^{\prime})\), with \(L^{\prime}=13\) cm the length of the canal \(L\) minus the gap filled by the wave maker (\(\sim 2\) cm). As all waves travel with the same nondispersive velocity, they are detected by the single-point gauge at the same time interval \(1/\Delta f\). This implies the emergence of frequency peaks that are directly linked to the main eigenmode of the canal, \(2\pi/L^{\prime}\). Finite-size effects thus emerge experimentally because of the nondispersivity. The second effect of the nondispersivity is visible at high frequencies of the power spectra. A very well defined power law appears over one decade in the range \(f\in[30,300]\) Hz, thus well beyond the onset of viscous effects around 100 Hz. This cascade scales as \(S_{\eta}(\omega)\sim\omega^{-4.01\pm 0.05}\) and is found to agree with the Kuznetsov spectrum of singularities of second order \(n=2\), i.e., a Dirac-\(\delta\) distribution in the second-order difference \(\delta^{(2)}\eta\), conserving their shape, i.e., \(\omega\sim k\) [31]. The shock waves present in the signal thus spread energy at all frequency scales. When the singularities are removed from the signal (as in Fig. 5), the previous well-defined power law in \(\omega^{-4}\) in the power spectrum disappears and dissipative effects seem to drive the cascade beyond 90 Hz [see dashed lines in Fig. 7(a)]. These results evidence a transition from gravity-capillary wave turbulence, in the dispersive case (\(v_{A}=0\) m/s), for which the cascade mechanism is due to resonant interactions between weakly nonlinear waves, to a nondispersive regime (\(v_{A}=0.51\) m/s) where the energy is mainly concentrated in second-order singularities (\(n=2\)) and dissipated by viscous effects.

Figure 6: Numerical solution of the 1D Burgers equation, \(\eta(x,t)\), following an implicit scheme from a sinusoidal initial condition at \(t=0\) (blue). (a) Solutions for increasing values of \(t\) (from blue to orange). (b) Solution at a fixed time \(t\) (just before reaching the vertical front) along with the corresponding first- (\(\delta\eta/dx\), red solid line) and second-order difference (\(\delta^{(2)}\eta/dx^{2}\), blue solid line). The abscissa is from right to left to be consistent with the experimental temporal measurements in Fig. 5. The arrows show the direction of the wave-front propagation.

Using the spatiotemporal measurements averaged over time, the wave-number power spectrum \(S_{\eta}(k)\) is plotted in the inset of Fig. 7(b) for different values of \(v_{A}\). At \(v_{A}=0\) m/s, a power law in \(k^{-2.4\pm 0.1}\) is observed due to gravity-capillary wave turbulence. This power-law exponent differs from the one found for a much less viscous fluid, i.e., mercury [29], plotted also in the inset of Fig. 7(b).
At large \(v_{A}\), a steeper power law in \(k^{-4.1\pm 0.1}\) is found, and the exponent is close to the one found for the frequency power spectrum, \(S_{\eta}(\omega)\sim\omega^{-4.01}\). This similarity thus confirms that a nondispersive regime is achieved, since the two spectra are linked by \(S_{\eta}(k)dk=S_{\eta}(\omega)d\omega\) using \(\omega\sim k\). The spectrum close to \(k^{-4}\) is hence a spectrum of second-order discontinuities due to shock waves, which supports the conclusion drawn from the temporal spectrum on the second-order singularities. Note that, in the inset of Fig. 7(b), the forcing scale moves to smaller \(k\) with increasing \(v_{A}\) as a result of Eq. (1) with a constant forcing frequency. Note also that, because of the lowering of the dispersion relation with increasing \(v_{A}\) as observed in Fig. 3, the measurement noise level appears at \(k/2\pi>400\) m\({}^{-1}\) for \(v_{A}=0\) m/s and at \(k/2\pi>200\) m\({}^{-1}\) for \(v_{A}=0.47\) m/s. The statistics of the shock waves will thus be performed in Sec. V using the single-point measurements, owing to their better resolution and signal-to-noise ratio compared with the spatiotemporal ones.

### Energy flux

Weak turbulence theory aims to describe wave turbulence but requires strong hypotheses [1; 2; 3]. In particular, WTT assumes a constant energy flux during the energy cascade through the scales. In this section, we test this hypothesis when the wave turbulence regime occurs (\(v_{A}=0\) m/s) and quantify how the energy flux departs from a constant when reaching the shock-wave regime (at high \(v_{A}\)). The energy flux \(P\) is computed as \(P(\omega^{*})=\int_{\omega^{*}}^{\omega_{m}}E(\omega)D(\omega)d\omega\) with \(E(\omega)=gS_{\eta}(\omega)+v_{A}^{2}kS_{\eta}(\omega)+(\gamma/\rho)k^{2}S_{ \eta}(\omega)\) the total wave-energy density, \(D=k(\omega)\sqrt{\nu\omega/2}\) the main contribution of the viscous energy dissipation rate for a contaminated interface [29; 37; 47; 51], \(\omega_{m}/(2\pi)=1000\) Hz, and \(k(\omega)\) as in Eq. (1). The variation of \(P\) over frequency scales is plotted in Fig. 8 for different \(v_{A}\), with (solid lines) and without (dashed lines) shock waves. For low values of \(v_{A}\), the energy flux is, as expected, constant in the inertial range (as in [29]), showing that no dissipation occurs in this range and that the energy cascades over scales continuously because of wave interactions, following WTT predictions. \(P\) is found to increase with \(v_{A}\) as a consequence of the increase of the energy at the forcing scales that is required to keep a constant wave steepness [because \(v_{A}\) increases the wavelength, as shown by Eq. (1)]. Furthermore, for large values of \(v_{A}\), \(P\) is no longer constant and is found to decrease with \(f\). This can be explained by dissipation that occurs at all scales [51]. As shock waves travel by conserving their shape, they transport energy over space without any interactions.

Figure 7: (a) Frequency spectra \(S_{\eta}(f)\) for different \(v_{A}\) (solid lines) and \(\epsilon\simeq 0.07\) on a log-log plot. Spectra have been shifted vertically for clarity. The dashed lines show the same but with the singularities removed from the signal. The gray area is the frequency bandwidth of the random forcing. Black dash-dotted lines show the \(f^{-4.01}\) best fit for \(v_{A}=0.51\) m/s, the \(f^{-3.0}\) best fit for \(v_{A}=0\) m/s, and the \(f^{-3.3}\) best fit for \(v_{A}=0\) m/s using mercury [29]. Here \(\Delta f\) is the frequency difference occurring between two successive spectrum peaks.
(b) Evolution of the mean frequency gap \(\langle\Delta f\rangle_{f}\) between local spectral peaks as a function of \(v_{A}\) for two sets of forcing, either at constant \(\epsilon\) or at constant standard deviation \(\sigma\) of the surface elevation. The dashed line shows the best linear fit in \(v_{A}/L^{\prime}\) with \(L^{\prime}=13\) cm, the available canal length. Error bars come from the standard deviation of the measurement of \(\Delta f\). The inset shows wave-number power spectra \(S_{\eta}(k)\) for different \(v_{A}\) and \(\epsilon\simeq 0.07\) on a log-log plot. Spectra have been shifted vertically for clarity. Black dash-dotted lines show the best fits in \(k^{-4.1}\) for \(v_{A}=0.47\) m/s, \(k^{-2.4}\) for \(v_{A}=0\) m/s using ferrofluid, and \(k^{-3.2}\) for \(v_{A}=0\) m/s using mercury [29].

While they transport this energy, viscous dissipation occurs, reducing their amplitude until they disappear (see Appendix D). Note that the presence of the discontinuity in the shock wave does not have any significant impact on the energy flux (solid and dashed lines are almost superimposed in Fig. 8): even when the discontinuity is removed, the energy is still in the shock wave and continues to travel and to be dissipated.

### Timescales

We now test another WTT assumption, namely, the timescale separation between the linear time \(\tau_{l}\), the nonlinear time \(\tau_{\rm nl}\), the dissipation time \(\tau_{\rm diss}\) (quantifying dissipative effects), and the discreteness time \(\tau_{\rm disc}\) (quantifying finite-size effects of the canal) [9; 15; 29]. Indeed, WTT assumes [2] \[\tau_{l}(\omega)\ll\tau_{\rm nl}(\omega)\ll[\tau_{\rm diss}(\omega);\tau_{\rm disc }(\omega)], \tag{5}\] regardless of \(\omega=2\pi f\) in the inertial range. The nonlinear evolution is thus assumed to be slow compared to the fast linear oscillations (wave period) but short compared to the typical wave dissipation time and the time linked to finite-size effects, thereby enabling an energy cascade to occur in the inertial range.

Figure 8: Evolution of the indirectly measured energy flux \(P\) with the frequency \(f\), for \(\epsilon\simeq 0.07\). Solid lines correspond to different \(v_{A}\). Dashed lines show the same but with singularities removed by signal postprocessing. The gray area indicates the forcing frequency bandwidth.

Figure 9: Wave turbulence timescales as a function of the frequency scale \(f\) for different \(v_{A}\). The solid black line shows the linear timescale \(\tau_{l}=1/\omega\). Circles show the nonlinear timescale \(\tau_{\rm nl}\) estimated from Fig. 3. Colored solid lines show the linear viscous dissipation timescale \(\tau_{\rm diss}\) (see the text). Colored dashed lines show the discreteness time \(\tau_{\rm disc}\) (see the text). The purple dash-dotted line shows the nonlinear shock-wave timescale \(\tau_{\rm nl}^{S}\) estimated from Fig. 5.

The evolutions of these timescales with \(f\) are plotted in Fig. 9. The linear timescale is defined as \(\tau_{l}=1/\omega\) (black solid line). The nonlinear timescale \(\tau_{\rm nl}\) (colored circles) is estimated from the broadening of the energy around the dispersion relation as \(1/\delta_{\omega}\) (see Fig. 3). \(\tau_{\rm nl}\) follows a frequency power law close to \(f^{-1/2}\) and decreases slightly with \(v_{A}\).
The dissipation timescale \(\tau_{\rm diss}\) (colored solid lines) is computed as \(\tau_{\rm diss}=2\sqrt{2}/[k(\omega)\sqrt{\nu\omega}]\), the main viscous contribution from the surface boundary layer with an inextensible film [37; 51]. This time increases with \(v_{A}\), meaning that dissipative effects are less significant at high \(v_{A}\). This effect can be observed in the spectra of Fig. 7(a), even when the discontinuities are removed (energy is present up to 250 Hz for \(v_{A}=0.51\) m/s and up to less than 150 Hz for \(v_{A}=0\) m/s). The discreteness time \(\tau_{\rm disc}\) (colored dashed lines) is computed as \(\tau_{\rm disc}=1/\Delta\omega_{\rm disc}\) with \(\Delta\omega_{\rm disc}=(\partial\omega/\partial k)\Delta k\) and \(\Delta k=2\pi/L^{\prime}\) the first eigenmode of the canal [9]. No discreteness effect is expected for \(\tau_{\rm nl}(\omega)<2\tau_{\rm disc}(\omega)\), i.e., when the nonlinear spectral widening is larger than the half-frequency separation between adjacent eigenmodes. This discreteness time decreases with increasing \(v_{A}\), meaning that finite-size effects are more significant at large \(v_{A}\). These effects are highlighted in the spectra of Fig. 7(a) by the emergence of a well-defined series of local peaks separated by a constant frequency gap \(\Delta\omega=v_{A}(2\pi/L^{\prime})\) [see Fig. 7(b) and Sec. IV.3]. Note that, neglecting gravity and capillary effects, \(\Delta\omega=\Delta\omega_{\rm disc}=1/\tau_{\rm disc}\). Figure 9 then evidences that the timescale separation of Eq. (5) is well validated experimentally in the inertial range, for all values of \(v_{A}\). However, it is worth noting that the estimation of \(\tau_{\rm nl}\) from the spatiotemporal spectrum of Fig. 3 does not include shock waves (as they do not explicitly appear in such a plot). To solve this issue, we define another nonlinear timescale \(\tau_{\rm nl}^{S}\) that only takes the shock waves into account (purple dash-dotted line). \(\tau_{\rm nl}^{S}\) is defined as the width of the corresponding peak of the second-order difference \(\delta^{(2)}\eta/dt^{2}\) (see Fig. 5). We find \(\tau_{\rm nl}^{S}\sim 10^{-3}\) s, which is of the same order of magnitude for every shock wave regardless of the value of \(v_{A}\). Figure 9 then shows that \(\tau_{l}(\omega)>\tau_{\rm nl}^{S}(\omega)\), which means that when shock waves are prevalent the timescale separation hypothesis is no longer verified and a critical balance is achieved [2]. This supports the fact that the energy is stored in coherent structures at large enough \(v_{A}\), whereas at low \(v_{A}\) an energy transfer through the scales occurs due to wave turbulence.

## V Shock-wave statistics

We focus now on the statistics of shock waves as a function of the magnetic parameter \(v_{A}\). To count the number of shock waves, an arbitrary thresholding criterion on the first-order difference signal \(\delta\eta\) is fixed to \(\delta\eta>5\sigma_{\delta\eta}\), with \(\sigma_{\delta\eta}=\sqrt{\overline{\delta\eta^{2}}}\) its standard deviation. This criterion thus determines whether and when a peak within the signal corresponds to a shock wave. The shock rate \(\Gamma\) is defined as the average number of shocks found per second, and \(\mathit{dt}_{S}\) as the time between two successive shocks.
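A minimal sketch of this detection procedure (the sampling rate and the minimum separation imposed between detected events are our assumptions):

```python
import numpy as np

# Sketch: count shock waves by thresholding the first-order difference of the
# surface elevation at 5 standard deviations, as described above. The sampling
# rate f_e and the refractory window between events are assumptions.
f_e = 200.0                               # sampling frequency (Hz)
dt = 1.0 / f_e

def detect_shocks(eta, n_sigma=5.0, min_gap_s=0.05):
    d_eta = np.diff(eta)                  # delta_eta(t) = eta(t + dt) - eta(t)
    sigma = np.sqrt(np.mean(d_eta**2))    # standard deviation of delta_eta
    above = np.flatnonzero(d_eta > n_sigma * sigma)
    shocks = []                           # keep one event per burst of crossings
    for idx in above:
        if not shocks or (idx - shocks[-1]) * dt > min_gap_s:
            shocks.append(idx)
    return np.asarray(shocks)

# From the detected events: shock rate Gamma and waiting times dt_S.
# shocks = detect_shocks(eta)
# gamma = shocks.size / (eta.size * dt)   # shocks per second
# dt_s = np.diff(shocks) * dt             # times between successive shocks
```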
The number of shocks depends on \(v_{A}\) as well as on the forcing strength, quantified by the measured steepness \(\epsilon\). Note that if the forcing is too weak, no shock can emerge because of viscous effects, even at high \(v_{A}\).

Figure 10: (a) Phase diagram between the dispersive wave turbulence regime and the shock-wave regime as a function of the magnetic parameter \(v_{A}\) and the wave steepness \(\epsilon\). The dash-dotted line distinguishes the predominance of each regime (random waves or localized shock-wave structures) and corresponds to a fixed shock rate \(\Gamma\approx 0.1\) s\({}^{-1}\). (b) PDF of the time \(dt_{S}\) between two successive shocks for all \(v_{A}\) values and \(\Gamma>0.1\) s\({}^{-1}\). The color bar is the same as in (a). The black dashed line is the best fit in \(e^{-0.78\mathit{dt}_{S}}\).

Figure 10(a) displays the phase diagram in the (\(\epsilon\), \(v_{A}\)) parameter space of the gravity-capillary wave turbulence regime and the shock-wave regime. The transition between the two regimes is shown at a chosen shock rate of \(\Gamma\approx 0.1\) s\({}^{-1}\) (see the dashed line). Moreover, we observe that \(\Gamma\) increases with \(\epsilon\) and \(v_{A}\), as expected, and that no shock appears, even for strong forcing, when \(v_{A}\) is small enough (\(v_{A}<0.25\) m/s). For these low \(v_{A}\), stronger forcing (not achievable in our setup) would probably end up in wave breaking instead of shocks. Figure 10(b) shows the probability distribution function (PDF) of the time lag \(dt_{S}\) for all \(v_{A}\) values and \(\Gamma>0.1\) s\({}^{-1}\), i.e., the shock-wave regime. The PDF is independent of \(v_{A}\) and \(\Gamma\) and decreases exponentially, meaning that the shock waves are, as expected, independent and random events. Let us now look at the probability distribution of the amplitudes of the shock waves, e.g., those occurring in Figs. 4(a) and 4(c). To do so, we compute the probability density functions of the first-order \(\delta\eta\) and second-order \(\delta^{(2)}\eta\) differences of the shock-wave amplitude for different \(v_{A}\), as shown in Fig. 11. For low enough \(v_{A}\), the distributions remain roughly Gaussian, whereas for high enough \(v_{A}\) they develop well-defined power-law tails. It is worth noting that the power-law tail appears only for \(v_{A}\geq 0.3\) m/s, as for the occurrence of shock waves (see Fig. 10). The power-law tail clearly converges to \(\delta\eta^{-4.3}\) for the first-order difference and to \(\delta^{(2)}\eta^{-3.0}\) for the second-order difference at high \(v_{A}\). A power-law tail distribution of the first-order difference is predicted in the case of diluted shocks driven by the 1D random-forced Burgers equation [32; 33; 34; 35; 36]. The prediction of the power-law exponent is more controversial and depends in particular on the forcing correlation degree [35]. The PDF tail is predicted either in \(\delta\eta^{-4}\) [32] for finite viscosity or in \(\delta\eta^{-7/2}\) [33; 34] in the limit of vanishing viscosity. The observation in Fig. 11 of a power-law PDF for the first- and second-order differences thus confirms that the shock waves drive the wave spectrum scaling. The above prediction of the power-law exponents is close to the experimental one (\(-4.3\)). The deviation is probably due to viscous dissipation and to the fact that the experimental shocks do not generate a vertical front with a discontinuity of order one.
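For completeness, a sketch of how such increment PDFs and their tail exponents can be estimated from an elevation record (bin counts and the fit range are our assumptions):

```python
import numpy as np

# Sketch: PDFs of the normalized first- and second-order differences and a
# power-law fit of the positive tail, as in Fig. 11. Bin counts and the fit
# range are assumptions.
def increment_pdf(eta, order=1, bins=200):
    d = np.diff(eta, n=order)             # first- or second-order difference
    d = d / np.sqrt(np.mean(d**2))        # normalize by the standard deviation
    pdf, edges = np.histogram(d, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf

def tail_exponent(centers, pdf, lower=3.0):
    """Least-squares slope of log(pdf) vs log(x) on the positive tail x > lower."""
    sel = (centers > lower) & (pdf > 0)
    return np.polyfit(np.log(centers[sel]), np.log(pdf[sel]), 1)[0]

# usage on a measured elevation record eta (nondispersive case):
# c1, p1 = increment_pdf(eta, order=1)    # compare tail_exponent(c1, p1) with -4.3
# c2, p2 = increment_pdf(eta, order=2)    # compare with -3.0
```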
To our knowledge, the statistics of random shock waves involving second-order discontinuities, i.e., \(\delta^{(2)}\eta\) being a Dirac-\(\delta\) distribution, has not been addressed theoretically, but it would be of primary interest to compare such predictions with our experimental results.

Figure 11: (a) Probability distribution functions of the first-order difference \(\delta\eta\) normalized by its standard deviation \(\sigma_{\delta\eta}\) for increasing \(v_{A}\) (from blue to orange) for \(\epsilon\simeq 0.07\) on log-log scales. The dashed line shows the Gaussian distribution. The dash-dotted line shows the best power-law fit in \(\delta\eta^{-4.3}\) for \(v_{A}=0.51\) m/s. The inset shows the same on a semilogarithmic scale. (b) Same as in (a) but for the second-order difference \(\delta^{(2)}\eta\) with the best power-law fit in \(\delta^{(2)}\eta^{-3.0}\) for \(v_{A}=0.51\) m/s.

## VI Shock-wave spectrum

We have experimentally observed in Sec. IV.3 that the power spectrum \(S_{\eta}(\omega)\) scales as \(\omega^{-4}\) when it is dominated by second-order singularities. Let us now investigate the dependence of the spectrum \(S_{\eta}(\omega)\) on the other parameters. To derive the spectrum analytically, we follow the model of acoustic turbulence [44] but for second-order singularities, as Kuznetsov did for pointlike surface singularities [31]. If we assume that the second-order difference of the signal \(\delta^{(2)}\eta\) is only made of a set of \(N\) Dirac singularities, of amplitudes \(\Delta_{2}(\eta)\), located at the random times \(t=t_{S}\) (\(N\) is the total number of shocks and \(t_{S}\) are the moments at which they appear), one has \[\frac{\partial^{2}\eta}{\partial t^{2}}=\sum_{t=0}^{\mathcal{T}}\Delta_{2}(\eta) \delta(t-t_{S})/dt^{2}, \tag{6}\] where \(\Delta_{2}[\eta(t)]\equiv\eta(t+2dt)-2\eta(t+dt)+\eta(t)\) is the second-order difference amplitude, \(dt=1/f_{e}\), and \(\delta\) is the Dirac delta function. Using the Fourier transform of the surface elevation \(\eta(t)\), \(\hat{\eta}_{\omega}=\int_{0}^{\mathcal{T}}\eta(t)e^{i2\pi ft}dt\), performing two integrations by parts to include \(\partial^{2}\eta/\partial t^{2}\) in the Fourier transform, and then using Eq. (6) and the definition of the spectrum \(S_{\eta}(\omega)\equiv|\widehat{\eta}(\omega)|^{2}/\mathcal{T}\), we thus obtain \[S_{\eta}(\omega)=C_{S}\,\overline{\Delta_{2}^{2}}\,\Gamma\,\omega^{-4}/dt^{2}, \tag{7}\] with \(C_{S}=1\) and the shock rate \(\Gamma=N/\mathcal{T}=1/\overline{dt_{S}}\), with \(dt_{S}\) the time between two successive shocks. The shock-wave spectrum of Eq. (7) thus predicts an \(\omega^{-4}\) scaling (as experimentally found above), is proportional to the number of shocks, \(N\), and to the variance of their amplitude, \(\overline{\Delta_{2}^{2}}\), and is independent of \(v_{A}\). Note that the acoustic spectrum of shock waves of first-order singularities scales as \(\Gamma\overline{\Delta_{1}^{2}}\omega^{-2}\), with \(\Delta_{1}\equiv\eta(t+dt)-\eta(t)\) [44; 31; 45]. More generally, for singularities of order \(n\), the power spectrum reads \(S_{\eta}(\omega)=C_{S}\overline{\Delta_{n}^{2}}\Gamma\omega^{-2n}/dt^{2(n-1)}\), showing that the higher \(n\) is, the denser the shocks have to be to dominate the spectrum. To test the prediction of Eq.
(7), we compute experimentally the second-order difference of \(\eta(t)\) taking only the shock waves into account, i.e., we keep the maxima of the detected shock-wave events and remove the residual noise coming from the regular waves (see the red crosses in the inset of Fig. 13). For the sake of clarity, the notation for \(\Delta_{2}\) is not changed in the following. Figure 12(a) then shows the experimental compensated spectrum \(S_{\eta}(\omega)\omega^{4}\), which is found to be constant over almost one decade in frequency and independent of \(v_{A}\), as expected from Eq. (7), for a roughly constant shock rate \(\Gamma\). This independence is of paramount interest and contrasts with the weak wave turbulence case, in which the energy cascade is strongly dependent on the dispersion relation. In the shock-wave regime, only the singularities and their statistics drive the spectrum once \(v_{A}\) is high enough (\(v_{A}>0.4\) m/s). The increase of the experimental compensated spectrum with \(\Gamma\) is displayed in Fig. 12(b). It clearly shows that the shock rate \(\Gamma\) drives the value of the spectrum amplitude. When \(\Gamma>0.4\) s\({}^{-1}\), the scaling in \(\omega^{-4}\) is achieved [see the flat compensated spectra above the horizontal blue dashed line in Fig. 12(b)]. Looking experimentally at the scaling of the spectrum with the shock rate \(\Gamma\) and the variance of the amplitudes \(\overline{\Delta_{2}^{2}}\) is more challenging, since fixing their values independently is not possible. However, Fig. 13(a) shows that the value of the compensated spectrum \(\langle S_{\eta}(\omega)\omega^{4}\rangle_{f}\), averaged within \(80<f<170\) Hz, increases linearly with \(\Gamma\overline{\Delta_{2}^{2}}\), as expected from Eq. (7), when the shock-wave regime is reached, i.e., for \(\Gamma>0.85\) s\({}^{-1}\). The theoretical spectrum of Eq. (7) is thus fully verified experimentally, since the experimental constant \(C_{S}=6.5\) is found to be of the same order of magnitude as the expected unit value.

Figure 12: Frequency compensated spectra \(S_{\eta}(\omega)\omega^{4}\) for (a) different \(v_{A}\) and almost constant shock rate \(0.8<\Gamma<1\) s\({}^{-1}\) and (b) different values of \(\Gamma\in[0,1.2]\) s\({}^{-1}\) (i.e., \(v_{A}\in[0,0.55]\) m/s) on a log-log plot. The horizontal blue (red) dashed lines separate the wave turbulence from the intermediate (resp., full shock wave) regimes. The gray area is the frequency bandwidth of the random forcing.
We found three different regimes depending on the shock rate value: When \(\Gamma<0.4\) s\({}^{-1}\), the shock waves are not significant enough and gravity-capillary wave turbulence occurs [below the blue dashed lines in Figs. 12(b), 13(a), and 13(b)]; when \(0.4<\Gamma<0.85\) s\({}^{-1}\), shock waves are significant enough to develop a spectrum of second-order singularities in \(\omega^{-4}\) but not enough to get the full spectrum of Eq. (7) [between the blue and red dashed lines in Figs. 12(b), 13(a), and 13(b)]; and when \(\Gamma>0.85\) s\({}^{-1}\), the full spectrum of discontinuities from Eq. (7) is achieved [beyond the red dashed lines in Figs. 12(b), 13(a), and 13(b)]. In the latter regime, note that despite nondispersivity, the Zakharov-Sagdeev spectrum of acoustic weak-wave turbulence [19], recently observed numerically [52], is not achieved. Shock waves indeed prevent weak turbulence. Note also that in this regime, the spectrum depends on the shock rate \(\Gamma\) and so is linked to the input power. Even if a critical balance (\(\tau_{l}>\tau_{\rm{nl}}^{S}\)) is achieved (see Fig. 9) [2], the spectrum obtained here does not follow the Phillips spectrum that is predicted to saturate and to be independent of the input power [53]. The agreement with a spectrum of singularities, i.e., Kuznetsov-like spectrum, rather than the Phillips spectrum is here discovered experimentally for hydrodynamics surface waves and has been also observed numerically for elastic plates [54; 55]. ## VII Conclusion We have studied the transition from quasi-1D dispersive wave turbulence to an acoustic-like nondispersive regime. To do so, we used a magnetic fluid within a canal, subjected to an external horizontal magnetic field, to tune the dispersivity of waves on the surface of the fluid. For a low magnetic field, we recovered the classical wave turbulence regime driven by nonlinear resonant interactions [29]. For a high enough field, shock waves occur randomly involving second-order discontinuities, i.e., the second-order difference of the wave amplitude is a Dirac \(\delta\). The frequency power spectrum of this shock-wave regime is found to scale as \(\omega^{-4}\) and to be proportional to the shock rate and to the variance of the shock amplitudes, provided the shock rate is high enough. These experimental findings are well captured by a Kuznetsov-like spectrum of a random Dirac-\(\delta\) distribution involving second-order singularities. The transition from wave turbulence to the shock-wave regime is also evidenced by measuring the energy flux. As expected, the latter is found to be constant in the wave turbulence regime and to decrease over scales in the shock-wave regime due to the damping of shock waves storing energy at all scales. When shock waves are prevalent the timescale separation hypothesis of weak turbulence theory is no longer validated experimentally and a critical balance occurs instead. The shock-wave statistics is then studied and a phase diagram between wave turbulence and the shock-wave regime is shown as a function of the control parameters. The probability density functions of the first- and second-order differences of the surface elevation are computed and found to exhibit a power-law tail with an exponent close to the predictions of the 1D random-forced Burgers equation [32, 33, 34, 35, 36]. The observation of this shock-wave regime, discovered here for surface waves, is significant for two reasons. 
First, the assumption of weak turbulence theory of dispersive waves has been tested experimentally with this setup, and the results show that the presence of shock waves prevents a wave turbulence regime from being reached. Second, the energy cascades in wave turbulence due to local resonant interactions, whereas in the shock-wave regime the energy is mainly stored in shock waves, which are coherent structures rich in the frequency domain. These singularities travel over the canal length, keeping their shapes, but are damped by viscous dissipation. Theoretical and numerical works would be of paramount interest to understand in more detail the transition reported here. It would also be significant to extend the bridge between the shock-wave regime reported here, built on second-order singularities, and the 1D random-forced Burgers equation. Finally, high-order statistics could be investigated experimentally in such a shock-dominated acoustic regime, in particular to test intermittency and the anomalous scalings of structure functions predicted by 1D random-forced Burgers turbulence [56; 57; 58].

###### Acknowledgements.

This work was supported by the French National Research Agency (ANR DYSTURB Project No. ANR-17-CE30-004 and ANR SOGOOD Project No. ANR-21-CE30-0061-04) and the Simons Foundation MPS No. 651463-Wave Turbulence (USA).

## Appendix A Ferrofluid characteristics

The magnetization curve \(M(B)\) of the PBG400 ferrofluid is plotted in Fig. 14 and is provided by the Ferrotec manufacturer. It enables us to compute the variation of \(v_{A}\) with the magnetic induction \(B\) (see the inset of Fig. 14).

Figure 14: Magnetization curve \(M(B)\) of the PBG400 ferrofluid provided by the Ferrotec manufacturer. The gray part represents the fields achievable experimentally. The inset shows the theoretical velocity \(v_{A}\) as a function of the applied magnetic induction \(B\) corresponding to the gray part of the main figure.

## Appendix B Time-frequency spectrum

The time-frequency spectrum of the surface elevation obtained by a wavelet transform is plotted in Fig. 15. In the dispersive case [Fig. 15(a)], the energy cascades continuously over frequency scales and time until viscous dissipation occurs around 100 Hz. In the nondispersive case [Fig. 15(b)], localized coherent structures occur randomly and carry energy at all frequency scales.

Figure 15: Time-frequency spectrum of the surface elevation signals obtained by a wavelet transform for (a) the dispersive case (\(v_{A}=0\) m/s) and (b) the nondispersive case (\(v_{A}=0.51\) m/s).

## Appendix C Typical wave amplitude

The evolution of the typical wave amplitude \(\sigma\) with \(v_{A}\) is plotted in Fig. 16. It increases with \(v_{A}\), except for the maximum value of \(v_{A}\), where the Maxwell stress due to the external magnetic field probably flattens the wave amplitude.

Figure 16: Evolution of the standard deviation \(\sigma=\sqrt{\overline{\eta^{2}}}\) of the surface elevation \(\eta(t)\) as a function of the magnetic parameter \(v_{A}\) for a constant steepness \(\epsilon\simeq 0.07\).

## Appendix D Shock wave formation

The response of the surface to a single pulse forcing is shown in Fig. 17 for different values of \(v_{A}\). No shock wave occurs at small \(v_{A}\) [Figs. 17(a) and 17(b)] due to the dispersion. In the nondispersive case [Fig. 17(c)], a shock wave is formed and travels along the canal, keeping a constant shape with a discontinuity.
Figure 17: Spatial evolution of a surface wave in response to a single pulse forcing for increasing times (spaced by 25 ms, from blue to purple) for the (a) dispersive (\(v_{A}=0\) m/s), (b) intermediate (\(v_{A}=0.3\) m/s), and (c) nondispersive (\(v_{A}=0.51\) m/s) cases. The arrows indicate the discontinuity location over time.
2306.17266
Subgraph Stationary Hardware-Software Inference Co-Design
A growing number of applications depend on Machine Learning (ML) functionality and benefit from both higher quality ML predictions and better timeliness (latency) at the same time. A growing body of research in computer architecture, ML, and systems software literature focuses on reaching better latency-accuracy tradeoffs for ML models. Efforts include compression, quantization, pruning, early-exit models, mixed DNN precision, as well as ML inference accelerator designs that minimize latency and energy, while preserving delivered accuracy. All of them, however, yield improvements for a single static point in the latency-accuracy tradeoff space. We make a case for applications that operate in dynamically changing deployment scenarios, where no single static point is optimal. We draw on a recently proposed weight-shared SuperNet mechanism to enable serving a stream of queries that uses (activates) different SubNets within this weight-shared construct. This creates an opportunity to exploit the inherent temporal locality with our proposed SubGraph Stationary (SGS) optimization. We take a hardware-software co-design approach with a real implementation of SGS in SushiAccel and the implementation of a software scheduler SushiSched controlling which SubNets to serve and what to cache in real time. Combined, they are vertically integrated into SUSHI, an inference serving stack. For the stream of queries, SUSHI yields up to 25% improvement in latency and a 0.98% increase in served accuracy. SUSHI can achieve up to 78.7% off-chip energy savings.
Payman Behnam, Jianming Tong, Alind Khare, Yangyu Chen, Yue Pan, Pranav Gadikar, Abhimanyu Rajeshkumar Bambhaniya, Tushar Krishna, Alexey Tumanov
2023-06-21T16:02:52Z
http://arxiv.org/abs/2306.17266v1
# SubGraph Stationary Hardware-Software Inference Co-design

###### Abstract

A growing number of applications depend on Machine Learning (ML) functionality and benefit from both higher quality ML predictions and better timeliness (latency) at the same time. A growing body of research in computer architecture, ML, and systems software literature focuses on reaching better latency/accuracy tradeoffs for ML models. Efforts include compression, quantization, pruning, early-exit models, mixed DNN precision, as well as ML inference accelerator designs that minimize latency and energy, while preserving delivered accuracy. All of them, however, yield improvements for a single static point in the latency/accuracy tradeoff space. We make a case for applications that operate in dynamically changing deployment scenarios, where no single static point is optimal. We draw on a recently proposed weight-shared _SuperNet_ mechanism to enable serving a stream of queries that uses (activates) different _SubNets_ within this weight-shared construct. This creates an opportunity to exploit the inherent temporal locality with our proposed _SubGraph_ Stationary (SGS) optimization. We take a hardware-software co-design approach with a real implementation of SGS in **SushiAccel** and the implementation of a software scheduler **SushiSched** controlling which _SubNets_ to serve and what to cache in real time. Combined, they are vertically integrated into **SUSHI**--an inference serving stack. For the stream of queries, **SUSHI** yields up to 25% improvement in latency and a 0.98% increase in served accuracy. **SUSHI** can achieve up to 78.7% off-chip energy savings.

## 1 Introduction

The number of applications leveraging Machine Learning (ML) functionality continues to grow, as ML is successfully applied beyond image classification (Ovtcharov et al., 2015), object detection/recognition (Chen et al., 2017; Ali et al., 2018), sentiment analysis (Jiang et al., 2020), and next word prediction (Sundermeyer et al., 2012). These applications are also increasingly latency sensitive. Their interactive experience depends on what fraction of prediction tasks are satisfied within the application-specified latency budget (typically in the 10-100 ms interactive latency range). Examples of such applications include self-driving cars (Gog et al., 2022), specifically the on-board software responsible for multi-modal sensory data processing, street sign detection (Tabernik and Skocaj, 2019), pedestrian detection (Liu et al., 2019), vehicle trajectory tracking (Deo and Trivedi, 2018), lane tracking (Datta et al., 2020), and Intensive Care Unit stability score prediction (Hong et al., 2020). These applications require the ability to serve trained ML models in a way that maximizes the fraction of queries completed within the application-specified latency budget--defined as latency Service Level Objective (SLO) attainment. A unifying characteristic of this class of applications is that they simultaneously care about the quality (accuracy) and timeliness (latency) of the ML inference served. There has been a body of work successfully improving achievable latency/accuracy tradeoffs for specific Deep Learning models.
Examples include multiple forms of quantization (Bai et al., 2018; Zhang et al., 2018; Pouransari et al., 2020; Fang et al., 2020), mixed DNN precision (Abdelaziz et al., 2021), compression (Iandola et al., 2016), pruning (Liu et al., 2018), and latency-aware neural architecture search (Cai et al., 2018; Eriksson et al., 2021), just to name a few. However, fundamentally, all of these techniques optimize for a _single static_ point in the latency/accuracy tradeoff space. Indeed, for a given deployment device, the outcome is typically a single static model that has a specific (latency, accuracy) tuple associated with it. We claim this is no longer sufficient. We observe that the applications with acute latency/accuracy sensitivity typically operate in _dynamically_ variable deployment conditions. These include variable query traffic patterns (e.g., a variable number of patients triaged in the ICU or ER), on-device battery power level (e.g., bed-side compute or a battery-powered edge device), and query complexity (e.g., autonomous vehicle (AV) navigation of sparse suburban vs. dense urban terrain). Under such variable deployment conditions, a choice of _any_ single static model from the latency/accuracy tradeoff space will be suboptimal. Indeed, a higher accuracy model may result in dropped queries during periods of transient overload. A lower accuracy model may yield suboptimal prediction quality under low load--both unnecessarily under-perform. Inherently, the ideal solution would include dynamically picking a "best-fit" model from the latency/accuracy tradeoff space. For a specific latency constraint that varies over time, a just-in-time choice of the highest accuracy model satisfying this constraint is preferred. Thus, the ability to switch (or navigate) between points in the latency/accuracy tradeoff space in real time is intuitively required for such applications. We identify one such mechanism that enables this -- weight-shared _SuperNets_ (Cai et al., 2019) (§2.1). This neural network construct consists of multiple convolutional neural networks (CNNs) sharing common model parameters. It simultaneously encapsulates "deep and thin" models as well as "wide and shallow" ones within the same structure without weight duplication. These _SuperNets_ can be used to activate different _SubNets_ without explicitly extracting them into separate, independently stored models. This is highly efficient from the systems perspective, as it obviates the need to store these model variants separately (saving memory cost) and enables rapidly switching the _SubNets_ that are "activated" to serve different incoming queries. On the hardware end, the need for real-time inference has led to a plethora of ML accelerators. A key optimization technique (e.g., "dataflow" (Chen et al., 2016)) leveraged by most accelerators involves _reusing_ activations and/or weights across multiple computations, leading to architectures that can be classified as weight stationary, output stationary, input stationary, row stationary, and hybrid variations of these (Chen et al., 2016). These dataflows rely on neural network layers, specifically 2D convolutions, being compute-bound. One challenge of serving _SubNets_ with diverse shapes, however, as we identify, is the memory-bound nature of some of the _SubNets_ (smaller FLOPS/Byte). To address this challenge, we make a key observation that the weight-shared _SuperNet_ mechanism inherently results in queries activating commonly shared _SubGraph_s within the same _SuperNet_ structure1.
Furthermore, we note a significant amount of _temporal locality_ in the weights of the _SuperNet_ reused _across_ queries. We identify this as an opportunity for a new kind of data reuse, which we name _SubGraph_ Stationary (SGS) optimization--a technique we have not seen used or proposed in any existing accelerator. We realize the benefits of SGS by implementing hardware caching support for weight reuse at the granularity of neural network _SubGraph_s.

Footnote 1: We define a _SubGraph_ as a subgraph consisting of any subset of weights from the _SuperNet_ connected together into a graph.

In addition to the SGS-aware hardware implementation, we co-design an SGS-aware query scheduler that decides (a) which _SubNet_ to activate for each query and (b) which _SubGraph_s to cache. We propose an algorithmic approach to make these control decisions based on (a) a query's specified accuracy constraint and (b) the current state of the accelerator (which we abstract). We demonstrate that these control decisions benefit from hardware state awareness, as baseline state-unaware caching leaves room for improvement. Finally, we propose an abstraction that enables the query scheduling policy to generalize, while remaining accelerator state-aware. The abstraction is captured by a black-box table (Fig. 4) that exposes the latency of activating a _SubNet_ \(i\) as a function of a currently cached _SubGraph_ \(j\). We instantiate the concept of _SubGraph_ Stationary (SGS) cross-query optimization in our vertically integrated inference serving stack, **SUSHI**, which includes (a) **SushiAccel**--a real FPGA implementation of hardware support for SGS-aware weight-shared _SuperNet_ inference, and (b) **SushiSched**--a software scheduler that makes real-time control decisions on a stream of queries executed on **SushiAccel**, sequentially deciding for each query which _SubNet_ \(i\) to activate and (periodically) which _SubGraph_ \(j\) to cache on the accelerator. **SushiAccel** and **SushiSched**, combined in **SUSHI**, enable _agile_ navigation of the latency/accuracy tradeoff space, reaching better latency/accuracy tradeoffs by leveraging the key property of cross-query temporal locality inherent to weight-shared _SuperNets_, with what we believe to be the first hardware-software co-design for weight-shared inference. The key contributions of this paper can be summarized as follows:

* a concept of the _SubGraph_ Stationary (SGS) approach for hardware acceleration of DNN inference on weight-shared _SuperNets_.
* **SushiAccel**--a real SGS-aware FPGA implementation, with a simulator and design space exploration tools.
* **SushiSched**--a software query scheduler that operates in an SGS-aware fashion, controlling which _SubNets_ to activate and which _SubGraph_s to cache in real time.
* **SUSHI**--a hardware-software co-designed inference serving stack, vertically integrating **SushiAccel** and **SushiSched**.
* **SushiAbs**--an abstraction that generalizes SGS-aware query scheduling to arbitrary accelerators, while retaining implicit accelerator state awareness.

Combined, **SUSHI** is able to achieve up to 25% query serving latency improvement with a 0.98% accuracy improvement. **SUSHI** can also save a significant amount of off-chip energy (78.7%) in simulation with realistic board configurations.

## 2 Background and Motivation

We start with background on weight-shared neural networks in §2.1. Then we motivate and expose the opportunity for hardware support of weight-shared _SuperNet_ inference (§2.2).
The need for hardware-software co-design follows from the challenges in §2.3. The hardware-software abstraction of §2.4 is introduced for generality.

### Weight-Shared Deep Neural Networks (WS-DNNs)

Recent advances in deep learning propose weight-shared deep neural networks (Cai et al., 2019; Sahni et al., 2021; Yu et al., 2020), whose _SuperNet_ structures can be used to enable inference on Deep Neural Networks (DNNs) across a diverse set of deployment scenarios (both dynamic and static). Weight-shared DNNs (WS-DNNs) induce a rich trade-off between accuracy and latency (Fig. 1b). WS-DNN inference fundamentally changes the traditional view of optimizing inference latency, which focuses on a single forward-pass query. Instead, WS-DNN inference makes it possible to satisfy the latency-accuracy requirements for a _stream of queries_, with each query potentially requesting a different point in the trade-off space. This positions WS-DNNs as a salient candidate for a variety of applications (Halpern et al., 2019; Hsieh et al., 2018; Reddi et al., 2020) and inference-serving systems (Romero et al., 2021) that benefit from navigating the latency/accuracy trade-off. The key property of these networks is that different DNNs (_SubNets_), which may differ in several elastic dimensions, including depth and width, partially share their weights as part of a single large DNN (_SuperNet_). As a result, the _SuperNet_ contains all other _SubNets_ within it (Fig. 1a). These _SubNets_ can be directly used to render predictions without any further re-training. To get predictions from a specific _SubNet_, elastic dimensions are specified in order to select the appropriate weights from the _SuperNet_ for the forward pass. These elastic dimensions typically include the depth and, for each convolutional layer, the number of filters/channels and the kernels. The elastic dimensions of the _SuperNet_'s neural net architecture are exploited to attain this elasticity. A typical _SuperNet_ architecture, such as OFAResNet or OFAMobileNet, is organized as a collection of stages. Each stage consists of repeating blocks, such as a Bottleneck block in OFAResNets. Each block in turn contains multiple convolution layers. The depth elastic dimension selects the top \(k\in[2;4]\) blocks per stage of the _SuperNet_. The expand ratio (another elastic dimension) selects the top \(k\) kernels of the convolution layer in each block. As a result, the smallest _SubNet_'s weights are shared by all other _SubNets_, and the weights of the largest _SubNet_ contain all other _SubNets_ within them. Hence, there is always some amount of common weight sharing between _SubNets_, with the cardinality of overlap ranging from the smallest to the largest _SubNet_.

### Need for Hardware Support for WS-DNN Inference

The goal of hardware acceleration for ML inference is to serve a query with minimal latency and maximal accuracy. This goal becomes even more pronounced for WS-DNN inference, where each query may be served with different latency/accuracy requirements (Fig. 1b) (Cai et al., 2019; Sahni et al., 2021). Achieving this goal is challenging due to the memory-boundedness of some of the convolutional layers (Kao et al., 2022; Siu et al., 2018). This is especially true for the more recent smaller models that have lower arithmetic intensity (FLOPS/Byte), and when they are deployed on bandwidth-constrained embedded boards (Wang et al., 2019; Wei et al., 2019; Chen et al., 2016; Jokic et al., 2020; Siu et al., 2018; Chen et al., 2022). We quantify this in Fig.
2, where we observe that a large fraction of convolution layers running on a canonical edge accelerator are memory-bound2.

Figure 1: WS-DNN properties.

Figure 2: Arithmetic intensity for different layers of various DNNs. Lower arithmetic intensity leads to relatively higher _memory_ intensity in MBV3 and ResNet50's latter layers.

Footnote 2: In the same network, relatively lower arithmetic intensity corresponds to a higher chance of becoming memory-bound.

This is problematic, since a significant portion of end-to-end inference latency and energy consumption comes from memory-bound layers, given the high latency and energy cost of data movement from memory to on-chip storage (Chen et al., 2016; Yuan et al., 2021). Hence, for the same amount of FLOPS, it is very important to convert memory-bound layers to compute-bound ones in order to reduce end-to-end inference latency and energy consumption. To do so, we leverage our key insight that WS-DNN inference on a stream of queries exhibits temporal locality. As different queries use different _SubNets_, many of them reuse the same weights shared among those _SubNets_, by design. We employ this insight to help convert memory-bound layers to be more compute-bound. Conceptually, this can be accomplished by reusing the shared weights used by previous queries for the next query in a stream, knowing that they all activate _SubNets_ within the same shared _SuperNet_ structure. This creates an opportunity for reuse _across queries_, in sharp contrast to techniques commonly explored and exploited in the computer architecture community for a _single_ query for intra-model optimizations, such as weight-stationary, row-stationary, input-stationary, and output-stationary (Chen et al., 2017, 2016; Fleischer et al., 2018; Venkatesan et al., 2019). We call this novel form of reuse _SubGraph_ Reuse, since the common shared weights form a _SubGraph_ (e.g., created as the intersection of the computational graphs of any two served _SubNets_). Note that in this paper we distinguish between _SubGraph_s and _SubNets_. A _SubNet_ is a subset of a _SuperNet_ that can be used for forward-pass inference to serve a query, while a _SubGraph_ is a subset of a _SubNet_. Note that any _SubNet_ is a _SubGraph_, but not vice versa. A natural way to leverage _SubGraph_ Reuse is to have a dedicated cache in the hardware accelerator. However, it comes with several challenges that we discuss in §2.3.

### Design Challenges in WS-DNN Inference Specialized Hardware

The proposed specialized hardware for WS-DNN inference exploits the temporal locality and enables _SubGraph_ Reuse. However, assigning a dedicated on-chip buffer comes with both software and hardware challenges.

**Hardware Challenges:** Due to the resource-restricted nature of many deployment devices, the cache size may be too small to cache entire _SubNets_. Thus, the hardware must operate at a _finer caching granularity_ of arbitrary _SubGraph_s instead. Deciding the size of the dedicated on-chip buffer is non-trivial. A small buffer size marginalizes the ability to exploit temporal locality. A larger dedicated on-chip buffer limits the compute area as well as the other on-chip buffer sizes that are leveraged for weight/row/input/output-stationary optimizations. Furthermore, the benefit of _SubGraph_ Stationary depends on the compute/memory-boundedness of the convolution workload, which is further related to the off-chip bandwidth and throughput of the hardware.
Therefore, variations in bandwidth and throughput will also affect the best cache size, which introduces more factors for consideration in the trade-off space.

**Software Challenges:** We argue that the latency of served _SubNets_ depends on the _SubGraph_ cached in the on-chip buffer. Fig. 3 provides a toy example to illustrate that (a) a deep and thin _SubNet_ gets a lower latency with a cached _SubGraph_ containing more layers compared to other cached _SubGraph_s with fewer layers and wider bottleneck blocks, and (b) a wide and shallow _SubNet_ achieves lower latency with a cached _SubGraph_ with wider and fewer layers (matching its shape). This creates two challenges in software: (a) the _SubNet_ selection decision to serve the current query must be _aware_ of the currently cached _SubGraph_ (state), and (b) the cached _SubGraph_ itself should be updated based on previously served _SubNets_ for optimized latency. In other words, the software needs to make _cache-state aware_ decisions to select the appropriate _SubNet_ and update the cached state based on the temporally local (e.g., recent) _SubNets_ that were used to serve recent queries.

### Hardware-Agnostic Software Scheduling

One final goal is to achieve generalizability for the software scheduler while retaining accelerator state awareness. The scheduler policy design could then generalize to any hardware that is able to support WS-DNN inference. Hence, there is a need to decouple the scheduler from the hardware, i.e., a change in the hardware should not require any changes in the scheduler policy code. We propose an abstraction between the software scheduler and the hardware accelerator that exposes the latencies of serving a set of _SubNets_ over a set of cached _SubGraph_s. We show that this gives the policy sufficient information about the hardware state in an accelerator-agnostic fashion. We discuss the mechanism for achieving this while managing the spatial complexity of such a lookup table in §3. We instantiate this mechanism in **SushiSched**, which we can now develop and improve independently, on any hardware accelerator.

## 3 System Design & Architecture

**SUSHI** serves a stream of queries with different latency/accuracy requirements. It consists of three major components -- the scheduler (**SushiSched**), the abstraction (**SushiAbs**), and the accelerator (**SushiAccel**) -- as shown in Fig. 4. **SUSHI** exploits the novel _SubGraph_ Reuse enabled via the interaction of its three components to serve queries with higher accuracy subject to latency constraints, or lower latency subject to accuracy constraints. We describe our proposed **SushiAbs** and **SushiSched** below. **SushiAccel** is described in detail in §4. The terminology used in this paper is captured in Fig. 5.

### **SUSHI**'s System Architecture

We describe the interaction between **SUSHI**'s components. Fig. 4 demonstrates a query path in **SUSHI**. The query enters the system with a certain latency and accuracy constraint. Then **SushiSched** makes a two-part control decision. First, it selects an appropriate _SubNet_ (i.e., \(SN_{t}\)) that can serve the current query \(q_{t}\). It makes this _SubNet_ selection with the help of **SushiAbs**. **SushiAbs** provides the scheduler with the ability to perform latency estimation when a specific _SubNet_ is served with a given _SubGraph_ cached. **SushiAbs** exposes this state in an accelerator-agnostic fashion. Second, **SushiSched** decides the next cached _SubGraph_.
The exact algorithm for this control decision is described in Alg. 1. The **SushiSched** control decision is then enacted by **SushiAccel**. The selected _SubNet_, the next cached _SubGraph_, and the query data are sent to **SushiAccel**. **SushiAccel** performs inference of the query using the selected _SubNet_. Model weights that are not already SGS-cached as part of the cached _SubGraph_ are fetched from off-chip to on-chip buffer space. Finally, the accelerator returns the results of performing inference on the _SubNet_ to **SushiSched** and enacts the _SubGraph_ caching control decision.

### **Abstraction**

**SushiAbs** abstracts the ability to perform latency estimation for a given _SubNet_ as a function of a cached _SubGraph_ in an accelerator-agnostic fashion. It enables **SushiSched** to make cached-_SubGraph_-aware control decisions. As these control decisions are performed on the critical path of the query, this enabling abstraction must be efficient both w.r.t. space (R1) and time (R2). Indeed, the set of all possible cached _SubGraphs_ is exponentially large for WS-DNNs (\(\gg 10^{19}\)) [14]. Thus, to achieve (R1), the abstraction limits the set of all possible cached _SubGraphs_ to a significantly smaller set \(\mathcal{S}\), such that \(|\mathcal{S}|\ll 10^{19}\). The sizes of the _SubGraphs_ in \(\mathcal{S}\) are selected to be close to the cache size. Hence, at any point in time, **SushiAccel** always caches _SubGraphs_ from \(\mathcal{S}\), and **SushiSched** also selects the _SubGraph_ to cache from \(\mathcal{S}\). The abstraction achieves (R2) by using a lookup table data structure with _SubNets_ as rows and _SubGraphs_ as columns. Hence, it takes the least amount of time to get the latency estimate of _SubNet_ \(i\) for a given _SubGraph_ \(j\). The size of the lookup table is \(O(|\mathcal{S}|\cdot|\mathcal{X}|)\approx O(|\mathcal{S}|)\), where \(\mathcal{X}\) denotes the set of serving _SubNets_, since we expect \(O(|\mathcal{X}|)\approx O(1)\).

### **SushiSched Design**

Input: _SubNets_ to be served \(SN_{i}\), \(i\in[1...N]\); _SubGraphs_ to be cached \(G_{j}\), \(j\in[1...M]\); latency table \(L[i][j]\).
Result: the _SubNet_ to be served for every query \(q_{t}=(A_{t},L_{t})\), \(t\in[0...Q]\), and the _SubGraph_ to be cached every \(Q\) iterations.
\(AvgNet\leftarrow[0,0,0...0]\); \(CacheState\leftarrow\varnothing\);
while \(q_{t}\) do
  if policy == STRICT_ACCURACY then
    \(id_{x}\) = argmin\({}_{latency}\)(\(L[i][CacheState]\) \(\forall i\in[1...N]\) s.t. \(SN_{i}\).accuracy \(\geq A_{t}\));
  else
    \(id_{x}\) = argmax\({}_{accuracy}\)(\(L[i][CacheState]\) \(\forall i\in[1...N]\) s.t. \(SN_{i}\).latency \(\leq L_{t}\));
  end
  every \(Q\) queries do
    \(AvgNet\).update(\(SN_{id_{x}}\), \(Q\));
    \(CacheState\) = argmin\({}_{j}\)(Dist(\(G_{j}\), \(AvgNet\)) \(\forall j\in[1...M]\));
  end
end while

**Algorithm 1** Scheduling Algorithm

Figure 3: Latency of two different _SubNets_ as a function of different cached _SubGraphs_. Different cached _SubGraphs_ are optimal for different served _SubNets_, with a non-trivial relationship based on the similarity of NN architecture parameters. **SushiSched** captures this similarity with a distance measure in §3.

On the software side, the scheduler receives a stream of queries, where each query is annotated with an (Accuracy, Latency) pair, denoted \((A_{t},L_{t})\).
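Recalling requirements (R1)-(R2) above, the lookup table can be modeled minimally as follows (our sketch; the paper does not specify the interface at code level, so all names here are hypothetical).

```python
# Minimal model (ours) of the SushiAbs lookup table: rows are SubNets,
# columns are the cacheable SubGraphs in the restricted set S.
from typing import Dict, Tuple

class LatencyTable:
    """Profiled latency of serving SubNet i while SubGraph j is cached."""

    def __init__(self, entries: Dict[Tuple[int, int], float]):
        # O(|S| * |X|) entries; |X| (served SubNets) is a small constant (R1).
        self._lat = entries

    def latency(self, subnet: int, subgraph: int) -> float:
        # O(1) lookup keeps the decision off the query's critical path (R2).
        return self._lat[(subnet, subgraph)]

# Hypothetical numbers: SubNet 0 is deep/thin, SubNet 1 is wide/shallow.
table = LatencyTable({(0, 0): 8.1, (0, 1): 11.4,
                      (1, 0): 12.3, (1, 1): 7.6})
assert table.latency(0, 0) < table.latency(0, 1)  # matches Fig. 3's intuition
```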
In this section, we describe exactly how the scheduler makes its _SubNet_ selection and _SubGraph_ caching control decisions.

**Per-query _SubNet_ (\(SN_{t}\)) Selection.** As shown in Fig. 4, the scheduler decision is guided by two primary considerations, which can be specified by the user: (\(i\)) serve strictly higher accuracy, or (\(ii\)) serve strictly smaller latency. In the case of strictly higher accuracy, the scheduler can choose from the feasibility set of all _SubNets_ with accuracy \(\geq A_{t}\). **SUSHI** serves the _SubNet_ that has minimum latency among all the _SubNets_ with accuracy \(\geq A_{t}\). Note that it is possible that the served latency does not satisfy the latency constraint \(\leq L_{t}\). In the case of strictly lesser latency, the scheduler serves the _SubNet_ that has maximum accuracy among all the _SubNets_ with latency \(\leq L_{t}\). Similarly, it is possible that the served accuracy does not satisfy the accuracy constraint \(\geq A_{t}\). Notice that the accuracy of a given _SubNet_ is fixed, whereas its latency depends on the _SubGraph_ cached in the PB. The scheduler employs a \(Latency-Table\) to get the latency values for a _SubNet_ given a cache state.

**Across-query _SubGraph_ Caching (\(S_{t+Q}\)).** The scheduler needs to decide what _SubGraph_ to cache after every \(Q\) queries (\(S_{t+Q}\)). To make this decision, the scheduler needs to represent the _SubGraphs_ and _SubNets_, use the information from the past \(Q\) queries, and predict the next _SubGraph_ that should be cached in the PB.

**Encoding _SubGraph_ NN Architecture.** The scheduler represents both the _SubNets_ and the _SubGraphs_ as vectors, as shown in Fig. 6. The scheduler uses the number of kernels \(K_{i}\) and the number of channels \(C_{i}\) of every layer \(i\) to create a vector of size \(2N\) for an \(N\)-layered neural network. For instance, the vectorized representation of a 3-layered neural network would be \([K_{1},C_{1},K_{2},C_{2},K_{3},C_{3}]\).

**Amortizing Caching Choices.** The scheduler keeps a running average of the past \(Q\) _SubNets_ that were served, as shown in Fig. 6 (middle). The running average serves as a good indicator of the kernels and channels that were frequently used in the _SubNets_ served for the past \(Q\) queries. If some kernels or channels were frequently used in the past \(Q\) _SubNets_, the values corresponding to these kernels or channels will be high in the vectorized representation. Notice that the running average can be considered an approximation of the intersection operation, but with more information: a pure intersection loses the information about kernels and channels that were frequent but not present in all the _SubNets_, whereas averaging preserves this information.

**Predicting the Next _SubGraph_ (\(S_{t+Q}\)).** The scheduler uses the distance from the running average of the past \(Q\) queries to predict the next _SubGraph_ to be cached, as shown in Fig. 6. The scheduler caches the _SubGraph_ that has the minimum distance from the average _SubNet_. The minimum distance ensures that the most frequent kernels and channels will be cached in the PB. In case fitting all of them is not possible, the minimum distance from the average _SubNet_ ensures that we pick the best-fit _SubGraph_ in terms of the channels and kernels frequently occurring in the _SubNets_ served by the scheduler.
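The sketch below (ours) paraphrases these two decisions in Python for the STRICT_ACCURACY policy; the names and data layout are ours, and, for brevity, the running average here is cumulative rather than over a sliding window of \(Q\) queries as in Alg. 1.

```python
import numpy as np

# Sketch (ours) of Algorithm 1's two decisions. A network is encoded as the
# vector [K1, C1, ..., KN, CN]; lat[(i, j)] is the latency-table entry for
# serving SubNet i with SubGraph j cached.

def pick_subnet(subnets, lat, cache_state, a_min):
    """STRICT_ACCURACY: cheapest SubNet with accuracy >= a_min.
    Assumes the most accurate SubNet satisfies any a_min in the stream."""
    feasible = [i for i, sn in enumerate(subnets) if sn["acc"] >= a_min]
    return min(feasible, key=lambda i: lat[(i, cache_state)])

def next_subgraph(subgraphs, avg_net):
    """Cache the SubGraph vector closest to the running-average SubNet."""
    return min(range(len(subgraphs)),
               key=lambda j: np.linalg.norm(subgraphs[j] - avg_net))

def serve(queries, subnets, subgraphs, lat, q_period=8):
    cache_state, served = 0, []
    avg_net = np.zeros_like(subgraphs[0], dtype=float)
    for t, a_min in enumerate(queries, start=1):
        i = pick_subnet(subnets, lat, cache_state, a_min)
        served.append(i)
        # Cumulative running average (Alg. 1 averages the last Q serves).
        avg_net += (subnets[i]["vec"] - avg_net) / t
        if t % q_period == 0:                 # re-decide the cache every Q
            cache_state = next_subgraph(subgraphs, avg_net)
    return served
```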
The algorithm for performing both scheduler decisions is described briefly in Algorithm 1. **SushiSched** receives input from the user, including the \(\mathit{SubGraphs}\), \(\mathit{SubNets}\), and \(\mathit{LatencyTable}\). AvgNet is the running average of the served \(\mathit{SubNets}\). The cache state is initially set to a random \(\mathit{SubGraph}\). **SushiSched** decides the \(\mathit{SubNet}\) to be served for a given query when accuracy is a hard constraint, i.e., serving strictly better accuracy. **SushiSched** can also decide the \(\mathit{SubNet}\) to be served if latency is a hard constraint, i.e., serving strictly lesser latency. It updates the running average of the \(\mathit{SubNets}\). Finally, **SushiSched** determines the \(\mathit{SubGraph}\) that is closest to the AvgNet and caches it into the PB.

Figure 4: System architecture overview. Given a stream of queries annotated with (Accuracy, Latency) pairs \(q_{1},..,q_{Q}\) and the current cache state \(C_{1}\), the scheduler chooses the _SubNet_ to be served \(SN_{i}\) for each \(i\)'th query and the next cache state \(G_{2}\) after every \(Q\) queries.

Figure 5: **SUSHI** terminology and variable definitions.

## 4 **SushiAccel** Implementation

### Hardware Design Challenges

As discussed earlier in §2 and §3, to support \(\mathit{SubGraph}\) Stationary, we propose to augment DNN accelerators with a custom cache called the Persistent Buffer (PB). The introduction of the PB leads to a new design space, because it competes for a finite on-chip buffer capacity (which needs to be partitioned across input activation, weight, and output activation tiles, as well as shared weights). To guarantee the best performance of a hardware design in such a design space, we develop a parameterizable hardware template that supports different hardware configurations.

### Architectural Components

In this part, we introduce the components of **SushiAccel** (Fig. 7) and how it supports all the proposed kinds of data reuse in Fig. 8.

#### 4.2.1 Compute Array

**Dot Product Engine (DPE).** The key building block of DNN accelerators is the ability to compute _dot-products_. For example, the Google TPU systolic array (Jouppi et al., 2017) computes fixed-size dot products in each column by keeping weights stationary and forwarding (streaming) inputs from one column to the other, NVDLA (NVIDIA, 2016) employs dedicated dot product engines (DPEs) of size 64, while flexible accelerators (Kwon et al., 2018; Qin et al., 2020) have DPEs of configurable sizes (enabled via all-to-all connectivity between the buffers and PEs). In this work, we pick fixed-size DPEs of size 9. Larger kernels are broken down into a series of \(3\times 3\) kernels and flattened across the multipliers for reduction using the adder tree. For small kernels (\(1\times 1\)), the \(C\) dimension is flattened across the multipliers to leverage input-channel parallelism.

**Parallelism.** To further increase throughput, we instantiate a 2D array of DPEs to boost throughput by leveraging parallelism and reuse, as shown in Fig. 8. As for parallelism, the number of rows indicates the total number of kernels being processed in parallel in the DPE Array, i.e., kernel-level parallelism (\(K_{P}\)), while the number of columns stands for the total number of input activation (iAct) channels being processed in parallel, i.e., channel-level parallelism (\(C_{P}\)). Both iActs and weights take the same interface to save wire cost and improve scalability.
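A small NumPy sketch (ours; purely functional, not the RTL) makes the \(K_{P}\times C_{P}\) organization concrete: each row keeps one kernel's nine weights stationary, each column handles one iAct channel, and a per-row adder tree reduces across channels.

```python
import numpy as np

# Functional sketch (ours) of the K_P x C_P DPE array of Fig. 7; the real
# engine streams data, while here everything is a plain array operation.
K_P, C_P = 2, 3   # kernel- and channel-level parallelism

def dpe(weights9, iacts9):
    """One dot-product engine: 9 multipliers feeding an adder tree."""
    return float(np.dot(weights9, iacts9))

def dpe_array(w, x):
    """w: (K_P, C_P, 9) stationary weights; x: (C_P, 9) streamed iActs."""
    out = np.zeros(K_P)
    for k in range(K_P):                                      # rows: kernels
        out[k] = sum(dpe(w[k, c], x[c]) for c in range(C_P))  # row adder tree
    return out

w = np.random.randn(K_P, C_P, 9)   # K_P kernels, C_P channels, 3x3 flattened
x = np.random.randn(C_P, 9)        # one 3x3 input window per parallel channel
assert np.allclose(dpe_array(w, x), np.einsum('kcn,cn->k', w, x))
```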
In the vertical axis, both weights and iActs pass through the DPEs of different rows in a store-and-forward fashion. During weight forwarding, each DPE keeps its targeted weights stationary; iActs are then streamed in and processed. In the horizontal axis, we replicate the same DPE independently to process different iAct channels and add an extra adder tree to reduce the results from DPEs in the same row.

#### 4.2.2 On-chip Buffers and Supported Data Reuse

We designed a custom on-chip buffer hierarchy to both store data in the layout preferred by the DPE array and support reuse opportunities not leveraged by the DPE array. The entire on-chip storage is divided into multiple separate buffers for different types of data, as illustrated by different colors in Fig. 7.

**Persistent Buffer (PB)**. The PB is designed to enable \(\mathit{SubGraph}\) Reuse. For example, **SushiAccel** loads the \(\mathit{SubGraph}\) (kernel 1) in Fig. 7(d) from off-chip memory only once and stores it inside the PB, such that it can be reused when switching between \(\mathit{SubNet}\) 1 and \(\mathit{SubNet}\) 2.

**Dynamic Buffer (DB)**. The DB is a typical on-chip storage for the distinct weights of the requested \(\mathit{SubNet}\). By adopting a PB, only non-common weights need to be fetched from off-chip to on-chip storage. For example, in Fig. 7(d), all kernels except the common part (kernel \(2\) to kernel \(N\)) are loaded into the DB when targeting \(\mathit{SubNet}\) 1, and are replaced by kernel \(M\) to kernel \(M+N\) when switching to \(\mathit{SubNet}\) 2. The DB is implemented as a ping-pong buffer, as indicated by DB1 and DB2 in Fig. 7, to hide the latency of fetching distinct weights from the off-chip DRAM.

**Streaming Buffer (SB)**. The SB is designed to store entire iActs and support _iAct Reuse - Multiple kernels_ (Fig. 7(b)).

**Line Buffer (LB)**. The LB works as a serial-to-parallel conversion (Wang et al., 2021): the line buffer takes a single pixel from the SB and moves it internally.

Figure 6: The scheduler represents each neural network as a vector using the number of kernels and channels for each layer. The scheduler maintains a running average of the \(\mathit{SubNets}\) that were served for the past \(Q\) queries. For every \(Q\) queries, the scheduler caches the \(\mathit{SubGraph}\) that is the closest to the average \(\mathit{SubNet}\).

**Output Buffer (OB)**. The OB provides in-place accumulation for oActs of different channels, such that only the final oActs are sent off-chip, saving the data movement of partial sums.

**ZP/Scale Buffer (ZSB)**. The ZSB serves as the on-chip storage for the zero point and scale for quantized inference.

### **SushiAccel** Dataflow

#### 4.3.1 Latency Reduction from Inter-Query Dataflow

The inter-query processing timeline of **SushiAccel** is shown in Fig. 8(a), where stage B indicates the movement of the common _SubGraph_ from off-chip to the on-chip PB. The latency saving of **SushiAccel** comes from eliminating redundant off-chip _SubGraph_ accesses, as illustrated in Fig. 8(a), where **SushiAccel** reduces common _SubGraph_ off-chip access (stage B) to only once on the critical path, instead of multiple times in the design w/o PB.

#### 4.3.2 Hiding Latency from Intra-layer Dataflow

Within each convolution layer, **SushiAccel** processes the layer at the granularity of the weight tiles shown in Fig. 8(b). The different stages (i.e., A-L) defined in Fig. 7 represent the movement of specific data.
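To see the inter-query saving of §4.3.1 in the simplest terms, the sketch below (ours) counts off-chip weight traffic with and without the PB; the shared-weight size is the ResNet50 figure from §5.1, the query stream is hypothetical, and we idealize the PB as holding the whole shared _SubGraph_.

```python
# Counting off-chip weight traffic for a query stream (ours). Idealization:
# without a PB every query refetches its full SubNet; with a PB the shared
# SubGraph (7.55 MB for ResNet50, Sec. 5.1) crosses off-chip only once.
SHARED_MB = 7.55

def traffic_without_pb(subnet_sizes_mb):
    return sum(subnet_sizes_mb)

def traffic_with_pb(subnet_sizes_mb):
    # Stage B happens once; afterwards only distinct weights move per query.
    return SHARED_MB + sum(s - SHARED_MB for s in subnet_sizes_mb)

stream = [7.58, 27.47, 12.0, 7.58]       # hypothetical served SubNet sizes
print(traffic_without_pb(stream))        # 54.63 MB
print(traffic_with_pb(stream))           # 31.98 MB
```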
To further hide off-chip data-access latency from the critical path, we implement a double buffer for distinct weights (the ping-pong dynamic buffers DB1 and DB2 shown in Fig. 7), so that the off-chip latency of fetching distinct weights is hidden behind the computation latency. This is indicated by stages D1 and D2 being hidden behind stages F-G-J-K, shown with arrows in Fig. 8(b).

## 5 Experimental Results

### **System Setup**

**Workload:** We choose the weight-shared versions of ResNet50 and MobV3 as two _SuperNets_ (Cai et al., 2019). To evaluate **SUSHI** across the full range of the Pareto frontier, we pick a sequence of 6 and 7 _SubNets_ from ResNet50 and MobV3, respectively. The sizes of the ResNet50 _SubNets_ range over [7.58 MB, 27.47 MB], while the sizes of the MobV3 _SubNets_ range over [2.97 MB, 4.74 MB]. Shared weights take up 7.55 MB and 2.90 MB for ResNet50 and MobV3, respectively3. _SubNets_ are obtained using the procedure described in OFA (Cai et al., 2019).

Footnote 3: Weights, input activations, and zero points are quantized to int8, and the quantization scale is quantized to int32.

**Metrics:** Latency in this section refers to the end-to-end serving latency of a given model, while accuracy refers to top-1 accuracy. Both accuracy and latency are defined for _SubNets_ only. _SubGraphs_ are used only for caching purposes, as subsets of _SubNets_.

**Architecture Analytic Model:** We have developed an analytic model which estimates the behavior of **SushiAccel**, in order to explore the design space by configuring the architecture with parameters.

Figure 7: The overall **SushiAccel** architecture (\(K_{P}=2,C_{P}=3\)).

Figure 8: Data reuse opportunities in serving different _SubGraphs_ leveraged within **SushiAccel**.

Our model accurately predicts the latency trend of **SushiAccel** using the profiled latency of **SushiAccel** on both workloads, enabling us to perform an exhaustive search of all parameter combinations within specified constraints. This approach allows for the identification of optimal configurations for improved performance in both simulation and real-world deployment.

**Roofline Analysis:** We also extended a roofline analysis tool to study the effect of the PB on the boundedness of **SushiAccel** under different workloads.

**Deployment Platforms:** We implemented the proposed **SushiAccel** on two FPGAs: ZCU104 (5 W) and Alveo U50 (75 W). We compare our **SushiAccel** w/ PB and w/o PB against the Xilinx DPU and a CPU (Intel i7 10750H, 45 W).

**Scheduler Simulator:** We have developed **SushiSched**, which runs on the CPU and guides **SushiAccel** on how to serve the current query: (a) which _SubNet_ to serve and (b) which _SubGraph_ to place in the PB.

### **SUSHI Impact on Arithmetic Intensity**

To understand the benefits of SGS, we perform the roofline analysis shown in Fig. 10 and Fig. 11, where "roofline" denotes the normal roofline curve, while the SGS roofline virtually improves the overall off-chip bandwidth by saving off-chip data accesses, leading to the improved roofline curve labeled "SGS roofline". The experiments are performed on a system with 19.2 GB/s off-chip memory bandwidth and 1.296 TFlops throughput running at 100 MHz (Reuther et al., 2022). The latency breakdown results in Fig. 10 show that SGS can potentially remove the off-chip weight-access latency from the critical path, such that the individual latency of serving a stream of queries from the Pareto frontiers can be reduced by [6%, 23.6%] for MobV3 and [5.7%, 7.92%] for ResNet50.
Such latency reduction essentially comes from shifting model boundedness. SGS pushes models toward being compute-bound, which increases the utilization of the available compute resources for higher throughput and reduces latency and energy consumption. This shifting is illustrated by the blue dots being pushed toward the red dots in Fig. 11.

### **SushiAccel Configuration Impact**

In this subsection, we explore the impact of three main factors (i.e., bandwidth, throughput, and PB size) of **SushiAccel** on the overall end-to-end serving latency.

#### 5.3.1 Bandwidth - Buffers Arrangement

Different types of data require different bandwidths. A unified buffer for all data types demands that the controller handle potentially all-to-all connections between buffers and all compute units, while a split-buffer design only needs a direct connection between a buffer and its compute units, which reduces the complexity of both the datapath and the controller. Each buffer is a 2D array whose size equals \(width\times height\); the width refers to the bandwidth a buffer can supply every cycle. The bandwidth demands of the different buffers are shown in Tab. 1; they are determined by both the workloads and the hardware specifications.

\begin{table}
\begin{tabular}{c c}
\hline
Buffer & Minimal Bandwidth Requirement \\
\hline
DB & \(LCM\) (max off-chip \(BW\), DPE Array demanded on-chip \(BW\)) \\
SB & \(LCM\) (max off-chip \(BW\), \(C_{P}\times R\times S\times\) iAct DataWidth) \\
LB & DPE Array demanded on-chip \(BW\) \\
OB & \(K_{P}\times\) oAct DataWidth \\
PB & \(LCM\) (max off-chip \(BW\), DPE Array demanded on-chip \(BW\)) \\
\hline
\end{tabular}
Note: \(BW\) = bandwidth; \(LCM(x_{1},x_{2})\) = least common multiple of \(x_{1}\) and \(x_{2}\).
\end{table}
Table 1: Bandwidth requirement of on-chip buffers

#### 5.3.2 PB Size - Sizes of Buffers

All buffers compete for the same total storage budget, so a balanced allocation is needed to achieve good performance. The addition of a persistent buffer also introduces a new factor of common-weight reuse, leading to a trade-off between inter-layer data reuse and intra-layer data reuse.

#### 5.3.3 Throughput - Parallelism of the Compute Array

The parallelism of the 2D DPE Array is also a controllable knob. Within the same computation-engine budget, a change in parallelism indicates a change in throughput, yielding different performance on different workloads. For example, parallelism of \(16\) and \(32\) in the K and C dimensions delivers a peak throughput of \(512\) data per clock cycle. Therefore, we use throughput as the factor to abstract parallelism.

Figure 10: Potential latency reduction with SGS (two bars per _SubGraph_; left: w/o PB, right: w/ PB).

Figure 11: SGS pushes memory-bound layers to compute-bound.

#### 5.3.4 Design Space Exploration

As Fig. 12 shows, with larger PB sizes, more on-chip computation, and less off-chip bandwidth, the latency is improved. However, due to MobV3's smaller size, its depth-wise conv layers, and less reuse, the amount of improvement is smaller for MobV3 than for ResNet50.

### **SushiAccel Evaluation**

In this subsection, we evaluate how **SushiAccel** impacts latency and energy reduction. We evaluate different scales of **SushiAccel** on two real FPGAs with different budgets, running the 3x3 convolution layers of ResNet50. The **SushiAccel** on Alveo U50 has an off-chip bandwidth of 14.4 GB/s, a PB size of 1.69 MB, and a throughput of 0.9216 TFlops running at 100 MHz.
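Before turning to the measured results, the sketch below (ours) illustrates the \(LCM\) rules of Tab. 1; the bus and data widths are hypothetical, chosen only to make the arithmetic visible.

```python
from math import lcm

# Port-width minima following the LCM rules of Tab. 1 (ours; the widths
# below, in bits per cycle, are hypothetical).
OFFCHIP_BW = 128                 # max off-chip bus width
C_P, R, S = 3, 3, 3
IACT_WIDTH = 8                   # int8 activations
DPE_BW = C_P * 9 * 8             # DPE-array demand: C_P engines x 9 int8 weights
K_P, OACT_WIDTH = 2, 32          # oActs accumulate at 32 bits

bw = {
    "DB": lcm(OFFCHIP_BW, DPE_BW),
    "SB": lcm(OFFCHIP_BW, C_P * R * S * IACT_WIDTH),
    "LB": DPE_BW,
    "OB": K_P * OACT_WIDTH,
    "PB": lcm(OFFCHIP_BW, DPE_BW),
}
print(bw)   # DB/SB/PB need lcm(128, 216) = 3456-bit ports in this toy setup
```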
#### 5.4.1 Resources Allocation among Buffers

The resource utilization of **SushiAccel** w/ PB and w/o PB under optimal configurations on both the Xilinx ZCU104 and the Alveo U50 is shown in Tab. 2, with a breakdown of the on-chip storage allocation shown in Tab. 3. Both **SushiAccel** w/ PB and **SushiAccel** w/o PB use the same amount of overall on-chip storage for a fair comparison.

#### 5.4.2 Latency Evaluation

The real-board latency and energy consumption results are shown in Fig. 13(a), with the resources shown in Tab. 2. On ZCU104, compared with the CPU, **SushiAccel** w/o PB achieves a \(1.81\times\sim 3.04\times\) speedup and **SushiAccel** w/ PB achieves \(1.87\times\sim 3.17\times\) for different _SubNets_. On Alveo U50, compared with the CPU, **SushiAccel** w/o PB achieves a \(1.43\times\sim 2.54\times\) speedup and **SushiAccel** w/ PB achieves \(1.57\times\sim 2.61\times\) for different _SubNets_. Fig. 13(a) also shows that the scaled-up design on Alveo U50 performs worse than the small-scale design on ZCU104 for small _SubNets_, because of higher off-chip DRAM competition in the data-center cluster hosting the Alveo U50 than on the simple embedded ZCU104. Thus, off-chip data access dominates latency on Alveo U50, resulting in the slowdown for small _SubNets_.

#### 5.4.3 Energy Evaluation

Energy spent in data movement has been shown to dominate the power consumption of neural network accelerators (Dally et al., 2020); we therefore estimate the overall energy by profiling the off-chip DRAM data accesses for all platforms, as shown in Fig. 13(b), computing it as \(NumberAccess\times EnergyPerAccess\). With the proposed _SubGraph_ Reuse, we save \([14\%,52.6\%]\) of the off-chip data-access energy for ResNet50 and \([43.6\%,78.7\%]\) for MobV3, compared to **SushiAccel** w/o PB.

### **Comparing with DPU**

We compared **SushiAccel** against the Xilinx DPU using real layer-wise end-to-end inference latency of the min-_SubNet_ on ZCU104, as shown in Fig. 14. We consider convolution layers with \(3\times 3\) kernel sizes. **SushiAccel** w/o PB achieves \(0.5\sim 1.95\times\) the execution speed of the Xilinx DPU (\(25.1\%\) GeoMean speedup). This quantitative comparison lends credence to the proposal of adding a Persistent Buffer (PB) to a state-of-the-art ML accelerator design. There are also rare cases where **SushiAccel** performs worse than the Xilinx DPU, because **SushiAccel** uses less parallelism in the height (X) and width (Y) dimensions (Fig. 5), leading to higher latency on workloads with higher X and Y values.

### **SushiSched Functional Evaluation**

In this section, we evaluate the performance of **SushiSched** for both ResNet50 and MobV3. Fig. 15 shows that **SushiSched** is able to serve queries with strictly lesser latency and/or better accuracy, where blue dots represent queries served by employing **SushiSched**.

Figure 12: Latency reduction ("Time Save" in the legend) exploration on **SushiAccel** using the Analytic Model.

Figure 13: Real-board latency and energy reduction for ResNet50 (left and right bars in (b) are **SushiAccel** w/o PB and w/ PB).
In Fig. 15a and Fig. 15c, the blue dots are almost always below the line \(y=x\), manifesting that **SushiSched** can serve strictly lesser latency if latency is a hard constraint that needs to be satisfied. Similarly, all the blue dots above the line \(y=x\) in Fig. 15b and Fig. 15d show that **SushiSched** can serve strictly better accuracy if accuracy is a hard constraint that needs to be met.

### End-to-End **SUSHI** Evaluation

In this section, we compare the latency-accuracy tradeoff results among **SUSHI** w/o PB, **SUSHI** w/ PB (state-unaware caching), and **SUSHI**. The blue dots in Fig. 16 illustrate how **SUSHI** serves random queries4.

Footnote 4: Due to the overlap, only limited points in the figures are visible.

For ResNet50, in all cases, **SUSHI** w/o scheduler consistently outperforms No-**SUSHI**. For random queries, **SUSHI** is also able to decrease the latency by 21% on average for the same accuracy, compared to not having **SUSHI**. In the case of MobV3, due to its small size, a relatively larger fraction of a _SubNet_ fits in the PB, resulting in a higher cache-hit ratio (Appendix A.4). **SUSHI** offers a better accuracy-latency tradeoff than **SUSHI** w/o scheduler, with the exception of only a few points. In the case of MobV3, **SUSHI** is also able to decrease the latency by 25% on average for the same accuracy, compared to not having **SUSHI**. Finally, **SUSHI** increases the serving accuracy by up to 0.98% for the same latency, which is significant for ML serving applications.

## 6 Related Work

Various accelerator designs, such as Maeri (Kwon et al., 2018), Eyeriss (Chen et al., 2018), NVDLA (NVIDIA, 2016), and DPU (Xilinx, 2022), support different types of reuse (Fig. 8). A comparison of them is shown in Tab. 4. However, all of these works achieve intra-model cross-layer reuse, in contrast to the cross-query reuse we propose with **SushiAccel**. Clipper (Crankshaw et al., 2017) serves single-model queries without exposing a latency/accuracy tradeoff. Inferline (Crankshaw et al., 2018) serves multiple models, but in a pipeline, with no latency/accuracy tradeoff per model. INFaaS (Romero et al., 2021b) provides a query-time latency/accuracy tradeoff mechanism and policy, but suffers from expensive model-switching mechanisms; as a result, its policy minimizes model switching. The vertically integrated inference serving stack provided by **SUSHI** naturally plugs into existing inference serving frameworks, enabling agile navigation of the latency/accuracy tradeoff at query time.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & **SushiAccel** w/o PB & **SushiAccel** w/ PB & DPU & **SushiAccel** w/o PB & **SushiAccel** w/ PB \\
\hline
Device & ZCU104 & ZCU104 & ZCU104 & Alveo U50 & Alveo U50 \\
LUT & 61180 (26.6\%) & 64307 (27.9\%) & 41640 (18.1\%) & 231668 (26.63\%) & 244969 (28.16\%) \\
Register & 107216 (23.3\%) & 117724 (25.5\%) & 69180 (15.1\%) & 435071 (24.96\%) & 445602 (25.56\%) \\
BRAM & 192.5 (61.7\%) & 198.5 (63.6\%) & 0 & 452.5 (33.67\%) & 452.5 (33.67\%) \\
URAM & 48 (50\%) & 96 (100\%) & 60 (62.5\%) & 48 (7.5\%) & 96 (15\%) \\
DSP & 1507 (87.2\%) & 1459 (87.2\%) & 438 (25.35\%) & 4739 (79.78\%) & 4740 (79.79\%) \\
Peak Ops/cycle & 2592 & 2592 & 2304 & 9216 & 9216 \\
GFlops (100MHz) & 259.2 & 259.2 & 230.4 & 921.6 & 921.6 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Resources comparison of **SushiAccel** with the DPU

Figure 14: The latency comparison between **SushiAccel** w/o PB and the Xilinx DPU for ResNet50.
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & \multicolumn{2}{c}{**SushiAccel** w/o PB} & \multicolumn{2}{c}{**SushiAccel** w/ PB} \\
 & BRAM (KB) & URAM (KB) & BRAM (KB) & URAM (KB) \\
\hline
DB-Ping & 0 & 1152 & 0 & 576 \\
DB-Pong & 0 & 1152 & 0 & 576 \\
SB & 8 & 1152 & 8 & 576 \\
LB & 54 & 0 & 54 & 0 \\
OB & 327 & 0 & 327 & 0 \\
ZSB & 8 & 0 & 8 & 0 \\
PB & 0 & 0 & 0 & 1728 \\
Overall & 397 & 3456 & 397 & 3456 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Buffer configurations of **SushiAccel** (ZCU104 board)

## 7 Conclusion

**SUSHI** is a vertically integrated hardware-software inference serving stack that takes advantage of the temporal locality induced by serving inference queries on the same weight-shared supernetwork structure. To the best of our knowledge, the concept of SubGraph Stationary (SGS) optimization across queries is novel. We demonstrate that, to achieve the best temporal-locality benefit, the proposed hardware implementation **SushiAccel** must work in tandem with the software scheduler **SushiSched** to control which SubNets to serve for each query and how to update the accelerator state. We further ensure the generalizability of **SushiSched** by abstracting the effect of the hardware state on the latency (and energy) of served SubNets with a black-box SubGraph latency table. This decouples **SushiSched** from any accelerator implementation, while maintaining its state-awareness implicitly. **SUSHI** can be naturally integrated into state-of-the-art ML inference serving frameworks and enables better latency/accuracy tradeoffs for a stream of queries with latency/accuracy constraints. For a stream of queries, our results show up to a 0.98% improvement in served accuracy and up to a 25% latency reduction.

## 8 Acknowledgment

This material is based upon work partially supported by the National Science Foundation under Grant Number CCF-2029004. Additional support was provided by a sponsored research award by Cisco Research. We would like to further acknowledge the insightful comments of the review panel as well as the skillful guidance of our shepherd, Dr. Qijing Jenny Huang, which greatly contributed to the quality of this paper. We thank the anonymous reviewers of MLSys, and the SAIL Research Group members for valuable feedback and the stimulating intellectual environment they provide. We also thank Taekyung Heo from the Synergy lab for his feedback on the initial version of the paper.

**Disclaimer:** Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2302.05395
Algebras of smooth functions and holography of traversing flows
Let $X$ be a smooth compact manifold and $v$ a vector field on $X$ which admits a smooth function $f: X \to \mathbf R$ such that $df(v) > 0$. Let $\partial X$ be the boundary of $X$. We denote by $C^\infty(X)$ the algebra of smooth functions on $X$ and by $C^\infty(\partial X)$ the algebra of smooth functions on $\partial X$. With the help of $(v, f)$, we introduce two subalgebras $\mathcal A(v)$ and $\mathcal B(f)$ of $C^\infty(\partial X)$ and prove (under mild hypotheses) that $C^\infty(X) \approx \mathcal A(v) \hat\otimes \mathcal B(f)$, the topological tensor product. Thus the topological algebras $\mathcal A(v)$ and $\mathcal B(f)$, \emph{viewed as boundary data}, allow for a reconstruction of $C^\infty(X)$. As a result, $\mathcal A(v)$ and $\mathcal B(f)$ allow for the recovery of the smooth topological type of the bulk $X$.
Gabriel Katz
2023-02-10T17:45:46Z
http://arxiv.org/abs/2302.05395v2
# Algebras of smooth functions and holography of traversing flows

###### Abstract.

Let \(X\) be a smooth compact manifold and \(v\) a vector field on \(X\) which admits a smooth function \(f:X\to\mathbb{R}\) such that \(df(v)>0\). Let \(\partial X\) be the boundary of \(X\). We denote by \(C^{\infty}(X)\) the algebra of smooth functions on \(X\) and by \(C^{\infty}(\partial X)\) the algebra of smooth functions on \(\partial X\). With the help of \((v,f)\), we introduce two subalgebras \(\mathcal{A}(v)\) and \(\mathcal{B}(f)\) of \(C^{\infty}(\partial X)\) and prove (under mild hypotheses) that \(C^{\infty}(X)\approx\mathcal{A}(v)\hat{\otimes}\mathcal{B}(f)\), the topological tensor product. Thus the topological algebras \(\mathcal{A}(v)\) and \(\mathcal{B}(f)\), _viewed as boundary data_, allow for a reconstruction of \(C^{\infty}(X)\). As a result, \(\mathcal{A}(v)\) and \(\mathcal{B}(f)\) allow for the recovery of the smooth topological type of the bulk \(X\).

## 1. Introduction

It is classically known that the normed algebra \(C^{0}(X)\) of continuous real-valued functions on a compact space \(X\) determines its topological type [GRS], [Ga], [Br]. In this context, \(X\) is interpreted as the space of maximal ideals of the algebra \(C^{0}(X)\). In a similar spirit, the algebra \(C^{\infty}(X)\) of smooth functions on a compact smooth manifold \(X\) (the algebra \(C^{\infty}(X)\) is considered in the Whitney topology [W3]) determines the _smooth_ topological type of \(X\) [KMS], [Na]. Again, \(X\) may be viewed as the space of maximal ideals of the algebra \(C^{\infty}(X)\).

Recall that a harmonic function \(h\) on a compact connected Riemannian manifold \(X\) is uniquely determined by its restriction to the smooth boundary \(\partial X\) of \(X\). In other words, the Dirichlet boundary value problem has a unique solution in the space of harmonic functions. Therefore, the vector space \(\mathcal{H}(X)\) of harmonic functions on \(X\) is rigidly determined by its restriction (trace) \(\mathcal{H}^{\partial}(X):=\mathcal{H}(X)|_{\partial X}\) to the boundary \(\partial X\). As we embark on our journey, this fact will serve us as a beacon.

This paper revolves around the following question: Which algebras of smooth functions on the boundary \(\partial X\) can be used to reconstruct the algebra \(C^{\infty}(X)\) and thus the smooth topological type of \(X\)? Remembering the flexible nature of smooth functions (in contrast with the rigid harmonic ones), at first glance, we should anticipate the obvious answer "None!". However, when \(X\) carries an additional geometric structure, then the question, surprisingly, may have a positive answer. The geometric structure on \(X\) that does the trick is a vector field (i.e., an ordinary differential equation), drawn from a massive class of vector fields which we will introduce below.

Let \(X\) be a compact connected smooth \((n+1)\)-dimensional manifold with boundary and \(v\) a smooth vector field admitting a Lyapunov function \(f:X\to\mathbb{R}\) so that \(df(v)>0\). We call such vector fields traversing. We assume that \(v\) is in general position with respect to the boundary \(\partial X\) and call such vector fields boundary generic (see [K1] or [K3], Definition 5.1, for the notion of boundary generic vector fields). Temporarily, it will be sufficient to think of the boundary generic vector fields \(v\) as having only \(v\)-trajectories that are tangent to the boundary \(\partial X\) with the order of tangency less than or equal to \(\dim(X)\).
Section 3 contains a more accurate definition. Informally, we use the term "holography" when some residual structures on the boundary \(\partial X\) are sufficient for a reconstruction of similar structures on the bulk \(X\).

Given such a triple \((X,v,f)\), in Section 3, we will introduce two subalgebras, \(\mathcal{A}(v)=C^{\infty}(\partial X,v)\) and \(\mathcal{B}(f)=(f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\), of the algebra \(C^{\infty}(\partial X)\), which depend only on \(v\) and \(f\), respectively. By Theorem 3.1, \(\mathcal{A}(v)\) and \(\mathcal{B}(f)\) will allow for a reconstruction of the algebra \(C^{\infty}(X)\). In fact, the boundary data, generated by these subalgebras, lead to a unique (rigid) "solution"
\[C^{\infty}(X)\approx C^{\infty}(\partial X,v)\,\hat{\otimes}\,(f^{\partial})^{*}(C^{\infty}(\mathbb{R})),\]
the topological tensor product of the two algebras. As a result, the pair \(\mathcal{A}(v)\), \(\mathcal{B}(f)\), "residing on the boundary", determines the smooth topological type of the bulk \(X\) and of the \(1\)-dimensional foliation \(\mathcal{F}(v)\), generated by the \(v\)-flow.

## 2. Holography on manifolds with boundary and the causality maps

Let \(X\) be a compact connected smooth \((n+1)\)-dimensional manifold with boundary \(\partial_{1}X=_{\mathsf{def}}\partial X\) (we use this notation for the boundary \(\partial X\) to get some consistency with similar notations below), and \(v\) a smooth traversing vector field, admitting a smooth Lyapunov function \(f:X\to\mathbb{R}\). We assume that \(v\) is boundary generic. We denote by \(\partial_{1}^{+}X(v)\) the subset of \(\partial_{1}X\) where \(v\) is directed inwards of \(X\) or is tangent to \(\partial_{1}X\). Similarly, \(\partial_{1}^{-}X(v)\) denotes the subset of \(\partial_{1}X\) where \(v\) is directed outwards of \(X\) or is tangent to \(\partial_{1}X\). Let \(\mathcal{F}(v)\) be the \(1\)-dimensional oriented foliation, generated by the traversing \(v\)-flow. We denote by \(\gamma_{x}\) the \(v\)-trajectory through \(x\in X\). Since \(v\) is traversing and boundary generic, each \(\gamma_{x}\) is homeomorphic either to a closed segment or to a singleton [K1].

In what follows, we embed the compact manifold \(X\) in an open manifold \(\hat{X}\) of the same dimension so that \(v\) extends to a smooth vector field \(\hat{v}\) on \(\hat{X}\), \(f\) extends to a smooth function \(\hat{f}\) on \(\hat{X}\), and \(d\hat{f}(\hat{v})>0\) in \(\hat{X}\). We treat \((\hat{X},\hat{v},\hat{f})\) as a germ in the vicinity of \((X,v,f)\).

**Definition 2.1**.: _We say that a boundary generic and traversing vector field \(v\) possesses Property A, if each \(v\)-trajectory \(\gamma\) is either transversal to \(\partial_{1}X\) at some point of the set \(\gamma\cap\partial_{1}X\), or \(\gamma\cap\partial_{1}X\) is a singleton \(x\) and \(\gamma\) is quadratically tangent to \(\partial_{1}X\) at \(x\). \(\diamondsuit\)_

A traversing vector field \(v\) on \(X\) induces the structure of a partially ordered set \((\partial_{1}X,\succ_{v})\) on the boundary \(\partial_{1}X\): for \(x,y\in\partial_{1}X\), we write \(y\succ x\) if the two points lie on the same \(v\)-trajectory \(\gamma\) and \(y\) is reachable from \(x\) by moving in the \(v\)-direction. We denote by \(\mathcal{T}(v)\) the trajectory space of \(v\) and by \(\Gamma:X\to\mathcal{T}(v)\) the obvious projection. For a traversing and boundary generic \(v\), \(\mathcal{T}(v)\) is a compact space in the topology induced by \(\Gamma\).
Since any trajectory of a traversing \(v\) intersects the boundary \(\partial_{1}X\), we get that \(\mathcal{T}(v)\) is a quotient of \(\partial_{1}X\) modulo the partial order relation \(\succ_{v}\). A traversing and boundary generic \(v\) gives rise to the causality (scattering) map
\[C_{v}:\partial_{1}^{+}X(v)\to\partial_{1}^{-}X(v) \tag{2.1}\]
that takes each point \(x\in\partial_{1}^{+}X(v)\) to the unique consecutive point \(y\in\gamma_{x}\cap\partial_{1}^{-}X(v)\) that can be reached from \(x\) in the \(v\)-direction. If no such \(y\neq x\) is available, we put \(C_{v}(x)=x\). We stress that typically \(C_{v}\) is a _discontinuous_ map (see Fig. 2). We notice that, for any smooth positive function \(\lambda:X\to\mathbb{R}_{+}\), we have \(C_{\lambda\cdot v}=C_{v}\); thus the causality map depends only on the conformal class of a traversing vector field \(v\). In fact, \(C_{v}\) depends only on the oriented foliation \(\mathcal{F}(v)\), generated by the \(v\)-flow.

Figure 1. The map \(\Gamma:X\to\mathcal{T}(v)\) for a traversally generic (vertical) vector field \(v\) on a disk with 4 holes. The trajectory space is a graph whose vertices are of valencies 1 and 3. The restriction of \(\Gamma\) to \(\partial_{1}X\) is a surjective map \(\Gamma^{\partial}\) with finite fibers of cardinality 3 at most; a generic fiber has cardinality 2.

In the paper, we will discuss two kinds of intimately related holography problems. The first kind amounts to the question: To what extent are the given boundary data sufficient for reconstructing the unknown bulk and the traversing \(v\)-flow on it, or rather, the foliation \(\mathcal{F}(v)\)? This question may be represented symbolically by the two diagrams:

\(\bullet\) Holographic Reconstruction Problem
\[(\partial_{1}X,\,\succ_{v})\,\stackrel{{??}}{{\longrightarrow}}\,(X,\,\mathcal{F}(v)), \tag{2.2}\]
\[(\partial_{1}X,\,\succ_{v},\,f^{\partial})\,\stackrel{{??}}{{\longrightarrow}}\,(X,\,\mathcal{F}(v),\,f), \tag{2.3}\]
where \(\succ_{v}\) denotes the partial order on the boundary, defined by the causality map \(C_{v}\), and the symbol "\(\stackrel{{??}}{{\longrightarrow}}\)" points to the unknown ingredients of the diagrams.

The second kind of problem is: Given two manifolds, \(X_{1}\) and \(X_{2}\), equipped with traversing flows, and a diffeomorphism \(\Phi^{\partial}\) of their boundaries, respecting the relevant boundary data, is it possible to extend \(\Phi^{\partial}\) to a diffeomorphism/homeomorphism \(\Phi:X_{1}\to X_{2}\) that respects the corresponding flow-generated structures in the interiors of the two manifolds? This problem may be represented by the commutative diagrams:

\(\bullet\) Holographic Extension Problem
\[\begin{array}{ccc}(\partial_{1}X_{1},\ \succ_{v_{1}})&\stackrel{{\sf inc}}{{\longrightarrow}}&(X_{1},\ \mathcal{F}(v_{1}))\\ \downarrow\ \Phi^{\partial}&&\downarrow\??\ \ \Phi\\ (\partial_{1}X_{2},\ \succ_{v_{2}})&\stackrel{{\sf inc}}{{\longrightarrow}}&(X_{2},\ \mathcal{F}(v_{2}))\end{array} \tag{2.4}\]
\[\begin{array}{ccc}(\partial_{1}X_{1},\ \succ_{v_{1}},\ f_{1}^{\partial})&\stackrel{{\sf inc}}{{\longrightarrow}}&(X_{1},\ \mathcal{F}(v_{1}),\ f_{1})\\ \downarrow\ \Phi^{\partial}&&\downarrow\??\ \ \Phi\\ (\partial_{1}X_{2},\ \succ_{v_{2}},\ f_{2}^{\partial})&\stackrel{{\sf inc}}{{\longrightarrow}}&(X_{2},\ \mathcal{F}(v_{2}),\ f_{2}),\end{array} \tag{2.5}\]
where \(\sf inc\) denotes the inclusion of spaces, accompanied by the obvious restrictions of functions and foliations. The symbol "\(\downarrow\,??\)" indicates the unknown maps in the diagrams.
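Before proceeding, it may help to keep a minimal example in mind (ours, not drawn from [K4]): the causality map of the constant vertical flow on the unit disk. Let
\[X=\{(x,y)\in\mathbb{R}^{2}:\,x^{2}+y^{2}\leq 1\},\qquad v=\partial_{y},\qquad f(x,y)=y,\]
so that \(df(v)=1>0\) and \(f\) is a Lyapunov function for the traversing field \(v\). Then \(\partial_{1}^{+}X(v)\) and \(\partial_{1}^{-}X(v)\) are the lower and the upper semicircles, respectively, and the causality map sends each entry point to the exit point on the same vertical chord:
\[C_{v}\big(x,-\sqrt{1-x^{2}}\big)=\big(x,+\sqrt{1-x^{2}}\big)\ \text{ for }|x|<1,\qquad C_{v}(\pm 1,0)=(\pm 1,0).\]
The two trajectories through \((\pm 1,0)\) are singletons, quadratically tangent to the boundary, while every other trajectory is transversal to \(\partial_{1}X\); thus this \(v\) possesses Property A.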
" indicates the unknown maps in the diagrams. Figure 2. An example of the causality map \(C_{v}:\partial_{1}^{+}X(v)\to\partial_{1}^{-}X(v)\). Note the essential discontinuity of \(C_{v}\) in the vicinity of \(x\). These two types of problems come in a big variety of flavors, depending on the more or less rich boundary data and on the anticipated quality of the transformations \(\Phi\) (homeomorphisms, PD-homeomorphisms, Holder homeomorphisms with some control of the Holder exponent, and diffeomorphisms with different degree of smoothness). Let us formulate the main result of [K4], Theorem 4.1, which captures the philosophy of this article and puts our main result, Theorem 3.1, in the proper context. Theorem 2.1 reflects the scheme depicted in (2.4). **Theorem 2.1**.: **(Conjugate Holographic Extensions)** _Let \(X_{1},X_{2}\) be compact connected oriented smooth \((n+1)\)-dimensional manifolds with boundaries. Consider two traversing boundary generic vector fields \(v_{1},v_{2}\) on \(X_{1}\) and \(X_{2}\), respectively. In addition, assume that \(v_{1},v_{2}\) have Property \(\mathsf{A}\) from Definition 2.1._ _Let a smooth orientation-preserving diffeomorphism \(\Phi^{\partial}:\partial_{1}X_{1}\to\partial_{1}X_{2}\) commute with the two causality maps:_ \[C_{v_{2}}\circ\Phi^{\partial}=\Phi^{\partial}\circ C_{v_{1}}\] _Then \(\Phi^{\partial}\) extends to a smooth orientation-preserving diffeomorphism \(\Phi:X_{1}\to X_{2}\) such that \(\Phi\) maps the oriented foliation \(\mathcal{F}(v_{1})\) to the oriented foliation \(\mathcal{F}(v_{2})\)._ Let us outline the spirit of Theorem 2.1's proof, since this will clarify the main ideas from Section 3. The reader interested in the technicalities may consult [K4]. Proof.: First, using that \(v_{2}\) is traversing, we construct a Lyapunov function \(f_{2}:X_{2}\to\mathbb{R}\) for \(v_{2}\). Then we pull-back, via the diffeomorphism \(\Phi^{\partial}\), the restriction \(f_{2}^{\partial}:=f_{2}|_{\partial_{1}X_{2}}\) to the boundary \(\partial_{1}X_{2}\). Since \(\Phi^{\partial}\) commutes with the two causality maps, the pull back \(f_{1}^{\partial}=_{\mathsf{def}}(\Phi^{\partial})^{*}(f_{2}^{\partial})\) has the property \(f_{1}^{\partial}(y)>f_{1}^{\partial}(x)\) for any pair \(y\succ x\) on the same \(v_{1}\)-trajectory, the order of points being defined by the \(v_{1}\)-flow. Equivalently, we get \(f_{1}^{\partial}(C_{v_{1}}(x))>f_{1}^{\partial}(x)\) for any \(x\in\partial_{1}^{+}X(v_{1})\) such that \(C_{v_{1}}(x)\neq x\). As the key step, we prove in [K4] that such \(f_{1}^{\partial}\) extends to a smooth function \(f_{1}:X_{1}\to\mathbb{R}\) that has the property \(df_{1}(v_{1})>0\). Hence, \(f_{1}\) is a Lyapunov function for \(v_{1}\). Recall that each causality map \(C_{v_{i}}\), \(i=1,2\), allows to view the \(v_{i}\)-trajectory space \(\mathcal{T}(v_{i})\) as the quotient space \(\big{(}\partial_{1}X_{i}\big{)}\big{/}\{C_{v_{i}}(x)\sim x\}\), where \(x\in\partial_{1}^{+}X_{i}(v_{i})\) and the topology in \(\mathcal{T}(v_{i})\) is defined as the quotient topology. Using that \(\Phi^{\partial}\) commutes with the causality maps \(C_{v_{1}}\) and \(C_{v_{2}}\), we conclude that \(\Phi^{\partial}\) induces a homeomorphism \(\Phi^{\mathcal{T}}:\mathcal{T}(v_{1})\to\mathcal{T}(v_{2})\) of the trajectory spaces, which preserves their natural stratifications. 
For a traversing \(v_{i}\), the manifold \(X_{i}\) carries two mutually transversal foliations: the oriented \(1\)-dimensional \(\mathcal{F}(v_{i})\), generated by the \(v_{i}\)-flow, and the foliation \(\mathcal{G}(f_{i})\), generated by the constant-level hypersurfaces of the Lyapunov function \(f_{i}\). To avoid dealing with the singularities of \(\mathcal{F}(v_{i})\) and \(\mathcal{G}(f_{i})\), we extend \(f_{i}\) to \(\hat{f}_{i}:\hat{X}_{i}\to\mathbb{R}\) and \(v_{i}\) to \(\hat{v}_{i}\) on \(\hat{X}_{i}\) so that \(d\hat{f}_{i}(\hat{v}_{i})>0\). This generates nonsingular foliations \(\mathcal{F}(\hat{v}_{i})\) and \(\mathcal{G}(\hat{f}_{i})\) on \(\hat{X}_{i}\). By this construction, \(\mathcal{F}(\hat{v}_{i})|_{X_{i}}=\mathcal{F}(v_{i})\) and \(\mathcal{G}(\hat{f}_{i})|_{X_{i}}=\mathcal{G}(f_{i})\). Note that the "leaves" of \(\mathcal{G}(f_{i})\) may be disconnected, while the leaves of \(\mathcal{F}(v_{i})\), the \(v_{i}\)-trajectories, are connected.

The two smooth foliations, \(\mathcal{F}(\hat{v}_{i})\) and \(\mathcal{G}(\hat{f}_{i})\), will serve as a "coordinate grid" on \(X_{i}\): every point \(x\in X_{i}\) belongs to a _unique_ pair of leaves \(\gamma_{x}\in\mathcal{F}(v_{i})\) and \(L_{x}:=\hat{f}_{i}^{-1}(f_{i}(x))\in\mathcal{G}(\hat{f}_{i})\). Conversely, using the traversing nature of \(v_{i}\), any pair \((y,\,t)\), where \(y\in\gamma_{x}\cap\partial_{1}X_{i}\) and \(t\in[f_{i}^{\partial}(\gamma_{x}\cap\partial_{1}X_{i})]\subset\mathbb{R}\), where \([f_{i}^{\partial}(\gamma_{x}\cap\partial_{1}X_{i})]\) denotes the minimal closed interval that contains the finite set \(f_{i}^{\partial}(\gamma_{x}\cap\partial_{1}X_{i})\), determines a _unique_ point \(x\in X_{i}\). Note that some pairs of leaves \(L\) and \(\gamma\) may have an empty intersection, and some components of leaves \(L\) may have an empty intersection with the boundary \(\partial_{1}X_{i}\). In fact, using that \(f_{i}\) is a Lyapunov function, the hypersurface \(L=f_{i}^{-1}(c)\) intersects a \(v_{i}\)-trajectory \(\gamma\) if and only if \(c\in[f_{i}^{\partial}(\gamma\cap\partial_{1}X_{i})]\).

Since the two smooth leaves, \(\hat{\gamma}_{y}\) and \(\hat{f}_{i}^{-1}(f_{i}(z))\), depend smoothly on the points \(y,z\in\partial_{1}X_{i}\) and are transversal, their intersection point \(\hat{\gamma}_{y}\cap\hat{f}_{i}^{-1}(f_{i}(z))\in\hat{X}_{i}\) depends smoothly on \((y,z)\in(\partial_{1}X_{i})\times(\partial_{1}X_{i})\), as long as \(f_{i}^{\partial}(z)\in[f_{i}^{\partial}(\gamma_{y}\cap\partial_{1}X_{i})]\). Note that pairs \((y,z)\), where \(y,z\in\partial_{1}X_{i}\), with the property \(f_{i}^{\partial}(z)\in f_{i}^{\partial}(\gamma_{y}\cap\partial_{1}X_{i})\) give rise to intersections \(\hat{\gamma}_{y}\cap\hat{f}_{i}^{-1}(f_{i}(z))\) that belong to \(\partial_{1}X_{i}\).

Now we are ready to extend the diffeomorphism \(\Phi^{\partial}\) to a homeomorphism \(\Phi:X_{1}\to X_{2}\). In the process, following the scheme in (2.4), _we assume that the foliations \(\mathcal{F}(v_{i})\) and the Lyapunov functions \(f_{i}\) on \(X_{i}\) (\(i=1,2\)) do exist and are "knowable", although we have access only to their traces on the boundaries._ Take any \(x\in X_{1}\). It belongs to a unique pair of leaves \(L_{x}\in\mathcal{G}(f_{1})\) and \(\gamma_{x}\in\mathcal{F}(v_{1})\).
We define \(\Phi(x)=x^{\prime}\in X_{2}\), where \(x^{\prime}\) is the unique point that belongs to the intersection of \(f_{2}^{-1}(f_{1}(x))\in\mathcal{G}(f_{2})\) and the \(v_{2}\)-trajectory \(\gamma^{\prime}=\Gamma_{2}^{-1}(\Phi^{\mathcal{T}}(\gamma_{x}))\). By its construction, \(\Phi|_{\partial_{1}X_{1}}=\Phi^{\partial}\). Therefore, \(\Phi\) induces the same homeomorphism \(\Phi^{\mathcal{T}}:\mathcal{T}(v_{1})\to\mathcal{T}(v_{2})\) as \(\Phi^{\partial}\) does. The leaf-hypersurface \(\hat{f}_{2}^{-1}(f_{1}(x))\) depends smoothly on \(x\), but the leaf-trajectory \(\hat{\gamma}^{\prime}=\Gamma_{2}^{-1}(\Phi^{\mathcal{T}}(\hat{\gamma}_{x}))\) may not! Although the homeomorphism \(\Phi\) is a diffeomorphism along the \(v_{1}\)-trajectories, it is not clear that it is a diffeomorphism on \(X_{1}\) (a priori, \(\Phi\) is just a Hölder map with a Hölder exponent \(\alpha=1/m\), where \(m\) is the maximal order of tangency of the \(\gamma\)'s to \(\partial_{1}X\)). Presently, for proving that \(\Phi\) is a diffeomorphism, we need Property A from Definition 2.1. Assuming its validity, we use the transversality of \(\gamma_{x}\) _somewhere_ to \(\partial_{1}X\) to claim the smooth dependence of \(\Gamma_{2}^{-1}(\Phi^{\mathcal{T}}(\hat{\gamma}_{x}))\) on \(x\). Now, since the smooth foliations \(\mathcal{F}(\hat{v}_{i})\) and \(\mathcal{G}(\hat{f}_{i})\) are transversal, it follows that \(x^{\prime}=\Phi(x)\) depends smoothly on \(x\). Conjecturally, Property A is unnecessary for establishing that \(\Phi\) is a diffeomorphism.

Note that this construction of the extension \(\Phi\) is quite explicit, but not canonical. For example, it depends on the choice of the extension of \(f_{1}^{\partial}:=(\Phi^{\partial})^{*}(f_{2}^{\partial})\) to a smooth function \(f_{1}:X_{1}\to\mathbb{R}\) which is strictly monotone along the \(v_{1}\)-trajectories. The uniqueness (topological rigidity) of the extension \(\Phi\) may be achieved if one assumes _full knowledge_ of the manifolds \(X_{i}\), equipped with the foliation grids \(\mathcal{F}(v_{i}),\mathcal{G}(f_{i})\) and the Lyapunov functions \(f_{i}\). In Theorem 3.1, we will reflect on this issue.

The next theorem (see [K4], Corollary 4.3) fits the scheme in (2.2). It claims that the _smooth topological type_ of the triple \(\{X,\mathcal{F}(v),\mathcal{G}(f)\}\) may be reconstructed from the appropriate boundary-confined data, provided that Property A is valid.

**Corollary 2.1**.: **(Holography of Traversing Flows)** _Let \(X\) be a compact connected smooth \((n+1)\)-dimensional manifold with boundary, and let \(v\) be a traversing boundary generic vector field, which possesses Property A._

_Then the following boundary-confined data:_

* _the causality map_ \(C_{v}:\partial_{1}^{+}X(v)\to\partial_{1}^{-}X(v)\)_,_
* _the restriction_ \(f^{\partial}:\partial_{1}X\to\mathbb{R}\) _of the Lyapunov function_ \(f\)_,_

_are sufficient for reconstructing the triple \((X,\mathcal{F}(v),f)\), up to diffeomorphisms \(\Phi:X\to X\) which are the identity on the boundary \(\partial_{1}X\)._

Proof.: We claim that, in the presence of Property A, the data \(\{C_{v},\ f^{\partial}\}\) on the boundary \(\partial_{1}X\) allow for a reconstruction of the triple \((X,\mathcal{F}(v),f)\), up to a diffeomorphism that is the identity on \(\partial_{1}X\).
Assume that there exist two traversing flows \((X_{1},\mathcal{F}(v_{1}),f_{1})\) and \((X_{2},\mathcal{F}(v_{2}),f_{2})\) such that \(\partial_{1}X_{1}=\partial_{1}X_{2}=\partial_{1}X\) and
\[\{C_{v_{1}},\ f^{\partial}_{1}\}=\{C_{v_{2}},\ f^{\partial}_{2}\}=\{C_{v},f^{\partial}\}.\]
Applying Theorem 2.1 to the identity diffeomorphism \(\Phi^{\partial}=\mathsf{id}_{\partial_{1}X}\), we conclude that it extends to a diffeomorphism \(\Phi:X_{1}\to X_{2}\) that takes \(\{\mathcal{F}(v_{1})\cap\partial_{1}X_{1},\ f^{\partial}_{1}\}\) to \(\{\mathcal{F}(v_{2})\cap\partial_{1}X_{2},\ f^{\partial}_{2}\}\).

**Remark 2.1**.: Unfortunately, Corollary 2.1 and its proof are not very constructive. They are just claims of existence: at the moment, it is not clear how to build the triple \((X,\mathcal{F}(v),f)\) only from the boundary data \((\partial_{1}X,C_{v},f^{\partial})\). \(\diamondsuit\)

Fortunately, the following simple construction ([K4], Lemma 3.4), shown in Fig. 3, produces an explicit recipe for recovering the triple \((X,\mathcal{F}(v),f)\) from the triple \((\partial_{1}X,C_{v},f^{\partial})\), but only up to a _homeomorphism_. As we have seen in the proof of Theorem 2.1, the causality map \(C_{v}\) determines the quotient trajectory space \(\mathcal{T}(v)\) canonically. Let \(f:X\to\mathbb{R}\) be a Lyapunov function for \(v\). The pair \((\mathcal{F}(v),f)\) gives rise to an embedding \(\alpha:X\hookrightarrow\mathcal{T}(v)\times\mathbb{R}\), defined by the formula \(\alpha(x)=([\gamma_{x}],f(x))\), where \(x\in X\) and \([\gamma_{x}]\in\mathcal{T}(v)\) denotes the point-trajectory through \(x\). The dependence \(x\leadsto[\gamma_{x}]\) is continuous by the definition of the quotient topology in \(\mathcal{T}(v)\). Note that \(\alpha\) maps each \(v\)-trajectory \(\gamma\) to the line \([\gamma]\times\mathbb{R}\), and, for any \(c\in\mathbb{R}\), each (possibly disconnected) leaf \(\mathcal{G}_{c}:=f^{-1}(c)\) to the slice \(\mathcal{T}(v)\times c\) of \(\mathcal{T}(v)\times\mathbb{R}\). With the help of the embedding \(\alpha\), each trajectory \(\gamma\in\mathcal{F}(v)\) may be identified with the closed interval \([f^{\partial}(\gamma\cap\partial_{1}X)]\subset\mathbb{R}\), and the vector field \(v|_{\gamma}\) with the constant vector field \(\partial_{u}\) on \(\mathbb{R}\).

Consider now the restriction \(\alpha^{\partial}\) of the embedding \(\alpha\) to the boundary \(\partial_{1}X\). Evidently, the image of \(\alpha^{\partial}:\partial_{1}X\hookrightarrow\mathcal{T}(v)\times\mathbb{R}\) bounds the image \(\alpha(X)=\coprod_{[\gamma]\in\mathcal{T}(v)}[f^{\partial}(\gamma\cap\partial_{1}X)]\). Therefore, using the product structure in \(\mathcal{T}(v)\times\mathbb{R}\), \(\alpha^{\partial}(\partial_{1}X)\) determines \(\alpha(X)\) canonically. Hence, \(\alpha(X)\) depends on \(C_{v}\) and \(f^{\partial}\) only! Note that \(\alpha\) is a continuous \(1\)-to-\(1\) map on a compact space, and thus a homeomorphism onto its image. Moreover, the topological type of \(X\) depends only on \(C_{v}\): the apparent dependence of \(\alpha(X)\) on \(f^{\partial}\) is not crucial, since, for a given \(v\), the space \(\mathsf{Lyap}(v)\) of Lyapunov functions for \(v\) is convex. The standing issue is: How to make sense of the claim "\(\alpha\) is a diffeomorphism"? Section 3 describes our attempt to address this question (see Lemma 3.3 and Theorem 3.1).
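Continuing the illustrative disk example from Section 2 (ours, not from [K4]): for \(X\) the unit disk, \(v=\partial_{y}\), and \(f(x,y)=y\), each trajectory is the vertical chord at abscissa \(x\), so \([\gamma]\mapsto x\) identifies \(\mathcal{T}(v)\) with \([-1,1]\), and the embedding \(\alpha\) becomes the standard inclusion:
\[\alpha(x,y)=([\gamma_{(x,y)}],\,f(x,y))=(x,y),\qquad \alpha(X)=\coprod_{x\in[-1,1]}\{x\}\times\big[-\sqrt{1-x^{2}},\,\sqrt{1-x^{2}}\big]\subset\mathcal{T}(v)\times\mathbb{R}.\]
Here \(\alpha(X)\) is visibly cut out by the boundary data alone: the intervals are spanned by the values of \(f^{\partial}\) on the fibers of \(\Gamma^{\partial}\).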
## 3. Recovering the algebra \(C^{\infty}(X)\) in terms of subalgebras of \(C^{\infty}(\partial_{1}X)\)

In what follows, we are inspired by the following classical property of functional algebras: for any compact smooth manifolds \(X,Y\), we have an algebra isomorphism \(C^{\infty}(X\times Y)\approx C^{\infty}(X)\hat{\otimes}C^{\infty}(Y)\), where \(\hat{\otimes}\) denotes an appropriate completion of the algebraic tensor product \(C^{\infty}(X)\otimes C^{\infty}(Y)\) [Grot].

The trajectory space \(\mathcal{T}(v)\), although a singular space, carries a surrogate smooth structure [K3]. By definition, a function \(h:\mathcal{T}(v)\to\mathbb{R}\) is smooth if its pull-back \(\Gamma^{*}(h):X\to\mathbb{R}\) is a smooth function on \(X\). As a subspace of \(C^{\infty}(X)\), the algebra \(C^{\infty}(\mathcal{T}(v))\) consists exactly of the smooth functions \(g:X\to\mathbb{R}\) whose directional derivatives \(\mathcal{L}_{v}g\) vanish in \(X\). If \(\mathcal{L}_{v}(g)=0\) and \(\mathcal{L}_{v}(h)=0\), then \(\mathcal{L}_{v}(g\cdot h)=\mathcal{L}_{v}(g)\cdot h+g\cdot\mathcal{L}_{v}(h)=0\). Thus, \(C^{\infty}(\mathcal{T}(v))\) is indeed a subalgebra of \(C^{\infty}(X)\). Note that if we change \(v\) by a non-vanishing conformal factor \(\lambda\), then \(\mathcal{L}_{v}g=0\) if and only if \(\mathcal{L}_{\lambda\cdot v}\,g=0\). Therefore, the algebra \(C^{\infty}(\mathcal{T}(v))\) depends only on the conformal class of \(v\); in other words, on the foliation \(\mathcal{F}(v)\). In the same spirit, we may talk about diffeomorphisms \(\Phi^{\mathcal{T}}:\mathcal{T}(v)\to\mathcal{T}(v)\) of the trajectory spaces, as maps that induce isomorphisms of the algebra \(C^{\infty}(\mathcal{T}(v))\).

If two (\(v\)-invariant) functions from \(C^{\infty}(\mathcal{T}(v))\) take different values at a point \([\gamma]\in\mathcal{T}(v)\), then they must take different values on the finite set \(\gamma\cap\partial_{1}X\subset\partial_{1}X\). Therefore, the obvious restriction homomorphism \(res^{\partial}_{\mathcal{T}}:C^{\infty}(\mathcal{T}(v))\to C^{\infty}(\partial_{1}X)\), induced by the inclusion \(\partial_{1}X\subset X\), is a _monomorphism_. We denote its image by \(C^{\infty}(\partial_{1}X,v)\). Thus, we get an isomorphism \(res^{\partial}_{\mathcal{T}}:C^{\infty}(\mathcal{T}(v))\to C^{\infty}(\partial_{1}X,v)\). We think of the subalgebra \(C^{\infty}(\partial_{1}X,v)\subset C^{\infty}(\partial_{1}X)\) as an integral part of the boundary data for the holography problems we are tackling.

Let \(\pi_{k}:J^{k}(X,\mathbb{R})\to X\) be the vector bundle of \(k\)-jets of smooth maps from \(X\) to \(\mathbb{R}\). We choose a continuous family of semi-norms \(|\sim|_{k}\) in the fibers of the jet bundle \(\pi_{k}\) and use it to define a sup-norm \(\|\sim\|_{k}\) for the sections of \(\pi_{k}\). We denote by \(jet^{k}\) the obvious map \(C^{\infty}(X,\mathbb{R})\to J^{k}(X,\mathbb{R})\) that takes each function \(h\) to the collection of its \(k\)-jets \(\{jet_{x}^{k}(h)\}_{x\in X}\). The Whitney topology [W3] in the space \(C^{\infty}(X)=\{h:X\to\mathbb{R}\}\) is defined in terms of the countable family of the norms \(\{\|jet^{k}(h)\|_{k}\}_{k\in\mathbb{N}}\) of such sections \(jet^{k}(h)\) of \(\pi_{k}\). This topology ensures the uniform convergence, on the compact subsets of \(X\), of functions and their partial derivatives of an arbitrary order. Note also that \(\|jet^{k}(h_{1}\cdot h_{2})\|_{k}\leq\|jet^{k}(h_{1})\|_{k}\cdot\|jet^{k}(h_{2})\|_{k}\) for any \(h_{1},h_{2}\in C^{\infty}(X)\).
Any subalgebra \(\mathcal{A}\subset C^{\infty}(X)\) inherits a topology from the Whitney topology in \(C^{\infty}(X)\). In particular, the subalgebra \(C^{\infty}(\mathcal{T}(v))\approx C^{\infty}(X,v)\) does. As locally convex vector spaces, \(C^{\infty}(\mathcal{T}(v))\) and \(C^{\infty}(\mathbb{R})\) are then nuclear ([DS], [Ga]), so that the topological tensor product \(C^{\infty}(\mathcal{T}(v))\,\hat{\otimes}\,C^{\infty}(\mathbb{R})\) (over \(\mathbb{R}\)) is uniquely defined as the completion of the algebraic tensor product \(C^{\infty}(\mathcal{T}(v))\,\otimes\,C^{\infty}(\mathbb{R})\) [Grot]. We interpret \(C^{\infty}(\mathcal{T}(v))\,\hat{\otimes}\,C^{\infty}(\mathbb{R})\) as the algebra of "smooth" functions on the product \(\mathcal{T}(v)\times\mathbb{R}\) and denote it by \(C^{\infty}(\mathcal{T}(v)\times\mathbb{R})\). **Lemma 3.1**.: _The intersection \(C^{\infty}(\mathcal{T}(v))\cap(f)^{*}(C^{\infty}(\mathbb{R}))=\underline{\mathbb{R}},\) the space of constant functions on \(X\)._ Proof.: If a smooth function \(h:X\to\mathbb{R}\) is constant on each \(v\)-trajectory \(\gamma\) and belongs to \((f)^{*}(C^{\infty}(\mathbb{R}))\), then it must be constant on each connected leaf of \(\mathcal{G}(f)\) that intersects \(\gamma\). Thus, such \(h\) is constant on the maximal closed _connected_ subset \(A_{\gamma}\subseteq f^{-1}(f(\gamma))\) that contains \(\gamma\). Each trajectory \(\gamma\), homeomorphic to a closed interval, has an open neighborhood such that, for any trajectory \(\gamma^{\prime}\) from that neighborhood, we have \(A_{\gamma}\cap A_{\gamma^{\prime}}\neq\emptyset\). Since \(X\) is connected, any pair \(\gamma,\gamma^{\prime}\) of trajectories may be connected by a path \(\delta\subset X\). Using the compactness of \(\delta\), we conclude that the function \(h\) must be a constant along \(\delta\). Therefore, \(h\) is a constant globally. Let us consider two subalgebras, \(f^{*}(C^{\infty}(\mathbb{R}))\subset C^{\infty}(X)\) and \((f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\subset C^{\infty}(\partial_{1}X)\), the second of which is assumed to be a "known" part of the boundary data. **Lemma 3.2**.: _The restriction operator \(H^{\partial}_{f}:f^{*}(C^{\infty}(\mathbb{R}))\to(f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\) to the boundary \(\partial_{1}X\) is an epimorphism of algebras. If the range \(f^{\partial}(\partial_{1}X)\) of \(f^{\partial}\) is a connected closed interval of \(\mathbb{R}\) (which is the case for a connected \(\partial_{1}X\)), then \(H^{\partial}_{f}\) is an isomorphism._ Proof.: The restriction operator \(H^{\partial}_{f}\) is an algebra epimorphism, since any composite function \(\phi\circ f^{\partial}\), where \(\phi\in C^{\infty}(\mathbb{R})\), is the restriction to \(\partial_{1}X\) of the function \(\phi\circ f\). On the other hand, when \(f^{\partial}(\partial_{1}X)\) is a connected subset of \(\mathbb{R}\), we claim that \(H^{\partial}_{f}\) is a monomorphism. Indeed, take a function \(\phi\in C^{\infty}(\mathbb{R})\), such that \(\phi\circ f^{\partial}\equiv 0\), but \(\phi\circ f\) is not identically zero on \(X\). Then there is \(x\in X\) such that \(\phi\circ f(x)\neq 0\). On the other hand, since the extrema of \(f\) are attained on \(\partial_{1}X\) and the range \(f^{\partial}(\partial_{1}X)\) is connected, we get \(f(x)=f^{\partial}(y)\) for some \(y\in\partial_{1}X\). By the assumption, \(\phi\circ f^{\partial}\equiv 0\), which implies that \(\phi(f(x))=\phi(f^{\partial}(y))=0\). This contradiction validates the claim about \(H^{\partial}_{f}\) being a monomorphism.
Therefore, when \(f^{\partial}(\partial_{1}X)\) is a connected interval, \(H^{\partial}_{f}\) is an isomorphism of algebras. Consider the homomorphism of algebras \[{\sf P}:C^{\infty}({\mathcal{T}}(v))\otimes f^{*}(C^{\infty}({\mathbb{R}}))\to C^{\infty}(X)\] that takes every finite sum \(\sum_{i}h_{i}\otimes(g_{i}\circ f)\), where \(h_{i}\in C^{\infty}({\mathcal{T}}(v))\subset C^{\infty}(X)\) and \(g_{i}\in C^{\infty}({\mathbb{R}})\), to the finite sum \(\sum_{i}h_{i}\cdot(g_{i}\circ f)\in C^{\infty}(X)\). Recall that, by Lemma 3.1, \(C^{\infty}({\mathcal{T}}(v))\cap(f)^{*}(C^{\infty}({\mathbb{R}}))={\underline{\mathbb{R}}}\), the constants. For any linearly independent \(\{h_{i}\}_{i}\), this lemma implies that if \(\sum_{i}h_{i}\cdot(g_{i}\circ f)\equiv 0\), then \(\{g_{i}\circ f\equiv 0\}_{i}\); therefore, \({\sf P}\) is a monomorphism. Let us compare the so-called projective crossnorms \(\{\|\sim\|_{k}\}_{k\in{\mathbb{Z}}_{+}}\) (see (3.1)) of an element \[\phi=\sum_{i}h_{i}\otimes(g_{i}\circ f)\] and the norms of the element \({\sf P}(\phi)=\sum_{i}h_{i}\cdot(g_{i}\circ f)\). By comparing the Taylor polynomial of the product of two smooth functions with the product of their Taylor polynomials, we get that, for all \(k\in{\mathbb{Z}}_{+}\), \[\|\phi\|_{k}=_{\sf def}\ \inf\bigl{\{}\sum_{i}\|h_{i}\|_{k}\cdot\|(g_{i}\circ f)\|_{k}\bigr{\}}\geq\ \|{\sf P}(\phi)\|_{k}, \tag{3.1}\] where \(\inf\) is taken over all the representations of the element \(\phi\in C^{\infty}({\mathcal{T}}(v))\otimes f^{*}(C^{\infty}({\mathbb{R}}))\) as a sum \(\sum_{i}h_{i}\otimes(g_{i}\circ f)\). Here we may assume that all \(\{h_{i}\}_{i}\) are linearly independent elements and so are all \(\{g_{i}\circ f\}_{i}\); otherwise, a simpler representation of \(\phi\) is available. By the inequality in (3.1), \({\sf P}\) is a bounded (continuous) operator. As a result, by continuity, \({\sf P}\) extends to an algebra homomorphism \[\hat{\sf P}:C^{\infty}({\mathcal{T}}(v))\,\hat{\otimes}\,f^{*}(C^{\infty}({\mathbb{R}}))\to C^{\infty}(X)\] whose source is the completion of the algebraic tensor product \(C^{\infty}({\mathcal{T}}(v))\otimes f^{*}(C^{\infty}({\mathbb{R}}))\). **Lemma 3.3**.: _The embedding \(\alpha:X\to{\mathcal{T}}(v)\times{\mathbb{R}}\) (introduced in the end of Section 2 and depicted in Fig. 3) induces an algebra epimorphism_ \[\alpha^{*}:C^{\infty}({\mathcal{T}}(v))\,\hat{\otimes}\,C^{\infty}({\mathbb{R}})\stackrel{{\sf id}\,\hat{\otimes}\,f^{*}}{{\longrightarrow}}C^{\infty}({\mathcal{T}}(v))\,\hat{\otimes}\,f^{*}(C^{\infty}({\mathbb{R}}))\stackrel{{\hat{\sf P}}}{{\longrightarrow}}C^{\infty}(X). \tag{3.2}\] _Moreover, the map \(\hat{\sf P}\) is an isomorphism._ Proof.: First, we claim that the subalgebra \({\sf P}\bigl{(}C^{\infty}({\mathcal{T}}(v))\otimes f^{*}(C^{\infty}({\mathbb{R}}))\bigr{)}\subset C^{\infty}(X)\) satisfies the three hypotheses of Nachbin's Theorem [Na]. Therefore, by [Na], the \({\sf P}\)-image of \(C^{\infty}({\mathcal{T}}(v))\otimes f^{*}(C^{\infty}({\mathbb{R}}))\) is _dense_ in \(C^{\infty}(X)\). Let us validate these three hypotheses. 1. _For each_ \(x\in X\)_, there is a function_ \(q\in C^{\infty}({\mathcal{T}}(v))\otimes f^{*}(C^{\infty}({\mathbb{R}}))\) _such that_ \(q(x)\neq 0\)_. Just take_ \(q=1\otimes((t+c)\circ f)=f+c\)_, where_ \(c>-\min_{X}f\) _and_ \(t:{\mathbb{R}}\to{\mathbb{R}}\) _is the identity._
2. _For each_ \(x,y\in X\)_, there is a function_ \(q\in C^{\infty}({\mathcal{T}}(v))\otimes f^{*}(C^{\infty}({\mathbb{R}}))\) _such that_ \(q(x)\neq q(y)\) _(i.e., the algebra_ \(C^{\infty}({\mathcal{T}}(v))\otimes f^{*}(C^{\infty}({\mathbb{R}}))\) _separates the points of_ \(X\)_). If_ \(f(x)\neq f(y)\)_,_ \(q=f\) _will do. If_ \(f(x)=f(y)\)_, but_ \([\gamma_{x}]\neq[\gamma_{y}]\)_, then there is a_ \(v\)_-invariant function_ \(h\in C^{\infty}({\mathcal{T}}(v))\) _such that_ \(h(x)=1\) _and_ \(h(y)=0\)_._ To construct this \(h\), we take a transversal section \(S_{x}\subset\hat{X}\) of the \(\hat{v}\)-flow in the vicinity of \(x\) such that all the \(\hat{v}\)-trajectories through \(S_{x}\) are distinct from the trajectory \(\gamma_{y}\). We pick a smooth function \(\tilde{h}:S_{x}\to\mathbb{R}\) such that \(\tilde{h}\) is supported in \(\mathsf{int}(S_{x})\), vanishes with all its derivatives along the boundary \(\partial S_{x}\), and \(\tilde{h}(x)=1\). Let \(\mathcal{S}\) denote the set of \(\hat{v}\)-trajectories through \(S_{x}\). Of course, \(\tilde{h}\) extends to a smooth function \(h^{\dagger}:\mathcal{S}\to\mathbb{R}\) so that \(h^{\dagger}\) is constant along each trajectory from \(\mathcal{S}\). We denote by \(h^{\ddagger}\) the obvious extension of \(h^{\dagger}\) by the zero function. Finally, the restriction \(h\) of \(h^{\ddagger}\) to \(X\) separates \(x\) and \(y\). 3. _For each \(x\in X\) and \(w\in T_{x}X\), there is a function \(q\in C^{\infty}(\mathcal{T}(v))\otimes f^{*}(C^{\infty}(\mathbb{R}))\) such that \(dq_{x}(w)\neq 0\)._ Let us decompose \(w=av+bw^{\dagger}\), where \(a,b\in\mathbb{R}\) and the vector \(w^{\dagger}\) is tangent to the hypersurface \(S_{x}=\hat{f}^{-1}(f(x))\). If \(a\neq 0\), then \(df(w)\neq 0\). If \(a=0\), then there is a function \(\tilde{h}:S_{x}\to\mathbb{R}\) which, with all its derivatives, is compactly supported in the vicinity of \(x\) in \(S_{x}\) and such that \(d\tilde{h}_{x}(w^{\dagger})\neq 0\). As in the case (2), this function extends to a desired function \(h\in C^{\infty}(\mathcal{T}(v))\). Now put \(q=h\otimes 1\). As a result, the image of \(\mathsf{P}:C^{\infty}(\mathcal{T}(v))\otimes f^{*}(C^{\infty}(\mathbb{R}))\longrightarrow C^{\infty}(X)\) is dense. Therefore, \(\hat{\mathsf{P}}\) and, thus, \(\alpha^{*}:C^{\infty}(\mathcal{T}(v))\,\hat{\otimes}\,C^{\infty}(\mathbb{R})\longrightarrow C^{\infty}(X)\) are epimorphisms. Let us show that \(\hat{\mathsf{P}}\) is also a monomorphism. Take a typical element \[\theta=\sum_{i=1}^{\infty}h_{i}\otimes(g_{i}\circ f)\in C^{\infty}(X,v)\,\hat{\otimes}\,f^{*}(C^{\infty}(\mathbb{R})),\] viewed as a sum that converges in all the norms \(\|\sim\|_{k}\) from (3.1). We aim to prove that if \(\hat{\mathsf{P}}(\theta)=\sum_{i=1}^{\infty}h_{i}\cdot(g_{i}\circ f)\) vanishes on \(X\), then \(\theta=0\). For each point \(x\in\mathsf{int}(X)\), there is a small closed cylindrical solid \(H_{x}\subset\mathsf{int}(X)\) that contains \(x\) and consists of segments of trajectories through a small \(n\)-ball \(D^{n}\subset f^{-1}(f(x))\), transversal to the flow. Thus, the product structure \(D^{1}\times D^{n}\) of the solid \(H_{x}\) is given by the \(v\)-flow and the Lyapunov function \(f:X\to\mathbb{R}\). We localize the problem to the cylinder \(H_{x}\).
Consider the commutative diagram \[C^{\infty}(X,v)\,\hat{\otimes}\,f^{*}(C^{\infty}(\mathbb{R}))\stackrel{{\hat{\mathsf{P}}}}{{\longrightarrow}}C^{\infty}(X)\] \[\qquad\qquad\downarrow\mathsf{res}^{\prime}\hat{\otimes}\mathsf{res}^{\prime\prime}\qquad\qquad\qquad\downarrow\mathsf{res}\] \[C^{\infty}(D^{n})\,\hat{\otimes}\,C^{\infty}(D^{1})\stackrel{{\approx\,\hat{\mathsf{Q}}}}{{\longrightarrow}}C^{\infty}(H_{x}), \tag{3.3}\] where \(\mathsf{res}:C^{\infty}(X)\to C^{\infty}(H_{x})\) is the natural homomorphism, \[(\mathsf{res}^{\prime}\hat{\otimes}\mathsf{res}^{\prime\prime})\big{(}\sum_{i=1}^{\infty}h_{i}\otimes(g_{i}\circ f)\big{)}=_{\mathsf{def}}\ \sum_{i=1}^{\infty}h_{i}|_{D^{n}}\otimes(g_{i}\circ f)|_{D^{1}},\] and \(\hat{\mathsf{Q}}\big{(}\sum_{i=1}^{\infty}\tilde{h}_{i}\otimes\tilde{g_{i}}\big{)}=_{\mathsf{def}}\ \sum_{i=1}^{\infty}\tilde{h}_{i}\cdot\tilde{g_{i}}\quad\text{for $\tilde{h}_{i}\in C^{\infty}(D^{n})$, $\tilde{g}_{i}\in C^{\infty}(D^{1})$.}\) Since \(\hat{\mathsf{Q}}\) is an isomorphism [Grot] and \(\hat{\mathsf{P}}(\theta)=0\), it follows from (3.3) that \(\theta\in\ker(\mathsf{res}^{\prime}\hat{\otimes}\mathsf{res}^{\prime\prime})\) for any cylinder \(H_{x}\). After reshuffling terms in the sum, one may assume that all the functions \(\{h_{i}|_{D^{n}}\}_{i}\) are linearly independent. Using that the two tensor factors depend on the complementary groups of coordinates in \(H_{x}\), we conclude that the functions \(\{(g_{i}\circ f)|_{D^{1}}\}_{i}\) must vanish for any \(H_{x}\subset\mathsf{int}(X)\). As a result, \(\theta=0\) globally in \(\mathsf{int}(X)\) and, by continuity, \(\theta\) vanishes on \(X\). Consider now the "known" homomorphism of algebras \[(\alpha^{\partial})^{*}:\,C^{\infty}(\mathcal{T}(v))\,\hat{\otimes}\,C^{\infty}(\mathbb{R})\stackrel{{\mathsf{res}^{\partial}_{\mathcal{T}}\,\hat{\otimes}\,(f^{\partial})^{*}}}{{\longrightarrow}}C^{\infty}(\partial_{1}X,v)\,\hat{\otimes}\,(f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\stackrel{{\hat{\mathsf{R}}^{\partial}}}{{\longrightarrow}}C^{\infty}(\partial_{1}X), \tag{3.4}\] utilizing the boundary data. Here, by the definition of \(C^{\infty}(\partial_{1}X,v)\), \(\mathsf{res}^{\partial}_{\mathcal{T}}:C^{\infty}(\mathcal{T}(v))\to C^{\infty}(\partial_{1}X,v)\) is an isomorphism, and \(\hat{\mathsf{R}}^{\partial}\) denotes the completion of the bounded homomorphism \(\mathsf{R}^{\partial}\) that takes each element \(\sum_{i}h_{i}\otimes(g_{i}\circ f^{\partial})\), where \(h_{i}\in C^{\infty}(\partial_{1}X,v)\) and \(g_{i}\in C^{\infty}(\mathbb{R})\), to the sum \(\sum_{i}h_{i}\cdot(g_{i}\circ f^{\partial})\). The next lemma shows that the hypotheses of Theorem 3.1 are not restrictive, even when \(\partial_{1}X\) has many connected components. **Lemma 3.4**.: _Any traversing vector field \(v\) on a connected compact manifold \(X\) admits a Lyapunov function \(f:X\to\mathbb{R}\) such that \(f(X)=f(\partial_{1}X)\)._ Proof.: Note that, for any Lyapunov function \(f\), the image \(f(\partial_{1}X)\) is a disjoint union of finitely many closed intervals \(\{I_{k}=[a_{k},b_{k}]\}_{k}\), where the index \(k\) reflects the natural order of intervals in \(\mathbb{R}\). We will show how to decrease, step by step, the number of these intervals by deforming the original function \(f\). Note that the local extrema of any Lyapunov function on \(X\) occur on its boundary \(\partial_{1}X\) and away from the locus \(\partial_{2}X(v)\) where \(v\) is tangent to \(\partial_{1}X\). Consider a pair of points \(A_{k+1},B_{k}\in\partial_{1}X\setminus\partial_{2}X(v)\) such that \(f(A_{k+1})=a_{k+1}\) and \(f(B_{k})=b_{k}\), where \(a_{k+1}>b_{k}\).
Then we can increase \(f\) in the vicinity of its local maximum \(B_{k}\) so that the \(B_{k}\)-localized deformation \(\tilde{f}\) of \(f\) has the property \(\tilde{f}(B_{k})>f(A_{k+1})\) and \(\tilde{f}\) is a Lyapunov function for \(v\). This construction decreases the number of intervals in \(\tilde{f}(\partial_{1}X)\) in comparison to \(f(\partial_{1}X)\) at least by one. We are ready to state the main result of this paper. **Theorem 3.1**.: _Assuming that the range \(f^{\partial}(\partial_{1}X)\) is a connected interval of \(\mathbb{R}\),1 the algebra \(C^{\infty}(X)\) is isomorphic to the subalgebra_ Footnote 1: which is the case for a connected \(\partial_{1}X\) \[C^{\infty}(\partial_{1}X,v)\,\hat{\otimes}\,(f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\subset C^{\infty}(\partial_{1}X)\hat{\otimes}C^{\infty}(\partial_{1}X).\] _Moreover, by combining (3.2) with (3.4), we get a commutative diagram_ \[C^{\infty}(\mathcal{T}(v))\,\hat{\otimes}\,f^{*}(C^{\infty}(\mathbb{R}))\stackrel{{\hat{\mathsf{P}}}}{{\longrightarrow}}C^{\infty}(X)\] \[\downarrow\mathsf{id}\,\hat{\otimes}\,H_{f}^{\partial}\qquad\qquad\downarrow\mathsf{res}\] \[C^{\infty}(\partial_{1}X,v)\,\hat{\otimes}\,(f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\stackrel{{\hat{\mathsf{R}}^{\partial}}}{{\longrightarrow}}C^{\infty}(\partial_{1}X), \tag{3.5}\] _whose vertical homomorphism \(\operatorname{\sf id}\hat{\otimes}H^{\partial}_{f}\) and the horizontal homomorphism \(\hat{\mathsf{P}}\) are isomorphisms, and the vertical epimorphism \(\operatorname{\mathsf{res}}\) is the obvious restriction operator._ _As a result, inverting \(\operatorname{\sf id}\hat{\otimes}H^{\partial}_{f}\), we get an algebra isomorphism_ \[\mathcal{H}(v,f):C^{\infty}(\partial_{1}X,v)\,\hat{\otimes}\,(f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\approx C^{\infty}(X). \tag{3.6}\] Proof.: Consider the commutative diagram (3.5). Its upper-right corner is "unknown", while the lower row is "known" and represents the boundary data, and \(\operatorname{\mathsf{res}}\) is obviously an epimorphism. By Lemma 3.2, the left vertical arrow \(\operatorname{\sf id}\hat{\otimes}H^{\partial}_{f}\) is an isomorphism. Since, by Lemma 3.3, \(\hat{\mathsf{P}}\) is an isomorphism, it follows that \(\hat{\mathsf{P}}\circ(\operatorname{\sf id}\hat{\otimes}H^{\partial}_{f})^{-1}\) must be an isomorphism as well. In particular, \(\hat{\mathsf{R}}^{\partial}\) is an epimorphism, whose kernel is isomorphic to the ideal of smooth functions on \(X\) whose restrictions to \(\partial_{1}X\) vanish. If \(z\in C^{\infty}(X)\) is a smooth function such that zero is its regular value, \(z^{-1}(0)=\partial_{1}X\), and \(z>0\) in \(\operatorname{\sf int}(X)\), then the kernel of \(\operatorname{\mathsf{res}}\) is the principal ideal \(\mathsf{m}(z)\), generated by \(z\). Therefore, by the commutativity of (3.5), the kernel of the homomorphism \(\hat{\mathsf{R}}^{\partial}\) must also be a principal ideal \(\mathsf{M}_{\partial}\), generated by the element \(\big{(}\hat{\mathsf{P}}\circ(\operatorname{\sf id}\hat{\otimes}H^{\partial}_{f})^{-1}\big{)}^{-1}(z)\).
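Theorem 3.1 ultimately rests on approximating smooth functions on \(X\) by finite sums of products of a "trajectory variable" and a "Lyapunov variable". A finite-dimensional numerical analogue of this density claim, ours and not from the paper, is the truncated SVD of a sampled function of two variables, which produces exactly such separable sums:

```python
# A finite-dimensional analogue (ours) of the density statement behind
# Theorem 3.1: a smooth function of two variables, sampled on a grid, is well
# approximated by short sums sum_i h_i(s) * g_i(t) of products, obtained here
# from a truncated SVD of the sample matrix.
import numpy as np

s = np.linspace(0, 1, 200)     # stands in for the T(v)-coordinate
t = np.linspace(0, 1, 200)     # stands in for the value of the Lyapunov function
F = np.exp(-(s[:, None] - t[None, :])**2) * np.cos(3 * s[:, None] * t[None, :])

U, sig, Vt = np.linalg.svd(F)
for rank in (1, 3, 6):
    approx = (U[:, :rank] * sig[:rank]) @ Vt[:rank]   # sum of `rank` products
    print(rank, np.max(np.abs(F - approx)))           # the error drops rapidly
```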
**Corollary 3.1**.: _If the range \(f^{\partial}(\partial_{1}X)\) is a connected interval in \(\mathbb{R}\), then the two topological algebras \(C^{\infty}(\partial_{1}X,v)\subset C^{\infty}(\partial_{1}X)\) and \((f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\subset C^{\infty}(\partial_{1}X)\) determine, up to an isomorphism, the algebra \(C^{\infty}(X)\), and thus determine the smooth topological type of the manifold \(X\)._ Proof.: We call a maximal ideal of an algebra \(\mathcal{A}\) nontrivial if it is different from \(\mathcal{A}\). By Theorem 3.1, the algebra \(C^{\infty}(X)\) is determined by the two algebras on \(\partial_{1}X\), up to an isomorphism. In turn, the algebra \(C^{\infty}(X)\) determines the smooth topological type of \(X\), viewed as a ringed space. This fact is based on interpreting \(X\) as the space \(\mathcal{M}(C^{\infty}(X))\) of nontrivial maximal ideals of the algebra \(C^{\infty}(X)\) [KMS]. Let \(\mathsf{m}^{\partial}_{v}\triangleleft\,C^{\infty}(\partial_{1}X,v)\) and \(\mathsf{m}^{\partial}_{f}\triangleleft(f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\) be a pair of nontrivial maximal ideals. Note that \(\mathsf{m}^{\partial}_{v}=\mathsf{m}^{\partial}_{v}([\gamma])\) consists of functions from \(C^{\infty}(\partial_{1}X,v)\) that vanish on the locus \(\gamma\cap\partial_{1}X\), and \(\mathsf{m}^{\partial}_{f}=\mathsf{m}^{\partial}_{f}(c)\) consists of functions from \((f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\) that vanish on the locus \(\partial_{1}X\cap f^{-1}(c)\), where \(c\in f(\partial_{1}X)\subset\mathbb{R}\). We denote by \(\langle\mathsf{m}^{\partial}_{v},\mathsf{m}^{\partial}_{f}\rangle\) the maximal ideal of \(C^{\infty}(\partial_{1}X,v)\,\hat{\otimes}\,(f^{\partial})^{*}(C^{\infty}( \mathbb{R}))\) that contains both ideals \(\mathsf{m}^{\partial}_{v}\hat{\otimes}1\) and \(1\hat{\otimes}\mathsf{m}^{\partial}_{f}\). If the range \(f^{\partial}(\partial_{1}X)\) is a connected interval of \(\mathbb{R}\) and \(\langle\mathsf{m}^{\partial}_{v},\mathsf{m}^{\partial}_{f}\rangle\) is a nontrivial ideal, then \(\gamma\cap f^{-1}(c)\neq\emptyset\). Otherwise, \(\gamma\cap f^{-1}(c)=\emptyset\). Therefore, with the help of the isomorphism \(\mathcal{H}(v,f)\) from (3.6), the nontrivial maximal ideals of \(C^{\infty}(X)\) (which by [KMS] correspond to points \(x=\gamma\cap f^{-1}(c)\in X\)) are of the form \(\mathcal{H}(v,f)\big{(}\langle\mathsf{m}^{\partial}_{v},\mathsf{m}^{\partial}_{ f}\rangle\big{)}\). **Corollary 3.2**.: _Let the range \(f^{\partial}(\partial_{1}X)\) be a connected interval of \(\mathbb{R}\). With the isomorphism \(\mathcal{H}(v,f)\) from (3.6) being fixed, any algebra isomorphism \(\Psi^{\partial}:C^{\infty}(\partial_{1}X)\to C^{\infty}(\partial_{1}X)\) that preserves the subalgebras \(C^{\infty}(\partial_{1}X,v)\) and \((f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\) extends canonically to the algebra isomorphism \(\Psi:C^{\infty}(X)\to C^{\infty}(X)\)._ _Thus, an action of any group \(\mathsf{G}\) of such isomorphisms \(\Psi^{\partial}\) extends canonically to a \(\mathsf{G}\)-action on the algebra \(C^{\infty}(X)\) and, via it, to a \(\mathsf{G}\)-action on \(X\) by smooth diffeomorphisms._ Proof.: By [Mr], any algebra isomorphism \(\Psi:C^{\infty}(X_{1})\to C^{\infty}(X_{2})\) is induced by a unique smooth diffeomorphism \(\Phi:X_{1}\to X_{2}\). With this fact in hand, by Theorem 2.1 and Theorem 3.1, the proof is on the level of definitions. 
It remains to address the following crucial question: how to characterize intrinsically the trace \(C^{\infty}(\partial_{1}X,v)\) of the algebra \(C^{\infty}(\mathcal{T}(v))\approx\ker\{\mathcal{L}_{v}:C^{\infty}(X)\to C^{\infty}(X)\}\) in the algebra \(C^{\infty}(\partial_{1}X)\)? Evidently, functions from \(C^{\infty}(\partial_{1}X,v)\) are constant along each \(C_{v}\)-"trajectory" \(\gamma^{\partial}:=\gamma\cap\partial_{1}X\) of the causality map. Furthermore, any smooth function \(\psi:\partial_{1}X\to\mathbb{R}\) that is constant on each finite set \(\gamma^{\partial}\) gives rise to a unique _continuous_ function \(\phi\) on \(X\) that is constant along each \(v\)-trajectory \(\gamma\). However, such functions \(\phi\) may not be automatically _smooth_ on \(X\) (a priori, they are just Hölderian, with some control of the Hölder exponent that depends on the dimension of \(X\) only)! This potential complication leads to the following question. **Question 3.1**.: _For a traversing and boundary generic (alternatively, traversally generic) vector field \(v\) on \(X\), is it possible to characterize the subalgebra \(C^{\infty}(\partial_{1}X,v)\subset C^{\infty}(\partial_{1}X)\) in terms of the causality map \(C_{v}\) and, perhaps, some additional \(v\)-generated data, residing in \(\partial_{1}X\)? \(\diamondsuit\)_ To get some feel for a possible answer, we need the notion of the Morse stratification of the boundary \(\partial_{1}X\) that a vector field \(v\) generates [Mo]. Let \(\dim(X)=n+1\) and \(v\) be a boundary generic traversing vector field on \(X\). Let us recall the definition of the Morse stratification \(\{\partial_{j}^{\pm}X(v)\}_{j\in[1,n+1]}\) of \(\partial_{1}X\). We define the set \(\partial_{2}X(v)\) as the locus where \(v\) is tangent to \(\partial_{1}X\). It separates \(\partial_{1}X\) into \(\partial_{1}^{+}X(v)\) and \(\partial_{1}^{-}X(v)\). Let \(\partial_{3}X(v)\) be the locus where \(v\) is tangent to \(\partial_{2}X(v)\). For a boundary generic \(v\), \(\partial_{2}X(v)\) is a smooth submanifold of \(\partial_{1}X(v)\) and \(\partial_{3}X(v)\) is a submanifold that divides \(\partial_{2}X(v)\) into two regions, \(\partial_{2}^{+}X(v)\) and \(\partial_{2}^{-}X(v)\). Along \(\partial_{2}^{+}X(v)\), \(v\) points inside of \(\partial_{1}^{+}X(v)\), and along \(\partial_{2}^{-}X(v)\), \(v\) points inside of \(\partial_{1}^{-}X(v)\). This construction self-replicates until we reach finite sets \(\partial_{n+1}^{\pm}X(v)\). By definition, the boundary generic vector fields [K1] are the ones that satisfy certain nested transversality of \(v\) with respect to the boundary \(\partial_{1}X\), the transversality that guarantees that all the Morse strata \(\partial_{j}X(v)\) are regular closed submanifolds and all the strata \(\partial_{j}^{\pm}X(v)\) are compact submanifolds. For a traversing boundary generic \(v\), the map \(C_{v}:\partial_{1}^{+}X(v)\to\partial_{1}^{-}X(v)\) makes it possible to recover the Morse stratification \(\{\partial_{j}^{\pm}X(v)\}_{j>0}\) ([K4]). Let us describe now a good candidate for the subalgebra \(C^{\infty}(\partial_{1}X,v)\) in the algebra \(C^{\infty}(\partial_{1}X)\). We denote by \(\mathcal{L}_{v}^{(k)}\) the \(k\)-th iteration of the Lie derivative \(\mathcal{L}_{v}\). Let \(\mathsf{M}(v)\) be the subalgebra of smooth functions \(\psi:\partial_{1}X\to\mathbb{R}\) such that \((\mathcal{L}_{v}^{(k)}\psi)\big{|}_{\partial_{k+1}X(v)}=0\) for all \(k\leq n\) (by the Leibniz rule, \(\mathsf{M}(v)\) is indeed a subalgebra).
Let us denote by \(\mathsf{M}(v)^{C_{v}}\) the subalgebra of functions from \(\mathsf{M}(v)\) that are constant on each (finite) \(C_{v}\)-trajectory \(\gamma^{\partial}:=\gamma\cap\partial_{1}X\subset\partial_{1}X\). **Conjecture 3.1**.: _Let \(v\) be a traversing and boundary generic vector field on a smooth compact \((n+1)\)-manifold \(X\). Then the algebra \(C^{\infty}(\partial_{1}X,v)\) coincides with the subalgebra \(\mathsf{M}(v)^{C_{v}}\subset C^{\infty}(\partial_{1}X)\)._ _In particular, \(C^{\infty}(\partial_{1}X,v)\) can be determined by the causality map \(C_{v}\) and the restriction of \(v\) to \(\partial_{2}X(v)\). \(\diamondsuit\)_ It is easy to check that \(C^{\infty}(\partial_{1}X,v)\subset\mathsf{M}(v)^{C_{v}}\); the challenge is to show that the two algebras coincide. The Holography Theorem (Corollary 2.1) has been established assuming Property \(\mathsf{A}\) from Definition 2.1. If one assumes the validity of Conjecture 3.1, then, by Corollary 3.1, we may drop Property \(\mathsf{A}\) from the hypotheses of the Holography Theorem. Indeed, the subalgebras \(C^{\infty}(\partial_{1}X,v)\) and \((f^{\partial})^{*}(C^{\infty}(\mathbb{R}))\) would acquire a description in terms of \(C_{v}\) and \(f^{\partial}\). This would deliver an independent proof of a natural generalization of Corollary 2.1. _Acknowledgments:_ The author is grateful to Vladimir Goldshtein for his valuable help with the analysis of spaces of smooth functions.
2306.07075
Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence
Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence, and leveraging LLMs to identify inconsistencies in law. This paper explores LLM capabilities in applying tax law. We choose this area of law because it has a structure that allows us to set up automated validation pipelines across thousands of examples, requires logical reasoning and maths skills, and enables us to test LLM capabilities in a manner relevant to real-world economic lives of citizens and companies. Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release. We experiment with retrieving and utilising the relevant legal authority to assess the impact of providing additional legal context to LLMs. Few-shot prompting, presenting examples of question-answer pairs, is also found to significantly enhance the performance of the most advanced model, GPT-4. The findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels. As LLMs continue to advance, their ability to reason about law autonomously could have significant implications for the legal profession and AI governance.
John J. Nay, David Karamardian, Sarah B. Lawsky, Wenting Tao, Meghana Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H. Choi, Jungo Kasai
2023-06-12T12:40:48Z
http://arxiv.org/abs/2306.07075v1
# Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence ###### Abstract Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence, and leveraging LLMs to identify inconsistencies in law. This paper explores LLM capabilities in applying tax law. We choose this area of law because it has a structure that allows us to set up automated validation pipelines across thousands of examples, requires logical reasoning and maths skills, and enables us to test LLM capabilities in a manner relevant to real-world economic lives of citizens and companies. Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release. We experiment with retrieving and utilising the relevant legal authority to assess the impact of providing additional legal context to LLMs. Few-shot prompting, presenting examples of question-answer pairs, is also found to significantly enhance the performance of the most advanced model, GPT-4. The findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels. As LLMs continue to advance, their ability to reason about law autonomously could have significant implications for the legal profession and AI governance.
2304.09621
Simple Security Proof of Mode-Pairing Quantum Key Distribution
Mode-pairing (MP) quantum key distribution (QKD) eliminates the requirements of phase locking and phase tracking compared with twin-field (TF) QKD while still surpassing the fundamental rate-distance limit of QKD. The complexity of the experimental implementation is reduced while the efficiency is also guaranteed. The security of MP-QKD is proved rigorously by examining in detail the consistency of the states between MP-QKD and the fixed-pairing scheme under all of Eve's possible interference, where the latter is equivalent to measurement-device-independent (MDI) QKD. Here we propose a simple and straightforward method to prove the information-theoretic security of MP-QKD. Specifically, an entanglement scheme for MP-QKD is proposed and its security is proved using entanglement purification. Then the security of MP-QKD can be guaranteed with the equivalence of the entanglement scheme and prepare-and-measure scheme for MP-QKD. With this approach, it is beneficial to analyze and understand the performance and security of MP-QKD. We explain why the pairing rounds in MP-QKD can be decoupled and determined by the measurement results announced by a third party, which is the key difference between MP-QKD and MDI-QKD. Moreover, we analyze the security of MP-QKD with the allowed optimal pairing strategy, which is significant for the secret key rate, under collective and coherent attacks.
Yi-Fei Lu, Yang Wang, Hong-Wei Li, Mu-Sheng Jiang, Xiao-Xu Zhang, Ying-Ying Zhang, Yu Zhou, Xiao-Lei Jiang, Chun Zhou, Wan-Su Bao
2023-04-19T12:59:43Z
http://arxiv.org/abs/2304.09621v1
# Simple Security Proof of Mode-Pairing Quantum Key Distribution ###### Abstract Mode-pairing (MP) quantum key distribution (QKD) eliminates the requirements of phase locking and phase tracking compared with twin-field (TF) QKD while still surpassing the fundamental rate-distance limit of QKD. The complexity of the experimental implementation is reduced while the efficiency is also guaranteed. The security of MP-QKD is proved rigorously by examining in detail the consistency of the states between MP-QKD and the fixed-pairing scheme under all of Eve's possible interference, where the latter is equivalent to measurement-device-independent (MDI) QKD. Here we propose a simple and straightforward method to prove the information-theoretic security of MP-QKD. Specifically, an entanglement scheme for MP-QKD is proposed and its security is proved using entanglement purification. Then the security of MP-QKD can be guaranteed with the equivalence of the entanglement scheme and prepare-and-measure scheme for MP-QKD. With this approach, it is beneficial to analyze and understand the performance and security of MP-QKD. We explain why the pairing rounds in MP-QKD can be decoupled and determined by the measurement results announced by a third party, which is the key difference between MP-QKD and MDI-QKD. Moreover, we analyze the security of MP-QKD with the allowed optimal pairing strategy, which is significant for the secret key rate, under collective and coherent attacks. April 2023 _Keywords_: Quantum Key Distribution, Mode-Pairing, Information-Theoretic Security ## 1 Introduction Quantum key distribution (QKD) provides a method for distributing secret key bits with information-theoretic security that is guaranteed by the laws of quantum physics [1, 2, 3]. However, rigorous security requires certain implementation assumptions that may not be met in practical systems. Additionally, the practical performance, including secret key rate and distance, is limited by the channel loss. Many theoretical and experimental breakthroughs have been made to overcome these practical challenges [4, 5]. The decoy-state method [6, 7, 8] enables practical QKD with weak coherent pulses by characterizing the quantum channel with additional states. This approach can overcome photon-number-splitting (PNS) attacks [9, 10] and achieve a secret key rate comparable to that of the single-photon source. The measurement-device-independent (MDI) QKD [11, 12, 13, 14] improves the practical security of QKD by eliminating all possible security loopholes at the detection side. In MDI-QKD, Alice and Bob are both located at the source side, while an untrusted party named Charlie performs the measurement procedure. Charlie can only infer the parity of Alice and Bob's bits but not their specific values. From the perspective of entanglement purification, the correlation between Alice and Bob is established through entanglement swapping with Charlie's Bell state measurement. The combination of the decoy-state method and MDI-QKD significantly enhances both practicality and practical security [15, 16]. The practical performance of QKD is limited by the optical loss in the quantum channel. There exists an upper bound on the secret key rate at a fixed distance, which decreases exponentially as the distance increases. For example, the PLOB bound [17] characterizes the fundamental rate-distance limit of QKD without quantum repeaters.
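For orientation, the PLOB bound takes the standard closed form \(-\log_{2}(1-\eta)\) for a pure-loss channel of transmittance \(\eta\) [17]. The sketch below (ours, not from this paper) evaluates it under an assumed fiber loss of 0.2 dB/km, a typical telecom value, and compares it with the \(O(\sqrt{\eta})\) scaling discussed next:

```python
# Numerical sketch (ours): the repeaterless PLOB bound -log2(1 - eta) versus
# the O(sqrt(eta)) scaling that TF/MP-QKD targets. Fiber loss of 0.2 dB/km is
# an assumed, typical value.
import numpy as np

alpha_db_per_km = 0.2
for L_km in (100, 300, 500):
    eta = 10 ** (-alpha_db_per_km * L_km / 10)   # channel transmittance
    plob = -np.log2(1 - eta)                     # PLOB upper bound per pulse
    print(f"{L_km:4d} km: eta={eta:.2e}  PLOB={plob:.2e}  sqrt(eta)={np.sqrt(eta):.2e}")
```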
However, the proposal of twin-field (TF) QKD [18, 19, 20, 21, 22, 23, 24, 25] has successfully overcome this limit based on single-photon interference. The core of TF-QKD is entanglement swapping, which is similar to MDI-QKD but uses coherent states instead of single-photon states as carriers. Many experimental breakthroughs have been achieved at present [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. However, TF-QKD encodes the key information into the phase of coherent states and requires technologies for phase tracking and phase locking to align the phase reference frame and compensate the phase drift, which is complex and time-consuming. To remove these technologies from TF-QKD, mode-pairing (MP) QKD (also known as asynchronous MDI-QKD) was simultaneously proposed in Refs. [37] and [38], which can surpass the fundamental rate-distance limit of QKD. MP-QKD is an enhanced version of the time-bin encoding MDI-QKD [14]: it eliminates the requirement that coincidence detection events are only sifted between neighboring rounds, i.e., the paired rounds are decoupled. Therefore, the secret key rate is improved, as numerous effective rounds without effective neighbors can be recycled, resulting in the scaling of \(O(\sqrt{\eta})\) when the maximal pairing interval \(l\rightarrow\infty\) [37]. This means that MP-QKD can achieve both practicality and efficiency. To demonstrate the power of MP-QKD, two experiments have been performed in the laboratory [39, 40], which successfully surpass the PLOB bound. In MP-QKD, Alice and Bob need to pair two effective rounds, e.g., \(j\) and \(k\), as a pair (\(k=j+1\) in MDI-QKD). The decoupling of the pairing rounds \(j\) and \(k\) is a major difference between MP-QKD and MDI-QKD and is essential for the security of MP-QKD, since in MP-QKD the rounds \(j\) and \(k\) are determined by the third untrusted party. In fact, an improper pairing strategy may render MP-QKD unsafe. For this issue, Ref. [37] first proves the security of MP-QKD with the fixed-pairing strategy and then proves the equivalence to MP-QKD by examining the initial states, private states, and all of Eve's possible operations. We note that the security of MP-QKD is rigorously guaranteed with a proper pairing strategy. In this manuscript, we provide a simple security proof of MP-QKD directly by proposing an entanglement scheme that is equivalent to the prepare-and-measure scheme for MP-QKD. Therefore, we do not need to examine the states under Eve's operations. With this method, it is easy to understand why the rounds \(j\) and \(k\) are decoupled and how entanglement between Alice and Bob is established. Furthermore, it would be beneficial to examine other aspects of MP-QKD. For instance, in this manuscript we analyze the allowed and optimal pairing strategy under the i.i.d. assumption for both collective and coherent attacks. The manuscript is organized as follows. In Sec. 2, we introduce the MP-QKD protocol and provide some notations. Additionally, we propose an entanglement scheme for MP-QKD, prove its security and reduce it to the prepare-and-measure scheme in Sec. 3. Then we present the decoy-state method for estimating the parameters in the secret key rate in Sec. 4. In Sec. 5, we analyze the efficiency of the optimal pairing strategy and the security under collective and coherent attacks. Finally, the conclusion is given in Sec. 6.
## 2 MP-QKD Protocol In time-bin encoding MDI-QKD, Alice and Bob prepare the paired states at the beginning, which means that the states in two paired rounds are correlated in the state preparation stage. However, in MP-QKD, the paired rounds are determined by Eve's measurement results, and the states in different rounds are prepared independently because Alice and Bob cannot predict the measurement results during the state preparation stage. Nevertheless, Alice and Bob need to correlate these states a posteriori according to the measurement results, which leads to the differences between MDI-QKD and MP-QKD. In this section, we present the MP-QKD protocol [37, 38] with its pairing strategy and provide some notations. The detailed decoy-state method is omitted here and will be discussed in Sec. 4. **Box 1: MP-QKD [37, 38]** (1) State preparation. In the \(k\)-th round, Alice chooses a random bit \(a_{k}\in\mathbb{Z}_{2}\) and a random phase \(\theta_{k}^{a}\in[0,2\pi)\). Then Alice determines the \(Z\) and \(X\) windows with probabilities \(p_{z}\) and \(p_{x}=1-p_{z}\), respectively. In the \(Z\) window, Alice prepares a vacuum state \(|0\rangle\) or coherent state \(|e^{i\theta_{k}^{a}}\sqrt{\mu}\rangle\) when \(a_{k}=0\) or \(1\). In the \(X\) window, Alice prepares coherent states \(|e^{i\theta_{k}^{a}}\sqrt{\nu}\rangle\) or \(|-e^{i\theta_{k}^{a}}\sqrt{\nu}\rangle\) when \(a_{k}=0\) or \(1\). Similarly, Bob chooses \(b_{k}\) and \(\theta_{k}^{b}\) independently and performs the same procedure as Alice. Note that the intensity \(\nu\) in \(X\) windows may change among a few different values (one is non-zero) in different decoy-state methods as discussed in Sec. 4. Here, we only consider one non-zero intensity \(\nu\). (2) Measurement. Alice and Bob send the prepared states to the third party, Charlie, who is supposed to perform the interferometric measurements and announce the results \(L_{k},R_{k}\in\mathbb{Z}_{2}\) of two detectors. The result 1 or 0 denotes whether or not the corresponding detector clicks, as announced by Charlie. (3) Mode pairing. After repeating the above steps \(N\) times, Alice and Bob sift the effective rounds when \(L_{k}\oplus R_{k}=1\), and pair rounds indexed by \(j\) and \(k\) (\(j<k\)) according to the pairing strategy in Box 2. They then sift the pairs when both are \(Z\) (\(X\)) windows in two rounds and denote them as \(Z_{jk}\) (\(X_{jk}\)) pairs. (4) Basis sifting. Alice (Bob) assigns a \(Z_{jk}\) pair as the \(Z\) basis when \(a_{j}\oplus a_{k}=1\) (\(b_{j}\oplus b_{k}=1\)), and assigns all \(X_{jk}\) pairs as the \(X\) basis. They sift the pairs when both are \(Z\) or \(X\) basis to perform the key mapping. Other pairs could be used for parameter estimation. (5) Key mapping. Alice (Bob) sets her key bits as \(\alpha_{jk}=a_{j}\overline{a}_{k}\) (\(\beta_{jk}=\overline{b}_{j}b_{k}\)) in \(Z\) basis, and as \(\alpha_{jk}=a_{j}a_{k}\oplus\overline{a}_{j}\overline{a}_{k}\) (\(\beta_{jk}=b_{j}b_{k}\oplus\overline{b}_{j}\overline{b}_{k}\)) in \(X\) basis. In \(X\) basis, Alice and Bob only maintain the pairs when \(\delta^{a}_{jk}=\delta^{b}_{jk}\) mod \(\pi\) and Bob flips his bits when \(|\delta^{a}_{jk}-\delta^{b}_{jk}|=\pi\), where \(\delta^{a(b)}_{jk}=\theta^{a(b)}_{j}-\theta^{a(b)}_{k}\). Besides, Bob flips his bits when \(g_{1}(\chi_{jk})=1\) in \(X\) basis, where \(g_{1}(x_{1},x_{2},x_{3},x_{4})=\bar{x}_{1}x_{2}x_{3}\bar{x}_{4}\oplus x_{1}\bar{x}_{2}\bar{x}_{3}x_{4}\) and \(\chi_{jk}=(L_{j},\,R_{j},\,L_{k},\,R_{k})\). (6) Parameter estimation. Alice and Bob estimate the pairing efficiency \(r_{p}\) of the pairs and the proportion \(r_{z}\) of the \(Z\) basis among all \(Z\) pairs through statistics. They then estimate the fraction \(q_{11}^{z}\) of the \(Z\) basis in which both sides emit single-photon states (\(|01\rangle\) or \(|10\rangle\)), and the corresponding phase error rate \(e_{11}^{x}\), with the decoy-state method in Sec. 4. They also estimate the bit-error rate \(E^{z}\) through random sampling when both are \(Z\) basis. The parameters \(r_{p}\), \(r_{z}\), and \(q_{11}^{z}\) depend on the pairing strategy; they will be estimated with the decoy-state method in Sec. 4 and analyzed in Sec. 5. (7) Key distillation. Alice and Bob perform error correction and privacy amplification to distill the final key bits. The secret key rate is given by [37, 41] \[R=r_{p}r_{z}\big{\{}q_{11}^{z}[1-h(e_{11}^{x})]-fh(E^{z})\big{\}}, \tag{1}\] where \(h(x)=-x\log_{2}x-(1-x)\log_{2}(1-x)\) is the binary Shannon entropy function, and \(f\) is the error correction efficiency. A proper pairing strategy proposed in Ref. [37] is shown in Box 2. The main idea is to pair neighboring effective rounds within the maximal pairing interval. We will analyze its efficiency and security in Sec. 5. **Box 2: Pairing Strategy [37]**

**Input**: Charlie's announced detection results \(C_{k}=L_{k}\oplus R_{k}\) for \(k=1\) to \(N\); maximal pairing interval \(l\).
**Output**: \(K\) pairs, with \((F_{k},S_{k})\) the two rounds of the \(k\)-th pair, for \(k=1\) to \(K\) (we write \(S_{k}\) for the second round to avoid a clash with the detector outcome \(R_{k}\)).
**Initialization**: \(k=1\); \(f=0\).
**for** \(i\in[N]\) **do**
&nbsp;&nbsp;**if** \(C_{i}=1\) **then**
&nbsp;&nbsp;&nbsp;&nbsp;**if** \(f=0\) **then** \(F_{k}\gets i\); \(f\gets 1\);
&nbsp;&nbsp;&nbsp;&nbsp;**else if** \(i-F_{k}\leq l\) **then** \(S_{k}\gets i\); \(k\gets k+1\); \(f\gets 0\);
&nbsp;&nbsp;&nbsp;&nbsp;**else** \(F_{k}\gets i\) (the stale first round is discarded).
**return** the \(K=k-1\) completed pairs.
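To make Box 2 and the rate formula concrete, here is a minimal Python sketch; the function names and toy parameters are ours, not from Refs. [37, 38]:

```python
# A sketch (ours) of the pairing strategy of Box 2 and the key rate of Eq. (1).
import numpy as np

def pair_rounds(C, l):
    """Pair neighbouring effective rounds (C[i] == 1) within interval l."""
    pairs, first = [], None
    for i, c in enumerate(C):
        if not c:
            continue
        if first is None:
            first = i                  # open a prospective pair
        elif i - first <= l:
            pairs.append((first, i))   # complete the pair
            first = None
        else:
            first = i                  # gap too large: discard the stale round
    return pairs

def h(x):
    """Binary Shannon entropy of Eq. (1)."""
    return 0.0 if x in (0.0, 1.0) else -x*np.log2(x) - (1-x)*np.log2(1-x)

def key_rate(r_p, r_z, q11_z, e11_x, E_z, f=1.16):   # f = 1.16 is a typical value
    return r_p * r_z * (q11_z * (1 - h(e11_x)) - f * h(E_z))   # Eq. (1)

C = (np.random.default_rng(1).random(100000) < 0.01)  # toy detection record
print(len(pair_rounds(C, l=2000)), "pairs formed")
```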
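This equivalence is easy to verify numerically. The following minimal sketch (ours; the Fock-space truncation is an assumption of the sketch) checks that tracing the ancilla \(A_{k}^{\prime}\) out of the extended state (3) reproduces the equal mixture of Eq. (2) for a fixed phase:

```python
# Numerical check (ours) that the extended state of Eq. (3), after tracing out
# (equivalently, measuring) the ancilla A'_k, reproduces the mixture of Eq. (2).
import numpy as np
from math import factorial

N = 40                                   # Fock-space truncation (assumption)
def coherent(alpha):
    n = np.arange(N)
    v = alpha**n / np.sqrt([factorial(k) for k in n])
    return np.exp(-abs(alpha)**2 / 2) * v

mu, theta = 0.5, 1.3
alpha = np.sqrt(mu) * np.exp(1j * theta)
vac = np.zeros(N); vac[0] = 1.0

# |phi> = (|0>|0> + |1>|alpha>)/sqrt(2) on the qubit A'_k tensor the mode A_k.
phi = (np.kron([1, 0], vac) + np.kron([0, 1], coherent(alpha))) / np.sqrt(2)
rho_full = np.outer(phi, phi.conj()).reshape(2, N, 2, N)
rho_A = np.einsum('sisj->ij', rho_full)            # partial trace over A'_k

rho_target = 0.5 * (np.outer(vac, vac) +
                    np.outer(coherent(alpha), coherent(alpha).conj()))
assert np.allclose(rho_A, rho_target, atol=1e-10)
```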
In the \(X\) windows, Alice prepares the following states \[\sigma_{A_{k}}=\frac{1}{2}\big{[}\hat{P}(\ket{e^{i\theta_{k}^{a}}\sqrt{\nu}}_{A_{k}})+\hat{P}(\ket{-e^{i\theta_{k}^{a}}\sqrt{\nu}}_{A_{k}})\big{]}. \tag{4}\] Alice could prepare an extended state by introducing the local ancillary single-mode system \(A_{k}^{\prime}\) as \[\ket{\phi}_{A_{k}^{\prime}A_{k}}=\frac{1}{\sqrt{2}}\big{(}\ket{0}\ket{e^{i\theta_{k}^{a}}\sqrt{\nu}}+\ket{1}\ket{-e^{i\theta_{k}^{a}}\sqrt{\nu}}\big{)}_{A_{k}^{\prime}A_{k}}, \tag{5}\] and then performs the same as in the \(Z\) windows. Similarly, Bob could introduce the ancillary system \(B_{k}^{\prime}\) and prepare the extended states \(\ket{\varphi}_{B_{k}^{\prime}B_{k}}\) or \(\ket{\phi}_{B_{k}^{\prime}B_{k}}\) in \(Z\) or \(X\) windows, respectively. Alice and Bob's measurements of the ancillary systems \(A^{\prime}_{k}\) and \(B^{\prime}_{k}\) in the basis \(\{|s\rangle\}_{s\in\mathbb{Z}_{2}}\) and Charlie's evolution of the composite systems \(A_{k}B_{k}\) are independent and commute with each other. Therefore, it is equivalent if Alice and Bob postpone the measurements on the systems \(A^{\prime}_{k}\) and \(B^{\prime}_{k}\) until after Charlie announces the results \(L_{k}\) and \(R_{k}\). The entanglement scheme for MP-QKD is given in Box 3. **Box 3: Entanglement Scheme MP-QKD** (i) State preparation. In the \(k\)-th round, Alice chooses a random phase \(\theta^{a}_{k}\) and determines the \(Z\) and \(X\) windows as in step-1. Then she prepares \(|\varphi\rangle_{A^{\prime}_{k}A_{k}}\) or \(|\phi\rangle_{A^{\prime}_{k}A_{k}}\) in the \(Z\) or \(X\) windows, respectively. Similarly, Bob chooses \(\theta^{b}_{k}\), determines the windows and prepares \(|\varphi\rangle_{B^{\prime}_{k}B_{k}}\) or \(|\phi\rangle_{B^{\prime}_{k}B_{k}}\). (ii) Measurement. Alice and Bob send the systems \(A_{k}\) and \(B_{k}\) to Charlie, and Charlie performs the same as step-2 in Box 1. (iii) Mode pairing. This is the same as step-3 in Box 1. (iv) Basis sifting. Alice (Bob) conducts a positive operator-valued measure (POVM) on the composite systems \(A^{\prime}_{j}A^{\prime}_{k}\) (\(B^{\prime}_{j}B^{\prime}_{k}\)) in \(Z_{jk}\) pairs with two elements, \[\begin{split}M_{0}&=|00\rangle\left\langle 00\right|+|11\rangle\left\langle 11\right|,\\ M_{1}&=|01\rangle\left\langle 01\right|+|10\rangle\left\langle 10\right|,\end{split} \tag{6}\] and assigns it as \(Z\) basis if the measurement result is 1. The others are the same as step-4 in Box 1. (v) Key mapping. Alice (Bob) measures the composite systems \(A^{\prime}_{j}A^{\prime}_{k}\) (\(B^{\prime}_{j}B^{\prime}_{k}\)) in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\) and obtains the result \(a_{j},a_{k}\) (\(b_{j},b_{k}\)) in \(Z\) and \(X\) bases. Then they set their key bits to be the same as step-5 in Box 1. (vi) Parameter estimation. This is the same as step-6 in Box 1. (vii) Key distillation. This is the same as step-7 in Box 1. ### Security Proof of Entanglement Scheme for MP-QKD We analyze the security of the entanglement scheme for MP-QKD in the following. In each round, the state sent to Charlie (renamed as Eve) is independent and identically distributed (i.i.d.), which can be expressed as \[\rho_{A_{k}B_{k}}=(p_{z}\rho_{A_{k}}+p_{x}\sigma_{A_{k}})\otimes(p_{z}\rho_{B_{k}}+p_{x}\sigma_{B_{k}}). \tag{7}\]
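As an aside, the two-element POVM of Eq. (6) is just the projective measurement that distinguishes equal from opposite ancillary bits; a quick sanity check (ours) in a 4-dimensional matrix representation:

```python
# Sanity check (ours) of the POVM of Eq. (6) on the ancillary qubit pair:
# M0 projects onto the "equal bits" subspace, M1 onto the "opposite bits"
# subspace, and M0 + M1 = I, so it is a valid (projective) POVM.
import numpy as np

basis = np.eye(4)                      # |00>, |01>, |10>, |11>
proj = lambda k: np.outer(basis[k], basis[k])
M0 = proj(0) + proj(3)                 # |00><00| + |11><11|
M1 = proj(1) + proj(2)                 # |01><01| + |10><10|
assert np.allclose(M0 + M1, np.eye(4))
assert np.allclose(M0 @ M0, M0) and np.allclose(M1 @ M1, M1)
```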
Since the states (7) sent to Eve are i.i.d., we assume that Eve applies the same quantum operation to the state \(\rho_{A_{k}B_{k}}\) in each round as \(\mathcal{E}(\rho_{A_{k}B_{k}})=\sum_{L,R\in\mathbb{Z}_{2};\,\zeta}E^{\zeta}_{L,R}\rho_{A_{k}B_{k}}E^{\zeta\dagger}_{L,R}\), with Kraus operators \(E^{\zeta}_{L,R}\) corresponding to measurement results \(L\), \(R\in\mathbb{Z}_{2}\) announced by Eve and Eve's results \(\zeta\). This is slightly different from the collective attack, which applies the same quantum operation to each pair but not the states in every round. It should be noted that the following analysis also applies directly to the collective attack. In what follows, we consider a simplified channel evolution as \[\mathcal{E}(\rho_{A_{k}B_{k}})=\sum_{L,R\in\mathbb{Z}_{2}}E_{L,R}\rho_{A_{k}B_{k}}E_{L,R}^{\dagger}, \tag{8}\] with Kraus operators \(E_{L,R}\) corresponding to measurement results \(L,R\in\mathbb{Z}_{2}\). The resulting state will be a pure state corresponding to the measurement results with this simplification, while it will be a mixed state without the simplification. There is no essential difference in the security proof process, and the same results can be obtained. Besides, the analysis can be generalized to the coherent attack as discussed in Sec. 5. The composite state in step-i is \(\ket{\varphi}_{A^{\prime}A_{jk}}\ket{\varphi}_{B^{\prime}B_{jk}}\triangleq\ket{\varphi}_{A^{\prime}_{j}A_{j}}\ket{\varphi}_{A^{\prime}_{k}A_{k}}\ket{\varphi}_{B^{\prime}_{j}B_{j}}\ket{\varphi}_{B^{\prime}_{k}B_{k}}\) when both are \(Z\) pairs. Here, the subscripts \(A^{\prime}_{jk}\), \(A_{jk}\) and \(A^{\prime}A_{jk}\) denote the composite systems \(A^{\prime}_{j}A^{\prime}_{k}\), \(A_{j}A_{k}\) and \(A^{\prime}_{jk}A_{jk}\) for Alice, and it is the same for Bob's systems. After the POVM in step-iv, the state collapses into a mixed state that corresponds to Eve's measurement results \(\chi\in\mathbb{Z}_{2}^{4}\) and Alice and Bob's POVM results \(s,t\in\mathbb{Z}_{2}\) as \[\rho^{\prime}_{jk}=\sum_{\chi\in\mathbb{Z}_{2}^{4};s,t\in\mathbb{Z}_{2}}\frac{1}{4}\hat{P}\Big{(}E_{\chi}\ket{\varphi_{s,\theta_{j}^{a},\theta_{k}^{a}}}_{A^{\prime}A_{jk}}\ket{\varphi_{t,\theta_{j}^{b},\theta_{k}^{b}}}_{B^{\prime}B_{jk}}\Big{)}, \tag{9}\] where \(E_{\chi}\) is the operator on systems \(A_{j}A_{k}B_{j}B_{k}\), which is a rewritten form of the operator \(E_{\chi_{1},\chi_{2}}\otimes E_{\chi_{3},\chi_{4}}\), and the unit evolution on the systems \(A^{\prime}_{jk}\) and \(B^{\prime}_{jk}\) is omitted. Here, the state \(\ket{\varphi_{s,\theta_{1},\theta_{2}}}\) is defined as \[\ket{\varphi_{s,\theta_{1},\theta_{2}}}=\frac{1}{\sqrt{2}}\sum_{t\in\mathbb{Z}_{2}}\ket{t}\ket{s+t}\ket{e^{i\theta_{1}}\sqrt{t\mu}}\ket{e^{i\theta_{2}}\sqrt{(s+t)\mu}}, \tag{10}\] where the state \(\ket{e^{i\theta}\sqrt{s\mu}}\) denotes the vacuum state \(\ket{0}\) when \(s=0\). In this way, the correlation between the systems \(A_{jk}\) (\(B_{jk}\)) and \(A^{\prime}_{jk}\) (\(B^{\prime}_{jk}\)) is established, as explained in the following. In step-v, Alice only measures the systems \(A^{\prime}_{j}A^{\prime}_{k}\) when the POVM result is 1. Note that the phases \(\theta_{j}^{a}\) and \(\theta_{k}^{a}\) are two independent random phases that must be kept secret throughout. Alice could calculate the phase difference \(\delta_{jk}^{a}\) privately, or she could even announce it although it would be useless.
Then the state \(\ket{\varphi_{1,\theta_{j}^{a},\theta_{k}^{a}}}_{A^{\prime}A_{jk}}\) is equivalent to the following mixed state as analyzed in Appendix A, \[\rho_{1,A^{\prime}A_{jk}}=\sum_{m\in\mathbb{N}}p_{\mu,m}\hat{P}\big{(}\ket{\varphi_{1m,\delta_{jk}^{a}}}_{A^{\prime}A_{jk}}\big{)}, \tag{11}\] where the probability \(p_{\mu,m}=e^{-\mu}\mu^{m}/m!\), and the state \(\ket{\varphi_{1m,\delta}}\) is defined as \[\ket{\varphi_{1m,\delta}}=\frac{1}{\sqrt{2}}\big{(}\ket{01}\ket{0m}+e^{im\delta}\ket{10}\ket{m0}\big{)}. \tag{12}\] The entanglement has been established between the composite systems \(A_{jk}\) and \(A^{\prime}_{jk}\). Similarly, the state \(\ket{\varphi_{1,\theta_{j}^{b},\theta_{k}^{b}}}_{B^{\prime}B_{jk}}\) is equivalent to a mixed state \(\rho_{1,B^{\prime}B_{jk}}\). Therefore, the state \(\rho^{\prime}_{jk}\) is equivalent to the following state when the POVM results are 1, \[\rho^{\prime}_{11,jk}=\sum_{\chi\in\mathbb{Z}_{2}^{4};m,n\in\mathbb{N}}p_{\mu,m}p_{\mu,n}\hat{P}\Big{(}E_{\chi}\ket{\varphi_{1m,\delta_{jk}^{a}}}_{A^{\prime}A_{jk}}\ket{\varphi_{1n,\delta_{jk}^{b}}}_{B^{\prime}B_{jk}}\Big{)}. \tag{13}\] It is equivalent to Alice having prepared the following state when the POVM result is 1, \[\rho_{11,A^{\prime}A_{jk}}=\sum_{m\in\mathbb{N}}p_{\mu,m}\hat{P}\Big{(}\ket{\varphi_{1m,\delta^{a}_{jk}}}_{A^{\prime}A_{jk}}\Big{)}. \tag{14}\] Similarly, Bob has prepared the state \(\rho_{11,B^{\prime}B_{jk}}\) in an equivalent manner. The state \(\ket{\varphi_{11,\delta}}\) in systems \(A_{jk}\) (\(B_{jk}\)) is a mixture of the single-photon states \(\ket{01}\) and \(\ket{10}\), which can be used to generate secret key bits. By applying the tagging method [41], the secret key rate can be obtained as expressed in Eq. (1) following the GLLP analysis. The security of the final key bits relies on the accuracy of the purification process, specifically, the estimation of the phase-flip error rate \(e_{11}^{x}\) of the state \(\ket{\varphi_{11,\delta^{a}_{jk}}}_{A^{\prime}A_{jk}}\ket{\varphi_{11,\delta^{b}_{jk}}}_{B^{\prime}B_{jk}}\). In the following, we define and analyze how to estimate the phase-flip error rate. Other parameters can be obtained using the decoy-state method in Sec. 4. Define a basis \(X_{\theta}=\{\ket{\omega_{\theta}},\ket{\omega_{\theta+\pi}},\ket{\varpi_{0}},\ket{\varpi_{1}}\}\) and denote the result indexes as 1, \(-1\), \(2\) and \(-2\). The elements in \(X_{\theta}\) basis are defined as \(\ket{\omega_{\theta}}=(\ket{01}+e^{i\theta}\ket{10})/\sqrt{2}\) and \(\ket{\varpi_{s}}=(\ket{00}+(-1)^{s}\ket{11})/\sqrt{2}\). Suppose Alice and Bob have measured the systems \(A^{\prime}_{jk}\) and \(B^{\prime}_{jk}\) of the (unnormalized) state in the bases \(X_{\delta^{a}_{jk}}\) and \(X_{\delta^{b}_{jk}}\) but not \(\{\ket{st}\}_{s,t\in\mathbb{Z}_{2}}\) in step-v, and obtain the results \(m^{a}_{jk}\), \(m^{b}_{jk}\in\{\pm 1,\pm 2\}\). The phase error is defined conditioned on the measurement results \(\chi_{jk}\) as two cases: (1) \(m^{a}_{jk}m^{b}_{jk}=1\) when \(g_{1}(\chi_{jk})=1\); (2) \(m^{a}_{jk}m^{b}_{jk}=-1\) when \(g_{2}(\chi_{jk})=1\). Here, we define \(g_{2}(x_{1},x_{2},x_{3},x_{4})=\bar{x}_{1}x_{2}\bar{x}_{3}x_{4}\oplus x_{1}\bar{x}_{2}x_{3}\bar{x}_{4}\).
By taking the weighted average of the two cases, the phase-flip error rate can be shown as \[e_{11}^{x}=\frac{1}{p(\mathcal{X})}\sum_{(\chi,m_{0},m_{1})\in\mathcal{S}}p_{x}(\chi,m_{0},m_{1}), \tag{15}\] where we define the set \(\mathcal{X}=\{\chi\in\mathbb{Z}_{2}^{4}|g_{1}(\chi)+g_{2}(\chi)=1\}\) and the set \(\mathcal{S}=\{(\chi,m_{0},m_{1})|g_{1}(\chi)=m_{0}m_{1}=1\) or \(g_{2}(\chi)=-m_{0}m_{1}=1\}\). The element \(p_{x}(\chi,m_{0},m_{1})\) denotes the joint probability \(p_{x}(\chi_{jk}=\chi,m^{a}_{jk}=m_{0},m^{b}_{jk}=m_{1})\) when measuring the state in the bases \(X_{\delta^{a}_{jk}}\) and \(X_{\delta^{b}_{jk}}\). The terms in the numerator of Eq. (15) can be expressed as \[p_{x}(\chi,m_{0},m_{1})=\frac{1}{4}\mathrm{Tr}\big{[}\hat{P}\big{(}E_{\chi}\ket{\gamma_{1,\pi\Delta(m_{0})}}\ket{\gamma_{1,\pi\Delta(m_{1})}}\big{)}\big{]}, \tag{16}\] where \(\Delta(x)=(1-x)/2\) and the two-mode \(m\)-photon state is defined as \[\ket{\gamma_{m,\delta}}=\frac{1}{\sqrt{2^{m}}}\sum_{r=0}^{m}\sqrt{C_{m}^{r}}e^{ir\delta}\ket{r,m-r}. \tag{17}\] And the denominator in Eq. (15), \(p(\mathcal{X})=s_{11}^{z}\), can be estimated with the decoy-state method in Sec. 4. We analyze how to estimate the parameters \(p_{x}(\chi,m_{0},m_{1})\) using the \(X\)-basis pairs and other mismatched pairs. When both are \(X\) basis, the composite state prepared in step-i is \(\ket{\phi}_{A^{\prime}A_{jk}}\ket{\phi}_{B^{\prime}B_{jk}}\triangleq\ket{\phi}_{A^{\prime}_{j}A_{j}}\ket{\phi}_{A^{\prime}_{k}A_{k}}\ket{\phi}_{B^{\prime}_{j}B_{j}}\ket{\phi}_{B^{\prime}_{k}B_{k}}\). When Alice announces the phase difference \(\delta^{a}_{jk}\), the state \(\ket{\phi}_{A^{\prime}A_{jk}}\) is equivalent to the following mixed state as analyzed in Appendix A, \[\sigma_{1,A^{\prime}A_{jk}}=\sum_{m\in\mathbb{N}}p_{2\nu,m}\hat{P}\big{(}\ket{\phi_{1m,\delta^{a}_{jk}}}_{A^{\prime}A_{jk}}\big{)}, \tag{18}\] where the elements \[|\phi_{1m,\delta}\rangle=\frac{1}{\sqrt{2}}\sum_{st\in\{00,10\}}\frac{1}{\sqrt{2}}\big{(}\,|st\rangle+(-1)^{m}\,|\bar{s}\bar{t}\rangle\,\big{)}\,\,|\gamma_{m,\delta+s\pi}\rangle\,. \tag{19}\]
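The two-mode states \(\ket{\gamma_{m,\delta}}\) of Eq. (17) reappear throughout the estimation; a small numerical check (ours) of their normalization, and of the orthogonality of the two single-photon states with \(\delta=0,\pi\) that carry the \(X\)-basis information:

```python
# Numerical check (ours) of the two-mode m-photon states of Eq. (17):
# |gamma_{m,delta}> = 2^{-m/2} sum_r sqrt(C(m,r)) e^{i r delta} |r, m-r>.
import numpy as np
from math import comb

def gamma(m, delta):
    """Amplitude vector of |gamma_{m,delta}> in the basis |r, m-r>, r=0..m."""
    r = np.arange(m + 1)
    return np.sqrt([comb(m, k) for k in r]) * np.exp(1j * r * delta) / 2**(m / 2)

for m in range(5):
    assert np.isclose(np.linalg.norm(gamma(m, 0.7)), 1.0)   # normalization

g0, gpi = gamma(1, 0.0), gamma(1, np.pi)
assert np.isclose(np.vdot(g0, gpi), 0.0)   # single-photon states, delta = 0 vs pi
```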
To be specific, the state collapses into the following state corresponding to the measurement results \(\lambda\in\mathbb{Z}_{2}^{4}\) in the \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\) basis in step-v \[\begin{split}\sigma^{\prime}_{jk}=\frac{1}{16}\sum_{\chi,\lambda \in\mathbb{Z}_{2}^{4};m,n\in\mathbb{N}}&\left[p_{2\nu,m}p_{2\nu,n}\hat{\mathcal{B}}\big{(}\,|\lambda\rangle_{A^{\prime}_{jk}B^{\prime}_{jk}} \,\big{)}\right.\\ &\left.\otimes\hat{\mathcal{B}}\Big{(}E_{\chi}\,|\gamma_{m, \delta^{a}_{jk}+(\lambda_{1}+\lambda_{2})\pi}\rangle_{A_{jk}}\,|\gamma_{n, \delta^{b}_{jk}+(\lambda_{3}+\lambda_{4})\pi}\rangle_{B_{jk}}\,\Big{)}\right].\end{split} \tag{20}\] Therefore, the joint probability \(q(\chi_{jk}=\chi,\lambda_{jk}=\lambda|\delta^{a}_{jk}=\delta_{1},\delta^{b}_{ jk}=\delta_{2})\) when measuring the state \(\sigma_{jk}\) of systems \(A^{\prime}_{jk}B^{\prime}_{jk}\) in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\) conditioned on the phase difference \(\delta^{a}_{jk}\) and \(\delta^{b}_{jk}\) can be expressed as \[q(\chi,\lambda|\delta_{1},\delta_{2})=\frac{1}{16}\sum_{m,n\in\mathbb{N}}p_{2 \nu,m}p_{2\nu,n}Y^{x}_{\chi|m,n,f(\delta_{1},\delta_{2},\lambda)}, \tag{21}\] where \(f(\delta_{1},\delta_{2},\lambda)=[\delta_{1}+(\lambda_{1}+\lambda_{2})\pi, \delta_{2}+(\lambda_{3}+\lambda_{4})\pi]\). And \(Y^{x}_{\chi|m,n,\delta_{1},\delta_{2}}\) is defined as the counting rate of the results \(\chi\) when the prepared state is \(|\gamma_{m,\delta_{1}}\rangle\,|\gamma_{n,\delta_{2}}\rangle\), which can be shown as \[Y^{x}_{\chi|m,n,\delta_{1},\delta_{2}}=\Tr\big{[}\hat{\mathcal{B}}\big{(}E_{ \chi}\,|\gamma_{m,\delta_{1}}\rangle\,|\gamma_{n,\delta_{2}}\rangle\,\big{)} \big{]}. \tag{22}\] We note that the state \(|\gamma_{m,\delta}\rangle\) is independent of the intensity of the coherent state. Therefore, by sifting events with phase difference \(\delta^{a}_{jk},\delta^{a}_{jk}\in\{0,\pi\}\), the counting rate \(Y^{x}_{\chi|1,1,0(\pi),0(\pi)}\) or its upper bound can be estimated using the decoy-state method in Sec. 4. Now, the probability in Eq. (16) can be obtained as \[p(\chi,m_{0},m_{1})=\frac{1}{4}Y^{x}_{\chi|1,1,\pi\Delta(m_{0}),\pi\Delta(m_{1 })}. \tag{23}\] In this way, the molecules of phase-flip error rate in Eq. (15) can be estimated. By combining the decoy-state method in Sec. 4, we can calculate the secret key rate in Eq. (1). ### Reduction to Prepare-and-Measure for MP-QKD We prove the security of MP-QKD by showing the equivalence of the prepare-and-measure scheme and the entanglement scheme for MP-QKD. The key is to first move the POVM in step-iv and the measurements in step-v to step-i, and then reduce it to MP-QKD. In the entanglement scheme for MP-QKD, the entanglement between the systems \(A^{\prime}_{j}A_{j}\) and \(A^{\prime}_{k}A_{k}\) in \(Z_{jk}\) pairs is established by performing the POVM in step-iv. However, the POVM must be removed because there is no such operation in the prepare-and-measure scheme. The common method to prove the equivalence is by proving that the POVM can be performed in the state preparation step. However, the location \(i\) and \(j\) in the entanglement scheme for MP-QKD are determined by the measurement results announced by Charlie in step-ii. This means Alice and Bob could not perform the POVM in step-i ahead because they could not predict the measurement results. Besides, the measurements in \(Z_{jk}\) or \(X_{jk}\) pairs in step-v should also be removed. We provide a detailed analysis of how this can be accomplished in the following. 
In the entanglement scheme for MP-QKD, Alice (Bob) will perform the POVM in step-iv and the measurement in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\) on the systems \(A^{\prime}_{jk}\) (\(B^{\prime}_{jk}\)) in step-v after Eve's evolution on systems \(A_{jk}B_{jk}\). Actually, since the two measurements are performed consecutively, they could measure the ancillary systems only once by combining the two measurements, as long as they can infer the POVM results 0 or 1. When measuring the ancillary systems in two steps, the measurement result state is \(|00\rangle\) or \(|11\rangle\) (\(|01\rangle\) or \(|10\rangle\)) in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\) only when the POVM result is 0 (1). Therefore, we can infer the POVM results from the results measured in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\). We note that the combined measurement is just the measurement in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\). Therefore, the entanglement scheme for MP-QKD in Sec. 3.1 is equivalent to the following _entanglement scheme for MP-QKD II_ in Box 4, obtained by eliminating the POVM in step-iv and advancing the measurement in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\) in step-v to step-iv:

**Box 4: Entanglement Scheme for MP-QKD II**

```
(i')-(ii') Same as steps i to ii in Box 3.
(iii') Same as step-iii in Box 3.
(iv') Basis sifting. Alice (Bob) measures the composite systems \(A^{\prime}_{j}A^{\prime}_{k}\) (\(B^{\prime}_{j}B^{\prime}_{k}\)) in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\) and obtains the results \(a_{j}\), \(a_{k}\) (\(b_{j}\), \(b_{k}\)). Others are the same as step-iv in Box 3.
(v')-(vii') Same as steps v to vii in Box 3.
```

We note that the measurement in the \(X\) basis in step-v remains unchanged; it has only been advanced to step-iv'. In step-iv', the measurement on the systems \(A^{\prime}_{jk}\) (\(B^{\prime}_{jk}\)) in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\) is equivalent to measuring the systems \(A^{\prime}_{j}\) and \(A^{\prime}_{k}\) (\(B^{\prime}_{j}\) and \(B^{\prime}_{k}\)) in the basis \(\{|s\rangle\}_{s\in\mathbb{Z}_{2}}\) separately. Therefore, _entanglement scheme MP-QKD II_ is equivalent to the following _entanglement scheme MP-QKD III_ in Box 5:

**Box 5: Entanglement Scheme MP-QKD III**

(i")-(ii") Same as steps i to ii in Box 3. (iii") Same as step-iii in Box 3. (iv") Basis sifting. Alice (Bob) measures the composite systems \(A^{\prime}_{j}\) and \(A^{\prime}_{k}\) (\(B^{\prime}_{j}\) and \(B^{\prime}_{k}\)) in the basis \(\{|s\rangle\}_{s\in\mathbb{Z}_{2}}\) separately and obtains the results \(a_{j}\), \(a_{k}\) (\(b_{j}\), \(b_{k}\)). Others are the same as step-iv in Box 3. (v")-(vii") Same as steps v to vii in Box 3.

The measurement on each of the ancillary systems \(A^{\prime}_{k}\) and \(B^{\prime}_{k}\) in the basis \(\{|s\rangle\}_{s\in\mathbb{Z}_{2}}\) commutes with Eve's evolution on the systems \(A_{k}\) and \(B_{k}\). Thus, Alice can prepare the state \(|\varphi\rangle_{A^{\prime}_{k}A_{k}}\) in Eq. (3), measure the system \(A^{\prime}_{k}\) in the basis \(\{|s\rangle\}_{s\in\mathbb{Z}_{2}}\) and then send the system \(A_{k}\) to Charlie. Hence, the _entanglement scheme MP-QKD III_ is equivalent to the following _entanglement scheme MP-QKD IV_ in Box 6:

**Box 6: Entanglement Scheme MP-QKD IV**

(1') State preparation. This is the same as step-i in Box 3. Then Alice (Bob) measures the system \(A^{\prime}_{k}\) (\(B^{\prime}_{k}\)) in the basis \(\{|s\rangle\}_{s\in\mathbb{Z}_{2}}\) and obtains the result \(a_{k}\) (\(b_{k}\)). 
(2')-(7') Same as steps ii to vii in Box 3.

Additionally, the preparation of the systems \(A^{\prime}_{k}\) and \(B^{\prime}_{k}\) can be removed as discussed in Sec. 3.1. Therefore, the MP-QKD protocol is equivalent to the _entanglement scheme MP-QKD_ in Box 3, and the security proof is completed. In the MP-QKD protocol, the POVM is eliminated and the states are i.i.d., with no relationship between the rounds \(j\) and \(k\). In fact, the announcement will link two rounds. The intensities may be \((0,\mu)\) or \((\mu,0)\) in the \(Z\) basis, and the phase difference is announced in the \(X\) basis, which means the states in a pair are not independent. Actually, we can establish entanglement through the POVM measurement and announcing the phase difference, as discussed in Sec. 3.2.

## 4 Decoy-State Method

In this section, we introduce the decoy-state method for estimating the phase-flip error rate \(e^{x}_{11}\), the counting rate \(s^{z}_{11}\), and the three parameters \(r_{p}\), \(r_{z}\) and \(q^{z}_{11}\). The core of the decoy-state method is to characterize the quantum channel by introducing states with different intensities [6, 7, 8]. In MP-QKD, there will be at least three intensities, i.e., \(\mu\), \(\nu\), and \(0\), which are enough to estimate the required parameters. The estimates may be improved if more intensities are properly prepared. In MP-QKD with three intensities, only the events with \(a_{j}\oplus a_{k}=b_{j}\oplus b_{k}=1\) in \(Z_{jk}\) pairs can be used to distill key bits. Unfortunately, they could not actively pair the rounds satisfying \(a_{j}\oplus a_{k}=b_{j}\oplus b_{k}=1\), as discussed in Sec. 5. There will be some pairs with \(a_{j}\oplus a_{k}=0\), corresponding to the POVM result \(0\) in \(Z\) pairs. Besides, there will be mismatched pairs, e.g., when one is an \(X\) pair but the other is a \(Z\) pair. We analyze how to apply the decoy-state method using these events. We first introduce the method for estimating the phase-flip error rate \(e_{11}^{x}\). According to Eq. (20), the counting rate of the results \(\chi\in\mathbb{Z}_{2}^{4}\) in the \(X\) basis when the measurement results on the systems \(A^{\prime}_{jk}B^{\prime}_{jk}\) are \(\lambda\in\mathbb{Z}_{2}^{4}\) can be expressed as \[Q_{2\nu,2\nu}^{x}(\chi|\lambda)=\sum_{m,n\in\mathbb{N}}p_{2\nu,m}p_{2\nu,n}Y_{ \chi|m,n,f(\delta_{jk}^{a},\delta_{jk}^{b},\lambda)}^{x}, \tag{24}\] where the element can be written as \(Y_{\chi|0,0}^{x}\) when \(m=n=0\), because it is then independent of the phases. When one party selects \(a_{j}=a_{k}=0\) in \(Z_{jk}\) pairs and the other selects the \(X\) basis, we can obtain the counting rate \[Q_{0,2\nu}^{x}(\chi|\lambda)=\sum_{n\in\mathbb{N}}p_{2\nu,n}Y_{\chi|0,n,f(o, \delta_{jk}^{b},\lambda)}^{x}, \tag{25}\] where we denote \(\delta_{jk}^{a}\) as \(o\), since the vacuum state is independent of the phase. When both select \(a_{j}=a_{k}=b_{j}=b_{k}=0\) in \(Z_{jk}\) pairs, the counting rate is \(Q_{0,0}^{x}(\chi)=Y_{\chi|0,0}^{x}\), where the subscript \(f(\delta_{jk}^{a},\delta_{jk}^{b},\lambda)\) is omitted. In this way, we can bound \(Y^{x}_{\chi|1,1,f(\delta_{jk}^{a},\delta_{jk}^{b},\lambda)}\) as \[Y_{\chi|1,1,f(\delta_{jk}^{a},\delta_{jk}^{b},\lambda)}^{x}\leq\frac{1}{p_{2 \nu,1}^{2}}\Big{\{}Q_{2\nu,2\nu}^{x}(\chi|\lambda)-p_{2\nu,0}\big{[}Q_{0,2\nu} ^{x}(\chi|\lambda)+Q_{2\nu,0}^{x}(\chi|\lambda)\big{]}+p_{2\nu,0}^{2}Q_{0,0}^{ x}(\chi)\Big{\}}. \tag{26}\] Combining Eqs. (15), (23) and (26), we can calculate the upper bound of the phase-flip error rate \(e_{11}^{x}\). 
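As an illustration of how the bound in Eq. (26) is evaluated in practice, here is a minimal Python sketch assuming Poissonian photon-number statistics \(p_{\mu,m}=e^{-\mu}\mu^{m}/m!\) for the phase-randomized coherent states; the gain values passed in are placeholders standing in for the observed counting rates, not data from this work.

```python
from math import exp, factorial

def p(mu, m):
    """Photon-number distribution p_{mu,m} of a phase-randomized coherent state."""
    return exp(-mu) * mu ** m / factorial(m)

def y11_x_upper(nu, Q_2n2n, Q_02n, Q_2n0, Q_00):
    """Upper bound on Y^x_{chi|1,1,f(...)} from Eq. (26)."""
    p0, p1 = p(2 * nu, 0), p(2 * nu, 1)
    return (Q_2n2n - p0 * (Q_02n + Q_2n0) + p0 ** 2 * Q_00) / p1 ** 2

# Placeholder gains; in practice these come from the sifted X-basis events.
print(y11_x_upper(0.038, Q_2n2n=1e-3, Q_02n=4e-4, Q_2n0=4e-4, Q_00=1e-4))
```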
Now, we will analyze how to estimate the counting rate \(s_{11}^{z}\). In step-v, Alice and Bob will measure the systems \(A^{\prime}_{jk}\) and \(B^{\prime}_{jk}\) of the (unnormalized) state \(E_{\chi_{jk}}\ket{\varphi_{11,\delta_{jk}^{a}}}_{A^{\prime}A_{jk}}\ket{\varphi _{11,\delta_{jk}^{b}}}_{B^{\prime}B_{jk}}\) in the basis \(\{|st\rangle\}_{s,t\in\mathbb{Z}_{2}}\). The effective counting rate is defined as the probability that \(g_{1}(\chi_{jk})+g_{2}(\chi_{jk})=1\). We note that the counting rate is independent of the phase differences \(\delta_{jk}^{a}\) and \(\delta_{jk}^{b}\). Hence, we use \(p_{z}(\chi,\lambda)\) to denote the probability \(p_{z}(\chi_{jk}=\chi,\lambda_{jk}=\lambda|\delta_{jk}^{a},\delta_{jk}^{b})\) when the prepared state is \(\ket{\varphi_{11,\delta_{jk}^{a}}}_{A^{\prime}A_{jk}}\ket{\varphi_{11,\delta_{ jk}^{b}}}_{B^{\prime}B_{jk}}\). The probability \(p_{z}(\chi,\lambda)\) can be expressed as \[p_{z}(\chi,\lambda)=\frac{1}{4}\mathrm{Tr}\big{[}\hat{P}\big{(}E_{\chi}\ket{ \lambda}\big{)}\big{]}=\frac{1}{4}Y_{\chi_{1},\chi_{2}|\lambda_{1},\lambda_{2} }^{z}Y_{\chi_{3},\chi_{4}|\lambda_{3},\lambda_{4}}^{z}, \tag{27}\] where \(Y_{\chi_{1},\chi_{2}|\lambda_{1},\lambda_{2}}^{z}\) denotes the counting rate of the results \((L_{j},R_{j})=(\chi_{1},\chi_{2})\) when the state in systems \(A_{j}B_{j}\) sent to Eve is \(|\lambda_{1}\lambda_{2}\rangle\), and can be expressed as \[Y_{\chi_{1},\chi_{2}|\lambda_{1},\lambda_{2}}^{z}=\mathrm{Tr}\big{[}\hat{P} \big{(}E_{\chi_{1},\chi_{2}}\ket{\lambda_{1}\lambda_{2}}\big{)}\big{]}. \tag{28}\] Here, we assume that Eve applies the same quantum operation to the state \(\rho_{A_{k}B_{k}}\) in every round, which simplifies the estimation method. However, we can also directly estimate \(\mathrm{Tr}[\hat{P}(E_{\chi}\ket{\lambda})]\), which corresponds to the collective attack. The counting rate \(s_{11}^{z}\) can be defined as \[s_{11}^{z}=\sum_{\chi,\lambda\in\mathcal{X}}p_{z}(\chi,\lambda). \tag{29}\] The lower bound of \(s_{11}^{z}\) can be estimated using the vacuum state and phase-randomized coherent states. We can estimate the counting rate \(s_{11}^{z}\) using the parameters \(Y_{\chi_{1},\chi_{2}|\lambda_{1},\lambda_{2}}^{z}\) of individual rounds, not pairs, as shown in Eq. (27). If Alice selects an \(X\) window but Bob selects a \(Z\) window and sets \(b_{k}=0\) in the \(k\)-th round, the counting rate of obtaining the results \((L_{k},R_{k})=(\chi_{1},\chi_{2})\) can be shown as \[Q^{z}_{\nu,0}(\chi_{1},\chi_{2})=\sum_{m\in\mathbb{N}}p_{\nu,m}Y^{z}_{\chi_{1}, \chi_{2}|m,0}. \tag{30}\] There are some rounds where Alice and Bob both select \(Z\) windows with \(a_{k}\oplus b_{k}=1\) that cannot be used to distill key bits; for example, this happens when Alice and Bob's POVM results are different. For these rounds, we can obtain the counting rate \(Q^{z}_{\mu,0}(\chi_{1},\chi_{2})\). Also, the counting rate \(Q^{z}_{0,0}(\chi_{1},\chi_{2})\) can be obtained using the rounds in \(Z_{jk}\) pairs with \(a_{j}=a_{k}=b_{j}=b_{k}=0\). As long as \(\mu\neq\nu\), we can bound \(Y^{z}_{\chi_{1},\chi_{2}|1,0}\) as \[Y^{z}_{\chi_{1},\chi_{2}|1,0}\geq\frac{p_{\mu,2}Q^{z}_{\nu,0}(\chi_{1},\chi_{ 2})-p_{\nu,2}Q^{z}_{\mu,0}(\chi_{1},\chi_{2})-(p_{\mu,2}p_{\nu,0}-p_{\nu,2}p_{ \mu,0})Q^{z}_{0,0}(\chi_{1},\chi_{2})}{p_{\mu,2}p_{\nu,1}-p_{\nu,2}p_{\mu,1}}. \tag{31}\] Similarly, the lower bound of \(Y^{z}_{\chi_{1},\chi_{2}|0,1}\) can be obtained. Therefore, the lower bound of the counting rate \(s^{z}_{11}\) can be estimated using Eqs. (27)-(29) and (31). 
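The complementary bound of Eq. (31) can be coded the same way; the sketch below again assumes Poissonian statistics, and the gains are placeholders. The clamp to zero is our addition, reflecting that a counting rate cannot be negative.

```python
from math import exp, factorial

def p(mu, m):
    """Photon-number distribution of a phase-randomized coherent state."""
    return exp(-mu) * mu ** m / factorial(m)

def y10_z_lower(mu, nu, Q_nu0, Q_mu0, Q_00):
    """Lower bound on Y^z_{chi1,chi2|1,0} from Eq. (31); requires mu != nu."""
    num = (p(mu, 2) * Q_nu0 - p(nu, 2) * Q_mu0
           - (p(mu, 2) * p(nu, 0) - p(nu, 2) * p(mu, 0)) * Q_00)
    den = p(mu, 2) * p(nu, 1) - p(nu, 2) * p(mu, 1)
    return max(0.0, num / den)

print(y10_z_lower(0.429, 0.038, Q_nu0=2e-4, Q_mu0=2e-3, Q_00=1e-5))
```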
Besides, there are three parameters \(r_{p}\), \(r_{z}\), and \(q^{z}_{11}\) related to the pairing strategy that need to be estimated. Based on the method in Ref. [37], the pairing efficiency of \(Z\) pairs of this pairing strategy is \(r_{p}=r(p_{\rm eff},l)\), where \(r(p,l)\) is defined as \[r(p,l)=\left\{\frac{1}{p[1-(1-p)^{l}]}+\frac{1}{p}\right\}^{-1}, \tag{32}\] and \(p_{\rm eff}\) is the probability that a round is an effective round, which can be obtained as \[\begin{split}p_{\rm eff}=\sum_{\chi_{1}\in\mathbb{Z}_{2}}& \left\{\frac{p_{z}^{2}}{4}\sum_{\lambda_{1},\lambda_{2}\in\mathbb{Z}_{2}}Q^{z}_{ \lambda_{1}\mu,\lambda_{2}\mu}(\chi_{1},\bar{\chi}_{1})+\frac{p_{z}p_{x}}{2} \sum_{\lambda_{1}\in\mathbb{Z}_{2}}Q^{z}_{\lambda_{1}\mu,\nu}(\chi_{1},\bar{\chi}_{1}) \\ &+\frac{p_{z}p_{x}}{2}\sum_{\lambda_{2}\in\mathbb{Z}_{2}}Q^{z}_{\nu, \lambda_{2}\mu}(\chi_{1},\bar{\chi}_{1})+p_{x}^{2}Q^{z}_{\nu,\nu}(\chi_{1}, \bar{\chi}_{1})\right\}.\end{split} \tag{33}\] Note that not all \(Z\) pairs are assigned to the \(Z\) basis, but only the pairs with measurement results satisfying \(\lambda\in\mathcal{X}\). Therefore, the proportion of the \(Z\) basis among all pairs is \[r_{z}=\frac{p_{z}^{4}}{16p_{\rm eff}^{2}}\sum_{\lambda\in\mathcal{X};\chi_{1},\chi_{2}\in\mathbb{Z}_{2}}Q^{z}_{\lambda_{1}\mu,\lambda_{2}\mu}(\chi_{1},\bar{\chi}_{ 1})Q^{z}_{\lambda_{3}\mu,\lambda_{4}\mu}(\chi_{2},\bar{\chi}_{2}). \tag{34}\] Among all \(Z\)-basis events, the fraction \(q^{z}_{11}\) for which Alice and Bob's state is \(|01\rangle\) or \(|10\rangle\) can be estimated as \[q^{z}_{11}=\frac{p_{z}^{4}p_{\mu,1}^{2}s_{11}^{z}}{4r_{z}p_{\rm eff}^{2}}. \tag{35}\] In this way, we can calculate the secret key rate in Eq. (1) with the decoy-state method.

## 5 Analysis of Pairing Strategy

In this section, we analyze the permitted pairing strategies of MP-QKD and their security under collective and coherent attacks, and compare the efficiency of different pairing strategies. The pairing is a core component of MP-QKD; it decouples the rounds \(j\) and \(k\) and results in a quadratic improvement compared to MDI-QKD. A proper pairing strategy can lead to a high secret key rate, but not all pairing strategies are permitted, as discussed below. The maximal pairing interval \(l\) represents the maximum distance between two paired rounds, which must satisfy the condition \(k-j\leq l\). This interval is determined by the phase drift rate, which we treat as a parameter without analyzing how to determine \(l\) from experiments. Detailed experimental analysis can be found in Refs. [37, 38]. We first analyze the allowed pairing strategy under the collective attack. According to the _entanglement scheme MP-QKD_ in Box 3, Alice and Bob can sift the effective \(Z\) and \(X\) rounds when both are \(Z\) and \(X\) windows with \(L_{k}\oplus R_{k}=1\) under the collective attack, which means they can actively pair two \(Z\) (\(X\)) rounds. However, not all \(Z_{jk}\) pairs can generate raw key bits; only those with POVM result 1 can. The \(Z_{jk}\) pairs with POVM result 0, i.e., \(a_{j}\oplus a_{k}=0\), are useless for the raw key bits. This means they could not actively pair the rounds satisfying \(a_{j}\oplus a_{k}=1\) or \(b_{j}\oplus b_{k}=1\). Still, the rounds in those pairs with POVM result 0 can be used to estimate parameters in the decoy-state method, as discussed in Sec. 4. Besides, there is no limit on the pairing interval of \(Z\) pairs, as the error rate is independent of the phase drift. Therefore, we give the pairing strategy that is secure under the collective attack in Box 7. 
**Box 7: Pairing Strategy Secure under the Collective Attack**

```
Input: Charlie's announced detection results \(C_{k}=L_{k}\oplus R_{k}\) for \(k=1\) to \(N\); maximal pairing interval \(l\); types of Alice and Bob's windows (\(X\) or \(Z\)) in \(N\) rounds.
Output: \(K\) \(X\) pairs, \((F_{k}^{x},R_{k}^{x})\) for the \(k\)-th \(X\) pair for \(k=1\) to \(K\); \(M\) \(Z\) pairs, \((F_{m}^{z},R_{m}^{z})\) for the \(m\)-th \(Z\) pair for \(m=1\) to \(M\).
Initialization: \(k=1\), \(m=1\); \(f_{x}=0\), \(f_{z}=0\).
for \(i\in[N]\) do
  if \(C_{i}=1\) then
    if both Alice and Bob are \(X\) windows then
      if \(f_{x}=0\) then
        \(F_{k}^{x}\gets i\); \(f_{x}\gets 1\).
      else
        if \(i-F_{k}^{x}\leq l\) then
          \(R_{k}^{x}\gets i\); \(k\gets k+1\); \(f_{x}\gets 0\).
        else
          \(F_{k}^{x}\gets i\).
        end if
      end if
    else if both Alice and Bob are \(Z\) windows then
      if \(f_{z}=0\) then
        \(F_{m}^{z}\gets i\); \(f_{z}\gets 1\).
      else
        \(R_{m}^{z}\gets i\); \(m\gets m+1\); \(f_{z}\gets 0\).
      end if
    end if
  end if
end for
```

With this pairing strategy, the resulting pairs can only be \(X\) or \(Z\) pairs. Therefore, the pairing efficiency of \(Z\) pairs of this pairing strategy is \(r_{p}^{z}=r(p_{\mathrm{eff},z},l)\), where \(p_{\mathrm{eff},z}\) is the probability that a round is an effective \(Z\) round, which can be calculated as \[p_{\mathrm{eff},z}=\frac{p_{z}^{2}}{4}\sum_{\lambda_{1},\lambda_{2},\chi_{1} \in\mathbb{Z}_{2}}Q_{\lambda_{1}\mu,\lambda_{2}\mu}^{z}(\chi_{1},\bar{\chi}_{ 1}). \tag{36}\] The proportion of the \(Z\) basis among all paired \(Z\) pairs is \(r_{z}^{*}=p_{\mathrm{eff}}^{2}r_{z}/p_{\mathrm{eff},z}^{2}\), and the fraction for which Alice and Bob's states are \(|01\rangle\) or \(|10\rangle\) is \(q_{11}^{z*}=p_{\mathrm{eff}}^{2}q_{11}^{z}/p_{\mathrm{eff},z}^{2}\). In this way, we can calculate the secret key rate \(R^{*}\) with the pairing strategy in Box 7 by replacing the parameters \(r_{p}r_{z}\) and \(q_{11}^{z}\) with \(r_{p}^{z}r_{z}^{*}\) and \(q_{11}^{z*}\) in Eq. (1), as below \[R^{*}=r_{p}^{z}r_{z}^{*}\big{\{}q_{11}^{z*}[1-h(e_{11}^{x})]-fh(E^{z})\big{\}}. \tag{37}\] We compare the efficiency of these two pairing strategies in Fig. 1 through the ratio \(R^{*}/R\). We set the intensities \(\mu=0.429\), \(\nu=0.038\) according to Ref. [39]. The results show that the pairing efficiency of the strategy in Box 7 is higher than that of Box 2 at short distances, but this advantage gradually weakens as the distance increases. Taking \(l=2\mathrm{E}3\) as an example, as given in Ref. [39], the change of the ratio \(R^{*}/R\) can be divided into three stages. The ratio \(R^{*}/R\) is higher than 2.3 when the distance is less than 182 km, then decreases rapidly between approximately 182 and 350 km, and approaches 1 gradually after 350 km. The reason for the slow change in the first and third stages is that there are too many and too few effective rounds due to the attenuation, respectively. In the second stage, the number of effective rounds lies between those of the first and third stages, and the effectiveness rapidly decreases. Therefore, the ratio \(R^{*}/R\) with a large pairing interval \(l\) is higher than that with a small pairing interval, and the ratio \(R^{*}/R\) stays above 2.35 within 500 km when \(l\) is as large as \(2\mathrm{E}7\). However, the pairing strategy in Box 7 is only secure against collective attacks and is prohibited under the coherent attack. In fact, they could only pair rounds based on measurement results rather than bit or basis information. 
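To make Box 7 concrete, here is a hypothetical Python rendering of the strategy together with a Monte Carlo check against the pairing efficiency \(r(p,l)\) of Eq. (32). The simulation simplifies the protocol: rounds are effective i.i.d. with probability `p_eff`, and a single label per round stands for the joint window choice ('X' for both-\(X\), 'Z' for both-\(Z\), 'M' for mismatched); all names are ours.

```python
import random

def r(p, l):
    """Pairing efficiency of Eq. (32): expected number of pairs per round."""
    return 1.0 / (1.0 / (p * (1.0 - (1.0 - p) ** l)) + 1.0 / p)

def pair_rounds(C, windows, l):
    """Pairing strategy of Box 7: X rounds are paired within the maximal
    interval l, Z rounds are paired without an interval limit."""
    x_pairs, z_pairs, fx, fz = [], [], None, None
    for i, (c, w) in enumerate(zip(C, windows)):
        if c != 1:
            continue
        if w == 'X':
            if fx is None:
                fx = i
            elif i - fx <= l:
                x_pairs.append((fx, i)); fx = None
            else:
                fx = i                 # stored partner too far away: restart
        elif w == 'Z':
            if fz is None:
                fz = i
            else:
                z_pairs.append((fz, i)); fz = None
    return x_pairs, z_pairs

random.seed(7)
N, l, p_eff = 10 ** 6, 200, 0.01
C = [1 if random.random() < p_eff else 0 for _ in range(N)]
windows = [random.choice(['X', 'Z', 'M']) for _ in range(N)]
x_pairs, _ = pair_rounds(C, windows, l)
# Effective both-X rounds occur with probability p_eff / 3 in this toy model:
print(len(x_pairs) / N, r(p_eff / 3, l))   # empirical rate vs. Eq. (32)
```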
The pairing of \(Z\) pairs should also be within the maximal interval, even though the bit-flip error rate is independent of it. Besides, once a round is paired with another, it cannot be unpaired and re-paired with a third round. We explain this and discuss the security against coherent attacks, which place no limit on Eve's eavesdropping ability. A common method is to prove the security against collective attacks and then generalize it to coherent attacks, assuming the original states are i.i.d. [42, 43]. The loss in secret key rate from this generalization is insignificant, especially in the asymptotic regime. In the following, we analyze the pairing strategy to examine the i.i.d. assumption of MP-QKD. The state prepared by Alice and Bob in each round is i.i.d., which satisfies the i.i.d. assumption of most other QKD protocols for the generalization to coherent attacks. However, improper pairing strategies may violate the i.i.d. assumption of MP-QKD even if each state is i.i.d. in every round. For example, if Alice and Bob perform the strategy allowed under collective attacks, which pairs the neighboring effective \(Z\) (\(X\)) rounds without (with) the restriction \(k-j\leq l\), the i.i.d. assumption is violated: if an effective round is paired with another round with \(k-j>l\), it must be a \(Z\) pair. A proper pairing strategy is shown in Box 2. We note that this is an optimal pairing strategy, provided the i.i.d. assumption holds. From the above analysis, we can see that a basic requirement of the pairing strategy is to pair two \(Z\) (\(X\)) rounds within the maximal pairing interval. But if Alice actively pairs two effective \(Z\) (\(X\)) rounds according to their windows, then the pairs may be correlated and the i.i.d. assumption is violated. For example, if Alice pairs 4 effective rounds \((1,2,3,4)\) as \((1,3)\) and \((2,4)\), then one of the two pairs is \(Z\) and the other is \(X\). Therefore, it is wise to pair neighboring effective rounds within the maximal pairing interval in order to minimize the impact of the phase drift; this also benefits the pairing efficiency. For example, considering 4 effective rounds at positions \((1,2,l,l+3)\), we obtain two pairs, \((1,2)\) and \((l,l+3)\), by pairing neighboring effective rounds, but may obtain only one pair, \((1,l)\), using another strategy. Below, we analyze the i.i.d. assumption of this pairing strategy. Note that we only care about the i.i.d. assumption of the original paired states, not the final states processed by Eve. It is evident that all the paired states are independent under the pairing strategy in Box 2, since all time windows are independent. Additionally, as they are paired without the encoding information, the paired states are identically distributed. Therefore, the i.i.d. assumption is satisfied under the pairing strategy in Box 2, and the security against collective attacks, which has been proved in Sec. 3, can be extended to coherent attacks using the de Finetti theorems [42, 43].

Figure 1: The ratio of the secret key rates \(R^{*}\) and \(R\) with different pairing intervals \(l\), based on the pairing strategies in Boxes 7 and 2, respectively.

Finally, one could try to improve the pairing strategy based on encoding information. However, the i.i.d. assumption may then be violated, resulting in partial or complete correlation between pairs. Therefore, the security of such an effective pairing strategy needs to be studied with other methods, e.g., the entropy accumulation method [44, 45]. 
## 6 Conclusion

MP-QKD is a promising protocol that enjoys both practicality and efficiency simultaneously. Its security has been rigorously proven in previous work by examining the consistency of Alice and Bob's states between MP-QKD and the fixed-pairing scheme, as well as proving the equivalence between the latter and MDI-QKD. In this manuscript, we present a simple and direct security proof of the MP-QKD protocol. Specifically, we propose an entanglement scheme that is equivalent to the MP-QKD protocol. With this entanglement scheme, we can conveniently analyze MP-QKD without needing to examine Alice and Bob's states and Eve's possible interference. As an application, we can directly understand why the paired rounds can be decoupled, which is essential for the security of MP-QKD as they are determined by Eve. Besides, we analyze the security of MP-QKD against collective and coherent attacks and explore the allowed and optimal pairing strategies that significantly impact the secret key rate. The research was supported by National Key Research and Development Program of China (2020YFA0309702); National Natural Science Foundation of China (62101597, U2130205); China Postdoctoral Science Foundation (2021M691536); Natural Science Foundation of Henan Province (202300410532; 202300410534); Anhui Initiative in Quantum Information Technologies.

## Appendix A Detailed Calculation

To obtain Eq. (11), we omit the subscript and denote the state as \(|\varphi_{1,\theta_{1},\theta}\rangle\) for simplicity. Defining the parameter \(\delta=\theta_{1}-\theta\) and integrating \(|\varphi_{1,\theta_{1},\theta}\rangle\) over \(\theta\in[0,2\pi)\), we obtain \[\rho_{1}=\frac{1}{2\pi}\int_{0}^{2\pi}\hat{P}\big{(}\,|\varphi_{1,\theta_{1},\theta}\rangle\,\big{)}d\theta=\frac{1}{2}\sum_{s,t\in\mathbb{Z}_{2}}|s \bar{s}\rangle\,\langle t\bar{t}|\otimes\rho_{s,t,\theta_{1},\theta}, \tag{A1}\] where \[\begin{split}\rho_{s,t,\theta_{1},\theta}=&\frac{1}{2\pi} \int_{0}^{2\pi}|e^{i\theta_{1}}\sqrt{s\mu}\rangle\,\langle e^{i\theta_{1}}\sqrt{t\mu}|\otimes|e^{i\theta}\sqrt{\bar{s}\mu}\rangle \,\langle e^{i\theta}\sqrt{\bar{t}\mu}|\,d\theta\\ =&\sum_{j,k,m,n\in\mathbb{N}}e^{-\mu}e^{i(j-k) \delta}\frac{\mu^{(j+k+m+n)/2}s^{j}t^{k}\bar{s}^{m}\bar{t}^{n}}{\sqrt{j!k!m!n!} }\ |jm\rangle\ \langle kn|\,\frac{1}{2\pi}\int_{0}^{2\pi}e^{i(j-k+m-n)\theta}d\theta\\ =&\sum_{m\in\mathbb{N}}p_{\mu,m}e^{i(s-t)m\delta}\ |sm,\bar{s}m\rangle\ \langle tm,\bar{t}m|\.\end{split} \tag{A2}\] Therefore, we obtain \[\rho_{1}=\sum_{m\in\mathbb{N}}p_{\mu,m}\frac{1}{2}\sum_{s,t\in\mathbb{Z}_{2}}| s\bar{s}\rangle\ \langle t\bar{t}|\otimes e^{i(s-t)m\delta}\ |sm,\bar{s}m\rangle\ \langle tm,\bar{t}m|=\sum_{m\in \mathbb{N}}p_{\mu,m}\hat{P}\big{(}\,|\varphi_{1m,\delta}\rangle\,\big{)}. \tag{A3}\] To obtain Eq. (18), we omit the subscript and denote the state as \(|\phi_{\theta_{1},\theta}\rangle\). 
Defining the parameter \(\delta=\theta_{1}-\theta\) and integrating \(|\phi_{\theta_{1},\theta}\rangle\) over \(\theta\in[0,2\pi)\), we obtain \[\sigma_{1}=\frac{1}{2\pi}\int_{0}^{2\pi}\hat{P}\big{(}\,|\phi_{\theta_{1}, \theta}\rangle\,\big{)}d\theta=\frac{1}{4}\sum_{\lambda\in\mathbb{Z}_{2}^{4}}| \lambda_{1}\lambda_{2}\rangle\ \langle\lambda_{3}\lambda_{4}|\otimes\sigma_{\lambda,\theta_{1},\theta}, \tag{A4}\] where \[\begin{split}\sigma_{\lambda,\theta_{1},\theta}=&\frac {1}{2\pi}\int_{0}^{2\pi}|(-1)^{\lambda_{1}}e^{i\theta_{1}}\sqrt{\nu}\rangle\ \langle(-1)^{\lambda_{2}}e^{i\theta_{1}}\sqrt{\nu}|\otimes|(-1)^{\lambda_{3}}e ^{i\theta}\sqrt{\nu}\rangle\ \langle(-1)^{\lambda_{4}}e^{i\theta}\sqrt{\nu}|\ d\theta\\ =&\sum_{j,k,m,n\in\mathbb{N}}e^{-2\nu}e^{i(j-k) \delta}\frac{\nu^{(j+k+m+n)/2}(-1)^{\lambda\cdot(j,k,m,n)}}{\sqrt{j!k!m!n!} }\ |jm\rangle\ \langle kn|\,\frac{1}{2\pi}\int_{0}^{2\pi}e^{i(j-k+m-n)\theta}d\theta\\ =&\sum_{m\in\mathbb{N}}e^{-2\nu}\nu^{m}\sum_{j,k\in \mathbb{Z}_{m}}e^{i(j-k)\delta}\frac{(-1)^{\lambda\cdot(j,k,m-j,m-k)}}{\sqrt{j! k!(m-j)!(m-k)!}}\ |j,m-j\rangle\ \langle k,m-k|\.\end{split} \tag{A5}\] Therefore, we obtain \[\begin{split}\sigma_{1}=&\sum_{m\in\mathbb{N}}e^{-2 \nu}\nu^{m}\frac{1}{4}\sum_{\lambda\in\mathbb{Z}_{2}^{4}}|\lambda_{1}\lambda_{2} \rangle\ \langle\lambda_{3}\lambda_{4}|\otimes\sum_{j,k\in\mathbb{Z}_{m}}\frac{e^{i(j-k )\delta}(-1)^{\lambda\cdot(j,k,m-j,m-k)}}{\sqrt{j!k!(m-j)!(m-k)!}}\ |j,m-j\rangle\ \langle k,m-k|\\ =&\sum_{m\in\mathbb{N}}e^{-2\nu}\frac{(2\nu)^{m}}{m!}\hat{P}\bigg{[}\frac{1}{2}\sum_{s,t\in\mathbb{Z}_{2}}|st\rangle\ \frac{1}{\sqrt{2^{m}}}\sum_{j\in\mathbb{Z}_{m}}e^{ij\delta}\sqrt{C_{m}^{j}}(- 1)^{sj+t(m-j)}\ |j,m-j\rangle\ \bigg{]}\\ =&\sum_{m\in\mathbb{N}}p_{2\nu,m}\hat{P}\bigg{[} \frac{1}{2}\sum_{s,t\in\mathbb{Z}_{2}}(-1)^{tm}\,|st\rangle\ |\gamma_{m,\delta+(s+t)\pi}\rangle\ \bigg{]}\\ =&\sum_{m\in\mathbb{N}}p_{2\nu,m}\hat{P}\big{(}\,| \phi_{1m,\delta}\rangle\,\big{)}.\end{split} \tag{A6}\]
2310.15397
Classes of Gaussian States for Squeezing Estimation
This study explores a detailed examination of various classes of single- and two-mode Gaussian states as key elements for an estimation process, specifically targeting the evaluation of an unknown squeezing parameter encoded in one mode. To quantify the efficacy of each probe, we employ the concept of Average Quantum Fisher Information (AvQFI) as a robust metric to quantify the optimal performance associated with specific classes of Gaussian states as input. For single-mode probes, we identify pure squeezed single-mode states as the optimal choice and we explore the correlation between Coherence and AvQFI. Also, we show that pure two-mode squeezed states exhibit behavior resembling their single-mode counterparts for estimating the encoded squeezing parameter, and we studied the interplay between entanglement and AvQFI. This paper presents both analytical and numerical results that encompass all the studied classes, offering valuable insights for quantum estimation processes.
Leonardo A. M. Souza
2023-10-23T22:57:52Z
http://arxiv.org/abs/2310.15397v1
# Classes of Gaussian States for Squeezing Estimation ###### Abstract This study explores a detailed examination of various classes of single- and two-mode Gaussian states as key elements for an estimation process, specifically targeting the evaluation of an unknown squeezing parameter encoded in one mode. To quantify the efficacy of each probe, we employ the concept of Average Quantum Fisher Information (AvQFI) as a robust metric to quantify the optimal performance associated with specific classes of Gaussian states as input. For single-mode probes, we identify pure squeezed single-mode states as the optimal choice and we explore the correlation between Coherence and AvQFI. Also, we show that pure two-mode squeezed states exhibit behavior resembling their single-mode counterparts for estimating the encoded squeezing parameter, and we studied the interplay between entanglement and AvQFI. This paper presents both analytical and numerical results that encompass all the studied classes, offering valuable insights for quantum estimation processes.

## I Introduction

Estimation theory encompasses methods and principles used for making predictions based on limited or incomplete data [1]. This theory, often called Metrology, is vital for technological advances in various fields [1; 2]. The core idea of an estimation strategy, be it classical or quantum, is to achieve the best accuracy in determining a parameter encoded in the system. This often involves using multiple copies of the system for encoding and conducting a precise measurement to approach the true parameter value [1; 2; 3; 4; 5; 6]. In estimation theory, we can utilize classical or quantum systems for an estimation task and compare their performance, in order to determine if there is a "quantum advantage" in estimating unknown parameters. The extensive literature on this subject delves into comparisons between classical and quantum systems, shedding light on their individual strengths and responses within the domain of estimation tasks [3; 4; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. It is important to emphasize that within Estimation Theory, the estimation precision of a parameter is quantified by the Fisher Information (FI), or by its quantum version, the Quantum Fisher Information (QFI) [3; 4]. Typically, higher FI or QFI values correspond to better parameter precision. Also, one may indeed obtain a precision improvement using quantum resources [7; 16; 19; 20; 21; 22; 23]. Quantum Metrology, or quantum estimation theory, i.e., the use of quantum systems and measurement apparatus to perform an estimation task, plays a pivotal role in our technological advancements, particularly exemplified by the Laser Interferometer Gravitational-Wave Observatory (LIGO) [24]: LIGO's profound success in detecting gravitational waves, as predicted by Einstein's theory of relativity, can potentially achieve even greater success by relying on quantum-based precision measurements [25]. Additionally, the utilization of squeezed light in the LIGO experiment can significantly enhance the sensitivity of the detector [26; 27; 28]. Furthermore, it is fundamental in the design and optimization of quantum-enhanced sensors [29; 30; 31] and metrology techniques, both theoretical and experimental [21]. Within the scope of estimation theory using quantum systems and measurements, one can specialize the analysis to discrete and continuous variables (the latter is commonly abbreviated to CV) [7; 21]. 
Finally, for continuous variable systems it is possible to subdivide the analysis into non-Gaussian states (i.e. those that cannot be completely described by their first and second statistical moments) and Gaussian states (the group we are concerned with in this work, where the first and second statistical moments completely describe the state, which in turn possesses a Gaussian characteristic function, by definition) [7; 9]. While in each studied case a specific quantum system can perform better or worse than another kind of state (for example, [7]), there is no "rule" dictating that some 'special' quantum system is, in fact and in general, the golden state that performs all tasks better than classical ones. As already mentioned [26; 27; 28], the use of squeezed light in technological devices is of great importance nowadays. With this motivation as our driving force, in this work we focus our attention on the quantum estimation problem within a Gaussian scenario: single- or two-mode states subjected to Gaussian dynamics. We are interested in the estimation of an unknown squeezing parameter encoded in one of the modes. We follow the strategy proposed by the authors in [9], where the mode to be encoded is sent to a Squeezer, and the other mode is an ancilla. The precision of estimating the squeezing parameter is quantified using the Average Quantum Fisher Information (AvQFI), which is computed by averaging over the phase acquired by the mode during its evolution. In [9] the authors studied this problem both in complete generality and also for some classes of Gaussian states. Here we explicitly study, in detail, the most important classes of single- and two-mode Gaussian states as probes for this estimation process. For single-mode states we study the interplay of the AvQFI and a _Coherence_ quantifier, as defined in [32]. Finally, for two-mode Gaussian states, we analytically and numerically investigate the AvQFI as a function of Entanglement. This paper is organized as follows: in section II we review in great detail the formalism of Gaussian states, quantum estimation theory and the estimation strategy used in this work (following closely [9]); in section III we present our results, sorted by each class of Gaussian state we have chosen. Still in section III, we study the AvQFI as a function of Coherence (for single-mode states), and as a function of the Logarithmic Negativity (for two-mode states). Finally, we conclude our work in section IV.

## II Preliminaries

Our work is focused on estimating a parameter, specifically the squeezing parameter encoded in one mode of a Gaussian State. Therefore, in this section, we will review some important concepts concerning Gaussian States and Quantum Estimation Theory.

### Gaussian States

In this work we are interested in single- or two-mode Gaussian States subjected to the so called Gaussian dynamics, i.e. dynamics that preserve the "Gaussianity" of the state ([33]). A CV system of two modes \(A\) and \(B\) (with annihilation operators \(\hat{a}\) and \(\hat{b}\) respectively) can be defined by the quadrature vector \(\hat{\mathbf{O}}=\{\hat{q}_{A},\hat{p}_{A},\hat{q}_{B},\hat{p}_{B}\}\), where \(\hat{q}_{k}=(\hat{a}_{k}+\hat{a}_{k}^{\dagger})\) and \(\hat{p}_{k}=i(\hat{a}_{k}^{\dagger}-\hat{a}_{k})\), with \(k=A,B\) (assuming natural units, \(\hbar=2\)). We encourage the reader to pay attention to equations and definitions, since one can find works with both \(\hbar=1\) and \(\hbar=2\) in the literature. 
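To make the \(\hbar=2\) convention concrete, the short sketch below builds truncated Fock-space matrices for \(\hat{a}\), \(\hat{q}=\hat{a}+\hat{a}^{\dagger}\) and \(\hat{p}=i(\hat{a}^{\dagger}-\hat{a})\) and verifies numerically the canonical commutator \([\hat{q},\hat{p}]=2i\) away from the truncation edge (all names are ours, and the check is only an illustration of the convention recalled next).

```python
import numpy as np

d = 40                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, d)), 1)  # annihilation operator: a|n> = sqrt(n)|n-1>
q = a + a.conj().T                        # q = a + a^dagger (hbar = 2 units)
p = 1j * (a.conj().T - a)                 # p = i(a^dagger - a)

comm = q @ p - p @ q
# [q, p] = 2i on states far from the truncation boundary:
print(np.allclose(comm[:-1, :-1], 2j * np.eye(d - 1)))  # True
```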
The quadratures obey the canonical commutation relations \([\hat{O}_{j},\hat{O}_{k}]=2i\Omega_{jk}\), with the two-mode symplectic form \[\mathbf{\Omega}=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)^{\oplus 2}. \tag{1}\] The relations for single-mode Gaussian states are, obviously, the same, changing the dimension of the vectors and matrices. A Gaussian state \(\rho_{AB}\)[34; 35; 36; 37] is represented by a Gaussian characteristic function in phase space, and is completely characterized by its first and second statistical moments of the quadrature vector, given respectively by the displacement vector \(\mathbf{\varepsilon}_{AB}=(\varepsilon_{j})\) and the covariance matrix \(\mathbf{\sigma}_{AB}=(\sigma_{jk})\), where \(\varepsilon_{j}=\langle\hat{O}_{j}\rangle\) and \(\sigma_{jk}=\frac{1}{2}\langle\hat{O}_{j}\hat{O}_{k}+\hat{O}_{k}\hat{O}_{j} \rangle-\langle\hat{O}_{j}\rangle\langle\hat{O}_{k}\rangle\). A _bona fide_ condition satisfied by all physical Gaussian states is the Robertson-Schrodinger uncertainty relation, given by \[\mathbf{\sigma}_{AB}+i\mathbf{\Omega}\geq 0. \tag{2}\] For our purpose (since we will deal with mode B as an ancilla, as one can see in section II.3), a general covariance matrix for a two-mode Gaussian state can be written as [9]: \[\mathbf{\sigma}_{AB}=\left(\begin{array}{cccc}a_{1}&g&c&0\\ g&b_{1}&0&d\\ c&0&b_{2}&0\\ 0&d&0&b_{2}\end{array}\right), \tag{3}\] where the coefficients are defined such that (3) satisfies the physical constraint (2). It's worth recalling that, by local symplectic operations (equivalent to local changes of basis on the state), every two-mode covariance matrix can be transformed into a standard form, with diagonal \(2\times 2\) subblocks, that can be written as: \[\mathbf{\sigma}_{AB}=\left(\begin{array}{cc}\mathbf{\alpha}&\mathbf{\gamma}\\ \mathbf{\gamma}^{T}&\mathbf{\beta}\end{array}\right), \tag{4}\] where \(\alpha=\text{diag}\{a,a\}\), \(\beta=\text{diag}\{b,b\}\), \(\gamma=\text{diag}\{c,d\}\), such that \(a,b\geq 1\), \(c\geq|d|\geq 0\). For future purposes, we define here the symplectic invariants: \(A=\det\mathbf{\alpha};B=\det\mathbf{\beta};C=\det\mathbf{\gamma};D=\det\mathbf{\sigma}_{AB}\). The total mean number of excitations (proportional to the total mean energy) of a two-mode Gaussian state can be defined as \(E\equiv\bar{n}_{A}+\bar{n}_{B}=2\bar{n}\), where \(\bar{n}_{A}=(\text{tr}[\mathbf{\alpha}]-2)/4+(\varepsilon_{x,A}^{2}+\varepsilon_{ p,A}^{2})/4\) and \(\bar{n}_{B}=(\text{tr}[\mathbf{\beta}]-2)/4+(\varepsilon_{x,B}^{2}+\varepsilon_{ p,B}^{2})/4\) are the mean numbers of excitations in modes \(A\) and \(B\) respectively (considering the displacement vector as non-null, since we will deal with such states soon), and \(\bar{n}_{i}\) denotes the mean number of excitations per mode. Throughout this manuscript, we will consistently uphold the physical requirement that any initial state must be subject to a _finite_ mean energy constraint. The various categories of Gaussian states that we address in this study will be detailed (including their covariance matrices, displacement vectors, etc.) in dedicated sections that follow. 
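A minimal numerical rendering of the bona fide test of Eq. (2), assuming the quadrature ordering \(\{\hat{q}_{A},\hat{p}_{A},\hat{q}_{B},\hat{p}_{B}\}\) used above; since \(\mathbf{\sigma}+i\mathbf{\Omega}\) is Hermitian, its eigenvalues are real and the condition amounts to checking their sign (function names are ours).

```python
import numpy as np

omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.kron(np.eye(2), omega1)        # two-mode symplectic form of Eq. (1)

def is_physical(sigma, tol=1e-9):
    """Robertson-Schrodinger test of Eq. (2): sigma + i*Omega >= 0."""
    return np.linalg.eigvalsh(sigma + 1j * Omega).min() >= -tol

print(is_physical(np.eye(4)))             # two-mode vacuum (hbar = 2): True
print(is_physical(0.5 * np.eye(4)))       # below vacuum noise: False
```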
To provide a comprehensive background on Gaussian states for the reader, ensuring a proper understanding of this work, we introduce two quantities that quantify important characteristics of Gaussian states. First, we will discuss an entanglement quantifier for Gaussian states, specifically the Logarithmic Negativity. The _Logarithmic Negativity_, \(\mathcal{E}_{N}\), is a decreasing function of the smallest symplectic eigenvalue \(\tilde{\nu}\) of the partial transpose of the covariance matrix: \[\mathcal{E}_{N}=\max\{0,-\ln\tilde{\nu}\}. \tag{5}\] Here \(\tilde{\nu}\) is the smallest symplectic eigenvalue of the _partially transposed covariance matrix_ (for details we suggest the reader check chapter 3 of [34]), and one can show that \(\tilde{\nu}\) can be written as a function of the symplectic invariants [8]: \[2\tilde{\nu}^{2}=H-\sqrt{H^{2}-4D} \tag{6}\] with \(H=A+B-2C\). The Logarithmic Negativity is a measure of Entanglement in composite Gaussian systems. Finally, we introduce a coherence measure that we will use to analyze single-mode states. Coherence is typically associated with the concepts of interference and superposition states. However, how can we formally quantify a state's ability to exhibit _Coherence_? This question was addressed a few years ago, in the discrete case by [38] and for Gaussian states by [32]. Given an N-mode Gaussian state \(\rho\), with covariance matrix \(\mathbf{\sigma}\) and displacement vector \(\mathbf{\varepsilon}\), the _Coherence_ of the state can be quantified by [32]: \[C(\rho)=-S(\rho)+\sum_{i=1}^{N}[(\bar{n_{i}}+1)\log_{2}(\bar{n_{i}}+1)-\bar{n_ {i}}\log_{2}\bar{n_{i}}], \tag{7}\] where \(\bar{n}_{i}\) is the mean photon number of the \(i\)-th mode, and \(S(\rho)\) is the entropy of the state \(\rho\): \[S(\rho)=-\sum_{i=1}^{N}\left[\left(\frac{\nu_{i}-1}{2}\right)\log_{2}\left( \frac{\nu_{i}-1}{2}\right)-\left(\frac{\nu_{i}+1}{2}\right)\log_{2}\left( \frac{\nu_{i}+1}{2}\right)\right]. \tag{8}\] In equation (8), \(\{\nu_{i}\}_{i=1}^{N}\) are the symplectic eigenvalues of the Covariance Matrix (please do not confuse \(\nu\) with \(\tilde{\nu}\), since the latter represents the symplectic eigenvalues of the partially transposed covariance matrix). In this framework, maximally coherent states are pure states. Furthermore, it is intuitive to observe that states exhibiting squeezing can have Coherence values (as defined in equation 7) greater than those of the so-called coherent states. As a final remark, we stress that we will differentiate _Coherence_ (the measure of Coherence as in Equation 7) from coherent (concerning coherent states) by capitalizing the first letter of the former.

### Quantum Estimation Theory

For completeness, this subsection is dedicated to introducing Quantum Estimation Theory and the concept of Quantum Fisher Information (QFI). We encourage readers with a keen interest in exploring this topic further to consult references such as [3; 4; 5; 6]. The fundamental concept behind any estimation strategy, whether classical or quantum, is to obtain information about a parameter encoded in the system with the highest possible accuracy. Typically, this information is acquired by sending N copies of the system to the encoding stage. In the final stage, the most precise measurement is performed on the system, aiming to obtain a value for the parameter that approaches the 'actual' value. Consequently, an estimation strategy is grounded in statistical theory. In this scenario, the idea is to consider a probability distribution that depends on a parameter, denoted here as \(p(x|\epsilon)\). The objective is to achieve the most accurate estimate of the parameter \(\epsilon\) by repeatedly sampling the random variable \(x\), which follows the distribution \(p(x|\epsilon)\). 
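A toy illustration of this idea, assuming a Gaussian \(p(x|\epsilon)\) with unknown mean \(\epsilon\) and known standard deviation \(\sigma\): the Fisher information (defined formally in Eq. (10) below) evaluates analytically to \(1/\sigma^{2}\), and a Monte Carlo estimate built from the score function reproduces it.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, sigma, n = 1.3, 0.7, 10 ** 6
x = rng.normal(eps, sigma, n)             # samples of p(x|eps) = N(eps, sigma^2)

score = (x - eps) / sigma ** 2            # d/d(eps) of log p(x|eps)
F_mc = np.mean(score ** 2)                # Monte Carlo Fisher information, cf. Eq. (10)
print(F_mc, 1 / sigma ** 2)               # both ~ 2.04
```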
An important principle in classical estimation theory, known as the _Cramer-Rao bound_, asserts that the variance of any unbiased estimator \(\hat{\epsilon}\) for the parameter \(\epsilon\) must satisfy the following inequality: \[\text{VAR}\left(\hat{\epsilon}\right)\geq\frac{1}{NF_{\epsilon}^{M}}, \tag{9}\] where \(N\) is the number of samplings, \(\text{VAR}\left(\hat{\epsilon}\right)\) is the variance of the estimator \(\hat{\epsilon}\), and \(F_{\epsilon}^{M}\) is the classical Fisher information (where we explicitly show the dependence of \(F_{\epsilon}^{M}\) on the measurement M), defined by: \[F_{\epsilon}^{M}=\int\text{d}x\,p(x|\epsilon)\left[\partial_{\epsilon}\log p(x |\epsilon)\right]^{2}. \tag{10}\] In a quantum version of the previously mentioned situation, the parameter \(\epsilon\) is encoded in a quantum state \(\rho_{\epsilon}\), typically by applying a quantum map \(\Phi_{\epsilon}\) to a known input probe \(\rho\): \(\rho_{\epsilon}=\Phi_{\epsilon}[\rho].\) To acquire information about the system and, consequently, about \(\epsilon\), a generic Positive Operator-Valued Measure (POVM) must be executed on \(\rho_{\epsilon}\). The maximum precision achievable within the bounds of quantum mechanics for unbiased estimation of \(\epsilon\) is attained through optimization across all possible POVMs. This approach leads to the derivation of the quantum Cramer-Rao bound, which stipulates: \[\text{VAR}(\hat{\epsilon})\geq\frac{1}{NH_{\epsilon}(\rho)}. \tag{11}\] Here, \(H_{\epsilon}\) represents the Quantum Fisher Information (QFI) linked to the encoded state \(\rho_{\epsilon}\), derived from \(\rho\). The Quantum Fisher Information is defined as: \[H_{\epsilon}[\rho]=\text{Tr}\left[\rho_{\epsilon}L_{\epsilon}^{2}\right], \tag{12}\] where \(L_{\epsilon}\) is the so called symmetric logarithmic derivative (SLD), a Hermitian operator that satisfies the relation: \[\rho_{\epsilon}L_{\epsilon}+L_{\epsilon}\rho_{\epsilon}=2\,\partial_{\epsilon }\rho_{\epsilon}. \tag{13}\] It can be shown [5; 39] that the Quantum Fisher Information (QFI) is linked to the second-order expansion of the Bures distance, or equivalently, the Uhlmann fidelity \(\mathcal{F}(\rho_{1},\rho_{2})=\left(\mathrm{Tr}\left[\sqrt{\sqrt{\rho_{1}}\rho_{2} \sqrt{\rho_{1}}}\right]\right)^{2}\): \[H_{\epsilon}[\rho]=8\lim_{\mathrm{d}\epsilon\to 0}\frac{1-\sqrt{\mathcal{F}(\rho_{ \epsilon},\rho_{\epsilon+\mathrm{d}\epsilon})}}{\mathrm{d}\epsilon^{2}}. \tag{14}\] It is worth noticing two interesting aspects: (i) in general, the Fisher Information is less than or equal to the Quantum Fisher Information, \(F_{\epsilon}^{M}\leq H_{\epsilon}\); (ii) by harnessing quantum resources such as entanglement, squeezing, "genuine" quantum states like Fock states, and others (as exemplified in [23; 7; 22]), one can surpass the Standard Limit of estimation (equation 9) (often referred to as the shot noise limit or standard quantum limit) and achieve the Heisenberg limit, i.e., reach the saturation of equation 11. Moreover, employing quantum resources can lead to a quadratic improvement in estimation problems compared to using classical resources. While this quadratic gain is not a strict rule, it serves as a guiding principle in quantum system research aimed at parameter estimation. We can attribute the following interpretation to Fisher information: it quantifies the sensitivity of a probability distribution (or a quantum state, in the quantum version) to small changes in the parameter \(\epsilon\). 
If we make a small alteration in the parameter and the probability distribution becomes substantially different from the original one, this results in a higher Fisher information. Consequently, by the Cramer-Rao bound (and the Bures distance shown above), Fisher information quantifies the error with respect to the estimator: the higher the Fisher information, the better it "captures" any deviation from the original distribution or state, and the lower the variance of the estimator.

### Estimation Strategy

In this work we follow the estimation strategy proposed in [9], depicted in Figure 1. Initially, the state is prepared as a single- or two-mode Gaussian state, \(\rho_{AB}\), depending on the specific case under consideration. In this context, we designate mode A as the mode where the squeezing parameter will be encoded, and mode B as an ancilla. Naturally, mode B will not be considered when studying a single-mode Gaussian probe. In the next step, mode A is sent to a Squeezer, while mode B evolves freely. Just before the action of the Squeezer, the evolved state is given by: \((\mathcal{R}_{A}\otimes\mathcal{R}_{B})\rho_{AB}(\mathcal{R}_{A}\otimes \mathcal{R}_{B})^{\dagger}\), where \(\mathcal{R}_{A(B)}=e^{-i\theta_{A(B)}(t)a^{\dagger}a}\) is a unitary single-mode rotation operator acting on mode A(B), acquired during the 'flight' between the stages: \[\mathcal{R}_{\theta}^{(A)}[\rho_{A}]=U_{\theta}^{(A)}[\rho_{A}]=e^{-i\theta a^{ \dagger}a}\rho_{A}e^{i\theta a^{\dagger}a}=\left(\begin{array}{cc}\cos\theta& \sin\theta\\ -\sin\theta&\cos\theta\end{array}\right),\] where the rightmost matrix denotes the symplectic transformation induced on the quadratures of mode A. After the first 'flight', the Squeezer \(\mathcal{S}_{\epsilon}^{(A)}=\exp[\frac{\epsilon}{2}(a^{2}-(a^{\dagger})^{2})]\) acts on mode A, encoding the parameter \(\epsilon\) that we are interested in estimating. Explicitly: \[\mathcal{S}_{\epsilon}^{(A)}[\rho_{A}]=U_{\epsilon}[\rho_{A}]=e^{-\frac{ \epsilon}{2}(a^{\dagger 2}-a^{2})}\rho_{A}e^{+\frac{\epsilon}{2}(a^{\dagger 2}-a^{2})}= \left(\begin{array}{cc}e^{\epsilon}&0\\ 0&e^{-\epsilon}\end{array}\right),\] again with the corresponding symplectic matrix on the right. Mode A now returns, in order to be measured, acquiring another phase \(\mathcal{R}_{A}\), and mode B a phase \(\mathcal{R}_{B}\). The state can be written as: \[(\mathcal{R}_{A}\mathcal{S}_{\epsilon}^{(A)}\mathcal{R}_{A}\otimes\mathcal{R} _{B}^{2})\rho_{AB}(\mathcal{R}_{A}\mathcal{S}_{\epsilon}^{(A)}\mathcal{R}_{A }\otimes\mathcal{R}_{B}^{2})^{\dagger}.\]

Figure 1: (Color online) Proposed Estimation Strategy. The state is initially prepared as a single- or two-mode Gaussian state. Mode A is sent to a Squeezer, where the parameter \(\epsilon\) is encoded. Mode B propagates freely. After the action of the Squeezer, mode A is brought back to be measured with mode B. Information about the phase \(\theta\) acquired by the modes during their journey from state preparation to the measurement stage is utilized to establish the strategy. The average Quantum Fisher Information, \(\overline{H_{\epsilon}}(\rho)\), is constructed by averaging over all phases \(\theta\) in order to estimate the squeezing parameter \(\epsilon\). More details are provided in section II.3 of the text.

If \(\theta_{A,B}(t)\) is known by parts A and B, a unitary operation \((\mathcal{R}_{A}^{\dagger 2}\otimes\mathcal{R}_{B}^{\dagger 2})\) can be applied by both parts. It is worth stressing this point of the proposed strategy: the knowledge of \(\theta_{A(B)}\) allows parts A and B to apply this unitary operation \((\mathcal{R}_{A}^{\dagger 2}\otimes\mathcal{R}_{B}^{\dagger 2})\) in such a way as to 'remove' the dynamical phase introduced in mode B (the ancillary mode), leaving only the dynamics to which mode A is subjected. One can now write the state as: \[(\mathcal{R}_{A}^{\dagger}\mathcal{S}_{\epsilon}^{(A)}\mathcal{R}_{A}\otimes \mathds{1}_{B})\rho_{AB}(\mathcal{R}_{A}^{\dagger}\mathcal{S}_{\epsilon}^{(A)} \mathcal{R}_{A}\otimes\mathds{1}_{B})^{\dagger}.\]
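At the covariance-matrix level, this net map on mode A is just a product of the two symplectic matrices displayed above, acting as \(\mathbf{\sigma}\to M\mathbf{\sigma}M^{T}\). A minimal sketch (the function names are ours, and the vacuum example only illustrates how the output orientation depends on \(\theta\)):

```python
import numpy as np

def rot(theta):
    """Symplectic matrix of the rotation R_theta on mode A."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def sqz(eps):
    """Symplectic matrix of the squeezer S_eps."""
    return np.diag([np.exp(eps), np.exp(-eps)])

def encode(sigma, eps, theta):
    """CM after the net map R_theta^dagger S_eps R_theta (R^dagger = R^T)."""
    M = rot(theta).T @ sqz(eps) @ rot(theta)
    return M @ sigma @ M.T

# Vacuum probe, eps = 0.1: the orientation of the output squeezing
# rotates with theta, which motivates averaging the QFI over theta.
print(np.diag(encode(np.eye(2), 0.1, 0.0)))
print(np.diag(encode(np.eye(2), 0.1, np.pi / 4)))
```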
The total dynamical map describing the full evolution of the state is: \(\Phi_{\epsilon,\theta}[\rho_{AB}]=\Phi_{\epsilon,\theta}^{A}\otimes\mathds{1} ^{B}[\rho_{AB}]\). Since we are dealing with estimation theory [6], an estimator \(\hat{\epsilon}\) for the applied squeezing can now be obtained (usually the maximum likelihood estimator is constructed) after a collection of measurements (if optimal, they can in principle minimize the error). The estimation accuracy of \(\hat{\epsilon}\) is given by the quantum Cramer-Rao bound (as detailed in section II.2): \[\delta\hat{\epsilon}\geq\frac{1}{\sqrt{MH_{\epsilon}^{(\theta)}(\rho)}}, \tag{15}\] where \(H_{\epsilon}^{(\theta)}(\rho)\) is the Quantum Fisher Information (QFI). Depending on the phase \(\theta\) acquired in flight, the estimation of \(\epsilon\) can be different for each time-of-flight; therefore the _versatility_ of an input state can be checked by looking at the so called average QFI (AvQFI): \[\overline{H_{\epsilon}}(\rho)\equiv\int_{0}^{2\pi}H_{\epsilon}^{(\theta)}( \rho)\frac{d\theta}{2\pi}. \tag{16}\] In [9] the authors showed several general results concerning \(\overline{H_{\epsilon}}(\rho)\); we emphasize that by studying \(\overline{H_{\epsilon}}(\rho)\) one can estimate the parameter \(\epsilon\) within this approach, since the average QFI \(\overline{H_{\epsilon}}(\rho)\) sets a lower bound on the average value of \(\delta\hat{\epsilon}\): \[\overline{\delta\hat{\epsilon}}\equiv\int_{0}^{2\pi}\frac{d\theta}{2\pi} \delta\hat{\epsilon}(\theta)\geq\frac{1}{\sqrt{M\overline{H_{\epsilon}}(\rho)}}. \tag{17}\] In this paper we study the average QFI, \(\overline{H_{\epsilon}}(\rho)\), for important classes of single- and two-mode Gaussian States, within the estimation strategy proposed in this section.

## III Results

In this section, we present our relevant analytical and numerical findings. Our investigation covers both single-mode and two-mode states as probes in our estimation strategy. For clarity, we have organized this section into subsections dedicated to each type of state: single-mode and two-mode states. Within each category, we provide a comprehensive overview of their general characteristics and present our results.

### Single-mode States

A general single-mode Gaussian state is characterized by the following covariance matrix (CM): \[\mathbf{\sigma}_{A}=\left(\begin{array}{cc}a&g\\ g&b\end{array}\right), \tag{18}\] and by the displacement vector: \(\mathbf{\varepsilon}=(\varepsilon_{x},\varepsilon_{y}).\) The parameters are chosen such that the CM represents a physical state, i.e. equation 2 is satisfied. If we consider, as probes, general single-mode Gaussian states, we obtain the results in Figure 2. In this Figure we show the average QFI \(\overline{H}_{\theta}\) as a function of the mean photon number \(n_{A}\) of the mode. 
Each point is a general single-mode Gaussian state, with the parameters of the CM and displacement vector randomly chosen (\(10^{5}\) states). One can see that all states lie within the region bounded by the lower and upper limiting curves. The upper bound is given by: \[\overline{H_{\theta}}=4n_{A}^{2}+4n_{A}+2, \tag{19}\] while the lower bound is given by: \[\overline{H_{\theta}}=4\frac{(2n_{A}+1)^{2}}{1+(2n_{A}+1)^{2}}. \tag{20}\] It is interesting to highlight the physical significance of the upper bound: this curve is produced by single-mode pure squeezed states. Consequently, this class of states serves as the optimal probe for estimating the squeezing parameter \(\epsilon\) within the proposed strategy. One possible interpretation is that a pure squeezed state will be more sensitive to any parameter introduced into the state. The lower bound is produced by mixed thermal states.

Figure 2: (Color online) Average QFI \(\overline{H}_{\theta}\) as a function of the mean photon number \(n_{A}\) of the mode. Each point is a general single-mode Gaussian state, with parameters of the CM and displacement vector randomly chosen (\(10^{5}\) states). The red dashed line is the upper bound for this estimation strategy, using \(\overline{H}_{\theta}\), and is returned by pure squeezed states (see section III.1). The pink dashed curve is the lower bound, achieved by thermal states. The black solid curve is the maximum value of \(\overline{H}_{\theta}\) using coherent states as probes (section III.1.1).

#### iii.1.1 Coherent States

Now we restrict our analysis to single-mode Gaussian coherent states as probe states, without squeezing, with covariance matrix: \[\mathbf{\sigma}_{A}=\left(\begin{array}{cc}a&0\\ 0&a\end{array}\right), \tag{21}\] and displacement vector \(\mathbf{\varepsilon}=(\varepsilon_{x},\varepsilon_{y}).\) Working out the same procedure with this class of states, we can see that the maximum value of \(\overline{H}_{\theta}\) is given by the black solid curve in Figure 3. Explicitly, this curve is given by: \[\overline{H_{\theta}}=4n_{A}+2. \tag{22}\] It is interesting that pure squeezed states approach the Heisenberg limit [9; 16], since \(\overline{H}_{\theta}\sim n_{A}^{2}\), while using coherent states we can obtain \(\overline{H}_{\theta}\sim n\), the so called Standard Quantum Limit (or Shot Noise Limit). This can be viewed as a "quantum advantage" in this metrology scheme. 
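For quick numerical comparison, the three reference curves of this section, Eqs. (19), (20) and (22), in code form (a trivial sketch; the function names are ours):

```python
def upper_bound(n):      # pure squeezed single-mode probes, Eq. (19)
    return 4 * n ** 2 + 4 * n + 2

def lower_bound(n):      # thermal probes, Eq. (20)
    return 4 * (2 * n + 1) ** 2 / (1 + (2 * n + 1) ** 2)

def coherent_bound(n):   # best coherent-state probes, Eq. (22)
    return 4 * n + 2

for n in (1, 5, 10):
    print(n, upper_bound(n), coherent_bound(n), lower_bound(n))
```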
The red dashed line is the upper bound for this estimation strategy, using \(\overline{H}_{\theta}\), and is returned by pure squeezed states. The pink dashed curve is the lower bound, achieved by thermal states. The black solid curve is the maximum value of \(\overline{H}_{\theta}\) using coherent states as probes (section III.1.1). same amount of Coherence as defined in Equation 7. This corroborates the intuition that coherent states, being "quasi-classical" states, cannot attain the same level of sensitivity in this estimation problem when compared to squeezed states, which possess a "quantum" advantage. Figure 4: (Color online) Average Quantum Fisher Information (\(\overline{H}_{\theta}\)) as a function of Coherence (\(C(\rho)\)). The blue dots represent general single-mode states, while the red dots represent coherent single-mode states (a total of \(10^{5}\) states were randomly chosen for each case). We varied the mean photon number (\(n_{A}\)) in each plot: (a) \(n_{A}=3\); (b) \(n_{A}=5\); (c) \(n_{A}=10\); (d) \(n_{A}=100\). It is evident that as \(n_{A}\) increases, the value of \(\overline{H}_{\theta}\) also increases. Furthermore, single-mode states with squeezing surpass coherent states in this estimation problem when they possess the same level of Coherence. ### Two-mode States As mentioned before (equation 3), a general two-mode Gaussian state can be characterized by the following CM: \[\mathbf{\sigma}_{AB}=\left(\begin{array}{cccc}a_{1}&g&c&0\\ g&b_{1}&0&\mathrm{d}\\ c&0&b_{2}&0\\ 0&d&0&b_{2}\end{array}\right), \tag{24}\] with displacement vector given by: \(\mathbf{\varepsilon}=(\varepsilon_{x},\varepsilon_{y},0,0).\) Tipically, general two-mode states has the same behavior of single-mode states. The upper and lower bounds are given by the same states, pure two-mode squeezed states and mixed thermal states, and by the exact same equations, Eq. 19 and Eq. 20, respectively. Figure 5 depicts how \(10^{5}\) randomly chosen states are distributed, when we study \(\overline{H}_{\theta}\) as a function of \(n_{A}\). However, using two-mode states one can access more physics as we can see in the next sections. #### iii.2.1 Separable States in the standard form We start with the simplest case: separable states with no correlations between modes A and B. Separable states in the standard form are characterized, within our approach, by: \[\mathbf{\sigma}_{AB}=\left(\begin{array}{cccc}a_{1}&0&0&0\\ 0&b_{1}&0&0\\ 0&0&b_{2}&0\\ 0&0&0&b_{2}\end{array}\right), \tag{25}\] with displacement vector given by: \(\mathbf{\varepsilon}=(\varepsilon_{x},\varepsilon_{y},0,0).\) Our results for this class of states are depicted in Figure 6. Interesting enough, for the case of separable states in the standard form as Equation 25, while obviously the lower bound is given by mixed thermal states (equation 20), the upper bound for separable states is given by (the solid green curve of Figure 6): \[\overline{H_{\theta}}=3-\frac{1}{1+2n_{A}}+2n_{A}. \tag{26}\] This class of state can be thought as been worst than single-mode coherent states for this estimation problem. Naturally, if we allow our probe to be in the form of equation 24, with \(c=d=0\), the state will behave as a single-mode coherent state and achieve the same result of equation 22. We can conjecture that, for this case of separable states in the standard form, some level of Coherence in mode A is required to reach greater level of squeezing estimation. 
#### iii.2.2 Two-mode states with Discord-type correlations

Quantum Discord represents quantum correlations within composite systems that do not necessarily involve entanglement [40; 41; 42]. In the literature one can encounter a wide range of options to quantify and study Discord-type correlations. For example, it has been proven that, in CV systems, the so-called interferometric power is a measure of discord-type correlations for general mixed states \(\rho_{AB}\), reducing to a measure of entanglement in the particular case of pure states [43; 8]. A Discordant Gaussian state can be characterized by the following CM: \[\mathbf{\sigma}_{AB}=\left(\begin{array}{cccc}a&0&c&0\\ 0&a&0&c\\ c&0&b&0\\ 0&c&0&b\end{array}\right), \tag{27}\] with displacement vector given by: \(\mathbf{\varepsilon}=(\varepsilon_{x},\varepsilon_{y},0,0)\), and \(c\neq 0\). For this class of states our results are shown in Figure 7. One can see clearly that this class of states reaches the same maximum value for \(\overline{H}_{\theta}\) as single-mode coherent states, given by equation 22.

Figure 5: (Color online) Average QFI \(\overline{H}_{\theta}\) as a function of the mean photon number \(n_{A}\) of mode A. Each point is a general two-mode Gaussian state, with parameters of the CM and displacement vector randomly chosen (\(10^{5}\) states). The red dashed line is the upper bound for this estimation strategy, using \(\overline{H}_{\theta}\), and is returned by pure two-mode squeezed states (see section III.2.3). The pink dashed curve is the lower bound, achieved by mixed thermal states. The black solid curve is the maximum value of \(\overline{H}_{\theta}\) using states containing discord-type correlations as probes (section III.2.2). Finally, the solid green curve is the best possible estimation value of \(\overline{H}_{\theta}\) using states without any type of correlations (section III.2.1).

Figure 6: (Color online) Average QFI \(\overline{H}_{\theta}\) as a function of the mean photon number \(n_{A}\) of mode A. Each point is an uncorrelated two-mode Gaussian state with CM given by equation 25, with parameters of the CM and displacement vector randomly chosen (\(10^{5}\) states). The red dashed line is the upper bound for this estimation strategy, using \(\overline{H}_{\theta}\), and is returned by pure two-mode squeezed states (see section III.2.3). The pink dashed curve is the lower bound, achieved by mixed thermal states. The black solid curve is the maximum value of \(\overline{H}_{\theta}\) using states containing discord-type correlations as probes (section III.2.2). Finally, the solid green curve is the best possible estimation value of \(\overline{H}_{\theta}\) using states without any type of correlations (section III.2.1).

#### iii.2.3 Entangled Two-mode States

In this section, we examine the last category of Gaussian states we have investigated in this study. We dedicate our attention now to the important class of Entangled states in the standard form, where the CM is given by: \[\mathbf{\sigma}_{AB}=\left(\begin{array}{cccc}a&0&c&0\\ 0&a&0&-c\\ c&0&b&0\\ 0&-c&0&b\end{array}\right), \tag{28}\] with displacement vector given by: \(\mathbf{\varepsilon}=(0,0,0,0).\) For this important class, which includes the states that saturate our measure \(\overline{H}_{\theta}\) (pure two-mode squeezed states), our results show the same upper bound as single-mode Gaussian states (equation 19): \[\overline{H}_{\theta}=4n_{A}^{2}+4n_{A}+2, \tag{29}\] while the lower bound is, in agreement with previous sections, the upper bound for separable states in the standard form (equation 26). Figure 8 depicts our results for \(\overline{H}_{\theta}\) in terms of \(n_{A}\).

Figure 7: (Color online) Average QFI \(\overline{H}_{\theta}\) as a function of the mean photon number \(n_{A}\) of mode A. Each point is a two-mode Gaussian state with CM given by equation 27, with parameters of the CM and displacement vector randomly chosen (\(10^{5}\) states). The red dashed line is the upper bound for this estimation strategy, using \(\overline{H}_{\theta}\), and is returned by pure two-mode squeezed states (see section III.2.3). The pink dashed curve is the lower bound, achieved by mixed thermal states. The black solid curve is the maximum value of \(\overline{H}_{\theta}\) using states containing discord-type correlations as probes (section III.2.2). Finally, the solid green curve is the best possible estimation value of \(\overline{H}_{\theta}\) using states without any type of correlations (section III.2.1).

Figure 8: (Color online) Average QFI \(\overline{H}_{\theta}\) as a function of the mean photon number \(n_{A}\) of mode A. Each point is an Entangled two-mode Gaussian state with CM given by equation 28, with parameters of the CM and displacement vector randomly chosen (\(10^{5}\) states). The red dashed line is the upper bound for this estimation strategy, using \(\overline{H}_{\theta}\), and is returned by pure two-mode squeezed states (see section III.2.3). The pink dashed curve is the lower bound, achieved by mixed thermal states. The black solid curve is the maximum value of \(\overline{H}_{\theta}\) using states containing discord-type correlations as probes (section III.2.2). Finally, the solid green curve is the best possible estimation value of \(\overline{H}_{\theta}\) using states without any type of correlations (section III.2.1).

### Relation between the Average QFI and Entanglement

In order to obtain some physical intuition concerning the interplay between Entanglement and this estimation problem, we studied the relation between the Average QFI, \(\overline{H}_{\theta}\), and the Entanglement quantifier mentioned in section II.1, explicitly the Logarithmic Negativity \(\mathcal{E}_{N}\) [8]. We first note that our result is Energy dependent, i.e., we were able to obtain results that depend explicitly on the energy \(n_{A}\) of mode A. We focused our study on states with CM as in equation 28. In Figure 9 we show our results for different values of \(n_{A}\). The overall value of \(\overline{H}_{\theta}\) increases with \(n_{A}\), an intuitive result: if one uses a higher value of Energy, one gains more access to the parameter to be estimated. In Figure 9, it is evident that as the system's Entanglement, measured by \(\mathcal{E}_{N}\), increases, so does the value of \(\overline{H}_{\theta}\). An important result of our work is that the upper bound in Figure 9 is given by pure two-mode squeezed states, and we were able to obtain an analytical expression for this dependence as: \[\overline{H_{\theta}}=2+\frac{8n_{A}(1+n_{A})}{1+(2+4n_{A}-\tilde{\nu})\tilde{\nu}}=2+\frac{8n_{A}(1+n_{A})}{1+(2+4n_{A}-\mathrm{e}^{-\mathcal{E}_{N}})\mathrm{e}^{-\mathcal{E}_{N}}}, \tag{30}\] which makes explicit the dependence of \(\overline{H}_{\theta}\) on the Entanglement measure \(\mathcal{E}_{N}\) and also on the Energy of mode A, \(n_{A}\).
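Equation 30 can be checked numerically against states of the form of equation 28. A minimal sketch follows; it assumes the convention used throughout, in which CM entries read \(2n+1\) (so the vacuum has unit variance and symplectic eigenvalues of physical states are at least 1), and it uses the standard symplectic-invariant formula for the smallest partial-transpose eigenvalue together with \(\mathcal{E}_{N}=\max(0,-\ln\tilde{\nu}_{-})\), consistent with \(\mathrm{e}^{-\mathcal{E}_{N}}=\tilde{\nu}\) in equation 30. All function names are our own.

```python
import numpy as np

def nu_tilde_minus(a, b, c):
    """Smallest symplectic eigenvalue of the partially transposed CM of
    Eq. (28); convention: vacuum CM = identity (diagonal entries 2n+1)."""
    delta_pt = a**2 + b**2 + 2 * c**2        # symplectic invariant after PT
    det_sigma = (a * b - c**2) ** 2          # determinant of the CM in Eq. (28)
    return np.sqrt((delta_pt - np.sqrt(delta_pt**2 - 4 * det_sigma)) / 2)

def log_negativity(a, b, c):
    """E_N = max(0, -ln nu_tilde), matching exp(-E_N) = nu_tilde in Eq. (30)."""
    return max(0.0, -np.log(nu_tilde_minus(a, b, c)))

def upper_bound_vs_entanglement(n_a, e_n):
    """Analytic upper bound of Eq. (30) for pure two-mode squeezed probes."""
    nt = np.exp(-e_n)
    return 2 + 8 * n_a * (1 + n_a) / (1 + (2 + 4 * n_a - nt) * nt)

# Example: a two-mode squeezed vacuum with a = b = cosh(2s), c = sinh(2s),
# for which nu_tilde = exp(-2s) and hence E_N = 2s in this convention.
s = 0.6
a = b = np.cosh(2 * s)
c = np.sinh(2 * s)
n_a_mode = (a - 1) / 2                       # mean photon number of mode A
e_n = log_negativity(a, b, c)
print(e_n, upper_bound_vs_entanglement(n_a_mode, e_n))
```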
To investigate the lower bound, numerical methods were necessary, as it is not straightforward to determine which states yield the lower bounds. For each dataset comprising pairs \((\mathcal{E}_{N},\overline{H}_{\theta})\), we conducted a numerical regression analysis to model the lower bound, aiming to obtain a curve that approximates this bound. After conducting this analysis, we derived the following expression for the lower bound: \[\overline{H}_{\theta}(\mathcal{E}_{N})=A_{1}\exp\left[B_{1}\mathcal{E}_{N}\right]+A_{2}. \tag{31}\] The specific values of \(A_{1}\), \(A_{2}\), and \(B_{1}\) depend on each individual case, corresponding to different energy levels \(n_{A}\). In Table 1, we present the numerical results for each \(n_{A}\) displayed in Figure 9. Additionally, we provide the Mean Square Error (MSE) and the standard deviation of the MSE (which we call in this work \(\Delta_{MSE}\)). It is noteworthy that the lower bounds for \(n_{A}=10\) and \(n_{A}=100\) are the same due to the characteristics of our numerical approach. It can be observed that the MSE increases significantly as the energy values rise, while for low energy values it remains close to zero. This indicates that our results are quite accurate for lower energy levels when analyzing the lower bound. Conversely, for higher values of \(n_{A}\), both the MSE and \(\Delta_{MSE}\) increase, and we consider our analysis sufficient for the context of this work. Finally, we mention that it would be of interest for the community if the specific class of states that returns the lower bound could be identified, in the same sense that pure two-mode squeezed states return the upper bound.

Figure 9: (Color online) Average Quantum Fisher Information (\(\overline{H}_{\theta}\)) as a function of the Logarithmic Negativity (\(\mathcal{E}_{N}\)). The dots represent Entangled two-mode states (a total of \(10^{5}\) states were randomly chosen for each case). We varied the mean photon number (\(n_{A}\)) in each plot: (a) \(n_{A}=3\); (b) \(n_{A}=5\); (c) \(n_{A}=10\); (d) \(n_{A}=100\). It is evident that as \(n_{A}\) increases, the value of \(\overline{H}_{\theta}\) also increases. The black dashed curves are the upper bound for each case, returned by pure squeezed two-mode states. The dot-dashed black curves are the lower bound for each energy, and were obtained numerically. Details about the upper and lower bounds are given in the text.

## IV Conclusion

In this work we exploited important classes of single- and two-mode Gaussian states for the specific problem of estimating, with the highest possible precision, the squeezing parameter \(\epsilon\) in one of the modes. After fairly complete sections reviewing Gaussian states and estimation theory, we presented in detail the estimation strategy studied in this work, within the approach originally proposed in [9] and followed closely here. For single-mode states we showed that pure squeezed states are the best probes for this estimation problem, approaching the so-called Heisenberg limit. Coherent states, being a "quasi-classical" class of states, achieve the linear (in relation to the Energy) behavior of the Shot Noise Limit (or Standard Quantum Limit). Still for single-mode states, we studied a relation between the state's Coherence and the Average QFI, showing that even among states with similar levels of coherence, those with squeezing capabilities can outperform coherent states in terms of estimation precision. For two-mode Gaussian states, our results demonstrate the significance of entanglement in squeezing estimation.
Pure two-mode squeezed states have the potential to approach the Heisenberg limit (\(\overline{H}_{\theta}\sim n^{2}\)), while states exhibiting Discord-type correlations yield \(\overline{H}_{\theta}\sim n\). We have also explored other classes of states, including separable states in their standard form, and provided analytical results for upper and lower bounds in each category (in the analysis of \(\overline{H}_{\theta}\) as a function of \(n_{A}\)). Furthermore, we conducted an investigation into the relationship between entanglement and the Average QFI, revealing intriguing findings: (i) As energy levels increase, the Average QFI also rises, enhancing the precision of the squeezing estimation problem. (ii) Once again, pure two-mode squeezed states emerge as the optimal probes for squeezing estimation, constituting the upper bound in the interplay between the Average QFI and Logarithmic Negativity; we also presented an analytical result for this upper bound. (iii) Numerical results for the lower bound within the study of the Average QFI and Entanglement (measured by \(\mathcal{E}_{N}\)) were presented. We encourage the community to investigate whether there are classes of Gaussian states that meet this lower bound, similar to how pure two-mode squeezed states satisfy the upper bound.

###### Acknowledgements.

L.A.M.S. would like to express gratitude to INCT-IQ (Instituto Nacional de Ciência e Tecnologia - Informação Quântica) for their financial support during the VIII Paraty Quantum Information School and Workshop. Special thanks are extended to Ana Mizher, Rodrigo Dias, and Guilherme Dinnebier for their invaluable contributions through insightful discussions on data analysis. Additionally, L.A.M.S. extends appreciation to Carlos Henrique S. Vieira and Irismar G. da Paz for their helpful insights and discussions regarding Coherence quantifiers. This work makes use of the QuGIT toolbox [44].
2305.11347
Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning
In overhead image segmentation tasks, including additional spectral bands beyond the traditional RGB channels can improve model performance. However, it is still unclear how incorporating this additional data impacts model robustness to adversarial attacks and natural perturbations. For adversarial robustness, the additional information could improve the model's ability to distinguish malicious inputs, or simply provide new attack avenues and vulnerabilities. For natural perturbations, the additional information could better inform model decisions and weaken perturbation effects or have no significant influence at all. In this work, we seek to characterize the performance and robustness of a multispectral (RGB and near infrared) image segmentation model subjected to adversarial attacks and natural perturbations. While existing adversarial and natural robustness research has focused primarily on digital perturbations, we prioritize creating realistic perturbations designed with physical world conditions in mind. For adversarial robustness, we focus on data poisoning attacks, whereas for natural robustness, we focus on extending ImageNet-C common corruptions for fog and snow that coherently and self-consistently perturb the input data. Overall, we find both RGB and multispectral models are vulnerable to data poisoning attacks regardless of input or fusion architectures, and that while physically realizable natural perturbations still degrade model performance, the impact differs based on fusion architecture and input data.
Elise Bishoff, Charles Godfrey, Myles McKay, Eleanor Byler
2023-05-18T23:43:33Z
http://arxiv.org/abs/2305.11347v1
# Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning

###### Abstract

In overhead image segmentation tasks, including additional spectral bands beyond the traditional RGB channels can improve model performance. However, it is still unclear how incorporating this additional data impacts model robustness to adversarial attacks and natural perturbations. For adversarial robustness, the additional information could improve the model's ability to distinguish malicious inputs, or simply provide new attack avenues and vulnerabilities. For natural perturbations, the additional information could better inform model decisions and weaken perturbation effects or have no significant influence at all. In this work, we seek to characterize the performance and robustness of a multispectral (RGB and near infrared) image segmentation model subjected to adversarial attacks and natural perturbations. While existing adversarial and natural robustness research has focused primarily on digital perturbations, we prioritize creating realistic perturbations designed with physical world conditions in mind. For adversarial robustness, we focus on data poisoning attacks, whereas for natural robustness, we focus on extending ImageNet-C common corruptions for fog and snow that coherently and self-consistently perturb the input data. Overall, we find both RGB and multispectral models are vulnerable to data poisoning attacks regardless of input or fusion architectures, and that while physically realizable natural perturbations still degrade model performance, the impact differs based on fusion architecture and input data.

Deep learning, multispectral images, multimodal fusion, robustness, adversarial machine learning

Further author information: Send correspondence to E.B. E-mails: {first}.{last}@pnnl.gov

## 1 Introduction

With the wealth of publicly available satellite imagery data and the development of large annotated satellite imagery datasets, deep learning models routinely achieve state-of-the-art performance in a number of important remote sensing applications, including land cover classification, agricultural monitoring, and disaster assessment. Typically, satellite sensors collect multispectral imagery, or imagery observed at wavelength bands beyond the traditional Red, Green, and Blue (RGB) bands found in natural imagery datasets. For many overhead imagery applications, spectral bands beyond the visible spectrum (e.g., near-infrared or short-wave infrared) are essential in distinguishing different surface materials or penetrating atmospheric haze. Deep learning models that leverage multispectral imagery are becoming increasingly common, and outperform RGB-only models in some applications [1], [2], [3].

Over the past decade, there has been growing interest in understanding the robustness of deep learning models. Robustness refers to a model's ability to maintain performance under various input shifts, including natural shifts (e.g., weather, environment) and adversarial shifts (e.g., attacks or digital perturbations). While many advancements have emerged in the field of robustness, deep learning models remain vulnerable to various attacks and distribution shifts. To date, much of this research has focused on evaluating model performance on image classification tasks using benchmark RGB image datasets. As such, our understanding of model robustness for other tasks and data modalities remains incomplete.
In this work, we consider both adversarial robustness and natural robustness for segmentation models applied to multispectral imagery, focusing on the combination of RGB and near-infrared (NIR) bands and placing an emphasis on perturbations and attacks that are physically meaningful for multispectral imagery. For a given model, there are many possible ways to synthesize or fuse information from different inputs or data modalities. Models that combine data modalities at the input stage are sometimes called "early fusion" models (e.g., a 4-band image, or projecting 3D LiDAR data onto an RGB image). In contrast, models that process the different data modalities separately and combine them after feature extraction or in the final layer before classification would be called "late fusion" models. In this work, we are specifically interested in quantifying the robustness of different fusion approaches. To this end, we explore different combinations of input bands (NIR, RGB, RGB+NIR) and architectures (early vs. late fusion) to better understand how each of these variables affects the model's overall robustness, and explore any potential trade-offs with model performance.

While a multitude of adversarial attacks exist in the literature, digital adversarial examples [4] (adversarially modified input data designed to cause model misclassification at deployment time) remain the most well-studied attack type. Adversarial examples have been applied to many visual perception tasks, including image segmentation [5], [6], [7], [8], [9]. However, the extent to which adversarial examples pose a legitimate real-world threat is the subject of ongoing debate [10]. In this work, we focus on data poisoning, wherein adversaries inject malicious data into the training data to cause erroneous classifications or backdoor access during test time. In a field where it is common for researchers to download datasets from a variety of unregulated sources, data poisoning is one of the more practical attacks in the literature. Moreover, in light of the growing number of commercial entities that both collect satellite imagery and generate data products, data poisoning will continue to be a relevant concern in satellite imagery applications. To our knowledge, this is the first assessment of data poisoning attacks on multispectral imagery in image segmentation models.

For a more comprehensive picture of model robustness, we also consider natural robustness. Natural robustness is well studied in the context of RGB imagery, with published libraries for domain adaptation tests [11], [12], natural perturbation datasets [13], and data augmentation approaches that reflect real-world variations [14], [15]. However, there are no established processes to extend these natural perturbations beyond the visible spectrum in a way that is physically consistent. In this work, we develop an approach for physically realistic perturbations that can be applied to multispectral imagery. We then quantify the robustness of the segmentation models, assessing the relative accuracy and robustness of different fusion approaches compared to an RGB model baseline.

## 2 Related Work

**Multi-modal segmentation models**: There has been significant research on the performance of multi-modal segmentation models for various data modalities, including [16], [17], [18], [19], [20]. These studies focus on model performance; here we focus on model robustness.
**Adversarial robustness of segmentation models for overhead imagery**: In this work, we focus on segmentation model robustness to data poisoning attacks and naturalistic corruptions. Adversarial examples for overhead segmentation models are explored in [5] and [21]. In [22], the authors demonstrate the efficacy of data poisoning attacks on image segmentation models for RGB imagery. We extend the assessment of adversarial resiliency from [22] to include both RGB and multispectral segmentation models.

**Adversarial robustness of multi-modal models**: Adversarial robustness has been studied in the context of multi-input models for various perception tasks. In [23], the authors compare different fusion approaches for RGB imagery and LiDAR data, and assess model robustness to digital adversarial attacks in the context of object detection. In this work we look at a different input type (multispectral band fusion) and a different visual task (semantic segmentation). [21] studies the adversarial robustness of fusion approaches for segmentation models using RGB and thermal imagery, in the context of autonomous vehicles. This work differs in two key ways. First, we move from natural imagery to overhead imagery, increasing the domain shift from benchmark research. Second, we move from adversarial examples to data poisoning attacks, and include an assessment of natural robustness. In [24], the authors explore adversarial attacks on a multispectral binary cloud classifier. Here, we consider multi-class segmentation models, and a more comprehensive study of robustness.

**Data Poisoning**: It has long been understood that machine learning systems are vulnerable to _data poisoning_, a security exploit in which an adversary with access to training data inserts malicious datapoints designed to cause unwanted model behaviour at deployment time. The earliest study of data poisoning that we are aware of is [25]; a more recent survey is [26]. It is worth noting that among the various proposed security exploits of deep learning models, data poisoning is widely considered to be quite feasible (see for example [27]).

**Natural Robustness**: Robustness to natural perturbations is explored in [28], which provides a perturbation library that is suitable for ground-based RGB imagery in the context of image classification. We develop a physically realistic approach to extend the perturbations to multispectral inputs, and apply them to segmentation models. Natural robustness of segmentation models was explored in [29], [30], and [31]; however, all of these studies are limited to RGB imagery. Natural robustness for multi-modal models is explored in [32] and [33].

## 3 Experimental Setup: Data, Tasks, Threat Model, and Model Architectures

In this section we describe the data and models used in this work and provide an overview of our adversarial and natural robustness experiments.

### Data

In all experiments, we use the Urban Semantic 3D dataset [34] (hereafter US3D), an overhead imagery dataset with segmentation labels for multispectral images and LiDAR point cloud data. The US3D segmentation labels consist of seven total classes, including ground, foliage, building, water, elevated roadway, and two "unclassified" classes, corresponding to difficult or bad pixels. The labels are stored as 8-bit unsigned integers between 0 and 255 in TIF files; during training and evaluation we re-index these labels to integers between 0 and 6.
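A minimal sketch of this re-indexing step is shown below; the raw uint8 codes in the mapping are placeholders chosen for illustration (the exact values are dataset-specific), and only the contiguous 0-6 output convention is taken from the text.

```python
import numpy as np

# Hypothetical raw uint8 label codes -> contiguous training ids 0..6.
# The actual US3D codes are dataset-specific; these values are assumptions.
RAW_TO_TRAIN = {2: 0, 5: 1, 6: 2, 9: 3, 17: 4, 65: 5, 255: 6}

def reindex_labels(mask: np.ndarray) -> np.ndarray:
    """Map a raw uint8 US3D label mask to contiguous class ids 0..6."""
    out = np.zeros_like(mask, dtype=np.uint8)
    for raw, idx in RAW_TO_TRAIN.items():
        out[mask == raw] = idx
    return out
```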
We retain the "unclassified" labels during model training and evaluation, but do not include these classes in any metrics that average across all classes. For further details on image processing and data split development, we refer to appendix A. ### Model Architecture and Fusion Types All models presented in this work use a DeepLabv3 image segmentation model architecture with a ResNet50 backbone [35], with the final fully connected layer modified for seven US3D classes. Our baseline model is trained on the traditional 3-channel RGB imagery, which we compare to models trained on single-channel NIR imagery and models trained on a combination of RGB and NIR imagery. For the models trained on both RGB and NIR imagery, we explore early and late fusion approaches. The early fusion models stack the RGB image with the additional NIR band, effectively creating 4-channel input images. To accommodate the 4-channel inputs, we modified the first layer of the DeepLabv3 model. The late fusion models pass the RGB and NIR inputs into separate ResNet50 models, and the resulting feature vectors are concatenated and passed into the DeepLabv3 segmentation head. Details on model training can be found in appendix B. ### Adversarial Robustness: Data Poisoning In the terminology of the data poisoning literature, we focus on targeted backdoor data poisoning attacks during training time meant to cause specific misclassifications during test time. We assume attackers have access to the training dataset, but cannot access the model architecture, and have no knowledge of training regimen or inference requirements. We implement two different data poisoning approaches. For both, the approach is similar in concept to data poisoning for classification tasks, but the implementation has been modified to accommodate multispectral imagery segmentation models. Examples of the poisoned data are shown in Fig. 1. The first poisoning approach follows the fine-grained attack introduced in [22], wherein a trigger (here, a small black shape) is artificially inserted into a fraction of the training images. When present, the model is trained to misclassify a designated "source" class (in Fig. 1, the foliage class) as the "target" class (in Fig. 1, the building class). We try two triggers: the horizontal black line from [22] and a 50x50 pixel black square. For poisoned images, the corresponding labels are modified such that source class pixels are re-labeled as the target class. The second poisoning approach is an extension of a classification patch backdoor data poisoning attack (see for example [36]). We refer to this approach as a "texture" attack, with the goal of creating an attack that is more physically realizable than the fine-grained attack. In some fraction of training images, we replace the pixels of a designated source class with a random unseen texture image. In Fig. 1, we replace building pixels with an unseen image of foliage from US3D. Conceptually, this could represent rooftop garden spaces that many buildings are adding. While the texture attack is similar to the fine-grained attacks, it differs slightly in implementation and attacker's goal. We do not modify any of the training labels; instead we assume that the attacker is trying to take advantage of some inherent property of the dataset that causes a model to learn a potentially erroneous but exploitable representation of a class. 
During evaluation, the poisoned texture is then inserted randomly into images, with the goal of causing the model to classify the pixels in the inserted texture as the source class (a building). Again, in this approach, only image pixels are modified, not training labels. In practice, to ensure that the model is learning a more abstract representation of the poison texture rather than memorizing a fixed pixel pattern, we apply a series of random augmentations to the foliage image each time we replace building pixels in a poisoned training image. Specifically, the image is randomly cropped, resized, rotated, and perturbed with color jitter. The poisoned models were trained on 10% poisoned data, but were otherwise trained identically to the benign (clean) models as described in section 3.2 and appendix B.

Figure 1: Examples of data poisoning attacks implemented in this work: square, line, and texture. The square and line attacks (top and middle rows) operate like a trigger; when present, the model should erroneously classify foliage pixels as the "building" class. In contrast, the texture attack trains the model to learn a targetable representation - here, foliage that is classified as a building. All attacks were highly successful with only 10% of the training data poisoned.

### Natural Robustness: Physically Realistic Perturbations

For an assessment of natural robustness, we extend common corruptions presented in ImageNet-C [28] to operate on multispectral imagery in a physically meaningful way. Specifically designed for RGB imagery, ImageNet-C is now a standard dataset used to benchmark model robustness to common corruptions. Many of the ImageNet-C corruptions model digital phenomena (e.g., shot noise, JPEG compression, contrast), which can be applied to multi-channel imagery without issue. However, the naturalistic perturbations associated with environmental conditions (e.g., snow, fog, frost) model physical processes with specific observational signatures, and additional care should be taken to ensure that such corruptions are applied to multispectral imagery in a way that faithfully captures the underlying real-world phenomena. There is little research on how to extend natural robustness to multispectral data. As a first step, we took two common environmental corruptions, snow and fog, and extended them to the NIR band in a physically realistic manner. We compare these modified corruptions to the original formulation to quantify any difference in performance. Finally, we use these new corruptions to assess the robustness of the different models (RGB, NIR, RGB+NIR) and fusion approaches described above.

**Snow Corruptions**: In the implementation of snow corruptions in ImageNet-C, images are first whitened and then directional chunks of snow are added throughout. To realistically extend this to additional channels, we account for the wavelength-dependent spectral reflectance of fresh snow. In the visible R, G, and B bands, fresh snow has a reflectance of nearly 1.0, but the reflectance drops to 0.6 in the NIR band. Accordingly, we modify the overall brightness of the snow corruptions before adding them to the NIR image, reducing the brightness of the added corruptions by 40%. An example of our implementation can be found in appendix C.

**Fog/Haze Corruptions**: The kind of fog applied in ImageNet-C is difficult to observe in overhead imagery. However, smoke or haze are commonly observed environmental conditions, and produce a similar visual effect as the fog in ImageNet-C in the visible bands. NIR light can more easily penetrate haze and smoke, so a realistic implementation of fog/haze should have a reduced application in the NIR band. For the fog/haze natural corruptions, we modify the overall severity of the haze added to the NIR channels similarly to the snow corruptions. Fig. 2 shows an example of our implementation of fog/haze on the RGB and NIR channels. A more detailed example can be found in appendix C.

Figure 2: An example of the physically realistic fog/haze perturbations used in this work. We modify the original implementation of the ImageNet-C perturbations to account for the fact that NIR light more easily penetrates fog, haze, and smoke.

### Robustness and Performance Metrics

We use the 5 metrics defined in [22] to quantify model performance and robustness to adversarial attacks. Mean pixel accuracies and Intersection Over Union values (IoU) are averaged across all labels, and then across all images in the unseen hold-out test split*.

Footnote *: This corresponds to average="micro" and multidim_average="global" when using the TorchMetrics MulticlassAccuracy and MultiClassJaccardIndex classes [37]

1. **mIOU-B**: Benign mean IOU, calculated for a poisoned model that is evaluated on _clean_ data. In practice, an attacker wants to ensure that a poisoned model does not show a drop in overall performance that would be noticeable to any end users.
2. **mPA-B**: Benign mean pixel accuracy, calculated for a poisoned model that is evaluated on _clean_ data.
3. **mIOU-A**: Attacked mean IOU, calculated for a model that is evaluated on _poisoned_ data.
4. **mPA-A**: Attacked mean pixel accuracy, calculated for a model that is evaluated on _poisoned_ data.
5. **ASR**: Adversarial Success Rate, calculated for a model that is evaluated on poisoned data. Measures the efficacy of an adversarial attack; higher values indicate a more successful attack. Definition varies for different poisoning attacks, as described below.

For the fine-grained line and square attacks, ASR is the pixel-wise accuracy of the target class when the poison is present. For the example shown in Fig. 1, in an image with the square or line present, a perfect ASR score would mean that every pixel in the foliage class is predicted as the building class. In this sense, the fine-grained attack operates similarly to a targeted adversarial attack: it is not enough to simply misclassify the foliage pixels (i.e., as any other class), a successful attack must misclassify the foliage pixels as building pixels. For the texture attack, ASR is calculated as the pixel-wise accuracy of the source class, wherever the poisoned texture appears in an image. For the example shown in Fig. 1, building pixels were replaced with the foliage texture during training, and during evaluation the foliage texture is inserted randomly into an image. A perfect ASR in this case would mean that every pixel covered by the inserted foliage texture is predicted as the building class.

## 4 Results and Evaluation

In this section we present results for data poisoning and multispectral corruption robustness experiments. To establish a baseline to compare against, we first assess the performance of clean, unpoisoned models. Table 1 shows our clean model results, including mIOU and mPA for overall model performance and IOU scores for each class.
For the inputs and fusion types considered here, all models perform similarly in all metrics. The RGB-NIR early fusion model has the best overall performance, by 0.003 (0.3%) in pixel accuracy and 0.005 (0.5%) in IOU. To check the significance of these results, we train five different RGB model initializations, and evaluate each on the hold-out split. We then calculated the mean and standard deviation for each metric in Table 1. For pixel accuracy, we find a standard deviation of 0.001 (0.1%). For IOU, we find a standard deviation of 0.002 (0.2%).

\begin{table} \begin{tabular}{||c c c c c c c c c||} \hline Model & mIOU & mPA & ground & foliage & building & water & road & unclass \\ \hline \hline NIR & .757 & .862 & .78 & .75 & .72 & .94 & **.87** & **.45** \\ \hline RGB & .761 & .865 & .78 & .75 & .74 & **.95** & **.87** & .36 \\ \hline RGB-NIR early & **.769** & **.869** & **.79** & **.76** & **.75** & **.95** & **.87** & .36 \\ \hline RGB-NIR late & .764 & .866 & **.79** & **.76** & .74 & .92 & .86 & .36 \\ \hline \end{tabular} \end{table}

Table 1: **Clean, unpoisoned models**: Performance metrics for clean, unpoisoned models. The class scores show IOU, and the highest score in each column is in bold. All models perform similarly across the considered metrics, with the RGB-NIR early fusion model showing the best overall performance.

### Data Poisoning

Our results in Tables 2, 3 and 4 demonstrate that all models are vulnerable to all types of data poisoning under consideration (both fine-grained and our physically realizable attack). No one model is significantly more robust to data poisoning than the others, and our 10% data poisoning attacks result in an over 90 percent adversarial success rate.

\begin{table} \begin{tabular}{||c c c c c c||} \hline Model & mIOU-B & mPA-B & mIOU-A & mPA-A & Targeted ASR \\ \hline \hline NIR & .834 & .909 & .687 & .814 & **.921** \\ \hline RGB & .841 & .913 & .691 & .817 & .924 \\ \hline RGB-NIR early & .847 & .917 & .694 & .82 & .932 \\ \hline RGB-NIR late & .843 & .915 & .691 & .817 & **.937** \\ \hline \end{tabular} \end{table}

Table 2: **Fine-grained line attack**: mIOU-B shows the performance of the poisoned model evaluated on benign (clean) data, while mIOU-A shows the performance of the poisoned model evaluated on attacked (poisoned) data. The red text highlights the highest ASR, while the blue text highlights the lowest ASR.

Interestingly, while the RGB+NIR models are the best performing models (i.e., Table 1), the RGB+NIR models also have the highest ASR scores. This suggests that while the extra information provided by additional bands boosts performance, it also reduces the overall adversarial robustness. In fact, based on ASR, the single-channel NIR model is the most robust to all attacks, albeit with an extremely small advantage: we find a 0.003 (0.3%) improvement for fine-grained attacks and a 0.016 (1.6%) improvement for physically realizable attacks when compared to the respective RGB models. It is also interesting to note that neither early nor late fusion approaches show any robustness advantage. This is different from the results of [23], where late fusion models were found to be more robust to adversarial examples, in the context of RGB+LiDAR object detection models for natural imagery. This suggests that the robustness of model fusion techniques varies with attack type (i.e., poisoning vs. digital attacks), and with model task (i.e., segmentation vs. detection).
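To make the targeted ASR used above concrete, the following is a small sketch (ours, not the authors' implementation) of how it could be computed for the fine-grained attacks from a predicted mask and the clean label mask of a trigger-bearing image.

```python
import numpy as np

def fine_grained_asr(pred: np.ndarray, clean_label: np.ndarray,
                     source_cls: int, target_cls: int) -> float:
    """Targeted ASR for the fine-grained trigger attacks: the fraction of
    source-class pixels (e.g. foliage) that the poisoned model predicts as
    the target class (e.g. building) when the trigger is present."""
    src = clean_label == source_cls
    if not src.any():
        return float("nan")  # image contains no source-class pixels
    return float((pred[src] == target_cls).mean())
```

As defined in the metrics section, merely misclassifying the source pixels does not count; only predictions equal to the designated target class contribute to the score.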
When comparing mIOU-B and mPA-B in Tables 2, 3, 4 (i.e., _poisoned_ models evaluated on clean data) to the clean mIOU and mPA scores in Table 1 (i.e., _clean_ models evaluated on clean data), more nuanced differences appear between the two different data poisoning methods. For the fine-grained attacks in Tables 2 and 3, the poisoned models show higher benign scores relative to the clean model scores, meaning that to the victim it would appear as if the model's performance had _improved_ after poisoning. When the poisoned models are evaluated on poisoned data, mIOU-A and mPA-A drop significantly compared to the benign scores, by approximately 15%. In both respects, the physically realizable texture attack shows different behavior. First, the texture attack has very similar benign and clean scores, meaning that model performance appears normal to the victim. Additionally, the attacked mIOU-A and mPA-A do not show the same sharp decrease in performance seen in the fine-grained attack (less than a percent), despite similar ASR scores. From this perspective, the physically realizable attack is a more stealthy attack than the fine-grained attack.

### Natural Robustness

Figure 3 shows model accuracy under the snow and fog/haze corruptions as a function of severity. The NIR and early fusion models are more robust to the physically realistic perturbations compared to the RGB and late fusion models. This is consistent with concurrent work suggesting early fusion models rely more on NIR than their late fusion counterparts to make decisions [38]. In Tables 5 and 6, we focus on results from our physically realizable perturbations. These tables show _overall_ robustness to natural corruptions, where the metrics have now been averaged over the 5 severities shown in Fig. 3. In both Tables 5 and 6, the NIR model shows some of the best overall natural robustness, simply confirming what we already know: that NIR imagery is more useful across varying weather and environmental conditions. As such, we ignore the NIR-only models in the remaining analysis, since these inputs saw an overall reduced level of perturbations. Overall, the early and late fusion models show improved robustness over the RGB model, suggesting that these models are able to leverage NIR information to improve segmentation performance in adverse weather conditions. The early fusion model shows the best overall robustness; as discussed earlier, this is in line with research from [38] that suggests early fusion models rely more on NIR inputs.

Figure 3: Model accuracy on data corrupted with physically realistic snow (left) and fog/haze (right) at varying levels of severity. In these plots solid lines denote physically realizable perturbations, whereas dotted lines denote unrealistic digital ones (i.e., a naive extension to 4 channels). Different colors represent different model architectures.

\begin{table} \begin{tabular}{||c c c c c c c c||} \hline Model & mIOU & mPA & ground & foliage & building & water & road \\ \hline \hline NIR & .60 & .75 & .64 & .63 & .42 & .29 & .26 \\ \hline RGB & .46 & .62 & .55 & .27 & .30 & .52 & .24 \\ \hline RGB-NIR early & .65 & .78 & .68 & .63 & .46 & .75 & .45 \\ \hline RGB-NIR late & .50 & .66 & .60 & .44 & .27 & .18 & .19 \\ \hline \end{tabular} \end{table}

Table 5: **Robustness to physically realistic snow perturbations**: Metrics are averaged across the 5 different corruption severities; class scores are IOU.

\begin{table} \begin{tabular}{||c c c c c c c c||} \hline Model & mIOU & mPA & ground & foliage & building & water & road \\ \hline \hline NIR & .67 & .80 & .70 & .61 & .56 & .82 & .76 \\ \hline RGB & .47 & .64 & .56 & .30 & .22 & .38 & .43 \\ \hline RGB-NIR early & .61 & .76 & .67 & .59 & .40 & .25 & .57 \\ \hline RGB-NIR late & .53 & .69 & .62 & .51 & .21 & .22 & .17 \\ \hline \end{tabular} \end{table}

Table 6: **Robustness to physically realistic fog/haze perturbations**: Each metric is averaged across the 5 different corruption severities; class scores are IOU.

The class-specific scores from Tables 5 and 6 reveal other interesting insights worth highlighting. Both fusion models perform better on the foliage class than the RGB model for both snow and fog/haze corruptions, with the early fusion model showing a 0.36 improvement in IOU score over the RGB model for snow (a nearly 60% improvement) and similar improvements for fog/haze. Foliage appears quite bright at infrared wavelengths; paired with the results from [38] finding that early fusion models rely more on NIR inputs, this provides a reasonable explanation for the improved performance found in the early fusion models.

## 5 Conclusion

In this paper we study the adversarial and natural robustness of multispectral segmentation models. Our main findings can be summarized accordingly:

1. We find that all segmentation models are vulnerable to data poisoning attacks, regardless of input (NIR, RGB, NIR+RGB) or fusion architecture (early, late). Both the fine-grained attacks and physically realizable texture attacks are highly successful (ASR \(>90\%\)) with only 10% of the training images poisoned; however, the texture attacks are less likely to be detected by the victim.
2. The two RGB+NIR models show the best overall performance as measured by accuracy and IOU, but _also_ the worst overall robustness to adversarial attacks. We conclude that the additional information provided by the additional input bands boosts overall performance, but does so at the expense of adversarial robustness.
3. In contrast with previous work in object detection [23], we did not find any significant difference in adversarial robustness between early and late fusion approaches, suggesting that the adversarial robustness of fusion approaches varies with attack type and model task.
4. We create a physically realistic version of the ImageNet-C snow and fog corruptions that is appropriate for multispectral data and faithfully preserves the real-world observational signatures of snow and fog/haze.
5. We find that both RGB+NIR models show improved robustness to natural perturbations over RGB-only models, suggesting that these models are able to successfully leverage NIR information to improve segmentation performance in adverse weather conditions. We find that the early fusion models have the best overall natural robustness, which aligns with results from [38] that find that the early fusion models rely more on NIR inputs. Additionally, the foliage class, which has a distinct NIR signature, shows significant improvement in the early fusion model.

We leave several research directions open for future work. Extending adversarial and natural perturbations to multispectral models with additional input bands would help shed light on whether or not more channels make for less or more vulnerable models. There remain many unexplored ways to test robustness of multispectral models to physically realistic corruptions, for example isolating test splits to probe sub-optimal environmental conditions.
Additionally, US3D also includes LiDAR observations matched to the multispectral images, and it would be interesting to extend the presented research to this new modality.

###### Acknowledgements.

The research described in this paper was conducted under the Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy.

## Appendix A Image Processing and Data Splits

US3D is a multi-modal dataset that builds upon the SpaceNet Challenge 4 dataset [39] (hereafter SN4). SN4 was originally designed for building footprint estimation in off-nadir imagery, and includes satellite imagery from Atlanta, GA for view angles between 7 and 50 degrees. US3D uses the subset of Atlanta, GA imagery from SN4 for which there are matched LiDAR observations, and adds additional matched satellite imagery and LiDAR data in Jacksonville, FL and Omaha, NE. The Atlanta imagery is from Worldview-2, with ground sample distances (GSD) between 0.5m and 0.7m, and view angles between 7 and 40 degrees. The Jacksonville and Omaha imagery is from Worldview-3, with GSD between 0.3m and 0.4m, and view angles between 5 and 30 degrees. As described below, we train and evaluate models using imagery from all three locations. We note, however, that models trained solely on imagery from a single location will show variation in overall performance due to the variations in the scenery between locations (e.g., building density, seasonal changes in foliage and ground cover).

The US3D dataset includes both 8-bit RGB satellite imagery and 16-bit pansharpened 8-band multispectral imagery. One of the goals of this work is to assess the utility of including additional channels as input to image segmentation models (e.g., near-infrared channels). In order to include channels beyond Red, Green, or Blue, we must work from the 16-bit pansharpened 8-band images. We briefly describe our process for creating 8-bit, 8-band imagery, which consists of rescaling, contrast stretching, and gamma correcting the pixels in each channel independently. Specifically, the original 16-bit pixel values are rescaled to 8-bit, and a gamma correction is applied using \(\gamma=2.2\). The bottom 1% of the pixel cumulative distribution function is clipped, and the pixels are rescaled such that the minimum and maximum pixel values in each channel are 0 and 255. We note that when applied to the R, G, and B channels of the multispectral image products to generate 8-bit RGB images, this process produces images that are visually similar but _not_ identical to the RGB images provided in US3D. As such, the RGB model presented in this work cannot be perfectly compared to models published elsewhere trained on the RGB imagery included in US3D2. However, we felt that this approach provided the most fair comparison of model performance for different input channels, since the same processing was applied identically to each channel.

Footnote 2: We trained identical models on the US3D RGB images and the RGB images produced in this work and found that the US3D models performed slightly better, with a 1-2% improvement in average pixel accuracy. This is likely due to more complex and robust techniques used for contrast stretching and edge enhancement in US3D; unfortunately these processing pipelines are often proprietary and we could not find any published details of the process.
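The per-channel conversion just described can be sketched as follows. This is one possible reading of the pipeline (our own, not the released code): the order of the rescaling, gamma correction, clipping, and stretching steps is assumed from the description, and `img16` is a hypothetical `(8, H, W)` uint16 array.

```python
import numpy as np

def to_8bit(channel: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Convert one 16-bit channel to 8-bit: rescale to [0, 1], gamma-correct,
    clip the bottom 1% of the pixel CDF, then stretch to the full 0-255 range."""
    x = channel.astype(np.float64) / 65535.0
    x = x ** (1.0 / gamma)                 # gamma correction with gamma = 2.2
    lo = np.percentile(x, 1.0)             # clip bottom 1% of the pixel CDF
    x = np.clip(x, lo, x.max())
    x = (x - x.min()) / max(x.max() - x.min(), 1e-12)
    return (255.0 * x).astype(np.uint8)

# img16: hypothetical (8, H, W) uint16 pansharpened multispectral image
# img8 = np.stack([to_8bit(c) for c in img16], axis=0)
```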
Satellite images are generally quite large (hundreds of thousands of pixels on a side) and must be broken up into smaller images in order to be processed by a deep learning model, a process sometimes called "tiling." Each of the large satellite images (and matched labels) was divided into 1024 pixel x 1024 pixel tiles without any overlap, producing 27,021 total images or "tiles". All tiles from the same parent satellite image are kept together during the generation of training and validation splits to avoid cross contamination that could artificially inflate accuracies3. An iterative approach was used to divide the satellite images into training, validation, and hold-out (test) data splits to ensure that each data split includes imagery with consistent metadata properties: location (Atlanta, Jacksonville, Omaha), view angle, and azimuth angle. The final data splits included 21,776 tiles in training (80%), 2,102 tiles in validation (8%), and 3,142 tiles in the unseen, hold-out test split (12%).

Footnote 3: We note this is different from the data split divisions within US3D, which mixes tiles from the same parent image between training, validation, and testing.

Models with near-infrared (NIR) input use the WorldView NIR2 channel, which covers 860-1040nm. The NIR2 band is sensitive to vegetation but is less affected by atmospheric absorption when compared with the NIR1 band.

## Appendix B Multispectral segmentation model training

All models are trained in PyTorch [40] using distributed data parallel with an effective batch size of 32 (8 GPUs \(\times\) 4 datapoints per GPU), using a Dice loss function [41] and the Adam optimizer [42]. We use a "reduce on plateau" learning rate scheduler, with an initial learning rate of \(10^{-3}\), a minimum learning rate of \(10^{-6}\), and a learning rate drop by a factor of 10 when 10 epochs elapse without a 1% improvement in validation intersection-over-union (IoU). With this learning rate scheduler, the models typically trained for 150-180 epochs before reaching the minimum learning rate.

## Appendix C Physically realizable perturbation examples

As discussed above, we extend the snow and fog common corruptions of ImageNet-C [28] to multispectral data. We modify the code available at github.com/hendrycks/robustness. Fig. 4 and 5 show examples for snow and fog on US3D. For these figures, we show unrealistic and realistic perturbations for the NIR channel. The RGB channel corruptions are already realistic, and thus remain identical to the original ImageNet-C implementation. "Perturbed unrealistic" refers to applying the ImageNet-C RGB corruptions to the NIR band without taking physical constraints into consideration. "Perturbed realistic" refers to our modifications that bring the NIR corruptions in line with real-world observations.

Figure 4: Comparison of our implementation (perturbed realistic) vs. [28] (perturbed unrealistic) perturbations for snow.

Figure 5: Comparison of our implementation (perturbed realistic) vs. [28] (perturbed unrealistic) perturbations for fog/haze.
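To make the NIR-aware scaling concrete, here is a small sketch of how a snow layer could be composited at full strength into the RGB channels and at 60% brightness into NIR, following the reflectance argument in the natural robustness section. The additive compositing rule is a simplification of the actual ImageNet-C snow generator and is our own assumption.

```python
import numpy as np

NIR_SNOW_REFLECTANCE = 0.6  # fresh snow reflects ~1.0 in RGB but ~0.6 in NIR

def add_snow_multispectral(rgb: np.ndarray, nir: np.ndarray,
                           snow_layer: np.ndarray):
    """Composite one snow layer (values in [0, 1], e.g. from the ImageNet-C
    generator) into a (3, H, W) RGB array at full strength and into an
    (H, W) NIR array at reduced brightness, then clip to valid range."""
    rgb_out = np.clip(rgb + snow_layer[None, ...], 0.0, 1.0)
    nir_out = np.clip(nir + NIR_SNOW_REFLECTANCE * snow_layer, 0.0, 1.0)
    return rgb_out, nir_out
```

The same per-band attenuation idea carries over to the fog/haze corruption, where the haze severity rather than the brightness is reduced in the NIR channel.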
2306.09058
Subcubic graphs of large treewidth do not have the edge-Erdős-Pósa property
We show that subcubic graphs of treewidth at least $2500$ do not have the edge-Erd\H{o}s-P\'{o}sa property.
Raphael Steck, Henning Bruhn
2023-06-15T11:40:47Z
http://arxiv.org/abs/2306.09058v1
# Subcubic graphs of large treewidth do not have the edge-Erdos-Posa property

###### Abstract

We show that subcubic graphs of treewidth at least 2500 do not have the edge-Erdos-Posa property.

## 1 Introduction

Menger's theorem provides a strong duality between packing and covering for paths: In every graph \(G\), there are either \(k\) disjoint paths between predefined sets \(A,B\subseteq V(G)\), or there is a set \(X\subseteq V(G)\) of size at most \(k\) such that \(G-X\) contains no \(A\)-\(B\) path. Relaxed versions of this result exist for many sets of graphs, and we call this duality the _Erdos-Posa property_. In this article, we focus on the edge variant: A class \(\mathcal{F}\) has the _edge-Erdos-Posa property_ if there exists a function \(f:\mathbb{Z}_{+}\to\mathbb{R}\) such that for every graph \(G\) and every integer \(k\), there are \(k\) edge-disjoint subgraphs of \(G\) each isomorphic to some graph in \(\mathcal{F}\) or there is an edge set \(X\subseteq E(G)\) of size at most \(f(k)\) meeting all subgraphs of \(G\) isomorphic to some graph in \(\mathcal{F}\). The edge set \(X\) is called the _hitting set_. If we replace edges with vertices in the above definition, that is, if we look for a vertex hitting set or vertex-disjoint subgraphs, then we obtain the _vertex-Erdos-Posa property_.

The class \(\mathcal{F}\) that is studied in this article arises from taking minors: For a fixed graph \(H\), we define the set \(\mathcal{F}_{H}=\{G\,:\,H\text{ is a minor of }G\}\). Any graph \(G\in\mathcal{F}_{H}\) is called an _\(H\)-expansion_. The vertex-Erdos-Posa property for \(\mathcal{F}_{H}\) is well understood: Robertson and Seymour [7] proved that the class \(\mathcal{F}_{H}\) has the vertex-Erdos-Posa property if and only if \(H\) is planar. While both the vertex- and the edge-Erdos-Posa property are false for all non-planar graphs \(H\) (see for example [6]), the situation is much more mysterious for planar graphs. For some simple planar graphs \(H\) such as long cycles [2] or \(K_{4}\) [1], \(\mathcal{F}_{H}\) still has the edge-Erdos-Posa property, while for some others, for example subcubic trees of large pathwidth [3], it does not. For most planar graphs, it is unknown whether the edge-Erdos-Posa property holds or not. For an overview of results on the Erdos-Posa property, we recommend the website of Jean-Florent Raymond [5].

We partially fill this gap by proving that for every subcubic graph \(H\) of large treewidth, \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property. Note that while it was known that large walls do not have the edge-Erdos-Posa property (claimed without proof in [3]), this does not imply our main result as, unlike for the vertex-Erdos-Posa property, it is not known whether the edge variant is closed under taking minors.

**Theorem 1**.: _For subcubic graphs \(H\) of treewidth at least \(2500\), \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property._

To prove Theorem 1, we only use treewidth to deduce that \(H\) contains a large wall, for which we use the linear bound provided by Grigoriev [4]. So in fact, we show the following theorem:

**Theorem 2**.: _For subcubic graphs \(H\) that contain a wall of size \(250\times 250\), \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property._

There is room for improvement in the theorem. Requiring the graph \(H\) to be subcubic simplifies the argument considerably, but we suspect it is not necessary.
Moreover, we believe that with a more careful but somewhat tedious analysis the wall size could be dropped to about \(30\times 30\). Still, this seems unlikely to be close to be best possible. Indeed, walls of size \(6\times 4\) do not have the edge-Erdos-Posa property [8]. (Whether graphs containing \(6\times 4\)-walls have the property is not known.) ## 2 Construction There is only one known tool to prove that a set \(\mathcal{F}_{H}\) of \(H\)-expansions that satisfies the vertex-Erdos-Posa property does not have the edge-Erdos-Posa property: The _Heinlein Wall_, after [3], shown at size \(5\) in Figure 1. For any integer \(n\in\mathbb{Z}_{+}\), we define \([n]=\{1,\ldots,n\}\). A Heinlein Wall \(W\) of size \(r\in\mathbb{Z}_{+}\) is the graph consisting of the following: * For every \(j\in[r]\), let \(P^{j}=u_{1}^{j}\ldots u_{2r}^{j}\) be a path of length \(2r-1\) and for \(j\in\{0\}\cup[r]\), let \(z_{j}\) be a vertex. Moreover, let \(a^{*}\), \(b^{*}\) be two further vertices. * For every \(i,j\in[r]\), add the edges \(z_{j-1}u_{2i-1}^{j},z_{j}u_{2i}^{j},z_{i-1}z_{i},a^{*}u_{1}^{j}\) and \(b^{*}u_{2r}^{j}\). We define \(c^{*}=z_{0}\) and \(d^{*}=z_{r}\). We call the vertices \(a^{*},b^{*},c^{*}\) and \(d^{*}\)_terminals_ of \(W\), while the vertices \(z_{j},j\in\{0\}\cup[r]\) are called _bottleneck vertices_. Additionally, we define \(W^{0}=W-\{a^{*},b^{*},c^{*},d^{*}\}\). An _(\(a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*}\)) linkage_ is the vertex-disjoint union of an \(a^{*}\)-\(b^{*}\) path with a \(c^{*}\)-\(d^{*}\) path. We need an easy observation: Figure 1: A Heinlein Wall of size \(5\). **Lemma 3** (Bruhn et al [3]).: _There are no two edge-disjoint (\(a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*}\)) linkages in a Heinlein Wall._ For \(m,n\in\mathbb{Z}_{+}\), an _elementary grid_ of size \(m\times n\) is a graph with vertices \(v_{i,j}\) for all \(i\in[m],j\in[n]\) and edges \(v_{i,j}v_{i+1,j}\)\(\forall i\in[m-1],j\in[n]\) as well as \(v_{i,j}v_{i,j+1}\)\(\forall i\in[m],j\in[n-1]\). A _grid_ is a subdivision of an elementary grid. A wall is the subcubic variant of a grid. We define an elementary wall as an elementary grid with every second vertical edge removed. That is, an elementary wall of size \(m\times n\) is an elementary grid of size \((m+1)\times(2n+2)\) with every edge \(v_{i,2j}v_{i+1,2j}\,,i\in[m],i\) is odd, \(j\in[n+1]\) and every edge \(v_{i,2j-1}v_{i+1,2j-1}\,,i\in[m],i\) is even, \(j\in[n+1]\) being removed. Additionally, we remove all vertices of degree \(1\) and their incident edges. The \(i^{th}\)_row_ of an elementary wall is the induced subgraph on \(v_{i,1},\ldots,v_{i,2n+2}\) for \(i\in[m+1]\) (ignore the vertices that have been removed); this is a path. There is a set of exactly \(n+1\) disjoint paths between the first row and the \((m+1)^{\text{th}}\) row. These paths are the _columns_ of an elementary wall. The _bricks_ of an elementary wall are its \(6\)-cycles. (See Figure 2) A _wall_ is defined as the subdivision of an elementary wall. However, elementary walls have some vertices of degree \(2\) on the outer face of the wall. As we never want to distinguish between graphs that only differ by subdivision of edges, we avoid some annoying technicalities by slightly modifying the above definition. We define a _wall'_ of size \(m\times n\) as the subdivision of an elementary wall of size \(m\times n\) with all degree \(2\) vertices being contracted. (See Figure 3) Throughout, we will use this slightly modified definition of a wall. 
The key properties of a wall, such as large treewidth and planarity, carry over to a wall'. The definition of rows, columns and bricks in an elementary wall carries over to a wall' in a natural way (with some truncation of the first and last row and column). For brevity of notation, we define an \(n\)_-wall'_ as a wall' of size \(n\times n\). The _outercycle_ of a wall' \(W\) is the cycle \(C\) contained in \(W\) that contains the first and last row and first and last column. Two vertices \(u,v\) of \(W\) are \(d\)_-apart in \(W\)_ if every \(u\)-\(v\) path in \(W\), every \(u\)-\(C\) path and every \(v\)-\(C\) path in \(W\) intersects at least \(d+1\) rows or at least \(d+1\) columns of \(W\). We extend the definition to bricks by saying that two bricks \(B_{1},B_{2}\) of \(W\) are \(d\)_-apart in \(W\)_ if every pair of one vertex from \(B_{1}\) and one vertex from \(B_{2}\) is \(d\)-apart in \(W\). Note that if \(v_{1},v_{2}\) are \(d\)-apart and if \(v_{1}\) lies in the brick \(B_{1}\), and \(v_{2}\) in the brick \(B_{2}\), then \(B_{1},B_{2}\) are \((d-2)\)-apart. Note, furthermore, that if \(W\) is part of a planar graph \(G\) then there are no shortcuts in \(G\). That is, if \(u,v\) are \(d\)-apart in \(W\) then there is also no \(u\)-\(v\) path _in \(G\)_ that meets fewer than \(d+1\) rows and columns of \(W\), and the same holds true for paths from \(u\) or \(v\) to the outercycle. To apply Menger's theorem, for \(n\in\mathbb{Z}_{+}\) and vertex sets \(A\) and \(B\) in a graph \(G\), we define an \(n\)_-separator_ as a vertex set \(X\subseteq V(G)\) of size \(|X|\leq n\) such that there is no \(A\)-\(B\) path in \(G-X\). We will usually apply this for one side being a single vertex, that is \(A=\{a\}\), in which case we additionally require that \(a\not\in X\).
## 3 Large treewidth results
How do we prove our main result? Let \(H\) be a planar subcubic graph of treewidth \(\geq 2500\). Given a size \(r\) of a hypothetical hitting set, we show that there is a graph \(Z\) that neither contains two edge-disjoint subdivisions of \(H\), nor admits an edge set \(U\) of size \(|U|\leq r\) such that \(Z-U\) is devoid of subdivisions of \(H\). That then proves that \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property. Since \(H\) has treewidth \(\geq 2500\), it contains a grid-minor of size at least \(501\times 501\) [4] and thus a wall' \(M\) of size at least \(250\times 250\). We pick two edges \(e_{1}\) and \(e_{2}\) of \(M\) such that both of them are incident with a branch vertex of degree \(3\) of \(M\) and such that _every pair of one endvertex from \(e_{1}\) and one endvertex from \(e_{2}\) is \(70\)-apart in \(M\)._ (1) As \(H\) is planar and \(M\) is large enough, it is possible to find such edges \(e_{1},e_{2}\). We denote the endvertex of \(e_{1}\) that is also a branch vertex of degree \(3\) of \(M\) by \(a\), and the other endvertex by \(b\) (which may or may not be a branch vertex, too). For \(e_{2}\), we call its endvertices \(c\) and \(d\), where \(c\) is chosen to be a branch vertex of degree \(3\) of \(M\).
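Since the construction of \(Z\) below glues a Heinlein Wall into \(H\), we also include a small sketch of the Heinlein Wall definition from Section 2 (again an editorial illustration, not part of the paper; `networkx` is assumed, the vertex labels are ours, and the treewidth routine only returns a heuristic upper bound).

```python
# Illustrative sketch (not from the paper): the Heinlein Wall of size r.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def heinlein_wall(r):
    G = nx.Graph()
    for j in range(1, r + 1):
        nx.add_path(G, [("u", j, i) for i in range(1, 2 * r + 1)])  # P^j = u^j_1 ... u^j_{2r}
        G.add_edge("a*", ("u", j, 1))
        G.add_edge("b*", ("u", j, 2 * r))
        for i in range(1, r + 1):
            G.add_edge(("z", j - 1), ("u", j, 2 * i - 1))  # z_{j-1} u^j_{2i-1}
            G.add_edge(("z", j), ("u", j, 2 * i))          # z_j u^j_{2i}
    nx.add_path(G, [("z", j) for j in range(r + 1)])       # z_0 z_1 ... z_r
    return G  # terminals: a*, b*, c* = ("z", 0), d* = ("z", r)

W = heinlein_wall(5)                # the wall of Figure 1
print(W.number_of_nodes())          # 2 r^2 + r + 3 = 58 vertices for r = 5
width, _ = treewidth_min_degree(W)  # an upper bound on the treewidth
print(width)                        # stays small, consistent with the bound used in Lemma 6 below
```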
Given a positive integer \(r\), we define \(Z\) as follows:
* start with a copy of \(H-\{e_{1},e_{2}\}\), where we denote the copy of a vertex \(h\) of \(H\) by \(h^{*}\);
* replace every edge \(g^{*}h^{*}\) in the copy of \(H-\{e_{1},e_{2}\}\) by \(2r\) internally disjoint \(g^{*}\)-\(h^{*}\) paths of length \(2\); and
* add a Heinlein Wall \(W\) of size \(2r\), where the terminals \(a^{*},b^{*}\) of \(W\) are identified with the endvertices of \(e_{1}\), and where the terminals \(c^{*},d^{*}\) are identified with the endvertices of \(e_{2}\).
We extend the mapping \(V(H)\to V(Z)\) defined by \(h\mapsto h^{*}\) to sets of vertices in \(H-\{e_{1},e_{2}\}\): for a vertex set \(J\subseteq V(H)\), we set \(J^{*}=\{h^{*}:h\in J\}\). To better distinguish between \(H\) and \(Z\), we use the first half of the alphabet (\(a\)-\(m\)) for vertices, vertex sets and graphs that are part of \(H\), while the second half of the alphabet (\(o\)-\(z\)) is reserved for objects belonging to \(Z\). Starred letters of the first half (\(a^{*}\)-\(m^{*}\)) are used for vertices and objects in \(Z\) that have counterparts in \(H\). We define \(M^{*}\) to be an arbitrary subdivision of \(M-\{e_{1},e_{2}\}\) in \(Z\) such that the set of its branch vertices is precisely \((V(M))^{*}\) and such that each subdivided edge of \(M^{*}\) consists of one of the \(2r\) paths originating from multiplying the corresponding edge of \(M-\{e_{1},e_{2}\}\). Note that \(M^{*}\) is a wall' except for \(e_{1},e_{2}\), and note that \(M^{*}\) is disjoint from \(W^{0}\). Let us first prove the first half of Theorem 1: there is no small edge hitting set in \(Z\). **Lemma 4**.: _For every edge set \(U\) in \(Z\) of size \(|U|\leq r\), the graph \(Z-U\) contains a subdivision of \(H\)._ Proof.: As for every edge \(gh\in E(H)\setminus\{e_{1},e_{2}\}\), the vertices \(g^{*}\) and \(h^{*}\) are linked by \(2r\) internally disjoint paths, we may easily find a subdivision of \(H-\{e_{1},e_{2}\}\) in \(Z-U\). Moreover, \(U\) is too small to meet all (\(a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*}\)) linkages in the Heinlein Wall \(W\). Thus, the subdivision of \(H-\{e_{1},e_{2}\}\) can be extended to one of \(H\) in \(Z-U\). The harder part of Theorem 1 is to prove that there can be no two edge-disjoint subdivisions of \(H\) in \(Z\). We will prove: **Lemma 5**.: _Every subdivision of \(H\) in \(Z\) contains an (\(a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*}\)) linkage in \(W\)._ Recall that, by Lemma 3, any two such linkages share an edge. Thus, once we have shown the above lemma, the proof of the theorem is finished. When we talk about a subdivision of \(H\) in \(Z\), we implicitly assume that an _embedding_ of \(H\) into \(Z\) is fixed: a function \(\Phi\) that maps every vertex of \(H\) to the corresponding branch vertex in \(Z\), and that maps every edge of \(H\) to the corresponding subdivided edge in \(Z\). We will extend such an embedding \(\Phi\) to subgraphs of \(H\) in the obvious way. In particular, \(\Phi(H)\) then denotes the subdivision of \(H\) in \(Z\).
Figure 4: Construction of the counterexample graph \(Z\).
For the remainder of this article, we assume \(\Phi\) to be a fixed embedding of \(H\) in \(Z\). We will prove Lemma 5 for this fixed embedding of \(H\). The main difficulty is that we do not know how \(H\) embeds in \(Z\). In order to get some control on what is mapped where by \(\Phi\), we concentrate on a set of vertices that are well connected to large walls'.
We will later see that only a small number of them can be mapped into \(W\). We define a _\(3\)-fan_ from a vertex \(v\) to a set \(S\) as the union of three non-trivial paths from \(v\) to \(S\) that are disjoint except for their first vertex \(v\). Set \[B=\{h\in V(H)\,:\,\text{there is a $10$-wall' $M^{\prime}$ and a $3$-fan from $h$ to the branch vertices of degree $3$ of $M^{\prime}$}\}.\] Note that \(M\) is also a \(10\)-wall'. **Lemma 6**.: _No \(10\)-wall' can have all of its branch vertices contained in \(W\)._ Proof.: It is easy to check that a Heinlein Wall has pathwidth at most \(5\), and thus also treewidth at most \(5\). Therefore, it cannot contain a \(10\)-wall' since the latter has treewidth at least \(10\). As we are only ever interested in branch vertices of degree \(3\), we will call those _proper branch vertices_. Moreover, a _proper branch vertex_ of \(M^{*}\) is the image under the \(*\)-map of a proper branch vertex of \(M\). Note that every proper branch vertex of every \(10\)-wall' \(M^{\prime}\) is in \(B\): Indeed, every proper branch vertex in \(M^{\prime}\) is connected to its three adjacent proper branch vertices of \(M^{\prime}\), and those paths form the desired \(3\)-fan. In particular, this implies that every proper branch vertex of \(M\) is in \(B\). Recall that by choice of \(e_{1}\) and \(e_{2}\), this includes \(a\) and \(c\). For \(b\) and \(d\), we do not know, but the following lemma helps to deal with them. **Lemma 7**.: _Let \(h^{*}\in V(Z-W)\) and let \(T\subseteq Z\) be a \(3\)-fan from \(h^{*}\) to the union of the proper branch vertices of \(M^{*}\) with \(\{b^{*},d^{*}\}\). Then there is also a \(3\)-fan from \(h\) to proper branch vertices of \(M\) in \(H\)._ Proof.: To prove the lemma, we need to show two things: First, we need to find a \(3\)-fan that is disjoint from \(W^{0}\) so we can pull it back to \(H\). Second, we need to get rid of \(b\) and \(d\) and find a \(3\)-fan that connects \(h\) with proper branch vertices of \(M\) only. Since the terminals \(a^{*}\) and \(c^{*}\) of \(W\) are proper branch vertices of \(M^{*}\) and since \(h^{*}\not\in V(W)\), we can shorten the \(3\)-fan \(T\), if necessary, to obtain a \(3\)-fan that is disjoint from \(W^{0}\) but still connects \(h^{*}\) with proper branch vertices of \(M^{*}\) or with \(b^{*}\) or \(d^{*}\). Since this \(3\)-fan is disjoint from \(W^{0}\), we can find a corresponding \(3\)-fan \(F\) in \(H\) that connects \(h\) with proper branch vertices of \(M\) or \(b\) or \(d\). By Menger's theorem, we may assume that \(h\) can be separated in \(H\) from the proper branch vertices of \(M\) by a set \(K\subseteq V(H)\setminus\{h\}\) of at most two vertices; otherwise we are done. In particular, the \(3\)-fan \(F\) has to contain at least one of \(b\) and \(d\); let us say it contains \(b\). Moreover, the \(h\)-\(b\) path \(L_{b}\) in \(F\) cannot meet \(K\) as \(K\) already has to meet the two other paths in the \(3\)-fan \(F\). We are done if \(b\) is a proper branch vertex itself. Thus we may assume that there is a unique subdivided edge \(E\) of \(M\) that contains \(b\) in its interior. One endvertex of \(E\) is \(a\). The set \(K\) also has to separate \(b\) from the endvertices of \(E\) (as we can reach \(b\) from \(h\) via \(L_{b}\) without meeting \(K\)), which implies \(K\subseteq V(E)\), and \(a\in K\) as \(b\) is a neighbour of \(a\). This implies \(a\in V(F)\).
Now consider the \(h\)-\(a\) path \(L_{a}\) in \(F\), and observe that \(L_{a}\) is internally disjoint from \(K\) as \(a\in K\). Furthermore, since \(b\not\in V(L_{a})\), the penultimate vertex of \(L_{a}\) is a neighbour \(g\neq b\) of \(a\). Then, as \(H\) is subcubic, \(g\) lies on a subdivided edge \(E^{\prime}\) of \(M\) that is not \(E\). By extending \(hL_{a}g\) along \(E^{\prime}\) to the endvertex of \(E^{\prime}\) that is not \(a\), we obtain a path from \(h\) to a proper branch vertex of \(M\) that avoids \(E\). Since \(K\subseteq V(E)\), that path also avoids \(K\), a contradiction. The next lemma gives us control over \(\Phi\), at least for the set \(B\). **Lemma 8**.: \(\Phi(B)\subseteq B^{*}\cup V(W)\)_._ Proof.: Consider a vertex \(z\in\Phi(B)\setminus V(W)\). First observe that, by definition of \(B\), every vertex in \(\Phi(B)\) has degree at least 3 in \(Z\). Thus, for \(z\) there is a vertex \(h\) of \(H\) with \(z=h^{*}\). We will show that \(h\in B\), which then implies \(z=h^{*}\in B^{*}\). As \(h^{*}\in\Phi(B)\), there is a vertex \(g\in B\) with \(h^{*}=\Phi(g)\). Since \(g\in B\), there is a 3-fan in \(\Phi(H)\) connecting \(h^{*}\) to the set of proper branch vertices of a 10-wall' \(R\subseteq\Phi(H)\). We define \(O\) to be the union of this fan and \(R\). If \(O\) is disjoint from the proper branch vertices of \(M^{*}\) and also disjoint from \(b^{*}\) and \(d^{*}\), then it is also disjoint from \(W^{0}\) and we can find a corresponding wall' and fan in \(H\), implying that \(h\in B\). (When pulling back from \(Z\) to \(H\), paths between proper branch vertices of \(R\) can become shorter, so that the resulting graph in \(H\) may be missing some of the required degree 2 vertices to be considered a wall; this is precisely the reason why we make do with walls'.) Therefore, we conclude that \(O\) contains some proper branch vertex of \(M^{*}\) (and thus potentially also a part of \(W^{0}\)) or that \(O\) contains \(b^{*}\) or \(d^{*}\). Next, suppose that there is no 2-separator that separates \(h^{*}\) from all proper branch vertices of \(M^{*}\) and from \(b^{*}\) and \(d^{*}\) in \(Z\). By Menger's theorem, there is thus a 3-fan from \(h^{*}\) to the proper branch vertices of \(M^{*}\) or \(b^{*}\) or \(d^{*}\). We apply Lemma 7 to obtain a 3-fan in \(H\) from \(h\) to proper branch vertices of \(M\) only, which proves \(h\in B\). We conclude that there is a 2-separator \(\{x,y\}\subseteq V(Z-h^{*})\) that separates \(h^{*}\) from all proper branch vertices of \(M^{*}\) and all terminals of \(W\). As every vertex of degree 3 in \(O\) is connected via three internally disjoint paths to \(h^{*}\), we deduce that there is an \(x\)-\(y\) path \(P\) in \(O\) that contains all vertices that are separated by \(\{x,y\}\) from \(h^{*}\) in \(O\) and such that all interior vertices of \(P\) have degree 2 in \(O\). As \(O\) contains a proper branch vertex of \(M^{*}\) or a terminal, the path \(xPy\) must contain a vertex from \((V(M))^{*}\). Pick \(p,q\) to be the first respectively the last vertex of \((V(M))^{*}\) on \(P\), and choose a \(p\)-\(q\) path \(Q\) in \(M^{*}\). Note that \(Q\) is disjoint from \(O-pPq\) since \(O\cap M^{*}\subseteq pPq\). Moreover, note that \(Q\) is disjoint from \(W^{0}\) as \(M^{*}\) is disjoint from \(W^{0}\). Replacing \(pPq\) by \(Q\), we obtain a new graph \(O^{\prime}\) that is the union of a 10-wall' \(R^{\prime}\) with a 3-fan from \(h^{*}\) to the branch vertices of \(R^{\prime}\) that is disjoint from \(W^{0}\). 
We then also find in \(H\) a 3-fan from \(h\) to the branch vertices of a 10-wall', which again leads to \(h\in B\). With the next two lemmas we show that, with only a few exceptions, a vertex in \(B\) is mapped to a vertex in \(B^{*}\) under \(\Phi\). **Lemma 9**.: \(|B^{*}\setminus\Phi(B)|=|\Phi(B)\cap(V(W)\setminus B^{*})|\). Proof.: By Lemma 8, we have \[|\Phi(B)\cap B^{*}|+|\Phi(B)\cap(V(W)\setminus B^{*})|=|\Phi(B)|=|B|=|B^{*}|=|B^{*}\cap\Phi(B)|+|B^{*}\setminus\Phi(B)|.\] **Lemma 10**.: \(|B^{*}\setminus\Phi(B)|\leq 52\). Proof.: By Lemma 9, it suffices to show that \(|\Phi(B)\cap(V(W)\setminus B^{*})|\leq 52\). We show that \(|\Phi(B)\cap V(W^{0})|\leq 48\), which proves the above claim since \(V(W)\setminus B^{*}\) may differ from \(V(W^{0})\) only in the \(4\) terminals of \(W\). Let \(z\in\Phi(B)\cap V(W^{0})\), and let \(h\) be such that \(z=\Phi(h)\). By definition of \(B\), \(h\) has a \(3\)-fan to proper branch vertices of a \(10\)-wall' \(M^{\prime}\) in \(H\). By Lemma 6, some proper branch vertex of \(M^{\prime}\) needs to be mapped outside \(W\) under \(\Phi\). Then, however, there is a \(3\)-fan \(T_{z}\subseteq\Phi(H)\) from \(z\) to a set of vertices in \(Z-W\). This \(3\)-fan must contain at least three terminals of \(W\), and thus at least one of \(a^{*}\) and \(b^{*}\). Since \(z\in V(W^{0})\), it lies in one or possibly two blocks of \(W-\{a^{*},b^{*}\}\). We say that a block \(O\) of \(W-\{a^{*},b^{*}\}\) _owns_ a vertex \(z\in\Phi(B)\cap V(W^{0})\) if \(z\) is incident in \(\Phi(H)\) with at least two edges of \(O\). As each \(z\in\Phi(B)\cap V(W^{0})\) has degree \(3\), every vertex in \(\Phi(B)\cap V(W^{0})\) is owned by exactly one block of \(W-\{a^{*},b^{*}\}\). Now, assume that the block \(O\) owns \(z\). If \(z\) is not a bottleneck vertex, then the three paths in \(T_{z}\) cannot all leave \(O\) through its two bottleneck vertices: one such path traverses an edge between \(O\) and \(a^{*}\) or \(b^{*}\). The same happens if \(z\) is a bottleneck vertex: then the two paths in \(T_{z}\) with an edge in \(O\) cannot both leave \(O\) through the remaining bottleneck vertex. Therefore, whenever a block \(O\) owns a vertex in \(\Phi(B)\), there must be an edge between \(O\) and \(\{a^{*},b^{*}\}\) in \(\Phi(H)\). As \(a^{*},b^{*}\) both have degree at most \(3\) in \(\Phi(H)\), at most six blocks may own vertices in \(\Phi(B)\). How many vertices in \(\Phi(B)\) may be owned by a block \(O\) of \(W-\{a^{*},b^{*}\}\)? Every \(z\in\Phi(B)\cap V(W^{0})\) that is not a bottleneck vertex must have a bottleneck vertex as its neighbour in \(\Phi(H)\) since \(z\) has degree \(3\), see Figure 1. As each bottleneck vertex has degree at most \(3\) in \(\Phi(H)\), we conclude that each block contains at most six non-bottleneck vertices of \(\Phi(B)\). Together with the two bottleneck vertices, we obtain at most \(8\) vertices of \(\Phi(B)\) per block. As at most six blocks may own vertices in \(\Phi(B)\), we obtain at most \(48\) vertices in blocks of \(W-\{a^{*},b^{*}\}\). Together with the terminals, this yields \(|\Phi(B)\cap(V(W)\setminus B^{*})|\leq 52\). Define \(B_{M}\) to be the set of all vertices in \(H\) that send a \(3\)-fan to proper branch vertices of \(M\). We note that \(B_{M}\) contains all proper branch vertices of \(M\), and \(B_{M}\subseteq B\). **Lemma 11**.: _Let \(h^{*}\) be a vertex in \(Z-W\) with a \(3\)-fan \(T\subseteq Z\) to vertices in \(B_{M}^{*}\).
Then \(h\in B_{M}\)._ Proof.: Suppose there is a set \(X\) of at most two vertices that separates \(h^{*}\) from all proper branch vertices of \(M^{*}\) in \(Z\). Because \(X\) cannot separate \(h^{*}\) from all three endvertices of \(T\), there exists a path \(P\) in \(Z-X\) between \(h^{*}\) and some vertex \(g^{*}\in B_{M}^{*}\). As there is, by definition, a \(3\)-fan from \(g\) to proper branch vertices of \(M\) in \(H\), there is also a \(3\)-fan from \(g^{*}\) to proper branch vertices of \(M^{*}\) in \(Z\), and then, as \(|X|\leq 2\), also a path \(Q\) from \(g^{*}\) to a proper branch vertex of \(M^{*}\) in \(Z-X\). However, \(P\cup Q\) is disjoint from \(X\) but contains a path from \(h^{*}\) to a proper branch vertex of \(M^{*}\), which is impossible. Therefore, by Menger's theorem, there is a \(3\)-fan \(T^{\prime}\) from \(h^{*}\) to proper branch vertices of \(M^{*}\). By Lemma 7, we obtain \(h\in B_{M}\). In conjunction with Lemma 10, the next lemma will be used to repair \(M\), that is, to prove that \(\Phi(H)\) contains most proper branch vertices of \(M^{*}\) and sufficient subdivided edges in between them. **Lemma 12**.: _Let \(g,h\in B_{M}\), and let \(L\) be a \(g\)-\(h\) path in \(H-e_{1}-e_{2}\). Let \(P\) be a \(g^{*}\)-\(h^{*}\) path in \(Z\) such that \(V(P)\cap(V(H))^{*}=(V(L))^{*}\) and such that \(P\) is disjoint from \(B^{*}\setminus\Phi(B)\). For every vertex \(i^{*}\in V(P)\) that is a terminal, we furthermore require that \(i^{*}\) has degree \(2\) in \(\Phi(H)-W^{0}\). Let \(\mathcal{S}\) be the set of all \(B^{*}_{M}\)-paths in \(Z\) that are disjoint from the interior of \(P\) and that have at most one endvertex with \(P\) in common. Then there is a \(g^{*}\)-\(h^{*}\) path \(Q\) in \(\Phi(H)-W^{0}\) that is internally disjoint from every path in \(\mathcal{S}\)._ Proof.: We do induction on the number \(n\) of internal vertices of \(P\) that lie in \(B^{*}_{M}\). Because it is shorter, we start with the induction step. Thus, assume that \(n>0\), ie that \(P\) contains an internal vertex \(k^{*}\in B^{*}_{M}\). We split the path \(P\) into \(P_{1}=g^{*}Pk^{*}\) and \(P_{2}=k^{*}Ph^{*}\), and observe that both paths have fewer than \(n\) internal vertices in \(B^{*}_{M}\). As subpaths of \(P\), the paths \(P_{1}\) and \(P_{2}\) still satisfy the conditions of the lemma. Now induction yields a \(g^{*}\)-\(k^{*}\) path \(Q_{1}\subseteq\Phi(H)-W^{0}\) and a \(k^{*}\)-\(h^{*}\) path \(Q_{2}\subseteq\Phi(H)-W^{0}\). Let \(Q\) be a \(g^{*}\)-\(h^{*}\) path contained in \(Q_{1}\cup Q_{2}\subseteq\Phi(H)-W^{0}\). Consider a path \(S\in\mathcal{S}\), and suppose that \(S\) meets \(Q\) in an internal vertex of \(Q\). We first note that \(S\) cannot contain \(k^{*}\) as any path in \(\mathcal{S}\) is disjoint from the interior of \(P\). Thus, \(S\) meets an internal vertex of \(Q_{1}\) or of \(Q_{2}\), say of \(Q_{1}\). This, however, is impossible as \(S\) is disjoint from the interior of \(P_{1}\), and may have at most one endvertex with \(P_{1}\) in common. Therefore, \(Q\) is as desired, and we have proved the induction step. It remains to establish the induction start, where \(n=0\). This means: _No internal vertex of \(P\) lies in \(B^{*}_{M}\)._ (2) As \(P\) is disjoint from \(B^{*}\setminus\Phi(B)\), we get \(g^{*}\in\Phi(B)\). Thus, there is a \(10\)-wall' \(R\) and a \(3\)-fan from \(g^{*}\) to proper branch vertices of \(R\) in \(\Phi(H)\). We denote by \(O\) the union of \(R\) and this \(3\)-fan. Note that \(O\) is a subgraph of \(\Phi(H)\).
Let us prove that: _For any neighbour \(g_{0}\) of \(g\) in \(H-e_{1}-e_{2}\), the vertex \(g_{0}^{*}\) lies in \(O\)._ (3) Indeed, since \(H\) is subcubic and since \(g^{*}\) has degree \(3\) in \(O\), it follows that for every neighbour \(g_{0}\) of \(g\) in \(H\), we have \(g_{0}^{*}\in V(O)\), unless \(g^{*}\) is a terminal. Then, since \(g\in B_{M}\), \(g\) has degree \(2\) in \(H-e_{1}-e_{2}\), and by assumption, \(g^{*}\) has degree \(2\) in \(\Phi(H)-W^{0}\): again, for every neighbour \(g_{0}\) of \(g\) in \(H-e_{1}-e_{2}\), the vertex \(g_{0}^{*}\) lies in \(O\). Let \(g_{1}\) be the neighbour of \(g\) in \(H-e_{1}-e_{2}\) that lies in \(L\), the \(g\)-\(h\) path in \(H\). It now follows from (3) that: _For the neighbour \(g_{1}\) of \(g\) in \(L\) it holds that \(g_{1}^{*}\in V(O\cap P)\)._ (4) Among all vertices in \(O\cap P\), pick \(k^{*}\) to be the one closest to \(h^{*}\) on \(P\). Note that since \(g_{1}^{*}\in V(P)\), it is a candidate for \(k^{*}\). Thus we immediately have \(k^{*}\neq g^{*}\). Next, we claim: \[k^{*}\in B_{M}^{*} \tag{5}\] Suppose not. In particular, \(k^{*}\neq h^{*}\) as \(h^{*}\in B_{M}^{*}\). By Lemma 11, there are two vertices \(x_{1},x_{2}\neq k^{*}\) that separate \(k^{*}\) from \(B_{M}^{*}\). As \(k^{*}Ph^{*}\) is a \(k^{*}\)-\(B_{M}^{*}\) path, one of \(x_{1},x_{2}\) lies in \(k^{*}Ph^{*}\), say \(x_{1}\). By choice of \(k^{*}\), the subpath \(k^{*}Ph^{*}\) meets \(O\) only in \(k^{*}\), which implies that \(x_{1}\not\in V(O)\). In \(O\) there are two internally disjoint \(k^{*}\)-\(g^{*}\) paths. Since \(x_{1}\not\in V(O)\), one of them is disjoint from \(x_{1},x_{2}\) unless \(x_{2}=g^{*}\). We thus conclude \(x_{2}=g^{*}\). Next, as \(g^{*}\in B_{M}^{*}\) by assumption, it follows that there exists a 3-fan \(T\) from \(g^{*}\) to \(B_{M}^{*}\) in \(Z\). Since \(H\) is subcubic, for two neighbours \(g_{1},g_{2}\) of \(g\) in \(H-e_{1}-e_{2}\), \(g_{1}^{*}\) and \(g_{2}^{*}\) lie on different paths of the 3-fan \(T\) from \(g^{*}\) to \(B_{M}^{*}\) in \(Z\). That is, there are disjoint paths \(P_{1},P_{2}\), where \(P_{1}\) is a \(g_{1}^{*}\)-\(B_{M}^{*}\) path and \(P_{2}\) is a \(g_{2}^{*}\)-\(B_{M}^{*}\) path, both disjoint from \(x_{2}=g^{*}\). As \(O\) is 2-connected, there are paths in \(O\) from \(k^{*}\) to \(g_{1}^{*}\) and \(g_{2}^{*}\) that avoid \(\{x_{1},x_{2}\}\) (recall that \(x_{1}\not\in V(O)\)). Since \(P_{1}\) and \(P_{2}\) are disjoint, at least one of them is disjoint from \(x_{1}\). Thus there is a \(k^{*}\)-\(B_{M}^{*}\) path in \(Z-\{x_{1},x_{2}\}\), a contradiction. This proves (5). With (2) we get that \(k^{*}=h^{*}\), which implies \(h^{*}\in V(O)\). We claim that: _There is a \(g^{*}\)-\(h^{*}\) path \(Q\) in \(O\) whose second vertex in \((V(H))^{*}\) is \(g_{1}^{*}\)._ (6) Since \(O\) is 2-connected and \(g_{1}^{*}\in V(O)\) by (4), there is a \(g_{1}^{*}\)-\(h^{*}\) path \(Q^{\prime}\) in \(O\) that is disjoint from \(g^{*}\). Since \(g_{1}\) is a neighbour of \(g\) in \(H-e_{1}-e_{2}\), there is a \(g^{*}\)-\(g_{1}^{*}\) path \(Q^{\prime\prime}\) of length 2 in \(O\), which by construction of \(Z\) is internally disjoint from \(Q^{\prime}\). Combining these into \(Q=Q^{\prime\prime}\cup Q^{\prime}\) yields the desired \(g^{*}\)-\(h^{*}\) path \(Q\) in \(O\). This proves (6). Note that \(Q\subseteq\Phi(H)\). Thus, to finish the proof we need to show that \(Q\) is disjoint from \(W^{0}\), and that \(Q\) is internally disjoint from every \(S\in\mathcal{S}\).
Suppose that the interior of \(Q\) meets either \(W^{0}\) or some path in \(\mathcal{S}\), and let \(q\) be the first vertex in the interior of \(Q\) where that happens. Next, among all vertices in \(g^{*}Qq\cap P\), pick \(\ell^{*}\) to be the one closest to \(h^{*}\) on \(P\). We observe that \(\ell^{*}\) must be an internal vertex of \(P\). Indeed, \(\ell^{*}\neq h^{*}\) as \(q\) is an internal vertex of \(Q\), and \(\ell^{*}\neq g^{*}\) by (6). From (2) it follows that \(\ell^{*}\notin B_{M}^{*}\), and from Lemma 11 it follows that there is a set \(Y=\{y_{1},y_{2}\}\) of at most two vertices that separates \(\ell^{*}\) from \(B_{M}^{*}\) in \(Z\). As the paths \(g^{*}Q\ell^{*}\) and \(\ell^{*}Ph^{*}\) meet only in \(\ell^{*}\) by choice of \(\ell^{*}\), it follows that one vertex in \(Y\), \(y_{1}\) say, lies in \(\ell^{*}Ph^{*}\) and the other, \(y_{2}\), in \(g^{*}Q\ell^{*}\).
Figure 5: Situation in Lemma 12. Vertices in \(B_{M}^{*}\) in black.
Now, the path \(\ell^{*}Qq\) meets \(g^{*}Q\ell^{*}\) and \(\ell^{*}Ph^{*}\) also only in \(\ell^{*}\) and thus is disjoint from \(Y\). As a consequence, \(q\) cannot lie in \(W^{0}\) as every vertex in \(W^{0}\) sends a 3-fan to \(B_{M}^{*}\). Therefore, \(q\) lies on a path \(S\in\mathcal{S}\). Note that as \(\ell^{*}Qq\) is disjoint from \(Y\) and as the endvertices of \(S\) lie in \(B_{M}^{*}\), it follows that both vertices in \(Y\) must lie on \(S\). As \(y_{1}\) lies in \(S\) and \(y_{1}\in V(P)\), while \(P\) is internally disjoint from \(S\), the vertex \(y_{1}\) must be an endvertex of \(P\), ie, \(y_{1}=h^{*}\). As \(S\) is a \(B_{M}^{*}\)-path, it follows that \(y_{1}\) is an endvertex of \(S\). That \(y_{2}\in V(g^{*}Q\ell^{*})\) lies in \(S\) implies, too, that \(y_{2}\) must be an endvertex of \(S\): Indeed, \(q\) was the first internal vertex on \(Q\) to lie in \(S\), and thus \(y_{2}=g^{*}\), which lies in \(B_{M}^{*}\). But now, \(S\) has both endvertices with \(P\) in common, which is not allowed for a path in \(\mathcal{S}\). We have obtained the final contradiction that proves the lemma. We are done if we find an \((a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*})\) linkage in \(\Phi(H)\cap W\). The next lemma tells us that if there is no such linkage then we obtain two different paths between the terminals, one inside the Heinlein Wall and one outside. **Lemma 13**.: _Either there is an (\(a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*}\)) linkage in \(\Phi(H)\cap W\), or there is an \(\{a^{*},b^{*}\}\)-\(\{c^{*},d^{*}\}\) path in \(\Phi(H)\cap W\) whose endvertices are in the same component of \(\Phi(H)-W^{0}\)._ Proof.: We proceed by case distinction. First, consider the case that there is a \(v\in\Phi(B)\) such that \(v\) lies in \(W^{0}\) or such that \(v\) is a terminal with degree at least 2 in \(\Phi(H)\cap W\). As \(v\in\Phi(B)\), there is a 3-fan \(T\) in \(\Phi(H)\) from \(v\) to the proper branch vertices of some 10-wall' \(R\subseteq\Phi(H)\). Moreover, as \(R\) is too large to fit into \(W\) by Lemma 6, there must be some proper branch vertex \(w\) of \(R\) outside \(W\). Thus, \(T\cup R\) contains three internally disjoint \(v\)-\(w\) paths \(P_{1},P_{2},P_{3}\subseteq\Phi(H)\). By definition of \(v\), there are three terminals that are incident with an edge in \((P_{1}\cup P_{2}\cup P_{3})-W^{0}\). Therefore, \(P_{1}\cup P_{2}\cup P_{3}\) contains an \(\{a^{*},b^{*}\}\)-\(\{c^{*},d^{*}\}\) path that lies in \(\Phi(H)\cap W\).
Moreover, the endvertices of that path are connected in \(\Phi(H)-W^{0}\) via \(P_{1}\cup P_{2}\cup P_{3}-W^{0}\). Second, we consider the case when \(\Phi(B)\cap V(W^{0})=\emptyset\) and when every terminal in \(\Phi(B)\) has degree at most 1 in \(\Phi(H)\cap W\). We claim that \[\Phi(B)\cap(V(W)\setminus B^{*})=\emptyset \tag{7}\] Since \(\Phi(B)\cap V(W^{0})=\emptyset\) and since \(\{a^{*},c^{*}\}\subseteq B^{*}\), the claim (7) can only be violated if \(b^{*}\in\Phi(B)\setminus B^{*}\) or if \(d^{*}\in\Phi(B)\setminus B^{*}\). While \(b^{*}\) and \(d^{*}\) are not exchangeable, they are largely symmetric for the purpose of the proof of (7). Therefore, we concentrate only on \(b^{*}\): we consider the case that \(b^{*}\in\Phi(B)\) and then show that this implies \(b^{*}\in B^{*}\). The proof for \(d^{*}\) is similar. From \(b^{*}\in\Phi(B)\) it follows that there are three paths \(P_{1},P_{2},P_{3}\) in \(\Phi(H)\) from \(b^{*}\) to proper branch vertices of some 10-wall' \(R\subseteq\Phi(H)\) such that \(P_{1},P_{2},P_{3}\) are disjoint except for \(b^{*}\). Note that all proper branch vertices of \(R\) lie in \(Z-W^{0}\) as \(\Phi(B)\) is disjoint from \(W^{0}\). Therefore, \(R\) may only intersect \(W^{0}\) in at most two paths. (Here, we also use that every terminal in \(\Phi(B)\) has degree at most 1 in \(\Phi(H)\cap W\).) Let \(Q_{1},Q_{2}\) be the paths in \(R\) between proper branch vertices of \(R\) that are incident with \(W^{0}\) (if they exist at all). Let \(P\in\{P_{1},P_{2},P_{3},Q_{1},Q_{2}\}\), and observe that \(P\subseteq\Phi(H)\). As the endvertices of \(P\) are either proper branch vertices of \(R\) or \(b^{*}\), it follows that they lie in \(V(H)^{*}\). We denote them by \(g^{*}\) and \(h^{*}\). Moreover, as we assume \(b^{*}\in\Phi(B)\), it follows that \(\Phi(H)\) contains three internally disjoint \(g^{*}\)-\(h^{*}\) paths. Only two of these may intersect \(W^{0}\). As a consequence, the endvertices \(g^{*}\) and \(h^{*}\) are contained in the same component of \(\Phi(H)-W^{0}\). Therefore, if \(P\cap W\) contains exactly one non-trivial \(\{a^{*},b^{*}\}\)-\(\{c^{*},d^{*}\}\) path \(Q\), then, with the help of \(P-W^{0}\), we see that the endvertices of \(Q\) are in the same component of \(\Phi(H)-W^{0}\). As, moreover, \(P\cap W\) contains a path between the endvertices of \(Q\), we have found a path as in the statement of the lemma and are done. If, on the other hand, \(P\cap W\) contains two non-trivial \(\{a^{*},b^{*}\}\)-\(\{c^{*},d^{*}\}\) paths, we can use \(e_{1}\) and \(e_{2}\) to find a \(g\)-\(h\) path \(I_{P}\) in \(H\) with \(V(I_{P})^{*}\subseteq V(P)\). If \(P\cap W\) contains an \(a^{*}\)-\(b^{*}\) path or a \(c^{*}\)-\(d^{*}\) path (or both), we can again use \(e_{1}\) or \(e_{2}\) to find a \(g\)-\(h\) path \(I_{P}\) in \(H\) with \(V(I_{P})^{*}\subseteq V(P)\). (If \(P\cap W\) contains both an \(a^{*}\)-\(b^{*}\) path and a \(c^{*}\)-\(d^{*}\) path, we can actually stop, as then we have the desired linkage.) Finally, if \(P\cap W\) contains no non-trivial path then, too, we easily find a \(g\)-\(h\) path \(I_{P}\) in \(H\) with \(V(I_{P})^{*}=V(P)\cap V(H)^{*}\). Since \(R\) intersects \(W^{0}\) only in \(Q_{1}\) and \(Q_{2}\) (if these exist at all), using \(I_{Q_{1}}\) and \(I_{Q_{2}}\), we find a 10-wall' \(M^{\prime}\) in \(H\) such that \(V(M^{\prime})^{*}\subseteq V(R)\).
In the same way, we note that the \(b\)-\(M^{\prime}\) paths \(I_{P_{1}},I_{P_{2}},I_{P_{3}}\) in \(H\) satisfy \(V(I_{P_{j}})^{*}\subseteq V(P_{j})\) for \(j=1,2,3\). In particular, \(I_{P_{1}},I_{P_{2}},I_{P_{3}}\) are pairwise disjoint except for \(b\). In total, we have found a 3-fan from \(b\) to a 10-wall', which implies that \(b\in B\) and thus \(b^{*}\in B^{*}\). This proves (7). By Lemma 9, it follows from (7) and \(|B|=|\Phi(B)|\) that \[B^{*}=\Phi(B). \tag{8}\] In particular, the terminals \(a^{*}\) and \(c^{*}\) lie in \(\Phi(B)\), which implies that there is, for every terminal \(v\in\{a^{*},c^{*}\}\), a 3-fan \(T\subseteq\Phi(H)\) from \(v\) to proper branch vertices of some 10-wall' \(R\subseteq\Phi(H)\). Note that all proper branch vertices of \(R\) lie in \(\Phi(B)\) and thus outside \(W^{0}\). Therefore, there is for every \(v\in\{a^{*},c^{*}\}\) a path \(Q_{v}\) that starts in \(v\), that ends in another terminal and that is completely contained in \(W\). Moreover, via the 3-fan \(T\), there is a path between the endvertices in \(\Phi(H)-W^{0}\). (Observe that the paths \(Q_{a^{*}},Q_{c^{*}}\) need not be disjoint, or even distinct.) If \(Q_{a^{*}}\) ends in \(\{c^{*},d^{*}\}\) or if \(Q_{c^{*}}\) ends in \(\{a^{*},b^{*}\}\), we observe that \(Q_{a^{*}}\) or \(Q_{c^{*}}\) is an \(\{a^{*},b^{*}\}\)-\(\{c^{*},d^{*}\}\) path in \(\Phi(H)\cap W\) whose endvertices are in the same component of \(\Phi(H)-W^{0}\), and we are done. Thus we may assume that \(Q_{a^{*}}\) is an \(a^{*}\)-\(b^{*}\) path and \(Q_{c^{*}}\) is a \(c^{*}\)-\(d^{*}\) path. If \(Q_{a^{*}}\) is disjoint from \(Q_{c^{*}}\), they form an (\(a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*}\)) linkage in \(\Phi(H)\cap W\), which was what we wanted. Thus, we may assume that \(Q_{a^{*}}\) intersects \(Q_{c^{*}}\), which implies that \(Q_{a^{*}}\cup Q_{c^{*}}\) contains an \(a^{*}\)-\(c^{*}\) path \(P\). We then apply Lemma 12 to \(a^{*},c^{*}\) in the role of \(g^{*},h^{*}\), and to some \(a\)-\(c\) path \(L\) in \(M-e_{1}-e_{2}\). Note that \((V(L))^{*}\) is automatically disjoint from \(B^{*}\setminus\Phi(B)\), as the latter set is empty by (8). The path we obtain from the lemma then shows that the endvertices of \(P\) lie in the same component of \(\Phi(H)-W^{0}\), and we are done. In the next lemma we will use planarity arguments. To this end, if \(G\) is a planar graph that is drawn in the plane, ie, if \(G\subseteq\mathbb{R}^{2}\), then we define the _interior_ \(\operatorname{int}(G)\) as the set \(\mathbb{R}^{2}\setminus F\), where \(F\) is the outer face (the unbounded face) of \(G\). We have reached the final lemma, which concludes the proof of Theorem 2. **Lemma 5**.: \(\Phi(H)\) _contains an (\(a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*}\)) linkage in \(W\)._ Proof.: Suppose that \(\Phi(H)\cap W\) does not contain any (\(a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*}\)) linkage. Then Lemma 13 yields \(v_{1}\in\{a^{*},b^{*}\}\), \(v_{2}\in\{c^{*},d^{*}\}\) and \(v_{1}\)-\(v_{2}\) paths \(P\) and \(Q\) such that \(P\subseteq\Phi(H)\cap W\) and \(Q\subseteq\Phi(H)-W^{0}\). Set \(D=\{h\in B:h^{*}\not\in\Phi(B)\}\) and observe that Lemma 10 implies that \(|D|\leq 52\). Every vertex is incident with at most one row and one column of the wall' \(M\). Thus, there is a wall' \(M^{\prime}\subseteq M-D-\{a,b,c,d\}\) that contains all but at most 56 rows and columns of \(M\), and that is disjoint from \(D\) and from the terminals \(a,b,c,d\).
We write \({M^{\prime}}^{*}\subseteq Z\) for the subwall' of \(M^{*}\) that contains all images of the branch vertices of \(M^{\prime}\) under \(*\). As all proper branch vertices of \(M^{\prime}\) are in \(B_{M}\) and as \({M^{\prime}}^{*}\) is disjoint from \(B^{*}\setminus\Phi(B)\), we can apply Lemma 12 to every (subdivided) edge \(gh\) of \(M^{\prime}\) to see that there is a \(g^{*}\)-\(h^{*}\) path in \(\Phi(H)-W^{0}\). Moreover, as such a path is a \(B^{*}_{M}\)-path (and thus in \(\mathcal{S}\) with respect to the lemma), the obtained paths are all internally disjoint. Replacing the subdivided edges of \({M^{\prime}}^{*}\) one by one in this way, we obtain a wall' \(R\) in \(\Phi(H)-W^{0}\) whose proper branch vertices are identical with those of \({M^{\prime}}^{*}\). In particular, for every row (resp. for every column) of \({M^{\prime}}^{*}\) there is a row (resp. a column) of \(R\) with the same proper branch vertices. We note for later that \[R\subseteq\Phi(H)-W \tag{9}\] We make a second observation. The graph \(Z-W^{0}\) is planar as \(H\) is planar, and, in what follows, we consider a fixed drawing of \(Z-W^{0}\). Then, the interior \(\operatorname{int}(S)\) of any brick \(S\) of \(M^{*}\) is well-defined. We may assume that \(Z-W^{0}\) is drawn in such a way that no brick interior contains the outercycle of \(M^{*}\). We use this to observe that if \(S^{\prime}\) is a brick of \({M^{\prime}}^{*}\) and if \(S\) is the corresponding brick of \(R\) with the same proper branch vertices, then any vertex in \(\operatorname{int}(S^{\prime})\) lies in the interior \(\operatorname{int}(S)\) or in the interior of a brick of \(R\) that is adjacent to \(S\), ie, that shares a subdivided edge with \(S\). Recall the \(v_{1}\)-\(v_{2}\) path \(Q\) contained in \(\Phi(H)-W^{0}\). We claim: \[Q\text{ meets }R\text{, and if }q_{1}\text{ is its first and }q_{2}\text{ its last vertex in }R\text{, then }q_{1},q_{2}\text{ are }8\text{-apart in }R. \tag{10}\] As each pair of one vertex from \(\{a,b\}\) and one of \(\{c,d\}\) is 70-apart in \(M\), it follows that \(v_{1},v_{2}\) are 70-apart in \(M^{*}\). (Recall that \(M^{*}\) is a subdivision of \(M-e_{1}-e_{2}\) in \(Z-W^{0}\).) As every path in \(M^{*}\) from \(v_{1}\) or from \(v_{2}\) to the outercycle of \(M^{*}\) meets at least 70 rows or columns, and as \({M^{\prime}}^{*}\) contains all but 56 rows and all but 56 columns of \(M^{*}\), it follows that there are bricks \(S^{\prime}_{1},S^{\prime}_{2}\) of \({M^{\prime}}^{*}\) such that \(v_{i}\in\operatorname{int}(S^{\prime}_{i})\) for \(i=1,2\). Consider a path \(Q^{\prime}\subseteq{M^{\prime}}^{*}\) from a vertex of \(S^{\prime}_{1}\) to a vertex of \(S^{\prime}_{2}\) and suppose that \(Q^{\prime}\) meets fewer than 10 rows and columns of \({M^{\prime}}^{*}\). Then follow \(Q\), which is a path in \(Z-W^{0}\), from \(v_{1}\) to the first vertex in \(S^{\prime}_{1}\), then along \(S^{\prime}_{1}\) to the first vertex of \(Q^{\prime}\), then along \(Q^{\prime}\) to \(S^{\prime}_{2}\), from there to the last vertex of \(Q\) in \(S^{\prime}_{2}\) and along \(Q\) to \(v_{2}\). The resulting \(v_{1}\)-\(v_{2}\) path \(Q^{\prime\prime}\subseteq Z-W^{0}\) meets fewer than 14 rows and columns of \({M^{\prime}}^{*}\) (each of the bricks \(S^{\prime}_{1}\) and \(S^{\prime}_{2}\) may contribute at most two more rows and columns).
As \({M^{\prime}}^{*}\) contains all but 56 rows and columns of \(M^{*}\), we see that \(Q^{\prime\prime}\) meets fewer than 70 rows and columns of \(M^{*}\), which is impossible as \(v_{1},v_{2}\) are 70-apart in \(M^{*}\). In a similar way, we see that each path from \(v_{1}\) or from \(v_{2}\) to the outercycle of \({M^{\prime}}^{*}\) meets at least 10 rows or columns of \({M^{\prime}}^{*}\). Therefore, \(S^{\prime}_{1},S^{\prime}_{2}\) are 10-apart in \({M^{\prime}}^{*}\). As we had observed that the interior of each brick of \(R\) is contained in the interior of the corresponding brick in \({M^{\prime}}^{*}\) together with the interiors of adjacent bricks, it follows that there are bricks \(S_{1}\) and \(S_{2}\) of \(R\) such that \(v_{i}\in\operatorname{int}(S_{i})\) for \(i=1,2\) and such that \(S_{1},S_{2}\) are 8-apart in \(R\). As a consequence, the path \(Q\), which is entirely contained in the plane graph \(Z-W^{0}\), meets \(R\) (in at least eight vertices). Denote by \(q_{1}\) the first vertex of \(Q\) in \(R\), and let \(q_{2}\) be the last vertex of \(Q\) in \(R\). Then \(q_{1}\) lies in the brick \(S_{1}\), and \(q_{2}\) lies in \(S_{2}\). Therefore, \(q_{1},q_{2}\) are 8-apart in \(R\). This proves (10). Recall the \(v_{1}\)-\(v_{2}\) path \(P\) contained in \(\Phi(H)\cap W\). As \(H\) is planar, and as \(Q\cup P\cup R\subseteq\Phi(H)\), it follows that \(Q\cup P\cup R\) is planar, too. Consider \(q_{1}Qv_{1}\cup P\cup v_{2}Qq_{2}\): this is a \(q_{1}\)-\(q_{2}\) path that meets the wall' \(R\) only in its endvertices, since \(P\subseteq W\) while \(R\) is disjoint from \(W\), by (9). However, \(q_{1},q_{2}\) are 8-apart, by (10). Clearly, this is impossible in a planar graph: a path that meets \(R\) only in its endvertices must run inside a single face of \(R\), while two vertices that are 8-apart do not lie on the boundary of a common face. The final contradiction proves the lemma.
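As a closing aside (ours, not the authors'): for very small wall sizes, Lemma 3 can be verified by brute force. The sketch below, assuming `networkx` (exponential enumeration, so only feasible for \(r\leq 2\)), lists all (\(a^{*}\)-\(b^{*}\), \(c^{*}\)-\(d^{*}\)) linkages of a Heinlein Wall and checks that every pair of them shares an edge.

```python
# Illustrative sketch (not from the paper): brute-force check of Lemma 3 for r = 2.
import itertools
import networkx as nx

def heinlein_wall(r):
    G = nx.Graph()
    for j in range(1, r + 1):
        nx.add_path(G, [("u", j, i) for i in range(1, 2 * r + 1)])
        G.add_edge("a*", ("u", j, 1))
        G.add_edge("b*", ("u", j, 2 * r))
        for i in range(1, r + 1):
            G.add_edge(("z", j - 1), ("u", j, 2 * i - 1))
            G.add_edge(("z", j), ("u", j, 2 * i))
    nx.add_path(G, [("z", j) for j in range(r + 1)])
    return G

r = 2
G = heinlein_wall(r)
a, b, c, d = "a*", "b*", ("z", 0), ("z", r)

def edges_of(path):
    return {frozenset(e) for e in zip(path, path[1:])}

# A linkage is the vertex-disjoint union of an a*-b* path and a c*-d* path;
# we represent each linkage by its edge set.
linkages = [edges_of(P) | edges_of(Q)
            for P in nx.all_simple_paths(G, a, b)
            for Q in nx.all_simple_paths(G, c, d)
            if not set(P) & set(Q)]

# Lemma 3: no two linkages are edge-disjoint, ie every pair shares an edge.
assert all(L1 & L2 for L1, L2 in itertools.combinations(linkages, 2))
print(len(linkages), "linkages; every pair shares an edge")
```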
2307.06042
Dissipation and turbulence in general relativistic hydrodynamics
This work is concerned with advancing multi-fluid models in General Relativity, and in particular focuses on the modelling of dissipative fluids and turbulent flows. Such models are required for an accurate description of neutron star phenomenology, and binary neutron star mergers in particular. In fact, the advent of multi-messenger astronomy offers exciting prospects for exploring the extreme physics at play during such cosmic fireworks. We first focus on modelling dissipative fluids in relativity, and explore the arguably unique model that is ideally suited for describing dissipative multi-fluids in General Relativity. Modelling single fluids in relativity is already a hard task, but for neutron stars it is easy to argue that we need to understand even more complicated settings: the presence of superfluid/superconducting mixtures, for example, means that we need to go beyond single-fluid descriptions. We then consider turbulent flows and focus on how to perform "filtering" in a curved spacetime setting. We do so as most recent turbulent models in a Newtonian setting are based on the notion of spatial filtering. As the same strategy is beginning to be applied in numerical relativity, we focus on the foundational underpinnings and propose a novel scheme for carrying out filtering, ensuring consistency with the tenets of General Relativity. Finally, we discuss two applications of relevance for binary neutron star mergers. We focus on the modelling of ($\beta$-)reactions in neutron star simulations, and provide a discussion of the magneto-rotational instability that is suited to highly dynamical environments like mergers. We focus on these two problems as reactions are expected to source the dominant dissipative contribution to the overall dynamics, while the magneto-rotational instability is considered crucial for sustaining the development of turbulence in mergers.
Thomas Celora
2023-07-12T09:40:21Z
http://arxiv.org/abs/2307.06042v1
**University of Southampton**

Faculty of Social Sciences
School of Mathematical Sciences

**Dissipation and turbulence in general relativistic hydrodynamics**

**Thomas Celora**
ORCiD: 0000-0002-6515-3644

_Thesis for the degree of Doctor of Philosophy_

July 2023

## Abstract

**Dissipation and turbulence in general relativistic hydrodynamics**
by Thomas Celora

Hydrodynamics is one of the oldest research areas in physics, with applications across all macroscopic scales in the Universe. Despite the long history of successes, however, fluid modelling still presents severe conceptual and computational challenges. Not surprisingly, the hurdles become even more formidable for relativistic flows, and new issues come to the fore too. This work is concerned with advancing multi-fluid models in General Relativity, and in particular focuses on the modelling of dissipative fluids and turbulent flows. Such models are required for an accurate description of neutron star phenomenology, and binary neutron star mergers in particular.
In fact, the advent of multi-messenger astronomy--which started with the first detection of a binary neutron star coalescence in 2017--offers exciting prospects for exploring the extreme physics at play during such cosmic fireworks. In this work we first focus on modelling dissipative fluids in relativity, and explore the arguably unique model that is ideally suited for describing dissipative multi-fluids in General Relativity. Modelling single fluids in relativity is already a hard task, but for neutron stars it is easy to argue that we need to understand even more complicated settings: the presence of superfluid/superconducting mixtures, for example, means that we need to go beyond single-fluid descriptions. We then consider turbulent flows and focus on how to perform "filtering" in a curved spacetime setting. We do so as most recent turbulent models in a Newtonian setting are based on the notion of spatial filtering. As the same strategy is beginning to be applied in numerical relativity, we focus on the foundational underpinnings and propose a novel scheme for carrying out filtering, ensuring consistency with the tenets of General Relativity. Finally, we discuss two applications of relevance for binary neutron star mergers. We focus on the modelling of (\(\beta\)-)reactions in neutron star simulations, and provide a discussion of the magneto-rotational instability that is suited to highly dynamical environments like mergers. We focus on these two problems as reactions are expected to source the dominant dissipative contribution to the overall dynamics, while the magneto-rotational instability is considered crucial for sustaining the development of turbulence in mergers.

## Contents

* List of Figures
* Declaration of Authorship
* Acknowledgements
* Notation
* 1 Introduction
  * 1.1 Motivation
  * 1.2 Outline
  * 1.3 Relativistic perfect fluids
* I Dissipative (multi-)fluids in General Relativity
  * 2 Dissipative fluid models in General Relativity: an overview
    * 2.1 Non-equilibrium thermodynamics
      * 2.1.1 Linear irreversible thermodynamics
      * 2.1.2 Causality and extended irreversible thermodynamics
    * 2.2 Traditional strategies for dissipative fluids
      * 2.2.1 Eckart-Landau-Lifshitz models
      * 2.2.2 Müller-Israel-Stewart models
      * 2.2.3 Liu-Müller-Ruggeri and divergence-type theories
    * 2.3 Variational models
      * 2.3.1 Non dissipative multi-fluids models
      * 2.3.2 Carter-like dissipative models
      * 2.3.3 Andersson and Comer formalism
    * 2.4 Hydrodynamics as an effective field theory
  * 3 Linearizing an action-based formalism for dissipative (multi-)fluids
    * 3.1 Main assumptions of the model
      * 3.1.1 Flux definition
      * 3.1.2 Matter space volume forms
    * 3.2 The non-dissipative limit
      * 3.2.1 Laboratory vs. general relativistic set-up
      * 3.2.2 Multiple equilibrium states
      * 3.2.3 Global analysis of the non-dissipative limit
      * 3.2.4 Local analysis of the non-dissipative limit
      * 3.2.5 A final comment on equilibrium
    * 3.3 Perturbations with respect to equilibrium
    * 3.4 Energy density is stationary at equilibrium
    * 3.5 The last piece of the puzzle
    * 3.6 Model comparison
      * 3.6.1 Equating the flux currents
      * 3.6.2 Example: A viscous single fluid
    * 3.7 Cattaneo-type equations
    * 3.8 Summary and outlook
* II A covariant approach to large-eddy filtering
  * 4 Filtering relativistic hydrodynamics
    * 4.1 A brief introduction to hydrodynamic turbulence
    * 4.2 Averaging turbulent flows
    * 4.3 Averaging vs filtering
    * 4.4 The spacetime view: Fermi Coordinates
      * 4.4.1 On covariance and the Einstein equations
    * 4.5 Averaging in the fluid frame
      * 4.5.1 Baryon number conservation
      * 4.5.2 Averaged matter dynamics
      * 4.5.3 The equation of state
    * 4.6 Fluid element filtering
    * 4.7 Filtered Thermodynamics
      * 4.7.1 The effective entropy
      * 4.7.2 Energy cascade argument
    * 4.8 An explicit closure model
      * 4.8.1 Stability Analysis
      * 4.8.2 Smagorinsky model
      * 4.8.3 Fixing the Smagorinsky instability
    * 4.9 Summary
  * 5 Filtering relativistic magneto-hydrodynamics
    * 5.1 A brief introduction to magneto-hydrodynamic turbulence
    * 5.2 Magneto-hydrodynamics in the fibration: a shortcut
    * 5.3 MHD covariant filtering: first steps
    * 5.4 Outlook
* III Binary neutron-star merger applications
  * 6 Formulating bulk-viscosity for neutron star simulations
    * 6.1 Simplifications must be made
    * 6.2 The reactive system
      * 6.2.1 Thermodynamics of a reactive system
      * 6.2.2 Thermodynamics working with the equilibrium electron fraction
    * 6.3 Approximating the reactive system
      * 6.3.1 Multi-scale arguments and the reactive system
      * 6.3.2 Invariant manifold method with the electron fraction
      * 6.3.3 Partially resolved reactions and double counting
    * 6.4 Making contact with simulations
      * 6.4.1 What bulk viscous pressure approximation is suitable?
      * 6.4.2 How relevant is bulk viscosity in mergers?
      * 6.4.3 The impact of large-eddy filtering
    * 6.5 Summary and Outlook
  * 7 Magneto-rotational instability in mergers: a local Newtonian analysis
    * 7.1 Background gradients and plane-wave expansion
    * 7.2 The slowly evolving background
      * 7.2.1 Velocity gradient decomposition
    * 7.3 Non-inertial equations and the local frame
    * 7.4 Going back to hydrodynamics
    * 7.5 Magneto-shear instability in the local frame
      * 7.5.1 Homogeneous background: a recap
      * 7.5.2 Sheared Background
      * 7.5.3 Background with vorticity
    * 7.6 Concluding remarks: The MRI in perspective
      * 7.6.1 The MRI vs the Rayleigh criterion
      * 7.6.2 The missing ingredient: Filtering
* IV Conclusions
* A Transporting a tetrad and Fermi Coordinates
  * A.1 Transporting a tetrad along a curve
  * A.2 Fermi coordinates
* B Multi-scale arguments and the invariant manifold method
  * B.1 Invariant manifold approach
  * B.2 Two timescale approach
  * B.3 Linear fast dynamics
  * B.4 Constructing the fast terms
* C Working with the CompOSE database
* D Formulating the MRI in the local frame
  * D.1 Another look at the non-inertial MHD equations
  * D.2 A closer look at the Rayleigh stability criterion
* E The Routh-Hurwitz criterion

## List of Figures

* 2.1 The pull-back from a point in the \(x^{\rm th}\) matter space to the corresponding spacetime worldline. The points in matter space are labelled by \(X_{\rm x}^{A}\) with \(A=1,2,3\). Figure taken from Andersson and Comer [21].
* 3.1 A depiction of the spacetime region \(\mathcal{M}\), with one spatial axis suppressed. It has a characteristic spatial size \(\Delta L\) and temporal size \(\Delta T\). Inside \(\mathcal{M}\) is a smaller region \(\delta\mathcal{M}\) of characteristic spatial and temporal size \(\delta l\) and \(\delta t\), respectively. The boundary \(\partial\mathcal{M}\) consists of the initial and final time slices \(\partial\mathcal{M}_{-}\), \(\partial\mathcal{M}_{+}\) and the timelike hypersurface \(\partial\mathcal{M}_{L}\).
* 3.2 An illustration of worldlines associated with the fluid elements (solid vertical red lines, parameterized by \(\tau\), \(\bar{\tau}\)) and "Lagrangian displacements" which connect fluid elements (dashed horizontal blue lines, parameterized by \(\lambda\)).
* 4.1 Cartoon of the scales of the turbulent energy spectrum. Figure adapted from McDonough [151].
* 4.2 Left: shaded region resulting from combining the stability constraints obtained for the i) transverse modes (gapped, un-gapped, boosted, un-boosted), ii) longitudinal un-boosted modes (gapped and un-gapped), and iii) longitudinal boosted gapped modes. Right: sound speeds (solid) and damping rates (dashed) for the un-gapped longitudinal modes in the boosted frame. Colours match sound speeds with the corresponding damping rates. For illustrative purposes we used: \(c_{s}=0.16\), \(v=0.4\), \(\eta=1.2\), \(\theta_{1}=0.5\) and a barotropic equation of state (both figures).
* 5.1 Cartoon of the perpendicular MHD turbulence spectrum. At scales larger than critical balance (i.e. \(k<k_{CB}\)) is shown the same scaling as in weak turbulence (i.e. \(\propto k_{\perp}^{-2}\)). At scales smaller than critical balance is shown the aligned cascade (i.e. \(\propto k_{\perp}^{-3/2}\)) periodically interrupted at \(k_{1},k_{2}\) and so on, while \(k_{D}\) represents the scale at which dissipative/resistive effects begin to prevail over inertial ones. Figure adapted from Schekochihin [192].
* 6.1 Illustrating the behaviour of \(\beta(\omega)\) in two cases: the solution to the Cattaneo equation (i.e. the full extended irreversible thermodynamics (EIT) relaxation towards the Navier-Stokes limit, blue solid curve) and the parabolic case (the Navier-Stokes limit, orange dashed line). The shaded region indicates frequencies we assume we may not have "access" to numerically (this region moves towards higher frequencies when the numerical resolution is increased).
* The maximum (for each point we assume that \(\omega={\cal A}\)) potential relative contribution of the bulk viscous approximation \(\chi^{\rm max}/p^{\rm eq}\) at each point in phase space using the APR [210, 195] equation of state. We see that the bulk viscous pressure contribution can be large for most temperatures when \(n\gtrsim 10^{-3}n_{\rm sat}\). However, the bulk viscous approximation should only be used where the reaction rate cannot be resolved by the numerical simulation, which is where the grid frequency is greater than \({\cal A}\). Also shown are contours at \({\cal A}=\{10^{3},10^{4},10^{7},10^{9}\}\) s\({}^{-1}\) (solid, dashed, dot-dash, dot). For current simulations, frequencies of \(\sim 10^{6}\) s\({}^{-1}\) are resolvable. This shows that the bulk viscous approximation should only be used for \(T\gtrsim 10\) MeV, and as the grid resolution improves becomes less necessary.
* Real and imaginary part of the solutions of eq. (7.59), with both the frequency and \(|{\bf v}_{A}\cdot{\bf k}|\) in units of \(\sqrt{{\rm Tr}(\sigma^{2})}\).
The solutions plotted correspond to the fastest growing modes evolving on top of an MHD sheared background. We see that the magnetic field has a stabilizing effect, as the growth rates are reduced with respect to those of the corresponding hydrodynamic modes. The stabilizing effect is all the more pronounced the more the wave-vector is aligned with the magnetic field lines, and is switched off for modes propagating in the directions perpendicular to the magnetic field lines. In particular, modes corresponding to sufficiently large values of \(|{\bf v}_{A}\cdot{\bf k}|\) are rendered stable.
* Left: Tetrad transported along the observer worldline. Right: Connecting geodesics starting from and perpendicular to the central curve. Figure adapted from Misner et al. [155].
* Plot of \({\cal A}\) for the APR equation of state used in [104]. The restoring term \(\gamma\) is calculated assuming the Fermi surface approximation remains valid. Contours are at \({\cal A}=\{10^{3},10^{4},10^{7},10^{9}\}\,{\rm s}^{-1}\) (solid, dash, dot-dash, dot).

## Declaration of Authorship

I declare that this thesis and the work presented in it is my own and has been generated by me as the result of my own original research. I confirm that:

1. This work was done wholly or mainly while in candidature for a research degree at this University;
2. Where any part of this thesis has previously been submitted for a degree or any other qualification at this University or any other institution, this has been clearly stated;
3. Where I have consulted the published work of others, this is always clearly attributed;
4. Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work;
5. I have acknowledged all main sources of help;
6. Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself;
7. Parts of this work have been published as: Celora et al. [57], Celora et al. [58], Andersson et al. [26], Celora et al. [59] and Celora et al. [60].

## Acknowledgements

Even though at times it feels like yesterday, almost four years have passed since I first set foot in Southampton. All the experiences I have had during this time have massively shaped the person I have become, and if I can proudly stand where I am today I owe it to the great people I shared parts of this journey with. Since the list is too long to acknowledge everyone personally, I will only thank here those that deserve special mentions and without whom this would not have been possible. First of all, I am hugely indebted to my supervisors Nils Andersson and Ian Hawke, and to Greg Comer--a.k.a. the fantastic trio. I have indeed been extremely lucky to have had not two, but three superb mentors. You have been an endless source of inspiration and guidance, and I did my best to learn as much as I could from you. Working with giants like you has been an immense pleasure and honour, and I hope our collaboration will continue for a long time. I must also express my sincere thanks to the members of the Gravity Group, particularly those working on neutron star physics. I am honoured to have been part of such a group of brilliant scientists, and grateful for all I have learnt from you.
Next, I want to thank my mother Anna. You were the first one to show me how fascinating physics is, and I strongly believe you were the real trigger for everything that came later. I can never repay you for all the sacrifices you have made, which have allowed me to be here today. I must also thank my brother and sisters Agostino, Eleonora and Giuditta--rigorously in alphabetical order. I truly came to appreciate during my time here how much you all mean to me. Knowing that I can always count on you is priceless. My friends also played a key role in this endeavour. Both the new ones I met in Southampton, and those who have known me for longer. Without you, I do not think I would have made it this far. Last, but by no means least, I wish to thank my partner in crime Elisa. You are the one that has been closest to me throughout the emotional roller coaster of these years. I know I have been difficult way too often, and I cannot find the words to express how grateful I am for your love, support and faith in me. You should know that this achievement is also yours, and I look forward with indescribable excitement to the next steps of our journey.

## Notation

**General conventions and units.** In this thesis we work with geometric units, that is
\[G=6.674\times 10^{-11}\,\mathrm{m}^{3}\,\mathrm{kg}^{-1}\,\mathrm{s}^{-2}=1\,\qquad c=2.997\times 10^{8}\,\mathrm{m}\,\mathrm{s}^{-1}=1\,\]
where \(G\) is Newton's gravitational constant and \(c\) is the speed of light in vacuum. We use the "mostly-plus" signature for the metric, so that the flat line element in Minkowski coordinates \(\{t,x,y,z\}\) takes the form
\[\mathrm{d}s^{2}=-\mathrm{d}t^{2}+\mathrm{d}x^{2}+\mathrm{d}y^{2}+\mathrm{d}z^{2}\.\]
As such, time-like four vectors have negative norm.

Throughout this document we mainly work using explicit components notation for tensors. Space-time indices are denoted by "early" Latin characters \(a,b,c,\dots\) and take values from \(0\) to \(3\), while spatial indices are denoted with "late" Latin characters \(i,j,k,\dots\) and take values from \(1\) to \(3\). We reserve the Latin characters \(\mathrm{x},\mathrm{y},\mathrm{z}\) to denote different chemical species/components. In a multi-component system made of neutrons (n) and protons (p), for example, the chemical index takes values \(\mathrm{x}=\mathrm{n},\mathrm{p}\). Capital Latin characters \(A,B,C,\dots\) will be used to indicate coordinates on the matter spaces only. Finally, we distinguish indices with respect to a coordinate basis from those relative to an orthonormal basis or tetrad. The latter are denoted with an additional "hat" symbol on top, such as \(v^{\hat{a}}\).

We make use of the Einstein summation convention, where repeated indices (one contravariant or "up" and one covariant or "down") imply summation. For example
\[v_{a}v^{a}=\sum_{a=0}^{3}v_{a}v^{a}\,\]
where \(v^{a}\) is an arbitrary four vector. Note that the Einstein summation convention does not apply to the chemical indices \(\mathrm{x},\mathrm{y},\mathrm{z}\). Similarly, we will not distinguish between "up or down" chemical indices.

Indices enclosed in round or square brackets denote, respectively, symmetrization and anti-symmetrization. If one (or more) of the indices within such brackets has "straight lines" to its left and right, the (anti-)symmetrization does not apply to it. For example, given a generic tensor \(A^{abc}\)
\[A^{a(bc)}=\frac{1}{2}\left(A^{abc}+A^{acb}\right)\,\quad A^{a[bc]}=\frac{1}{2}\left(A^{abc}-A^{acb}\right)\,\quad A^{(a|b|c)}=\frac{1}{2}\left(A^{abc}+A^{cba}\right)\.\]
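As a quick sanity check of these conventions (a minimal numpy sketch of ours, not part of the thesis itself), one can verify numerically that time-like vectors have negative norm in the mostly-plus signature, and that the symmetrized and anti-symmetrized pieces recombine into the original tensor:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # mostly-plus Minkowski metric
u = np.array([2.0, 1.0, 0.5, -0.3])       # a time-like vector (time component dominates)
print(u @ eta @ u)                        # negative norm, as per the convention above

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4, 4))            # a generic tensor A^{abc}
A_sym = 0.5*(A + A.transpose(0, 2, 1))    # A^{a(bc)}
A_asym = 0.5*(A - A.transpose(0, 2, 1))   # A^{a[bc]}
assert np.allclose(A, A_sym + A_asym)     # the two pieces recombine exactly
```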
**Derivatives and forms.** Throughout this document we make use of different notions of derivatives of a tensor. As usual, partial and covariant derivatives are denoted with \(\partial\) and \(\nabla\) respectively. Working in a coordinate basis, the tangent space is spanned by the basis vectors
\[\partial_{a}=\frac{\partial}{\partial x^{a}}\,\]
and the covariant derivative of a generic tensor \(T^{ab\dots}{}_{cd\dots}\) takes the form
\[\nabla_{e}T^{ab\dots}{}_{cd\dots}=\partial_{e}T^{ab\dots}{}_{cd\dots}+\Gamma^{a}_{\ ef}T^{fb\dots}{}_{cd\dots}+\Gamma^{b}_{\ ef}T^{af\dots}{}_{cd\dots}+\dots-\Gamma^{f}_{\ ec}T^{ab\dots}{}_{fd\dots}-\Gamma^{f}_{\ ed}T^{ab\dots}{}_{cf\dots}-\dots\.\]
Clearly, this reduces to the partial derivative for scalar quantities. The connection coefficients \(\Gamma^{a}_{\ bc}\) are taken to be the Christoffel symbols and are determined by the metric \(g_{ab}\) and its first derivatives as
\[\Gamma^{a}_{\ bc}=\frac{1}{2}g^{ad}\left(\partial_{b}g_{dc}+\partial_{c}g_{bd}-\partial_{d}g_{bc}\right)\.\]
This means that, in particular, the connection is compatible with the metric (\(\nabla_{a}g_{bc}=0\)) and symmetric, i.e. torsion free (\(\Gamma^{a}_{\ bc}=\Gamma^{a}_{\ (bc)}\)). An orthonormal basis \(e_{\hat{a}}\) is linked to a coordinate one through the "matrix" \(e^{\hat{a}}{}_{b}\) (or its inverse \(e_{\hat{a}}{}^{b}\)) so that
\[\partial_{a}=e^{\hat{b}}{}_{a}\,e_{\hat{b}}\.\]
The covariant derivative of a tensor \(T^{\hat{a}\hat{b}\dots}{}_{\hat{c}\hat{d}\dots}\) in a tetrad basis takes the form
\[\nabla_{\hat{e}}T^{\hat{a}\hat{b}\dots}{}_{\hat{c}\hat{d}\dots}=\partial_{\hat{e}}T^{\hat{a}\hat{b}\dots}{}_{\hat{c}\hat{d}\dots}+\omega_{\hat{e}}{}^{\hat{a}}{}_{\hat{f}}T^{\hat{f}\hat{b}\dots}{}_{\hat{c}\hat{d}\dots}+\omega_{\hat{e}}{}^{\hat{b}}{}_{\hat{f}}T^{\hat{a}\hat{f}\dots}{}_{\hat{c}\hat{d}\dots}+\dots-\omega_{\hat{e}}{}^{\hat{f}}{}_{\hat{c}}T^{\hat{a}\hat{b}\dots}{}_{\hat{f}\hat{d}\dots}-\omega_{\hat{e}}{}^{\hat{f}}{}_{\hat{d}}T^{\hat{a}\hat{b}\dots}{}_{\hat{c}\hat{f}\dots}-\dots\,\]
where the spin coefficients \(\omega_{a}{}^{\hat{b}}{}_{\hat{c}}\) can be obtained from the connection coefficients via
\[\omega_{a}{}^{\hat{b}}{}_{\hat{c}}=e^{\hat{b}}{}_{c}\,\Gamma^{c}_{\ ad}\,e_{\hat{c}}{}^{d}-e_{\hat{c}}{}^{d}\,\partial_{a}e^{\hat{b}}{}_{d}\.\]
We also make use of the notion of Lie derivative. Given a tensor \(T^{ab\dots}{}_{cd\dots}\) and a vector \(v^{a}\), the Lie derivative of \(T^{ab\dots}{}_{cd\dots}\) along \(v^{a}\) is given by
\[\mathcal{L}_{v}T^{ab\dots}{}_{cd\dots}=v^{e}\partial_{e}T^{ab\dots}{}_{cd\dots}-(\partial_{e}v^{a})T^{eb\dots}{}_{cd\dots}-(\partial_{e}v^{b})T^{ae\dots}{}_{cd\dots}-\dots+(\partial_{c}v^{e})T^{ab\dots}{}_{ed\dots}+(\partial_{d}v^{e})T^{ab\dots}{}_{ce\dots}+\dots\.\]
Whilst the Lie derivative does not require a metric or a connection to be defined, it is often convenient to rewrite the expression above in a form that is manifestly covariant. This can be obtained by substituting \(\partial\to\nabla\) in the expression above, provided the connection is symmetric (the case of interest here, as we work with Christoffel symbols).
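To make the Christoffel formula above concrete, here is a minimal sympy sketch (purely illustrative, and not part of the thesis itself) that evaluates \(\Gamma^{a}_{\ bc}\) directly from the definition, using the unit two-sphere as a simple example metric:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.diag(1, sp.sin(th)**2)   # example metric: unit two-sphere, ds^2 = dtheta^2 + sin^2(theta) dphi^2
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{bd} - d_d g_{bc})
    return sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, c], x[b])
                                       + sp.diff(g[b, d], x[c])
                                       - sp.diff(g[b, c], x[d]))/2 for d in range(2)))

for a in range(2):
    for b in range(2):
        for c in range(2):
            G = christoffel(a, b, c)
            if G != 0:
                print(f"Gamma^{x[a]}_{{{x[b]},{x[c]}}} = {G}")
# Non-zero components: Gamma^theta_{phi,phi} = -sin(theta)*cos(theta) and
# Gamma^phi_{theta,phi} = Gamma^phi_{phi,theta} = cos(theta)/sin(theta)
```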
We also make use of differential (p-)forms, namely covariant tensors that are totally anti-symmetric. Given a p-form \(n\) we write it in components notation using the "natural basis"
\[n=\frac{1}{p!}n_{i_{1}\dots i_{p}}dx^{i_{1}}\wedge\dots\wedge dx^{i_{p}}\,\]
where \(\wedge\) is the exterior product and \(n_{i_{1}\dots i_{p}}\) is totally anti-symmetric in its indices. The exterior derivative \(d\) is an operation that takes a p-form as input and outputs a (p+1)-form. In components notation this is defined as
\[(dn)_{i_{1}\dots i_{p+1}}=(p+1)\partial_{[i_{1}}n_{i_{2}\dots i_{p+1}]}\.\]
In a similar fashion as for the Lie derivative, the exterior derivative of a p-form can be written in a manifestly covariant way by substituting \(\partial\to\nabla\) in the expression above, provided we work with a symmetric connection. The Hodge-dual operation, denoted by \(*\), takes a p-form as input and outputs a (n-p)-form, where \(n\) is the dimension of the manifold the p-form lives in. In component notation,
\[(*n)_{i_{1}\dots i_{q}}=\varepsilon_{i_{1}\dots i_{q}}{}^{j_{1}\dots j_{p}}n_{j_{1}\dots j_{p}}\,\quad q=n-p\]
where \(\varepsilon\) is the spacetime volume form, defined as
\[\varepsilon_{abcd}=\sqrt{-g}[a\,b\,c\,d]\,\quad\varepsilon_{ab}{}^{cd}=\varepsilon_{abef}g^{ce}g^{df}\]
with \([a\,b\,c\,d]\) denoting the Levi-Civita symbol.

**Riemann, Ricci and Einstein tensor.** The space-time curvature is encoded in the Riemann tensor, namely the space-time is flat provided the Riemann tensor vanishes. Given a generic co-vector (or 1-form) \(w_{a}\), the Riemann tensor is defined as
\[[\nabla_{a},\nabla_{b}]w_{c}=(\nabla_{a}\nabla_{b}-\nabla_{b}\nabla_{a})\,w_{c}=-R^{d}_{\ cab}w_{d}\.\]
In components notation,
\[R^{a}_{\ bcd}=\partial_{c}\Gamma^{a}_{\ bd}-\partial_{d}\Gamma^{a}_{\ bc}+\Gamma^{e}_{\ db}\Gamma^{a}_{\ ce}-\Gamma^{e}_{\ bc}\Gamma^{a}_{\ de}\.\]
The Ricci tensor, Ricci scalar and Einstein tensor are obtained from the Riemann tensor as
\[R_{ab}=R^{c}_{\ acb}\,\quad R=R^{a}_{\ a}\,\quad G_{ab}=R_{ab}-\frac{1}{2}Rg_{ab}\.\]

**Projections and velocity gradients decomposition.** Throughout this document we often use parallel and perpendicular projection operators with respect to an observer. Given a time-like unit vector \(u^{a}\), these are defined as
\[\parallel^{a}{}_{b}=-u^{a}u_{b}\,\quad\perp^{a}{}_{b}=g^{a}{}_{b}+u^{a}u_{b}\.\]
We also make frequent use of the observer four velocity gradients decomposition. This is given by
\[\nabla_{a}u_{b}=-a_{b}u_{a}+\omega_{ab}+\sigma_{ab}+\frac{1}{3}\theta\perp_{ab}\]
where the acceleration, vorticity, shear and expansion rate are defined (respectively) as
\[a^{b}=u^{a}\nabla_{a}u^{b}\,\qquad\omega_{ab}=\perp^{c}{}_{[a}\perp^{d}{}_{b]}\nabla_{c}u_{d}\,\qquad\sigma_{ab}=\perp^{c}{}_{(a}\perp^{d}{}_{b)}\nabla_{c}u_{d}-\frac{1}{3}\theta\perp_{ab}\,\qquad\theta=\nabla_{a}u^{a}=\perp^{a}{}_{b}\nabla_{a}u^{b}\.\]
Finally, the notion of Hodge-dual in the subspace orthogonal with respect to some observer \(u^{a}\) four velocity is frequently used in this work. This is defined in terms of the space-time volume form \(\varepsilon_{abcd}\) as
\[\varepsilon^{\rm u}_{abc}=\varepsilon_{dabc}u^{d}\.\]
Note that, for notational clarity, we will often drop the u-label and write the object as \(\varepsilon_{abc}\) instead. We do so whenever there is no risk of confusion. Any variation from (or addition to) the notational conventions discussed here will be made explicit throughout the document.
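As a closing numerical sanity check of these definitions (a small numpy sketch of ours in flat spacetime, not part of the thesis itself), one can verify that \(\parallel\) and \(\perp\) are complementary projectors and that the acceleration, vorticity, shear and expansion pieces reassemble the full velocity gradient:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])                 # mostly-plus Minkowski metric
rng = np.random.default_rng(1)

v3 = rng.normal(size=3)
u = np.concatenate(([np.sqrt(1 + v3 @ v3)], v3))     # u^a with u_a u^a = -1
u_dn = eta @ u                                       # u_a

par = -np.outer(u, u_dn)                             # parallel^a_b = -u^a u_b
perp = np.eye(4) + np.outer(u, u_dn)                 # perp^a_b = delta^a_b + u^a u_b
assert np.allclose(par + perp, np.eye(4))            # the two projectors are complementary
assert np.allclose(perp @ perp, perp)                # and idempotent
assert np.allclose(perp @ u, 0)                      # perp annihilates u^a

# A kinematically valid gradient B_{ab} = nabla_a u_b must satisfy B_{ab} u^b = 0
# (since u_a u^a is constant), so we project a random matrix on its second index.
B = rng.normal(size=(4, 4)) @ perp

a_dn = u @ B                                         # acceleration a_b = u^a nabla_a u_b
theta = np.einsum('ab,ab->', np.linalg.inv(eta), B)  # expansion theta = nabla_a u^a
Bt = perp.T @ B @ perp                               # fully projected gradient
omega = 0.5*(Bt - Bt.T)                              # vorticity omega_{ab}
perp_dn = eta + np.outer(u_dn, u_dn)                 # perp_{ab}
sigma = 0.5*(Bt + Bt.T) - (theta/3)*perp_dn          # trace-free symmetric shear

recon = -np.outer(u_dn, a_dn) + omega + sigma + (theta/3)*perp_dn
assert np.allclose(B, recon)                         # nabla_a u_b reassembles exactly
print("projector and velocity-gradient decomposition checks passed")
```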
## Chapter 1 Introduction

### 1.1 Motivation

Fluid behaviour is relevant across all macroscopic scales in the Universe, from the interior of biological cells and the cardiovascular apparatus all the way up to planetary systems, stars, galaxies and beyond. This, together with the fact that fluids are vital to human survival, is arguably the reason why hydrodynamics--namely the study and modelling of fluid behaviour--is one of the oldest research areas in physics. And yet, despite fluids having attracted the attention of numerous scientists and engineers over the years, hydrodynamic modelling still presents severe conceptual and computational challenges. This is demonstrated, for example, by the "Navier-Stokes existence and smoothness problem" Millennium Prize, and by the fact that turbulent flows--namely, the vast majority of real-life flows--are extremely costly to simulate. Such challenges are ever more formidable when we couple hydrodynamics to relativity, meaning that we consider fluids flowing at velocities commensurate to the speed of light and, at the same time, immersed in a strong gravitational field. The challenges added by the (successful) marriage of relativity and hydrodynamics are, not surprisingly, both conceptual and practical. To give one example, the most intuitive and natural extensions to relativity of the Navier-Stokes equations--further discussed in this thesis--are well-known to give rise to problems [115]. On the practical side, simulations of relativistic fluids are intimidating in their complexity and computational cost. As a small side-step before carrying on, let us state clearly that throughout this thesis we consider Einstein's General Relativity as the theory of gravity, even though we know that this cannot be the "ultimate theory". The indications of this are, in fact, both theoretical and observational. On the one hand we know General Relativity breaks down on the quantum scales. On the other hand, we may view the need of including mysterious (and predominant) dark energy/matter contents in our cosmological models--required to match observational evidence such as the accelerated expansion of the Universe--as an indication that something is missing in our understanding. Nonetheless we will here take a pragmatic/conservative approach and consider General Relativity as the "correct" theory of gravity. This is well-motivated considering that General Relativity has passed, over the years, weak and strong-field tests with flying colours (see, for example, the recent review by Will [226]). Given the tremendous challenges posed by the modelling of relativistic fluids, it is perfectly logical to wonder why we should care. The obvious reason is that there exist real systems whose description/modelling requires relativistic hydrodynamics. The most intriguing ones, possibly from a biased perspective, are neutron stars. Neutron stars form, quite dramatically, as the result of a dying star that was born sufficiently massive--namely with mass \(8M_{\odot}\lesssim M\lesssim 30M_{\odot}\), where \(M_{\odot}\) is the mass of our Sun [199, 186, 17]. Once it has exhausted its nuclear fuel reservoir, such a massive star undergoes an extremely powerful explosion, known as a "core-collapse supernova", that leaves behind a proto-neutron star. In essence, neutron stars are among the most exotic objects in the Universe: they contain roughly the same mass as the Sun squeezed into a 10 km radius sphere.
Their extreme compactness, second only to black holes, means that General Relativity is an absolute must for accurately modelling neutron stars' phenomenology. At the same time, the extreme densities reached inside neutron stars (a few times that of an atomic nucleus) make them ideal laboratories to probe otherwise inaccessible physical regimes. In particular, realistic modelling of neutron star phenomenology--and its "validation" through simulations and observations--allows us to set constraints on the elusive equation of state encoding matter properties at such extreme densities, and test our understanding of gravity in the strong field regime at the same time. When it comes to neutron star astrophysics, it is almost impossible not to mention the spectacular detection of \(17^{\rm th}\) August 2017--hence dubbed GW170817. For the first time, the Laser Interferometer Gravitational-wave Observatory (LIGO) and Virgo teams captured the faint gravitational wave signal emitted during a binary neutron star inspiral [2]. This detection came roughly two years after the very first gravitational wave detection accomplished by the LIGO team on \(14^{\rm th}\) September 2015 [1]. In this first remarkable event--known as GW150914--the gravitational waves were generated by two black holes inspiralling and merging, and travelled for about 1.3 billion years (at the speed of light) before reaching the detectors on Earth. Despite this earlier detection and the many more binary black hole merger detections that followed, GW170817 truly represents a milestone for astrophysics. The gravitational wave event was soon followed by the detection of a post-merger electromagnetic counterpart [3], thus marking the beginning of a new "multi-messenger era"2. Moreover, this event also confirmed the long-standing paradigm that much of the heavy elements in our Universe form precisely during such spectacular cosmic fireworks [123, 171].

Footnote 2: In truth, it was the FERMI gamma ray telescope that sent out the trigger alert to LIGO and other telescopes. A large noise glitch in one of the LIGO detectors at the same time as the real signal in fact prevented LIGO from sending out the alert first--as the gravitational wave signal arrived in the detectors roughly two seconds before the gamma ray burst detection. However, after having cleaned the data, LIGO confirmed the detection in gravitational waves of the first binary neutron star merger.

Whilst it is undeniable that GW170817 represents an astonishing achievement, it has also demonstrated the range of exciting features we can explore with neutron star mergers [36, 185, 149, 189, 203]. Neutron star mergers provide an opportunity to explore many extremes of physics, from the state of matter beyond the nuclear saturation density (the elusive equation of state [164, 80, 218, 135]) to the formation of a long-lived merger remnant (likely a black hole [201, 45, 30]). Matter outflows and the associated rapid nuclear reactions determine the (hopefully) observable kilonova signature [152], and the twisting of the stars' magnetic field may help collimate an emerging jet and hence explain observed short gamma-ray bursts [167, 64].
Current gravitational-wave facilities, however, are only sensitive to the signal produced in the inspiral phase (during which the orbit shrinks due to energy lost in gravitational waves until the neutron stars touch and merge), even though much of the interesting dynamics happens at merger and during the post-merger phase (at the end of which the system eventually settles down to either a black hole or neutron star). The violent merger dynamics is affected by the details of the transport properties of matter at such extreme densities and temperatures [12], and hence crucial to set tight constraints on the still-baffling equation of state. For this reason, new and more sensitive facilities are currently at the planning stage--such as the Einstein Telescope [146] and the Cosmic Explorer [182, 76] in the context of gravitational wave detectors--and expected to come online in the 2030s. With more sensitive instruments coming on-stream in the future, higher precision observations are anticipated and one may hope to extract more detailed information about the physics involved. These extremely exciting prospects constitute the motivation for the work presented in this thesis.

### 1.2 Outline

As present (and future) binary neutron star merger detections offer a wealth of opportunities to explore several extremes of physics, it is important to have an understanding of the different ingredients required to fully realize this potential--this will also provide further context for the results discussed within this thesis. We obviously need sensitive detectors/telescopes (of all sorts) and, for the purpose of this thesis, it will suffice to say that efforts in this direction are under way. We obviously also need sophisticated tools to analyze the data, and extract useful physics information from a merger detection. Focusing on the gravitational aspects, for example, we need to contrast the gravitational wave signal to waveform templates. To construct the templates, we need accurate theoretical modelling of the entire merger process, from inspiral to ringdown [69]. Analytical models (based on the so-called Post-Newtonian expansion [172]) can accurately describe the inspiral phase, but they prove inadequate as the orbital separation of the two neutron stars (or black holes) approaches a few times their radii. To model the actual merger and post-merger phase we must rely on numerical simulations [184, 200]. This means that, as mergers are highly dynamical events, the full suite of Einstein equations has to be solved together with a suitable description of matter. Current simulations tend to involve an evolution of the ideal fluid equations, possibly augmented with a relatively simple scheme (or closure) to account for electromagnetic effects and (neutrino) radiation. Although these models already provide a formidable computational task, it is easy to argue that an accurate description of the physics involved in a merger would require even more complicated theoretical models. This is ever more true given the expected advancements in detector technology and sensitivity. When it comes to neutron stars, it can be easily argued that a fluid model capable of accounting for all the relevant physics should involve at least four distinct components [21]. We expect, in fact, to find superfluid neutrons and superconducting protons as we go deep enough into the crust and core of a neutron star [103, 62], meaning that neutrons and protons flow independently from each other.
We also expect a flow of electrons, coupled to the protons due to electromagnetic interactions, and finally a heat flux/entropy flow. This means that sophisticated multi-fluid models constitute an important part of the story, and a substantial part of this thesis is dedicated to discussing them. Moreover, as the violent dynamics of a merger is expected to push matter out of (local) thermodynamic equilibrium [104, 105], we also need to include dissipation in the models we would like to simulate. At the same time, however, while there is no doubt that multi-fluid models are crucial for modelling the phenomenology of neutron stars, the question arises as to whether, say, superconductivity in the core will impact merger and post-merger dynamics. We may expect, for example, that the heating generated by the two neutron stars smashing against each other is going to melt the crust and push matter above the relevant critical temperature. In this respect, however, current simulations do not provide a definite answer, and some suggest that parts of the core could remain cold enough for superfluidity/superconductivity to play a role [104]. Obviously, the presence of superfluid/superconducting phases would change the underlying dissipative mechanisms. In essence, given the expected sensitivity of future detectors, we are urged to check whether a more realistic modelling of neutron star matter can leave a measurable imprint in the detected signals. In developing such complicated theoretical models, however, we need to bear in mind the extreme computational costs of simulating them, forcing us to consider simplifications whenever possible. The limitations associated with actual numerical implementations become ever more crucial if we consider that we expect turbulence to develop in mergers due to known fluid instabilities [138, 151]. In fact, the modelling of turbulence is an extremely complicated and subtle business already at the Newtonian level. In essence, we would like to make sure that the underlying physical features are faithfully represented by our theoretical models and, at the same time, make sure that we "get the physics right" in our numerical simulations. This is clearly a very challenging issue. As a final point, it is also crucial to keep in mind that we need to make sure that the physics we hope to explore is cleanly associated with particular observational features. In this thesis we will discuss recent theoretical advancements in (electromagnetic) multi-fluid modelling in General Relativity, motivated by applications to neutron star astrophysics and neutron star mergers in particular. The introduction so far already suggests that there are a number of interconnected aspects to be kept in mind, from dissipation to turbulence. Given the complexity of the systems we aim to model, it is natural to consider each of these different aspects "one at a time". As such, the thesis is divided into three different parts, each of which is somewhat self-contained and could be read almost independently from the rest. Nonetheless, the discussions provided in the different parts complement each other to form a coherent story. The first part of this thesis is devoted to the modelling of dissipative multi-fluids. Modelling dissipation in relativity is challenging already for single fluid models, and has baffled physicists for quite some time. Several ideas and prescriptions are currently on the market, and some of them are fairly recent.
We will therefore start the first part by reviewing in chapter 2 the different modelling strategies. The discussion will highlight, in particular, that most of the present strategies do not seem to allow for an "easy" extension to multi-fluid models, which is presumably required for neutron stars. We will then continue in chapter 3 focusing on the (arguably) unique strategy that is clearly suited for this. In particular, we will study the regime close to thermodynamic equilibrium of an action-based variational model for dissipative multi-fluids. The second part of this thesis is devoted to discussing some of the issues that arise when modelling turbulence in relativity. Turbulent models in Newtonian hydrodynamics often involve some notion of filtering/averaging, and such strategies are becoming fashionable also in the relativity community, with impressive successes [169, 5]. However, whilst there has been significant numerical effort going into extending the Newtonian logic to relativity, the foundational underpinnings for these strategies are not as well explored as one may like. As such, after a brief introduction to hydrodynamic turbulence, we will focus in chapter 4 on discussing the foundational issues that arise in extending the Newtonian logic to the relativistic setting. This brings us to propose a novel scheme for carrying out filtering for relativistic fluids. Whilst the analysis in chapter 4 focuses on single-fluid models, in the following chapter 5 we will discuss the first steps towards extending such a scheme to multi-fluids. Even though at first sight this second part may seem relatively independent from the previous one, it is rather evident that the two are linked. A first indication of this comes from the well-known fact that energy and momentum transport are enhanced in turbulent flows. Furthermore, filtering an ideal fluid model--that is, a model that does not account for dissipative effects--inevitably introduces additional terms in the equations that are akin to "effective dissipative terms", as we demonstrate in chapter 4. While the first two parts of this thesis are rather theoretical--although the discussion in part II brings to the fore the key role played by computational limitations--we will continue in part III by considering two relevant binary neutron star merger applications. In particular, in chapter 6 we will focus on modelling reaction-sourced bulk viscosity for neutron star simulations--as reactions are expected to source the dominant dissipative mechanism in mergers. The topic is relatively well-explored from a "theoretical" perspective [11, 9, 10, 6, 8]--but still numerically challenging--so that our analysis aims to establish how the inevitable "limitations" of a numerical simulation (in terms of resolution) enter the discussion. In particular, we also explore the link to (or conflict with) strategies for dealing with turbulent flows. We then continue our journey focusing in chapter 7 on the magneto-rotational instability (MRI), which is considered a key mechanism for developing/sustaining turbulence in the outer envelopes of merger remnants. Crucially, the analysis is framed in a way suited for highly dynamical environments such as mergers. We provide our concluding remarks and comment on future work in part IV. Supplemental material is provided in the appendices. In appendix A we discuss Fermi coordinates, as these play a key role in chapter 4.
Appendix B provides additional information about the multi-scale methods used in chapter 6, while in appendix C we make explicit contact with quantities that can be computed from standard equation of state tables as collected, for example, in the CompOSE database [210]. In appendix D we link the analysis of chapter 7 to the usual MRI results/criteria. Finally, in appendix E we discuss the Routh-Hurwitz criterion, which is used in both chapters 4 and 7.

### 1.3 Relativistic perfect fluids

As an appetizer before we begin the main part of our journey, let us cover some background material that is relevant to all the following parts. Quite naturally, dissipative fluid models build on the notion of perfect fluids, namely fluids where dissipative effects can be neglected. As for turbulence, we expect it to develop in fluid flows where inertial effects prevail over viscous/dissipative ones. As such, we need to understand how to model (relativistic) ideal fluids before we can meaningfully start talking about dissipation and turbulence. Excellent introductions to relativistic ideal fluids can be found in several textbooks/reviews--such as, for example, [134, 224, 155, 81, 184, 21]--and the material is often covered in introductions to General Relativity. Nonetheless, we take this as an opportunity to set the stage for the analysis presented in the following chapters. Let us begin by writing down the Einstein field equations (in geometric units)
\[G^{ab}=8\pi T^{ab}\, \tag{1.1}\]
where \(G^{ab}\) is the Einstein tensor associated with the spacetime metric, while \(T^{ab}\) is the stress-energy-momentum tensor associated with the matter content within the spacetime. In essence, these equations prescribe how the matter/energy content curves the spacetime and, in turn, the spacetime metric/curvature dictates how (test, freely-falling) particles move in spacetime. An important property of the Einstein tensor is that it is divergence free, meaning that
\[\nabla_{a}G^{ab}=0\, \tag{1.2}\]
which follows from its definition and the second Bianchi identity satisfied by the Riemann tensor. This means that, as a consequence of the Einstein field equations, we also have
\[\nabla_{a}T^{ab}=0\, \tag{1.3}\]
namely, energy and momentum are locally conserved. From a field-theory perspective, the conservation of the stress-energy-momentum tensor is associated with diffeomorphism invariance--and hence is analogous to conservation laws obeyed by the Noether currents whenever the field theory has some underlying symmetry [144, 225]. Whilst the distinction between equations of motion and conserved Noether currents becomes important in the context of multi-fluid modelling--and we will come back to stress this issue--we here follow the "tradition" and refer to eq. (1.3) as the equations of motion of hydrodynamics. It is immediately clear, however, that in order for eq. (1.3) to yield equations we can work with (or simulate), we need to specify the stress-energy-momentum tensor--otherwise our fluid model is somewhat "empty".
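Before we specify the matter sector, it may help to see the curvature objects of eq. (1.1) in action. The following self-contained sympy sketch (an illustration of ours, with the Schwarzschild metric chosen purely as a familiar example) builds the Christoffel symbols and the Ricci tensor from a metric, confirming that the vacuum field equations \(R_{ab}=0\) are satisfied:

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # Schwarzschild metric, mostly-plus signature
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} computed from the metric
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, c], x[b]) + sp.diff(g[b, d], x[c])
                                     - sp.diff(g[b, c], x[d]))/2 for d in range(4)))
         for c in range(4)] for b in range(4)] for a in range(4)]

# Riemann tensor R^a_{bcd}, then Ricci tensor R_{bd} = R^a_{bad}
def Riem(a, b, c, d):
    out = sp.diff(Gam[a][b][d], x[c]) - sp.diff(Gam[a][b][c], x[d])
    out += sum(Gam[e][b][d]*Gam[a][c][e] - Gam[e][b][c]*Gam[a][d][e] for e in range(4))
    return out

Ricci = sp.Matrix(4, 4, lambda b, d: sp.simplify(sum(Riem(a, b, a, d) for a in range(4))))
print(Ricci)  # the zero matrix: Schwarzschild solves the vacuum Einstein equations
```

Contracting further gives the Ricci scalar and the Einstein tensor of eq. (1.1), which likewise vanish here.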
To specify the stress-energy-momentum tensor, we first note that it can be decomposed--just like any other tensor--into parts that are, algebraically, parallel or orthogonal with respect to some observer \(U^{a}\). In particular, we can decompose the stress-energy-momentum tensor as
\[T^{ab}=\mathcal{E}U^{a}U^{b}+2\mathcal{Q}^{(a}U^{b)}+\mathcal{S}^{ab}\, \tag{1.4}\]
where
\[\mathcal{E}=T^{ab}U_{a}U_{b}\,\quad\mathcal{Q}^{a}=-\perp^{a}{}_{b}T^{bc}U_{c}\,\quad\mathcal{S}^{ab}=\perp^{ac}\perp^{bd}T_{cd}\. \tag{1.5}\]
The different terms are then interpreted as follows: \(\mathcal{E}\) is the energy density measured by the observer, \(\mathcal{Q}^{a}\) is the spatial momentum flux measured by the observer (and, by definition, satisfies \(\mathcal{Q}^{a}U_{a}=0\)) while \(\mathcal{S}^{ab}\) encodes the spatial stresses (and is obviously a symmetric tensor). Whilst this decomposition is useful, it is just an algebraic decomposition. As such, we need to do some more work. In order to arrive at the perfect fluid equations we now introduce the concept of a locally co-moving observer, which we denote with \(u^{a}\) in order to distinguish it from the generic one introduced above. We assume that the particles constituting the fluid collide sufficiently often (using a classical mechanics language) that we can meaningfully talk about a mean velocity field associated with the particles' collective motion. The co-moving observer is then defined as the observer that moves with the mean flow. Next, we further assume that the frequent collisions are random in nature, so that an observer moving with the mean flow would observe an isotropic distribution of particles. This then brings us to consider the most generic stress-energy-momentum tensor that is consistent with such a notion of isotropy. In essence, we will require every quantity in the stress-energy-momentum tensor decomposition to be invariant with respect to spatial rotations (spatial with respect to the co-moving observer \(u^{a}\)). We can rephrase the last assumption in a more formal way, by saying that we retain only those parts of \(T^{ab}\)--which belongs to a rank-two tensorial representation of the Lorentz group \(SO(3,1)\)--that transform as scalars under the rotation group \(SO(3)\). As the rotation group is a sub-group of the Lorentz one, the decomposition of different representations of the Lorentz group with respect to the rotation group is well known. In particular, a symmetric rank-two (Lorentz) tensor is made up of two scalars, one vector, and a spin-2 part with respect to the rotation group. The assumption of isotropy means that we take the vectorial and spin-2 parts to vanish. However, let us here leave the formalities aside and proceed with a more intuitive discussion. We begin with the energy density, which is obviously a scalar under the Lorentz group. As such, the energy density is also a scalar with respect to rotations and we can keep it. Next, we observe that the momentum flux is a spatial vector living in the subspace orthogonal to the observer four-vector \(u^{a}\). This means that a non-vanishing momentum flux vector would identify a preferred direction in the subspace orthogonal to \(u^{a}\). In order to not break isotropy, then, we need to assume that it vanishes.
Finally, we focus on the spatial stresses and note these can be further decomposed as
\[\mathcal{S}^{ab}=\frac{1}{3}\mathcal{S}\perp^{ab}+\mathcal{S}^{\langle ab\rangle}\,\quad\text{where }\mathcal{S}=\mathcal{S}^{a}{}_{a}\,\ \mathcal{S}^{\langle ab\rangle}=\mathcal{S}^{ab}-\frac{1}{3}\perp^{ab}\mathcal{S}\. \tag{1.6}\]
In short, \(\mathcal{S}\) is the trace and \(\mathcal{S}^{\langle ab\rangle}\) is the (symmetric) trace-free part of the stress tensor. The trace is a Lorentz scalar, so that the same logic as for the energy density applies. On the other hand, the trace-free part describes anisotropic stresses and hence must vanish in isotropic fluids. We conclude that the stress-energy-momentum tensor of an ideal fluid takes the form
\[T^{ab}=\varepsilon u^{a}u^{b}+p(g^{ab}+u^{a}u^{b})\. \tag{1.7}\]
Note that we have here re-labelled the energy density and isotropic stresses as \(\varepsilon\) and \(p\) (respectively), consistent with the standard notation. The isotropic stresses are then identified with the (equilibrium) thermodynamic pressure, usually given in the form of an equation of state (either in analytical form or provided as a table of data). Since we now have a prescription for the stress-energy-momentum tensor, we can use it to make the energy-momentum conservation law more explicit. Working out the parallel and orthogonal projections of eq. (1.3) we obtain6
\[u^{a}\nabla_{a}\varepsilon+(\varepsilon+p)\nabla_{a}u^{a}=0\, \tag{1.8a}\]
\[(p+\varepsilon)a^{b}+\perp^{bc}\nabla_{c}p=0\. \tag{1.8b}\]

Footnote 6: It should be obvious by now that we here mean parallel and orthogonal with respect to the comoving observer.

The first relation is, intuitively, the familiar energy density balance law. It says, in fact, that the time derivative of the energy density is proportional to the expansion rate of the fluid (with a negative sign). If the fluid undergoes compression (negative expansion rate), the energy density will increase because i) the same energy is now stored in a smaller volume, and ii) the fluid element does "\(p\,\mathrm{d}V\)"-work on its surroundings. The second equation is instead the relativistic Euler equation: it says that the fluid is accelerated in such a way as to minimize pressure gradients, where the pre-factor in front of the four-acceleration \(a^{b}\) accounts for pressure contributions to the total mass-energy of the fluid. We can think of the two equations in (1.8) as evolution equations for the energy density and the four velocity, respectively. The system would then be closed provided we have an equation of state of the form \(p(\varepsilon)\). Whilst this is a valid equation of state for, say, a fluid made of radiation, we would normally work with a two-parameter (or more) equation of state. For example, we can think of it as a function \(p=p(n,\varepsilon)\), where \(n\) is the particle number density8. This means that we need an additional evolution equation for \(n\). As the particle number density is typically associated with baryons, we are then led to write a continuity equation
\[\nabla_{a}(nu^{a})=0\Longrightarrow u^{a}\nabla_{a}n+n\nabla_{a}u^{a}=0\. \tag{1.9}\]

Footnote 8: In non-relativistic fluids, the pressure is often given as a function of the mass density instead.

Note that the particle number density continuity equation plays the role of the mass continuity equation in non-relativistic fluids (see, e.g. [199]). Taken together, eqs. (1.8) and (1.9) form a closed system of equations9.
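As a quick numerical cross-check of the decomposition logic above (a small numpy sketch with made-up numbers, not part of the thesis itself), one can build the ideal-fluid stress-energy-momentum tensor of eq. (1.7) and verify that the comoving observer measures energy density \(\varepsilon\), vanishing momentum flux, and isotropic stresses with trace \(3p\):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])             # mostly-plus Minkowski metric
eps, p = 2.5, 0.4                                # illustrative energy density and pressure
v3 = np.array([0.3, -0.1, 0.2])
u = np.concatenate(([np.sqrt(1 + v3 @ v3)], v3)) # u^a, normalized so u_a u^a = -1
u_dn = eta @ u

perp_up = np.linalg.inv(eta) + np.outer(u, u)    # perp^{ab} = g^{ab} + u^a u^b
T = eps*np.outer(u, u) + p*perp_up               # eq. (1.7)

perp_mix = np.eye(4) + np.outer(u, u_dn)         # perp^a_b
E = np.einsum('ab,a,b->', T, u_dn, u_dn)         # energy density, eq. (1.5)
Q = -perp_mix @ (T @ u_dn)                       # momentum flux, eq. (1.5)
S = perp_mix @ T @ perp_mix.T                    # spatial stresses S^{ab}

assert np.isclose(E, eps)                        # comoving energy density is epsilon
assert np.allclose(Q, 0)                         # no momentum flux: the fluid is "perfect"
assert np.isclose(np.einsum('ab,ab->', eta, S), 3*p)  # isotropic stresses with trace 3p
print("ideal-fluid decomposition verified")
```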
Footnote 9: We also note that more work is required to write the same equations in a form suitable for numerical implementations [184, 200, 100]. However, this would require us to cover additional background, which would be beyond the scope of this thesis anyway.

We conclude this introduction to relativistic perfect fluids by showing why this model truly describes an ideal fluid. To do so, however, we need to talk a little bit about (equilibrium) thermodynamics. Let us first of all consider the first law of thermodynamics written in terms of quantities per unit volume (see, for example, [21])
\[\mathrm{d}\varepsilon=T\mathrm{d}s+\mu\mathrm{d}n\,\quad T=\left(\frac{\partial\varepsilon}{\partial s}\right)_{n}\,\qquad\mu=\left(\frac{\partial\varepsilon}{\partial n}\right)_{s}\, \tag{1.10}\]
where \(\mu\), \(T\) are the chemical potential and temperature, while \(s\) is the entropy density--and \(n\), \(\varepsilon\) are the particle number and energy densities as before. Similarly, the "Euler relation" is
\[p+\varepsilon=Ts+\mu n\. \tag{1.11}\]
In essence, what this shows is that we can think of the equation of state as some relation that allows us to express the various thermodynamic quantities in terms of our two favourite ones. Quite naturally from a theory perspective, we are then led to choose these as the energy and particle number densities. In particular, we can think of the entropy density as \(s=s(n,\varepsilon)\). As such, we can derive an evolution equation for the entropy density starting from the relevant equations for the energy and particle densities. It is then easy to show that
\[T\nabla_{a}(su^{a})=Tu^{a}\nabla_{a}s+Ts\nabla_{a}u^{a}=u^{a}\nabla_{a}\varepsilon-\mu u^{a}\nabla_{a}n+\left(p+\varepsilon-\mu n\right)\nabla_{a}u^{a}=\left(u^{a}\nabla_{a}\varepsilon+(p+\varepsilon)\nabla_{a}u^{a}\right)-\mu\left(u^{a}\nabla_{a}n+n\nabla_{a}u^{a}\right)=0\, \tag{1.12}\]
where we have used the thermodynamic relations above as well as the continuity equations for the energy and particle number densities. In essence, this equation says that the entropy current \(su^{a}\) satisfies a continuity equation as well. As thermodynamics dictates that dissipation is associated with an increase in entropy, the model truly represents an ideal fluid.

**Part I**

**Dissipative (multi-)fluids in General Relativity**

## Chapter 2 Dissipative fluid models in General Relativity: an overview

The first part of this thesis focuses on modelling dissipative fluids in (general) relativity, which is a thorny issue that has kept physicists busy for quite some time. The covariant nature of General Relativity highlights the central role played by the reference frame used to describe a physical system. At the same time, the evolution of dissipative fluids must be consistent with thermodynamic principles and the arrow of time associated with the second law. As such, it is clear that the marriage of General Relativity and thermodynamics poses interesting foundational questions, so that it should not come as a surprise that a number of authors have made significant contributions over the years1.

Footnote 1: Since the list of authors is indeed very long (and growing), we find it difficult to do justice to all of them here. Instead, we will acknowledge as we go along the most important contributions (for our purposes) made by the various authors, and point to [184, 21, 188, 42] for further references.

Here we present an overview of the most important developments and results obtained over the past seventy years.
The aim is not to draw a complete picture but only to introduce the main ideas and set the stage for what comes next. We will, in fact, continue in chapter 3 by focusing on the action-based dissipative framework of Andersson and Comer [20]. We do so as this is the only framework currently on the market that is suited for describing multi-fluid systems.

### 2.1 Non-equilibrium thermodynamics

As hydrodynamics is a macroscopic theory, fluid models must be complemented with some information about the microphysics of the system they aim to describe. For the ideal fluid case, for instance, such information may be the chemical potentials of the fluid constituents. These can be obtained within the equilibrium thermodynamics framework. However, if we move to the non-ideal case, new effects like viscosity or heat conductivity start to play a role in the macroscopic dynamics. To take these effects into account, one needs to provide hydrodynamic models with new information, typically in the form of transport coefficients. As these describe out-of-equilibrium processes, we start with a short discussion of non-equilibrium thermodynamics. Note that the analysis in this section is Newtonian. We will discuss how to re-phrase the same ideas in a relativistic setting later.

#### Linear irreversible thermodynamics

Classical equilibrium thermodynamics is a macroscopic theory that aims at describing the observed properties of a many-particle system--at thermodynamic equilibrium--in terms of a finite set of macroscopic variables (like energy \(E\), volume \(V\) and particle number \(N\)). It does not describe, however, the evolution of a system towards such equilibrium states, nor the damping of statistical fluctuations around it. These kinds of phenomena are the realm of irreversible thermodynamics. A first fundamental result in the study of irreversible processes is due to the pioneering work of Onsager [166], and it is now customary to refer to it as "Linear Irreversible Thermodynamics" (LIT). To sketch the main ideas of LIT, let us consider the entropy of a system out of (but close to) equilibrium, assuming that the entropy \(S\) is a function of some state variables \(A_{1},\ A_{2},\ A_{3}\dots\) The key assumption of LIT is that the number of variables necessary to specify the out-of-equilibrium state of the system is the same as in equilibrium. Then we can write2 Footnote 2: In this section we will use the Greek letters \(\alpha,\ \beta\dots\) to label the dissipative processes that lead to some entropy production. The Einstein summation convention does not apply to them. \[S=S_{\rm eq}-\frac{1}{2}\sum_{\beta,\gamma}G_{\beta\gamma}\alpha_{\beta} \alpha_{\gamma}\, \tag{2.1}\] where \(\alpha_{\beta}\) is the deviation from the equilibrium value of the corresponding variable \(A_{\beta}\). If we then assume the _Onsager regression hypothesis_--which states that spontaneous fluctuations of the system decay with the same evolution law as the perturbations caused by external forces--we can write \[\frac{d\alpha_{\beta}}{dt}=-\sum_{\gamma}M_{\beta\gamma}\alpha_{\gamma}\, \tag{2.2}\] where the matrix \(M\) describes the decay of spontaneous fluctuations. The entropy production in turn is3 Footnote 3: Let us note for clarity that the thermodynamic quantities in this section are global and refer to the entire system under consideration. As a consequence, all equilibrium quantities are constant in time. This applies also to \(G_{\beta\gamma}\) defined in eq. (2.1), which is evaluated at equilibrium.
\[\frac{dS}{dt}=-\sum_{\beta,\gamma}G_{\beta\gamma}\alpha_{\beta}\frac{d\alpha_{ \gamma}}{dt}=\sum_{\gamma}J_{\gamma}X_{\gamma}\, \tag{2.3}\] where we have introduced the _thermodynamic forces_ \(X_{\gamma}=-\sum_{\beta}G_{\beta\gamma}\alpha_{\beta}\) and _thermodynamic fluxes_ \(J_{\gamma}=d\alpha_{\gamma}/dt\). By means of eq. (2.2) we can rewrite the entropy production rate as \[\Gamma_{s}=\frac{dS}{dt}=\sum_{\beta\gamma}L_{\beta\gamma}X_{\beta}X_{\gamma}\, \tag{2.4}\] where the matrix \(\mathbf{L}=\mathbf{M}\cdot\mathbf{G}^{-1}\). The second law of thermodynamics then implies that the symmetric part of \(\mathbf{L}\) must be a positive semi-definite matrix. While in his original discussion Onsager only considered thermodynamic variables that are even under time reversal, Casimir later extended the analysis to include also odd variables (e.g. the magnetic field) [56], demonstrating the so-called Onsager-Casimir relations: \[L_{\beta\gamma}=\varepsilon_{\beta}\varepsilon_{\gamma}L_{\gamma\beta}\, \tag{2.5}\] where \(\varepsilon_{\beta}\) is the parity of the variable \(\alpha_{\beta}\) under time reversal.

#### Causality and extended irreversible thermodynamics

Despite LIT being successful and widely used in many different contexts, it has serious drawbacks that are disturbing already at the Newtonian level, and unacceptable at the relativistic one. For instance, the standard Fourier law for the heat-flux \(q\) (here given in 1+1 dimensions) \(q=-\kappa\partial_{x}T\) can be deduced within the LIT program4. When the Fourier law is coupled to the balance law for the energy density \(\varepsilon\) of a system at rest \(\partial_{t}\varepsilon=-\partial_{x}q\), it yields the heat equation Footnote 4: Here \(\kappa\) is the heat conductivity, while \(T\) denotes the temperature. \[\frac{\partial T}{\partial t}-D\frac{\partial^{2}T}{\partial x^{2}}=0\, \tag{2.6}\] where we introduced the heat diffusivity \(D\), defined as \(D=\kappa/c_{v}\) where \(c_{v}\) is the heat capacity at constant volume. As a result, a temperature perturbation will propagate with infinite speed5. This is obviously against our intuition, as we would expect thermal and viscous disturbances to propagate with a speed of the order of, say, the mean molecular speed. Whilst this problem may be ignored for non-relativistic applications where typical speeds are much smaller than the speed of light \(c\), it comes back to bite us in a special/general relativistic setting. The Extended Irreversible Thermodynamics (EIT) program is built to address the issues of LIT while keeping the analysis at the thermodynamic level. We now sketch the main ideas of EIT and point to [121] for an exhaustive discussion. The key assumption of EIT is that some (the majority) of the microscopic degrees of freedom rapidly decay towards their equilibrium values while others will do so on longer timescales. The causality issues of LIT are solved in the EIT paradigm by enlarging the set of variables that describe the out-of-equilibrium state of the system--through the inclusion of slowly decaying variables6. At the level of a phenomenological macroscopic theory, the additional variables can be chosen as the thermodynamic fluxes introduced above. The reason for this can be seen by going back to eq. (2.4). On the one hand, the entropy production rate--a central quantity in the Onsager LIT program--must be quadratic to retain consistency with the second law of thermodynamics.
On the other hand, the out-of-equilibrium entropy is in general a function of a larger set of variables--compared with the equilibrium one. If we then expand the entropy up to second order, like in eqs. (2.1) and (2.4), the dependence on these additional variables must be taken into account. This is exactly the "missing piece" in Onsager LIT, and the ultimate reason for its non-causal predictions. To better understand this, let us consider the simple example of a Newtonian fluid with heat-flux. Footnote 6: This idea was first proposed by Müller [160]. Let us start by assuming the (generalized) entropy density \(s\) of the system is a function of the energy density as usual and one additional non-equilibrium variable, the heat-flux \(q\): \[s=s(\varepsilon,q). \tag{2.7}\] As a result, its differential reads \[ds=\theta^{-1}d\varepsilon+adq\Longrightarrow\frac{\partial s}{\partial t}= \theta^{-1}\frac{\partial\varepsilon}{\partial t}+a\frac{\partial q}{ \partial t}\, \tag{2.8}\] where both \(\theta\)--which is a generalized temperature7--and \(a\) are functions of \((\varepsilon,q)\). Next, since in equilibrium there is no heat-flux and the entropy is maximized, we can expand the generalized entropy as \(s(\varepsilon,q)=s_{\rm e}(\varepsilon)-\frac{1}{2}s_{2}(\varepsilon)q^{2}\) and write Footnote 7: By generalized temperature we here mean a notion of temperature valid also out of equilibrium, noting that this notion is not unique [121]. \[a =\left(\frac{\partial s}{\partial q}\right)_{\varepsilon}=-s_{2}q\, \tag{2.9a}\] \[\theta^{-1}=\left(\frac{\partial s}{\partial\varepsilon}\right)_{q}=\frac{ 1}{T}-\frac{1}{2}\frac{\mathrm{d}s_{2}}{\mathrm{d}\varepsilon}q^{2}. \tag{2.9b}\] The net result is that we can now compute \(\partial_{t}s\) and write it in the form of a balance equation as \[\frac{\partial s}{\partial t}=-\frac{\partial j_{s}}{\partial x}+\Gamma_{\rm s}\, \tag{2.10}\] where the entropy current \(j_{\rm s}\) and the entropy production rate are \[j_{\rm s} =\frac{1}{T}q\, \tag{2.11a}\] \[\Gamma_{\rm s} =q\left[\frac{\partial\left(T^{-1}\right)}{\partial x}-s_{2}\, \frac{\partial q}{\partial t}\right]\, \tag{2.11b}\] and we have used the energy conservation law as above. We can then proceed exactly as in the LIT program by writing the entropy production rate as the product of thermodynamic forces and fluxes; namely \(\Gamma_{\rm s}=qX_{q}\). Assuming a phenomenological law for the force \(X_{\rm q}=\mu(\varepsilon)q\) we obtain the following law for the heat-flux8 Footnote 8: Where \(\tau=s_{2}/\mu\) and \(\kappa=(\mu T^{2})^{-1}\), although this is not really crucial for the present discussion. \[\tau\frac{\partial q}{\partial t}+q=-\kappa\frac{\partial T}{\partial x}. \tag{2.12}\] In conclusion, by simply enlarging the set of state variables used to describe an out-of-equilibrium system--and then following the same strategy as in the Onsager LIT program--we obtained the Maxwell-Cattaneo law for the heat-flux (eq. (2.12)). We can then immediately use the last equation and the energy balance law to get an evolution equation for the temperature of the telegrapher type: \[\tau\frac{\partial^{2}T}{\partial t^{2}}+\frac{\partial T}{\partial t}-D\, \frac{\partial^{2}T}{\partial x^{2}}=0. \tag{2.13}\] This equation is hyperbolic and propagates temperature disturbances with a finite speed, its solution becoming indistinguishable from that of the heat equation at late times.
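As a quick sanity check (ours, not spelled out in the references above), the finite propagation speed can be read off from eq. (2.13) directly. Inserting a plane-wave ansatz \(T\propto e^{i(kx-\omega t)}\) gives the dispersion relation
\[\tau\omega^{2}+i\omega-Dk^{2}=0\quad\Longrightarrow\quad\left|\frac{\mathrm{Re}\,\omega}{k}\right|\xrightarrow{\ k\to\infty\ }\sqrt{\frac{D}{\tau}}\,\]
so that high-frequency disturbances propagate at the finite characteristic speed \(v=\sqrt{D/\tau}\), which diverges (as it must) in the LIT limit \(\tau\to 0\).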
### 2.2 Traditional strategies for dissipative fluids

Having discussed the basic ideas behind the EIT paradigm, we now proceed to describe the traditional strategies to model dissipative fluids in relativity. We will first focus on the so-called first-order theories--the Eckart-Landau-Lifshitz models--and then on the Müller-Israel-Stewart theory, following the presentation in [21]. We conclude with a brief discussion of the more abstract divergence-type theories.

#### Eckart-Landau-Lifshitz models

With the term "first-order" theories one typically refers to the dissipative models introduced by Eckart [73] and Landau and Lifshitz [134]. They represent the simplest and most natural generalization of the Newtonian Fourier-Navier-Stokes equations. These two models are essentially the same in terms of physics content--the difference being in the observer chosen to "measure" the various quantities--so we follow [21] and present them as one. Let us note, however, that the mathematical properties of the equations in the two models are not the same--and this impacts their stability and causality properties. In the Landau-Eckart models, the equations of motion are given by simple conservation laws9 Footnote 9: These equations constitute the “matter-sector” of the theory, so that in General Relativity they must be coupled with Einstein equations. \[\nabla_{a}n^{a}=0\, \tag{2.14a}\] \[\nabla_{a}T^{ab}=0\, \tag{2.14b}\] where \(n^{a}\) is the particle flux (or conserved baryon current) and \(T^{ab}\) is the total stress-energy-momentum tensor. In order to account for dissipative effects the stress-energy-momentum tensor is decomposed as \[T^{ab}=\underbrace{(\varepsilon+p)u^{a}u^{b}+pg^{ab}}_{\text{ideal terms}}+\underbrace{\chi\perp^{ab}+2q^{(a}u^{b)}+\chi^{ab}}_{\text{ dissipative terms}}\, \tag{2.15}\] where we have separated the terms that would be present also in the ideal case from the rest. Making contact with the discussion in section 1.3, we have split \(\mathcal{S}=p+\chi\) and renamed \(\mathcal{S}^{\langle ab\rangle}=\chi^{ab}\). In practice, the stress-energy-momentum tensor of some reference equilibrium state is augmented by the dissipative fluxes introduced above. In Eckart-Landau-Lifshitz theories (as well as the Müller-Israel-Stewart model discussed later) the pressure and energy density of the fluid are assumed to be equal to the reference equilibrium values10. The terms \(q^{a}\), \(\chi\), \(\chi^{ab}\) represent dissipative fluxes, respectively the heat-flux, the bulk-viscous scalar and the shear-viscous tensor. These additional terms are required to satisfy the following algebraic constraints Footnote 10: Or better, the reference equilibrium is defined as the one having the same energy and pressure, and vanishing thermodynamic fluxes. \[u^{a}q_{a}=\chi_{a}^{a} =0\, \tag{2.16a}\] \[u^{a}\chi_{ab} =0\,\] (2.16b) \[\chi_{[ab]} =0. \tag{2.16c}\] At the end of the day, eq. (2.15) is just the same algebraic decomposition we encountered in section 1.3. In a similar fashion, the particle flux \(n^{a}\) is linked to the observer 4-velocity as \[n^{a}=nu^{a}+\nu^{a}\, \tag{2.17}\] where the diffusion vector \(\nu^{a}\) is required to be orthogonal to the observer 4-velocity, \(u^{a}\nu_{a}=0\). Hence it is proportional to the relative velocity between the observer and the particle flux.
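For bookkeeping (a small aside we add for clarity), the individual pieces in the decompositions (2.15) and (2.17) can be recovered from \(T^{ab}\) and \(n^{a}\) by projecting along and orthogonal to \(u^{a}\):
\[\varepsilon=u_{a}u_{b}T^{ab}\,\qquad p+\chi=\frac{1}{3}\perp_{ab}T^{ab}\,\qquad q^{a}=-\perp^{a}{}_{b}u_{c}T^{bc}\,\]
\[\chi^{ab}=\Big(\perp^{a}{}_{c}\perp^{b}{}_{d}-\frac{1}{3}\perp^{ab}\perp_{cd}\Big)T^{cd}\,\qquad n=-u_{a}n^{a}\,\qquad\nu^{a}=\perp^{a}{}_{b}n^{b}.\]
These projections also make explicit the sense in which the dissipative fluxes are observer-dependent: a different choice of \(u^{a}\) reshuffles the same \(T^{ab}\) and \(n^{a}\) into different fluxes.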
The Eckart and Landau-Lifshitz frames are obtained by choosing the observer to be, respectively, the matter frame or the momentum frame--practically this means setting either \(\nu^{a}\) or \(q^{a}\) to zero in the equations above. Because of the additional quantities in the stress-energy-momentum tensor decomposition, eqs. (2.14) are under-determined. In first-order theories, the system is closed by introducing an entropy flux \(s^{a}\), which is assumed to be a linear combination of all the available vectors as \[s^{a}=su^{a}+\beta q^{a}-\lambda\nu^{a}\, \tag{2.18}\] with so-far unspecified coefficients. We also have \[-u_{a}s^{a}=s\, \tag{2.19}\] so that \(s=s(\varepsilon,n)\) in eq. (2.18) is the entropy density (as measured by the chosen observer) and it is assumed to satisfy an equilibrium thermodynamic relation \[n\nabla_{a}x_{s}=\frac{1}{T}\nabla_{a}\varepsilon-\frac{p+\varepsilon}{nT} \nabla_{a}n\, \tag{2.20}\] where \(x_{s}=s/n\) is the entropy per particle. By means of the last equation and the energy conservation law \(u_{b}\nabla_{a}T^{ab}=0\) we can write the entropy production rate as \[\nabla_{a}s^{a} =q^{a}\Big{(}\nabla_{a}\beta-\frac{1}{T}u^{b}\nabla_{b}u_{a}\Big{)} +\Big{(}\frac{1}{T}-\beta\Big{)}\nabla_{a}q^{a}\] \[-\Big{(}x_{s}+\lambda-\frac{p+\varepsilon}{nT}\Big{)}\nabla_{a} \nu^{a}-\nu^{a}\nabla_{a}\lambda-\frac{\chi}{T}\nabla_{a}u^{a}-\frac{\chi^{ab }}{T}\nabla_{a}u_{b}. \tag{2.21}\] To ensure consistency with the thermodynamic second law we first set \[\beta =\frac{1}{T}\, \tag{2.22a}\] \[\lambda =\frac{1}{nT}\bigg{(}p+\varepsilon-sT\bigg{)}=\frac{\mu}{T}. \tag{2.22b}\] In practice, we identify the parameters introduced in eq. (2.18) with the (inverse) temperature and the Gibbs free-energy per particle. Then, the second law is guaranteed to hold in the simplest possible way--often referred to as the "natural way"--by guessing the following constitutive relations for the dissipative fluxes \[q^{a} =-\kappa\;T\;\perp^{ab}\Big{(}\frac{1}{T}\nabla_{b}T+u^{c}\nabla_{c}u _{b}\Big{)}\, \tag{2.23a}\] \[\nu^{a} =-\sigma T^{2}\;\perp^{ab}\nabla_{b}\lambda\,\] (2.23b) \[\chi =-\zeta\theta\,\] (2.23c) \[\chi^{ab} =-\eta\sigma^{ab}\, \tag{2.23d}\] where \(\kappa\) is the heat conductivity (as before) and we introduced the diffusion coefficient \(\sigma\) and the bulk- and shear-viscosity coefficients \(\zeta,\eta\). Moreover, let us stress that \(\theta\) here is the expansion rate of the observer (and not a generalized temperature), and similarly \(\sigma^{ab}\) is the shear tensor. We can now rewrite the entropy production rate as \[\nabla_{a}s^{a}=\frac{q^{a}q_{a}}{\kappa T^{2}}+\frac{\chi^{2}}{\zeta T}+\frac{\nu^{ a}\nu_{a}}{\sigma T^{2}}+\frac{\chi^{ab}\chi_{ab}}{\eta T}\geq 0. \tag{2.24}\] The system of equations is now determined and we just need to make sure that the transport coefficients--determined by the microphysics--introduced above are all positive. Let us conclude with a few comments on these first-order models. First of all, despite being quite natural and intuitive, these models have been shown to suffer from stability and causality issues. This means that if we set the system to deviate slightly from thermodynamic equilibrium, the deviations from it may grow rapidly in time (see [115]). Also, they give rise to non-causal behaviour. To better understand this it is sufficient to go back to eq. (2.23) and observe that, for instance, the relation for the heat-flux is nothing more than a relativistic version of the usual Fourier law.
The ultimate reason for this lies in the use of an equilibrium entropy (see eqs. (2.19) and (2.20)). In the parlance of the previous section this is an LIT description--hence, it is not surprising that the resulting equations are non-causal. More recent works (see [40, 133, 117] for instance) have shown that one can build first-order theories that respect the stability/causality requirements. Because such models are motivated from a field-theory perspective, we will discuss these theories in section 2.4.

#### Müller-Israel-Stewart models

The take-home message from section 2.2.1 is that the most intuitive and simple way of including dissipative effects in (general) relativistic fluid models leads to problematic results. Inspired by the work of Müller [160] and supported by relativistic kinetic theory, Israel and Stewart proposed an extension of the first-order theories to overcome their stability and causality flaws [118, 119, 120]. The logic of these "second-order theories" does not differ much from first-order ones. The main difference is in the ansatz for the entropy flux. In second-order theories the entropy flux is expanded up to second order in the dissipative quantities11 (compare with eq. (2.18)) Footnote 11: It is quite common to write down the equations of the second-order theories in the Eckart frame, namely to set the diffusion vector \(\nu^{a}\) to vanish. As this simplifies a little the expressions that follow, we will stick to this convention. \[s^{a}=su^{a}+\frac{1}{T}q^{a}-\frac{1}{2T}\Big{(}\beta_{0}\chi^{2}+\beta_{1}q ^{b}q_{b}+\beta_{2}\chi_{bc}\chi^{bc}\Big{)}u^{a}+\alpha_{0}\frac{\chi q^{a}}{ T}+\alpha_{1}\frac{\chi^{ab}q_{b}}{T}. \tag{2.25}\] As a result, the number of unknown parameters--that have to be determined by microphysical calculations--is now larger. It is also interesting to note that the entropy measured in the Eckart frame is now \[-u^{a}s_{a}=s-\frac{1}{2T}\Big{(}\beta_{0}\chi^{2}+\beta_{1}q^{b}q_{b}+\beta_{ 2}\chi_{bc}\chi^{bc}\Big{)}. \tag{2.26}\] Here \(s\) is, as before, the entropy density of the fluid at equilibrium--so that eq. (2.20) still holds. It is clear that the ansatz in eq. (2.25) is consistent with the EIT paradigm as second-order combinations of the dissipative terms enter the formula for the out-of-equilibrium entropy of the system12. Also, because the entropy is maximized at equilibrium, we get "for free" the following constraints on (some of) the additional parameters: \(\beta_{0},\,\beta_{1},\,\beta_{2}\geq 0\). Footnote 12: Actually, it may be more correct to view/present the EIT program as an attempt to systematize the ideas behind Müller-Israel-Stewart theory. The strategy then follows the logic of first-order theories. Making use of the equation of motion \(\nabla_{a}T^{ab}=0\) and eq.
(2.20) we arrive at \[\nabla_{a}s^{a}=-\frac{1}{T}\chi\Bigg{[}\theta+\beta_{0}u^{a} \nabla_{a}\chi-\alpha_{0}\nabla_{a}q^{a}-\gamma_{0}Tq^{a}\nabla_{a}\bigg{(} \frac{\alpha_{0}}{T}\bigg{)}+\frac{\chi T}{2}\nabla_{a}\bigg{(}\frac{\beta_{0 }u^{a}}{T}\bigg{)}\Bigg{]}\] \[-\frac{1}{T}q^{a}\Bigg{[}\frac{1}{T}\nabla_{a}T+a_{a}+\beta_{1}u^ {b}\nabla_{b}q^{a}-\alpha_{0}\nabla_{a}\chi-\alpha_{1}\nabla_{b}\chi^{b}_{a}+\] \[+\frac{T}{2}q_{a}\nabla_{b}\bigg{(}\frac{\beta_{1}u^{b}}{T}\bigg{)} -(1-\gamma_{0})\chi T\nabla_{a}\bigg{(}\frac{\alpha_{0}}{T}\bigg{)}-(1-\gamma _{1})T\chi^{b}_{a}\nabla_{b}\bigg{(}\frac{\alpha_{1}}{T}\bigg{)}\Bigg{]}\] \[-\frac{1}{T}\chi^{ab}\Bigg{[}\nabla_{a}u_{b}+\beta_{2}u^{c}\nabla _{c}\chi_{ab}-\alpha_{1}\nabla_{a}q_{b}+\frac{T}{2}\chi_{ab}\nabla_{c}\bigg{(} \frac{\beta_{2}u^{c}}{T}\bigg{)}-\gamma_{1}Tq_{a}\nabla_{b}\bigg{(}\frac{\alpha _{1}}{T}\bigg{)}\Bigg{]}. \tag{2.27}\] Let us note that, following [114], we included two additional parameters \(\gamma_{0},\,\gamma_{1}\) because of the freedom we have in distributing the mixed quadratic terms. Consistency with the thermodynamic second law may then be enforced as in the previous subsection by assuming the entropy production rate is a sum of quadratic terms. This in turn yields the equations of motion for the dissipative fluxes, and the system of equations is now determined. Not surprisingly, they look like the general relativistic version of the Cattaneo laws \[\tau_{b}\dot{\chi}+\chi =-\zeta[\dots]\, \tag{2.28a}\] \[\tau_{s}\dot{\chi}_{ab}+\chi_{ab} =-2\eta[\dots]\,\] (2.28b) \[\tau_{h}\dot{q}_{a}+q^{a} =-\kappa T\perp^{ab}[\dots]_{b}\, \tag{2.28c}\] where we have introduced three different relaxation timescales \(\tau_{b},\ \tau_{s},\ \tau_{h}\)--which can be related to the parameters introduced above--and the "dot" stands for the proper time derivative associated with the Eckart frame 4-velocity: \(\dot{A}=u^{a}\nabla_{a}A\). We refer to, for example, [184, 21] for explicit expressions of the terms we omitted in square brackets. We nonetheless anticipate that they include, as one may expect, the first-order forces--namely the thermodynamic forces of the first-order theories, such as the expansion rate for the bulk-viscous scalar--and additional terms quadratic in the fluxes themselves. The Müller-Israel-Stewart (MIS) model has been proven to overcome the stability and causality issues of the first-order theories [114]: it possesses stable equilibrium states, and deviations from these states propagate causally13. However, a number of other issues remain to be addressed. Footnote 13: This can be intuitively understood from the Cattaneo-type form of the equations for the fluxes. First, the Müller-Israel-Stewart model is based on an implicit expansion in deviations away from thermal equilibrium, and stability/causality are guaranteed only for the linearized system of equations. The non-linear behaviour is a completely different game, and not well-explored. One exception is the analysis of Hiscock and Lindblom [116], which explores the presence of non-linear pathologies (in an extremely simplified case), even though this relates to such an extreme regime that it may not be relevant for any physical or astrophysical application. When it comes to causality in the non-linear regime, whilst there has been recent progress for the bulk-viscous case [41], a definite answer is still missing. Second, from a field-theory perspective, the "second-order" expansion of the MIS model cannot be considered complete.
Even though the dissipative terms are based on kinetic theory, the model contains only squares of first-order "thermodynamic fluxes" (in the sense of Onsager [166]) in all possible combinations. Last, and maybe most importantly, the equations of motion are obtained from the conservation of the total stress-energy-momentum tensor of the system, and it is not clear how to extend the model to multi-fluid systems.

#### Liu-Müller-Ruggeri and divergence-type theories

As we have discussed above, the MIS model represents a significant improvement over the first-order theories. Nonetheless, there is another open question regarding the MIS model we have not discussed, and we will briefly touch upon it now. Local well-posedness and strong hyperbolicity are not firmly established14. Motivated by this and the quest for a theory with more solid mathematical foundations, Müller and Ruggeri [161] have proposed a class of models (then slightly generalized by Geroch and Lindblom [91, 92]) known as divergence-type theories (see Rezzolla and Zanotti [184] and the review by Salazar and Zannias [190] for more details). This class of theories is based on three fundamental principles: i) the Principle of Relativity, ii) the Maximum Entropy Principle, and iii) Hyperbolicity. In practice, the formal hydrodynamic equations are Footnote 14: The bulk-viscous MIS model has been shown to be weakly hyperbolic only recently in [41]. \[\nabla_{a}n^{a}=0\, \tag{2.29a}\] \[\nabla_{a}T^{ab}=0\,\] (2.29b) \[\nabla_{a}A^{abc}=I^{bc}\, \tag{2.29c}\] where \(A^{abc}=A^{a(bc)}\), \(A^{ab}_{\ b}=0\), \(I^{bc}=I^{(bc)}\), \(I^{a}_{a}=0\) and the system is closed assuming \(A^{abc}\), \(I^{bc}\) are functions of \(n^{a}\), \(T^{ab}\). The first two equations represent the conservation of particle flux and stress-energy-momentum tensor as before, while \(A^{abc}\) is supposed to be the third moment of the one-particle distribution function of some underlying relativistic kinetic theory (see [141, 211, 61]). Similarly \(I^{bc}\) should represent an approximation to the second moment of the collisional integral of some underlying kinetic theory model. In addition, this class of theories must be completed by an entropy current \(s^{a}\), again considered as a function of \(n^{a}\), \(T^{ab}\). The number of theories that can be formulated as in eq. (2.29) is obviously quite large, and one can see, for example, that the set contains both the Eckart-Landau-Lifshitz and the Müller-Israel-Stewart models. It is fair to say, however, that the number of acceptable theories can be somewhat reduced using constraints coming from the principles stated above--although to do this in practice one often assumes \(A^{abc}\) and \(I^{bc}\) are linear in the dissipative fluxes. While the final aim is that of constructing a framework in which discussing issues like stability, causality and hyperbolicity becomes relatively simple, this is (to the best of our knowledge) very much a work in progress. Before we move on, it is worth mentioning an important result that was derived independently by Geroch [90] and Lindblom [140]. They argued that fluid states predicted by the causal divergence-type theories decay on very short timescales--of the order of the characteristic timescale of microscopic particle interactions--to ones that are well-described by the Eckart-Landau-Lifshitz model.
In essence, the results suggest that while the first-order theories are non-causal, unstable and not well-posed--and hence problematic for numerical applications--their physical predictions would be practically/experimentally indistinguishable from those of second-order ones.

### 2.3 Variational models

In this section we review variational approaches to model dissipative multi-fluids in General Relativity. Most of these models are built on extending the action-based model for non-dissipative multi-fluids first championed by Taub [209] and then developed by Carter and collaborators (see Carter [52], Comer and Langlois [67, 68], Carter and Langlois [54] and Andersson and Comer [21] for an up-to-date and pedagogical review). We start with a summary of the variational principle for non-dissipative multi-fluids.

#### Non-dissipative multi-fluid models

To model general relativistic multi-fluid systems we start from an action of the form \[S=\int\mathrm{d}^{4}x\sqrt{-g}\big{(}R+\Lambda\big{)}\, \tag{2.30}\] where \(R\) is the Ricci scalar and the so-called "master function" \(\Lambda\) accounts for the matter content of the theory. To model a multi-fluid system with different chemical species (or constituents) labelled by x, y..., we take the master function \(\Lambda\) to depend on the particle fluxes \(n_{\mathrm{x}}^{a}\). Assuming the system to be isotropic, the master function is considered as a function of all the possible scalars that can be constructed from the fluxes \(n_{\mathrm{x}}^{a}\) and the spacetime metric: \(\Lambda=\Lambda(n_{\mathrm{x}}^{2},n_{\mathrm{xy}}^{2})\), where \(n_{\mathrm{x}}^{2}=-n_{\mathrm{x}}^{a}n_{a}^{\mathrm{x}}\) and \(n_{\mathrm{xy}}^{2}=-n_{\mathrm{x}}^{a}n_{a}^{\mathrm{y}}\). Then, performing the variation of the master function we obtain15 Footnote 15: Hereafter we ignore boundary terms unless they become relevant to the discussion. \[\delta\big{(}\sqrt{-g}\Lambda\big{)}=\sqrt{-g}\left[\sum_{\mathrm{x}}\mu_{a}^{ \mathrm{x}}\delta n_{\mathrm{x}}^{a}+\frac{1}{2}\left(\Lambda g^{ab}+\sum_{ \mathrm{x}}n_{\mathrm{x}}^{a}\mu_{\mathrm{x}}^{b}\right)\delta g_{ab}\right]\, \tag{2.31}\] where we have introduced the particle four-momenta \[\mu_{a}^{\mathrm{x}}=\frac{\partial\Lambda}{\partial n_{\mathrm{x}}^{a}}= \mathcal{B}^{\mathrm{x}}n_{a}^{\mathrm{x}}+\sum_{\mathrm{y}\neq\mathrm{x}} \mathcal{A}^{\mathrm{xy}}n_{a}^{\mathrm{y}}\, \tag{2.32}\] and \[\mathcal{B}^{\mathrm{x}}=-2\frac{\partial\Lambda}{\partial n_{\mathrm{x}}^{2} }\, \tag{2.33}\] while the entrainment coefficients are defined as \[\mathcal{A}^{\mathrm{xy}}=-\frac{\partial\Lambda}{\partial n_{\mathrm{xy}}^{2 }}. \tag{2.34}\] By inspecting eq. (2.31) we immediately learn two things. First, we see that the variational approach automatically accounts for the entrainment effect. Roughly speaking, entrainment is a non-dissipative interaction between the species and causes a species' four-momentum \(\mu^{\rm x}_{a}\) to be misaligned with its respective particle flux \(n^{a}_{\rm x}\). Entrainment was first recognized as an important dynamical effect in superfluid mixtures by Andreev and Bashkin [27] and, from a microphysical perspective, is akin to the notion of effective masses (gained by the electrons, say, when they move past an ion lattice). Second, we learn that an unconstrained variation of the fluxes gives rise to trivial equations of motion. In fact, the fluid equations that follow from eq. (2.31) are \(\mu^{\rm x}_{a}=0\).
This is a well-established result: to obtain non-trivial fluid equations of motion from a Lagrangian, the variation of the particle fluxes must be constrained [196, 52, 175]. A particularly elegant way of imposing the relevant constraint involves introducing the matter space, defined by identifying each current's worldline as a single point [55]; see fig. 2.1 for an illustration of the idea.

Figure 2.1: The pull-back from a point in the \({\rm x}^{th}\) matter space to the corresponding spacetime worldline. The points in matter space are labelled by \(X^{A}_{\rm x}\) with \(A=1,2,3\). Figure taken from Andersson and Comer [21].

For each fluid, the matter space is a three-dimensional manifold, so that when we introduce a set of coordinates \(X^{A}_{\rm x}\) on, say, the x-fluid's matter space, we attach a "name", or label, to each fluid element. Because the entire worldline of each fluid element is mapped to a single matter space point, it is clear that the fluid element's label \(X^{A}_{\rm x}\), now considered as a collection of three scalars on spacetime, takes the same value at each point on the worldline. After assigning a label to each fluid element worldline, we can use the linear map \(\Psi^{A}_{{\rm x}\,a}\), defined as \[\Psi^{A}_{{\rm x}\,a}\doteq\frac{\partial X^{A}_{\rm x}}{\partial x^{a}}\, \tag{2.35}\] to push-forward (pull-back) vectors (co-vectors) between spacetime and the matter spaces. This is important because we can associate with each of the particle fluxes \(n_{\rm x}^{a}\) a three-form \(n_{abc}^{\rm x}\) by the standard Hodge-dual procedure: \[n_{\rm x}^{a}=\frac{1}{3!}\varepsilon^{bcda}\,n_{bcd}^{\rm x}\,\quad n_{abc}^{ \rm x}=\varepsilon_{eabc}\,n_{\rm x}^{e}. \tag{2.36}\] Now we can assume that the spacetime three-form \(n_{abc}^{\rm x}\) is obtained by pulling back a corresponding matter space three-form, to be denoted \(n_{ABC}^{\rm x}\); namely, \[n_{abc}^{\rm x}=\Psi^{A}_{{\rm x}\,[a}\,\Psi^{B}_{{\rm x}\,b}\Psi^{C}_{{\rm x}\,c]}\,n_{ ABC}^{\rm x}\, \tag{2.37}\] where, as usual, square brackets indicate anti-symmetrization (and round ones symmetrization). Similarly, upon applying the Hodge-dual to the four-momentum \(\mu_{a}^{\rm x}\), we can push-forward with the map and identify a matter space momentum "three-form" \(\mu_{\rm x}^{ABC}\) via \[\mu_{\rm x}^{abc} =\varepsilon^{dabc}\,\mu_{d}^{\rm x}\, \tag{2.38a}\] \[\mu_{\rm x}^{ABC} =\Psi^{A}_{\rm x\,[a}\,\Psi^{B}_{\rm x\,b}\Psi^{C}_{\rm x\,c]}\, \mu_{\rm x}^{abc}. \tag{2.38b}\] The main idea of the convective variational principle is to obtain the particle flux variation \(\delta n_{\rm x}^{a}\) by first varying the matter-space three-form and then working backwards. Generally speaking, there are two ways of tracking changes in a fluid system--Eulerian and Lagrangian. The first, to be denoted by a \(\delta\), measures changes in the fluid at fixed spacetime coordinates. The second, to be denoted \(\Delta_{\rm x}\), measures changes following the motion of fluid elements. Locally, the two are related through the Lie derivative along some displacement vector field \(\xi_{\rm x}^{a}\) as16 Footnote 16: We note that this relation between Lagrangian and Eulerian variation works only to first order in the perturbation fields \(\xi_{\rm x}^{a}\), see Friedman and Schutz [83] for further details. \[\Delta_{\rm x}=\delta+\mathcal{L}_{\xi_{\rm x}}\, \tag{2.39}\] where \(\mathcal{L}_{\xi_{\rm x}}\) is the Lie derivative with respect to \(\xi_{\rm x}^{a}\).
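Since it will be needed when metric variations appear alongside displacements (cf. eq. (2.63) below), let us record here--as a small aside we add for clarity--the Lagrangian variation of the metric implied by eq. (2.39):
\[\Delta_{\mathrm{x}}g_{ab}=\delta g_{ab}+\mathcal{L}_{\xi_{\mathrm{x}}}g_{ab}=\delta g_{ab}+2\nabla_{(a}\xi_{b)}^{\mathrm{x}}\,\qquad\Delta_{\mathrm{x}}g^{ab}=\delta g^{ab}-2\nabla^{(a}\xi_{\mathrm{x}}^{b)}.\]
The second combination is exactly the one that multiplies the projected-metric terms in the dissipative variation later on.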
Because the label \(X_{\rm x}^{A}\) of a fluid element is fixed, we can assert \[\Delta_{\rm x}X_{\rm x}^{A}=0\ \Longrightarrow\ \delta X_{\rm x}^{A}=- \mathcal{L}_{\xi_{\rm x}}X_{\rm x}^{A}=-\Psi^{A}_{\rm x\,a}\xi_{\rm x}^{a}. \tag{2.40}\] Now, it is easy to show that the particle flux variation \(\delta n_{\rm x}^{a}\) is (see [21]) \[\delta n_{\rm x}^{a}=-\frac{1}{2}n_{\rm x}^{a}g^{bc}\delta g_{bc}-\frac{1}{3! }\varepsilon^{bcda}\mathcal{L}_{\xi_{\rm x}}n_{bcd}^{\rm x}. \tag{2.41}\] As a result of eq. (2.41) we can write the _constrained_ variation of the master function as \[\delta\big{(}\sqrt{-g}\Lambda\big{)}=\sqrt{-g}\bigg{[}\frac{1}{2}\Big{(}\Psi g ^{ab}+\sum_{\rm x}n_{\rm x}^{a}\mu_{\rm x}^{b}\Big{)}\delta g_{ab}-\sum_{\rm x} \Big{(}f_{a}^{\rm x}+\Gamma_{\rm x}\mu_{a}^{\rm x}\Big{)}\xi_{\rm x}^{a}\bigg{]}\, \tag{2.42}\] where we introduced the "generalized pressure" \(\Psi\) (not to be confused with the map introduced earlier) \[\Psi=\Lambda-\sum_{\mathrm{x}}n^{a}_{\mathrm{x}}\mu^{\mathrm{x}}_{a}\, \tag{2.43}\] while the force densities and creation rates (for each species) are \[f^{\mathrm{x}}_{a}=2n^{b}_{\mathrm{x}}\nabla_{[b}\mu^{\mathrm{x}}_{a]}\,\quad \Gamma_{\mathrm{x}}=\nabla_{a}n^{a}_{\mathrm{x}}. \tag{2.44}\] Since the particle-flux three-forms \(n^{\mathrm{x}}_{abc}\) are pulled back from the matter space, they are automatically closed--because \(n^{\mathrm{x}}_{ABC}\) is a three-form on a three-dimensional matter space and the pull-back operation commutes with the exterior derivative. As a result the constrained variation gives zero creation rate for each particle-flux17: Footnote 17: Here \(dn\) represents the exterior derivative of the differential form \(n\). \[\Gamma_{\mathrm{x}}=\nabla_{a}n^{a}_{\mathrm{x}}=\frac{1}{3!}\epsilon^{bcda} \nabla_{[a}n^{\mathrm{x}}_{bcd]}=\frac{1}{4!}\epsilon^{bcda}(dn)_{abcd}=0. \tag{2.45}\] Therefore, the term proportional to \(\Gamma_{\mathrm{x}}\) actually drops out of eq. (2.42). Still, it is interesting to observe that such terms are formally present in the fluid equations. We conclude this subsection by observing that the constrained variational principle also gives the total fluid stress-energy-momentum tensor as \[T^{ab}=\Psi g^{ab}+\sum_{\mathrm{x}}n^{a}_{\mathrm{x}}\mu^{b}_{\mathrm{x}}\, \tag{2.46}\] and that it follows as an identity that \[\nabla_{a}T^{ab}=\sum_{\mathrm{x}}f^{\mathrm{x}}_{a}=0\, \tag{2.47}\] where the last equivalence is ultimately a consequence of the second Bianchi identity satisfied by the Riemann tensor. As a final point, let us go back to discuss an issue we briefly hinted at in section 1.3. Working with the multi-fluid framework, we can in fact appreciate the difference between equations of motion \((f^{\mathrm{x}}_{a}=0)\) and energy-momentum conservation laws. In particular, the procedure is built in such a way that we automatically obtain as many equations as needed, whatever the number of particle fluxes is. Before we move on to discuss extensions of the variational framework to the modelling of dissipative fluids, let us briefly comment on the case of massless particles, i.e. radiation. Our discussion of the variational model relies heavily on the notion of matter space, which we have introduced by "assigning a label" to each fluid element worldline.
Next, by associating the fluid elements' worldlines to those of the particles in the system, we have used the matter space construction to automatically impose particle number conservation (for each species separately), as appropriate for a multi-fluid non-dissipative model. This logic obviously appears problematic if we consider, say, photons, since their number is not conserved. At the same time, however, it is not clear whether modelling radiation as a fluid is the right thing to do in the first place. An acceptable description of radiation should in fact be able to model both the "trapped" regime--where the photons, say, interact sufficiently often that their mean-free-path is small enough and we can meaningfully introduce a notion of fluid element (see section 3.1.1)--and the "free-streaming" regime--where the photons do not interact often and are able to escape freely--as well as the transition between the two. As such, it appears to us that to model radiation it is best to start from a more fundamental approach, namely relativistic kinetic theory (see, e.g. [61, 184]). In particular, one can start from the relativistic Boltzmann equation and derive an infinite hierarchy of equations for the moments of the one-particle distribution function associated with the radiation field [212]. In practice, this hierarchy needs to be truncated at some level as we cannot work with or simulate an infinite set of equations. In particular, it is quite common to stop at second order (see, e.g. [183, 222, 178, 162]). This means that only the first two moments of the distribution function are evolved, and leads to the equations of radiation hydrodynamics (see, e.g. [153]). Whilst the end-result resembles a "fluid model" for radiation, the underlying theory is much more detailed, and hence better suited to describe radiation in the first place.

#### Carter-like dissipative models

We now describe two dissipative extensions of the variational model we just discussed. We start with a model for a heat-conducting medium and then move on to briefly discuss a variational model proposed by Carter.

##### Andersson-Lopez-Monsalvo model for relativistic heat conduction

Here we will briefly describe a model for a heat-conducting medium developed in Lopez-Monsalvo and Andersson [142] and Andersson and Lopez-Monsalvo [22]. The linchpin of the model is to include the heat flux through an entropy current \(s^{a}\) that can flow differently from the matter \(n^{a}\). As a result, the variational approach to multi-fluids is a natural starting point. This model is substantially a "correction" of a previous attempt by Carter (see Carter [51]). In his model Carter set to zero the entrainment between entropy and matter. This turns out to have a significant impact, as the model without entrainment has been shown to violate causality (see Olson and Hiscock [165]). The take-home message is that entrainment between entropy and matter is a fundamental ingredient in the description of a heat-conducting fluid--through entrainment the entropy current gains an effective mass and this results in an inertial heat response. The model's starting point is the non-dissipative variational principle for a two-fluid system where the particle fluxes are \(n^{a}\)--which represents the matter particles--and an entropy current \(s^{a}\)--which can be thought of as a gas of thermal excitations.
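To make this concrete (a step we spell out here for clarity), the master function for this two-fluid system is \(\Lambda=\Lambda(n^{2},s^{2},n_{\rm ns}^{2})\), and specializing eq. (2.32) to the two fluxes gives the two momenta as
\[\mu_{a}=\mathcal{B}^{\rm n}n_{a}+\mathcal{A}^{\rm ns}s_{a}\,\qquad\theta_{a}=\mathcal{B}^{\rm s}s_{a}+\mathcal{A}^{\rm ns}n_{a}\,\]
so that a non-zero entrainment coefficient \(\mathcal{A}^{\rm ns}\) tilts the entropy momentum \(\theta_{a}\) towards the matter flux. This is precisely the decomposition that reappears in eq. (2.50) below.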
Using the results of the previous subsection we can write the force densities as \[f_{a}^{\rm n} =2n^{b}\nabla_{[b}\mu_{a]}+\mu_{a}\nabla_{b}n^{b}\, \tag{2.48a}\] \[f_{a}^{\rm s} =2s^{b}\nabla_{[b}\theta_{a]}+\theta_{a}\nabla_{b}s^{b}\, \tag{2.48b}\] where--following [142]--we named the matter particle and entropy 4-momenta \(\mu_{a}\) and \(\theta_{a}\), respectively. As we already pointed out earlier, even though the constrained variation is built in such a way that the particle fluxes are automatically conserved, the action-based force densities contain a term proportional to \(\Gamma_{\rm s}\) or \(\Gamma_{\rm n}\). As a result we can build a hybrid model in the sense that the form of the force densities--and stress-energy-momentum tensor--stems from the variational principle and, at the same time, we let the particle creation rates differ from zero. Because there is just one matter particle flux, we require \(\Gamma_{\rm n}=\nabla_{b}n^{b}=0\) while \(\Gamma_{\rm s}=\nabla_{b}s^{b}\geq 0\) to retain consistency with thermodynamics. It is then convenient to work in the Eckart frame, that is, we introduce the matter 4-velocity \(u^{a}\) such that \(n^{a}=nu^{a}\) while for the entropy current we have \[s^{a}=s^{*}(u^{a}+w^{a})\, \tag{2.49}\] where \(s^{*}\) is the entropy density measured by the matter particles. We also introduce a similar decomposition for the entropy 4-momentum \[\theta_{a}=\big{(}\mathcal{B}^{\rm s}s^{*}+\mathcal{A}^{\rm ns}n\big{)}u_{a}+ \mathcal{B}^{\rm s}s^{*}w_{a}\doteq\theta^{*}u_{a}+p_{a}\, \tag{2.50}\] and then rewrite the heat-flux as \[q_{a}=-\ \perp_{ab}u_{c}T^{bc}=s^{*}\theta^{*}w_{a}\doteq\theta^{*}\sigma_{a}. \tag{2.51}\] Next, the energy density measured by matter particles is \(\varepsilon=u_{a}u_{b}T^{ab}=-\Lambda+p_{a}\sigma^{a}\). Therefore, when the system is out of equilibrium and the heat flows relative to matter, the energy density depends also on the heat-flux (encoded in the variables \(\sigma^{a}\) and \(p_{a}\)), i.e. we have an _extended Gibbs relation_ \[\mathrm{d}\varepsilon=\mu\mathrm{d}n+\theta^{*}\mathrm{d}s^{*}+\sigma^{a} \mathrm{d}p_{a}. \tag{2.52}\] We stress that this result arises automatically in the model and is consistent with the EIT picture--the key difference is that it is derived, not postulated. Finally, it is possible to rewrite the equation of motion for the entropy current as an equation for the heat-flux as \[\tau\big{(}\dot{q}^{a}+q_{c}\nabla^{a}u^{c}\big{)}+q^{a}=-\kappa\perp^{ab} \big{(}\nabla_{b}\theta^{*}+\theta^{*}a_{b}\big{)}\, \tag{2.53}\] where the relaxation timescale \(\tau\) can be rewritten in terms of the entrainment parameter \(\mathcal{A}^{\rm ns}\) and \(\kappa\) is the heat conductivity. The last equation is clearly a general relativistic generalization of the Cattaneo law (see eq. (2.12)) and has been shown to be consistent with the Israel-Stewart model in the linear regime (see [142]).

##### Carter variational principle for dissipative fluids

Another important step forward, at least from the formal point of view, is the variational model proposed by Carter. We now sketch the main ideas and assumptions of the model, while referring to the original work for the details [53]. We do so mainly to highlight the differences with what comes next. The key quantity is again the master function \(\Lambda\), which now is assumed to depend also on a set of additional rank-2 (symmetric) tensors \(\tau^{ab}_{\Sigma}\).
These new dynamical fields should be identified with viscous tensors, and the label \(\Sigma\) is introduced to allow for different sources of viscosity separately--such as bulk and shear viscosity. The variation of the master function \(\Lambda(n^{a}_{\rm x},\tau^{ab}_{\Sigma},g_{ab})\) can then be written as \[\delta\Lambda=\sum_{\rm x}\mu^{\rm x}_{a}\delta n^{a}_{\rm x}+\frac{1}{2}\sum_{ \Sigma}\frac{\partial\Lambda}{\partial\tau^{ab}_{\Sigma}}\delta\tau^{ab}_{ \Sigma}+\frac{\partial\Lambda}{\partial g_{ab}}\delta g_{ab}. \tag{2.54}\] In this approach, however, the action is only used to obtain the structure of the force terms, and the stress-energy-momentum conservation as a Noether identity. The final equations of motion are not obtained by setting to zero the variation of the action, but by enforcing consistency with the second law of thermodynamics. Moreover, the identification of the new dynamical fields \(\tau^{ab}_{\Sigma}\) with viscous tensors is so far just formal. To complete the identification with the usual thermodynamic fluxes a specific expansion in deviations away from thermal equilibrium had to be introduced, and the resulting model has been shown to belong to the same family as those of the MIS variety [174].

#### Andersson and Comer formalism

We now describe a recent action-based formalism for dissipative (general relativistic) multi-fluids proposed by Andersson and Comer [20], which is built by generalizing the non-dissipative model presented in section 2.3.1. As this is the starting point for the original results described in chapter 3, we will spend more time going into the details. We also note two important aspects of the model that distinguish it from the ones presented in the previous subsections. First, the approach is "fully variational" in the sense that the final equations of motion are obtained as Euler-Lagrange equations starting from an action--while the models discussed in the earlier sections took the variational equations as starting point, and then modified them appropriately. This makes the model well suited for describing dissipative multi-fluids18. Second, the model does not introduce any new dynamical field, focusing instead on the particle fluxes \(n_{\mathrm{x}}^{a}\). Footnote 18: As opposed to simple fluids with, possibly, heat flow, which the standard approaches are designed for. In order not to get lost in the algebra, it is useful to start with two simple observations. First, at the microscopic level dissipation is the product of interactions between particles, which, at the fluid-dynamical level, intuitively translates into the idea of _interacting matter spaces_. Second, a central feature of dissipative fluids is having non-conserved fluxes, \(\Gamma_{\mathrm{x}}\neq 0\). As can be seen going back to eq. (2.45), flux conservation is--in the constrained variation--a direct consequence of having an associated closed three-form \(n_{abc}^{\mathrm{x}}\). As a result, if we want to keep working with the matter space construction and, at the same time, let the particle fluxes be non-conserved, we have to break the closure property of the three-forms. Let us implement these two ideas by reviewing the constrained variation procedure. Clearly, we still have that the particle labels do not change if we follow them (in the Lagrangian sense): \(\Delta_{\mathrm{x}}X_{\mathrm{x}}^{A}=0\).
Also, it is still true that \[\Delta_{\mathrm{x}}\Psi_{\mathrm{x}\,a}^{A}=\Delta_{\mathrm{x}}\left(\frac{ \partial X_{\mathrm{x}}^{A}}{\partial x^{a}}\right)=\frac{\partial}{\partial x ^{a}}\big{(}\Delta_{\mathrm{x}}X_{\mathrm{x}}^{A}\big{)}=0. \tag{2.55}\] We can now work out (again) the particle flux variation \(\delta n_{\mathrm{x}}^{a}\) without assuming the three-form \(n_{ABC}^{\mathrm{x}}\) to be closed, to get \[\delta n_{\mathrm{x}}^{a}=-\frac{1}{2}n_{\mathrm{x}}^{a}g^{bc}\delta g_{bc}- \frac{1}{3!}\varepsilon^{bcda}\bigg{(}\mathcal{L}_{\xi_{\mathrm{x}}}n_{bcd}^{\mathrm{x}}-\Psi_{ \mathrm{x}\,[b}^{B}\Psi_{\mathrm{x}\,c}^{C}\Psi_{\mathrm{x}\,d]}^{D}\,\Delta _{\mathrm{x}}n_{BCD}^{\mathrm{x}}\bigg{)}. \tag{2.56}\] But there is a deeper point to be made here. Formally, we can take \(n_{ABC}^{\mathrm{x}}\) to be a particle measure form on the matter space, which "counts" the total number of species x particles in the system. If it is a tensor on matter space then it must be a function only of the matter space coordinates \(X_{\mathrm{x}}^{A}\). The fact that \(n_{ABC}^{\mathrm{x}}=n_{ABC}^{\mathrm{x}}(X_{\mathrm{x}}^{A})\) implies \(\Delta_{\mathrm{x}}n_{ABC}^{\mathrm{x}}=0\), and the flux variation above reduces to the result for non-dissipative fluids. Therefore, to get the non-dissipative equations of motion one simply has to impose that the number of particles is conserved in the variation, or, equivalently, that the particle creation rates \(\Gamma_{\mathrm{x}}=\nabla_{a}n_{\mathrm{x}}^{a}\) vanish. It then follows that a way to include dissipative processes (read: \(\Gamma_{\mathrm{x}}\neq 0\)) at the level of the action principle is to break the tensorial nature of the matter space particle measure form \(n_{ABC}^{\mathrm{x}}\), and allow it to be a function of more than just the \(X_{\mathrm{x}}^{A}\). In other words, we are breaking the closure property of the \(n_{abc}^{\mathrm{x}}\). Before we move on to discuss the model in more detail, let us briefly comment on the differences between dissipative fluids and radiation. Using the variational approach to model radiation seems problematic as the photon number is not conserved--although we argued in section 2.3.1 that using a fluid-scheme to model radiation appears too restrictive in the first place. As the matter space is introduced by labelling the worldlines, and we naively tend to associate the worldlines to the particles, one may be led to think that using the matter space construction to model a dissipative system is similarly problematic. We now consider a multi-fluid system undergoing reactions to argue that this is not quite the case. First of all, let us begin by recalling that all fluid models inevitably involve some notion of averaging (see also section 4.2). As such, it is probably more correct to think of the worldlines as associated with fluid elements--and we are going to discuss in more detail how these are defined in section 3.1.1--rather than the individual particles themselves. Moreover, while the system is undergoing reactions, it is still subject to, say, the conservation of baryons, leptons and so on. In essence, one could imagine some notion of a "global" matter space that, in the non-dissipative limit, factorizes as the direct sum of the individual matter spaces associated with each species.
While the factorization in terms of matter spaces associated with the different species breaks down in the dissipative case, and one may be tempted to think of factorizing it in terms of baryons, leptons and so on, it still makes sense to consider fluid elements associated with the various species. The reason being that the dynamical behaviour of each of these is different--as they have, say, different charge and hence behave differently when immersed in a magnetic field. Admittedly, this discussion is somewhat hand-waving, and we are urged to make it more precise--which motivates the analysis and new results presented in chapter 3. Before we get there though, let us see where this idea brings us by considering two explicit examples. Following the discussion in the original paper, we assume that the x-particle three-form depends also on the matter space coordinates of the y-matter spaces \(X_{\mathrm{y}}^{A}\). As a result we have \[\Delta_{\mathrm{x}}n_{ABC}^{\mathrm{x}} =\sum_{\mathrm{y}\neq\mathrm{x}}\frac{\partial n_{ABC}^{\mathrm{ x}}}{\partial X_{\mathrm{y}}^{D}}\Delta_{\mathrm{x}}X_{\mathrm{y}}^{D}=\sum_{ \mathrm{y}\neq\mathrm{x}}\frac{\partial n_{ABC}^{\mathrm{x}}}{\partial X_{ \mathrm{y}}^{D}}\Big{(}\delta X_{\mathrm{y}}^{D}+\mathcal{L}_{\xi_{\mathrm{x} }}X_{\mathrm{y}}^{D}\Big{)}\] \[=\sum_{\mathrm{y}\neq\mathrm{x}}\frac{\partial n_{ABC}^{\mathrm{ x}}}{\partial X_{\mathrm{y}}^{D}}\Big{(}\xi_{\mathrm{x}}^{a}-\xi_{ \mathrm{y}}^{a}\Big{)}\partial_{a}X_{\mathrm{y}}^{D}. \tag{2.57}\] Using the last equation in eq. (2.56) and defining a "resistivity coefficient" as \[\mathrm{R}_{a}^{\mathrm{xy}}=\frac{1}{3!}\mu_{\mathrm{x}}^{ABC}\frac{ \partial n_{ABC}^{\mathrm{x}}}{\partial X_{\mathrm{y}}^{D}}\Psi_{\mathrm{y}\,a}^ {D}\, \tag{2.58}\] it is possible to write the fluid-part of the Lagrangian variation as \[\mu_{a}^{\mathrm{x}}\delta n_{\mathrm{x}}^{a}=\text{``non-dissipative terms''}+\sum_{\mathrm{y}\neq\mathrm{x}}\mathrm{R}_{a}^{\mathrm{xy}} \Big{(}\xi_{\mathrm{y}}^{a}-\xi_{\mathrm{x}}^{a}\Big{)}. \tag{2.59}\] The additional piece in the variation then changes the equations of motion to \[f_{a}^{\mathrm{x}}+\Gamma_{\mathrm{x}}\mu_{a}^{\mathrm{x}}=\sum_{\mathrm{y} \neq\mathrm{x}}\left(\mathrm{R}_{a}^{\mathrm{yx}}-\mathrm{R}_{a}^{\mathrm{ xy}}\right)\, \tag{2.60}\] where \(f_{a}^{\rm x}\) is the same as in section 2.3.1. The take-home message from this example is that by enlarging the set of quantities which the three-form can depend on we obtain additional terms that look like resistivity coefficients. The natural follow-up question then is: can we perform a similar calculation and obtain, again from an action principle, additional terms in the equations that look like viscous tensors? The answer--demonstrated by Andersson and Comer [20]--is yes. Let us, in fact, consider a situation where the three-form depends both on \(X_{\rm y}^{A}\) and the projected metric \[g_{\rm x}^{AB}=\Psi_{x\,a}^{A}\Psi_{x\,b}^{B}\,g^{ab}. \tag{2.61}\] We then have \[\Delta_{\rm x}n_{ABC}^{\rm x}=\sum_{{\rm y}\neq{\rm x}}\frac{\partial n_{ABC}^{ \rm x}}{\partial X_{\rm y}^{D}}\Delta_{\rm x}X_{\rm y}^{D}+\frac{\partial n_{ ABC}^{\rm x}}{\partial g_{\rm x}^{DE}}\Delta_{\rm x}g_{\rm x}^{DE}.
\tag{2.62}\] The novel terms arising from \(\Delta_{\rm x}g_{\rm x}^{DE}\) in the fluid-part of the Lagrangian variation then read \[\mu_{a}^{\rm x}\delta n_{\rm x}^{a} =\ldots+\frac{1}{3!}\mu_{\rm x}^{ABC}\frac{\partial n_{ABC}^{\rm x }}{\partial g_{\rm x}^{DE}}\Delta_{\rm x}g_{\rm x}^{DE}\] \[=\ldots+\frac{1}{3!}\mu_{\rm x}^{ABC}\frac{\partial n_{ABC}^{ \rm x}}{\partial g_{\rm x}^{DE}}\,\Psi_{x\,a}^{D}\,\Psi_{x\,b}^{E}\Big{(} \delta g^{ab}-2\nabla^{(a}\xi_{\rm x}^{b)}\Big{)}\] \[=\ldots+\frac{1}{2}S_{\rm x}^{ab}\delta g_{ab}-S_{ab}^{\rm x}\nabla^{ b}\xi_{\rm x}^{a}\, \tag{2.63}\] where we have defined the "viscosity tensor" \[S_{ab}^{\rm x}=\frac{1}{3}\mu_{\rm x}^{ABC}\frac{\partial n_{ABC}^{\rm x}}{ \partial g_{\rm x}^{DE}}\,\Psi_{x\,a}^{D}\,\Psi_{x\,b}^{E}\, \tag{2.64}\] and the \(\ldots\) represent the non-dissipative terms plus the ones in eq. (2.59). From eq. (2.63) we then intuitively see that the additional dependence of the matter space three-form is going to affect both the force densities and the total stress-energy tensor of the system. Cranking through the algebra, the action variation can be written as \[\delta\big{(}\sqrt{-g}\Lambda\big{)} =\sqrt{-g}\bigg{[}\frac{1}{2}\Big{(}\Psi g^{ab}+\sum_{\rm x}\big{(}n_{\rm x} ^{a}\mu_{\rm x}^{b}+S_{\rm x}^{ab}\big{)}\Big{)}\delta g_{ab}\] \[\qquad\qquad-\sum_{\rm x}\Big{(}f_{a}^{\rm x}+\Gamma_{\rm x}\mu_{a}^{ \rm x}+\nabla^{b}S_{ba}^{\rm x}-\sum_{\rm y\neq x}\big{(}{\rm R}_{a}^{\rm yx}- {\rm R}_{a}^{\rm xy}\big{)}\Big{)}\xi_{\rm x}^{a}\bigg{]}. \tag{2.65}\] Therefore, the action-based model provides us with the total stress-energy-momentum tensor \[T^{ab}=\Psi g^{ab}+\sum_{\rm x}\big{(}n_{\rm x}^{a}\mu_{\rm x}^{b}+S_{\rm x}^{ab}\big{)}\, \tag{2.66}\] which contains the additional viscous stress-tensors \(S_{\rm x}^{ab}\). Also the resulting fluid equations of motion in this second case are \[f_{a}^{\rm x}+\Gamma_{\rm x}\mu_{a}^{\rm x}+\nabla^{b}S_{ba}^{\rm x}=\sum_{y\neq \rm x}\left({\rm R}_{a}^{\rm yx}-{\rm R}_{a}^{\rm xy}\right)\,, \tag{2.67}\] and thus contain both a resistivity term and the four-divergence of the new viscous tensor. Having described these two examples, let us move on to the general formalism presented in Andersson and Comer [20]. The authors consider the case where the particle three-forms take the form \[n_{ABC}^{\rm x}=n_{ABC}^{\rm x}(X_{\rm x}^{A},\,X_{\rm y}^{A},\,g_{\rm x}^{AB}, \,g_{\rm y}^{AB},\,g_{\rm xy}^{AB})\, \tag{2.68}\] where the "mixed projected metrics" are defined as \[g_{\rm xy}^{AB}=\Psi_{{\rm x}\,a}^{A}\Psi_{{\rm y}\,b}^{B}\,g^{ab}. \tag{2.69}\] Performing the variations as in the examples above they arrive at the following equations of motion \[f_{a}^{\rm x}+\Gamma_{\rm x}\mu_{a}^{\rm x}+\nabla^{b}D_{ba}^{\rm x}=R_{a}^{\rm x }\, \tag{2.70}\] where \[D_{ab}^{\rm x} =S_{ab}^{\rm x}+\sum_{y\neq\rm x}s_{ab}^{\rm yx}+\frac{1}{2} \Big{(}{\cal S}_{ba}^{\rm xy}+{\cal S}_{ab}^{\rm yx}\Big{)}\, \tag{2.71a}\] \[R_{a}^{\rm x} =\sum_{y\neq\rm x}\Big{[}\left({\rm R}_{a}^{\rm yx}-{\rm R}_{a}^{\rm xy} \right)+\left(r_{a}^{\rm yx}-r_{a}^{\rm xy}\right)+\left({\cal R}_{a}^{\rm yx }-{\cal R}_{a}^{\rm xy}\right)\Big{]}.
\tag{2.71b}\] and \[s_{ab}^{\rm xy} =\frac{1}{3}h_{\rm x}^{ABC}\frac{\partial n_{ABC}^{\rm x}}{ \partial g_{\rm xy}^{DE}}\,\Psi_{\rm y}^{D}\,\Psi_{\rm y}^{E}\, \tag{2.72a}\] \[{\cal S}_{ab}^{\rm xy} =\frac{1}{3}h_{\rm x}^{ABC}\frac{\partial n_{ABC}^{\rm x}}{ \partial g_{\rm xy}^{DE}}\,\Psi_{\rm x}^{D}\,\Psi_{\rm y}^{E}\,\] (2.72b) \[r_{a}^{\rm xy} =\frac{1}{3!}\mu_{\rm x}^{ABC}\frac{\partial n_{ABC}^{\rm x}}{ \partial g_{\rm xy}^{DE}}\,\nabla_{a}\big{(}g^{bc}\Psi_{\rm y}^{D}\Psi_{\rm y }^{E}\big{)}\,\] (2.72c) \[{\cal R}_{a}^{\rm xy} =\frac{1}{3!}\mu_{\rm x}^{ABC}\frac{\partial n_{ABC}^{\rm x}}{ \partial g_{\rm xy}^{DE}}\,g^{bc}\Psi_{\rm x}^{D}\nabla_{a}\big{(}\Psi_{\rm y }^{E}\big{)}\, \tag{2.72d}\] while \({\rm R}_{a}^{\rm xy}\), \(S_{ab}^{\rm x}\) are defined as in eqs. (2.58) and (2.64). Projecting the field equation along \(u_{\rm x}^{a}=n_{\rm x}^{a}/n_{\rm x}\), we see that \[\left(-u_{\rm x}^{a}h_{a}^{\rm x}\right)\Gamma_{\rm x}=u_{\rm x}^{a}\nabla^{b} D_{ba}^{\rm x}-u_{\rm x}^{a}R_{a}^{\rm x}\, \tag{2.73}\] while the stress-energy-momentum tensor is \[T^{ab}=\Psi g^{ab}+\sum_{\mathrm{x}}n^{a}_{\mathrm{x}}n^{b}_{\mathrm{x}}+D^{ab}\, \tag{2.74}\] and \(D^{ab}=\sum_{\mathrm{x}}D^{ab}_{\mathrm{x}}\) is the sum of each species' total viscous tensor. Let us observe that it follows, as an identity, that \(\sum_{\mathrm{x}}R^{\mathrm{x}}_{a}=0\), and because of this we have automatically \(\nabla_{a}T^{ab}=0\). We also note that the "resistive terms" \(r^{\mathrm{x}}_{a}\), \(\mathcal{R}^{\mathrm{x}}_{a}\) as well as the viscous tensors \(s^{\mathrm{x}\mathrm{y}}_{ab}\), \(\mathcal{S}^{\mathrm{x}\mathrm{y}}_{ab}\) arise because we assume that \(n^{\mathrm{x}}_{ABC}\) depends on \(g^{AB}_{\mathrm{y}}\) and \(g^{AB}_{\mathrm{x}}\), respectively. Finally, it is easy to see that, in general, the x-species total viscous tensor \(D^{\mathrm{x}}_{ab}\) is not necessarily symmetric because \(\mathcal{S}^{\mathrm{x}\mathrm{y}}_{ab}\) is not. This property is, however, not inherited by the total viscous tensor of the system meaning \(D_{ab}=\sum_{\mathrm{x}}D^{\mathrm{x}}_{ab}=D_{ba}\). Let us conclude this section with some comments. First, the action-based dissipative formalism presented here is quite general and, as such usable--at least in principle--in a large number of astrophysical situations. For instance, it has already been used to build models beyond ideal magneto-hydrodynamics [24, 25, 23] due to the fact that it is intuitively clear how to couple the model to electromagnetism. Also, it is important to note that the action and the field equations are fully non-linear. The "variational" aspect of the approach is in the context of the action principle, and there is nothing in the variational process that says the field equations themselves have to be linear in the fields19. In fact, chapter 3 deals with a linearization of the present model in terms of deviations from--a self-consistently defined notion of--thermodynamic equilibrium. Footnote 19: An obvious and familiar example illustrating this same feature comes to mind: the Einstein-Hilbert action yields the Einstein equations, which are notoriously non-linear in the metric. ### 2.4 Hydrodynamics as an effective field theory After having discussed the "traditional" approaches and the recent variational efforts, we now turn our attention to some fairly recent work based on a field-theory take on the problem. As we will see, this work represent a significant change in perspective which is worth exploring. 
Moreover, we will adapt some of the ideas behind these strategies later on (see section 4.8.1), although in a different spirit. Hydrodynamics can be viewed as the classical low-energy limit of a more fundamental quantum (many-body and thermal) field theory. Such a perspective is quite natural in the context of heavy-ion collisions (see [188]). We also note that, as these theories have been developed with heavy-ion-collision applications in mind, they are mostly discussed in a special relativistic setting. An extension to General Relativity is not necessarily straightforward, and may require some careful thinking--as discussed in the next chapter. We also mention that a similar view of hydrodynamics as a coarse-grained theory has been explored for non-dissipative fluid models as well. For instance, in Bhattacharya et al. [46] the authors proposed a variational non-dissipative hydrodynamic model based on a derivative expansion. Roughly speaking, the Lagrangian of the theory is built as the coarse-grained limit of some more fundamental one--the cutoff being the microscopic mean-free-path of the particles. Such an action is constructed by summing all possible terms with a specific number of derivatives, each with its own coupling. In natural units, each derivative has the dimension of a mass. Since all terms in the action must have the same dimension, those with higher derivatives correspond to lower (mass) dimension couplings. These lower dimension couplings will take smaller values in the low-energy limit, so that they are suppressed at the hydrodynamic level. Moving on to dissipative fluids, two important lessons can be drawn from the traditional approaches, and these suggest we should investigate the effective field theory point of view. First, most of the dissipative theories--with the notable exception of the variational model of [20]--are intrinsically only valid in the "linear regime", i.e. close to some reference equilibrium state. Second, the relaxation effect we have described in section 2.2.3 suggests that while second-order theories may be required to overcome the problems of the Landau-Eckart models, their physical predictions are going to be practically indistinguishable from those of first-order ones. Furthermore, when fluctuations about equilibrium are included in the modelling, new problems arise. In particular, it turns out that corrections to the correlation functions coming from second-order terms are smaller than those due to interactions between fluctuations. This is the so-called _breakdown of second order hydrodynamics_. It is a relatively well-known problem that is not specific to relativistic theories, and leads to stochastic hydrodynamics (see Landau and Lifshitz [134] and the recent review by Kovtun [132]). For all these reasons, it makes sense to ask whether one can fix the stability and causality issues directly at first order. In the effective field-theory framework, dissipative hydrodynamic equations for single fluids can be constructed as follows. The equations of motion are given by the (special relativistic) conservation laws

\[\partial_{a}n^{a} =0\,, \tag{2.75a}\]
\[\partial_{a}T^{ab} =0\,, \tag{2.75b}\]

where the conserved currents are decomposed as usual

\[n^{a} =nu^{a}+\nu^{a}\,, \tag{2.76a}\]
\[T^{ab} =\varepsilon u^{a}u^{b}+(p+\chi)\perp^{ab}+2q^{(a}u^{b)}+\chi^{ab}. \tag{2.76b}\]

To close the system, we need to provide explicit expressions for the dissipative fluxes via some constitutive equations.
These are obtained through the most general gradient expansion in the (chosen) equilibrium hydrodynamic variables. Typically these are taken to be a four-velocity \(u^{a}\), the chemical potential \(\mu\) and the temperature \(T\):

\[\varepsilon =\varepsilon_{\text{eq}}+\varepsilon_{1}\dot{T}/T+\varepsilon_{2}\partial_{a}u^{a}+\varepsilon_{3}u^{a}\partial_{a}(\mu/T)\,, \tag{2.77a}\]
\[\chi =\tau_{1}\dot{T}/T+\tau_{2}\partial_{a}u^{a}+\tau_{3}u^{a}\partial_{a}(\mu/T)\,, \tag{2.77b}\]
\[q^{a} =\theta_{1}\dot{u}^{a}+\frac{\theta_{2}}{T}\perp^{ab}\partial_{b}T+\theta_{3}\perp^{ab}\partial_{b}(\mu/T)\,, \tag{2.77c}\]
\[\chi^{ab} =\eta\sigma^{ab}\,, \tag{2.77d}\]
\[n =n_{\text{eq}}+\nu_{1}\dot{T}/T+\nu_{2}\partial_{a}u^{a}+\nu_{3}u^{a}\partial_{a}(\mu/T)\,, \tag{2.77e}\]
\[\nu^{a} =\gamma_{1}\dot{u}^{a}+\frac{\gamma_{2}}{T}\perp^{ab}\partial_{b}T+\gamma_{3}\perp^{ab}\partial_{b}(\mu/T)\,, \tag{2.77f}\]

where the "dots" represent a derivative along \(u^{a}\), that is \(\dot{T}=u^{a}\partial_{a}T\), and \(\sigma^{ab}\) is the usual transverse, trace-free shear tensor. The causality and stability properties of these theories have been studied in a number of recent papers (see, for example, [40, 133, 117]). The results depend on the equation of state, but it has been demonstrated--at least for some simple cases, like conformal fluids--that one can derive a set of constraints on the expansion parameters, and guarantee stability and causality. Even more recently, an extension of this approach to General Relativity was explored in [42]. These results may come as a bit of a surprise after the earlier discussion of first-order theories, so let us briefly comment on why it is in fact possible to satisfy the stability and causality constraints at first order. A first and key ingredient for this is the larger number of free coefficients in the gradient expansion. This is ultimately motivated by the fact that quantities like the temperature or chemical potential are not uniquely defined out of equilibrium--different definitions are possible as long as they agree in equilibrium (a schematic illustration of this freedom is given at the end of this section). This is essentially the reason why there are many more coefficients in eq. (2.77) when compared to the Landau or Eckart models. Moreover, we recall that, according to the effective field theory picture, hydrodynamics is a coarse-grained theory whose validity is restricted to long wavelengths and low frequencies--and the gradient expansion makes sense as long as the gradients are in fact small. Given this, one is urged to respect the constraints coming from the second law of thermodynamics only in the regime of validity of the theory, namely "on-shell". The trick consists of stabilising the unstable modes that appear in first-order theories by allowing for violation of the second law of thermodynamics (out of the "hydrodynamic regime"). For more details on how this additional freedom can be used to ensure covariant stability, we point to section 4.8.1 where a similar strategy is used in the context of turbulence modelling. Even though the discussion in this section is at a broad-brush level, it is clear that these theories represent a radical change with respect to the MIS paradigm. This is mainly with regard to the way they solve the instability by "killing" the unstable modes, and also because of the change in perspective with respect to EIT. As such, they offer interesting prospects, and we point to [88] for a pedagogical discussion of the differences with the EIT paradigm.
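To make the non-uniqueness of the out-of-equilibrium variables slightly more concrete, here is a schematic version of the argument (in our notation; a sketch of the standard "hydrodynamic frame" freedom rather than a calculation taken from the references above). Suppose we redefine the temperature by a first-order gradient correction,

\[T\longrightarrow T^{\prime}=T+a_{1}\,\dot{T}/T+a_{2}\,\partial_{a}u^{a}+a_{3}\,u^{a}\partial_{a}(\mu/T)\,,\]

with arbitrary constants \(a_{i}\). Since \(\varepsilon_{\text{eq}}(T^{\prime})=\varepsilon_{\text{eq}}(T)+(\partial\varepsilon_{\text{eq}}/\partial T)(T^{\prime}-T)\) up to second-order terms, rewriting the expansion (2.77a) in terms of \(T^{\prime}\) simply shifts the first-order coefficients,

\[\varepsilon_{i}\longrightarrow\varepsilon_{i}-a_{i}\,\frac{\partial\varepsilon_{\text{eq}}}{\partial T}\,,\qquad i=1,2,3\,,\]

while leaving the equilibrium limit untouched. Analogous redefinitions of \(\mu\) and \(u^{a}\) shuffle the remaining coefficients. None of these choices affects the physics in equilibrium, which is why the coefficients in eq. (2.77) are not all independently measurable--and why there is room to tune them so that the stability and causality constraints can be met.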
## Chapter 3 Linearizing an action-based formalism for dissipative (multi-)fluids

The main goal of this chapter is to compare the action-based formulation of Andersson and Comer [20] with previous approaches--such as MIS (see section 2.2.2). The key point is that the Andersson and Comer [20] action principle does not reference any sort of chemical, dynamical, or thermal equilibrium, other than to start with the assumption that the physics can be modelled as fluid phenomena. Conversely, traditional strategies for dissipative fluids--and recent works based on an effective field-theory perspective (see section 2.4)--use an expansion to create an approximate set of field equations to describe dissipative phenomena. Since the action-based model already provides a set of equations (at least in principle) valid in every regime, we can make the comparison using standard perturbation techniques. The dissipation terms are assumed to generate first-order deviations away from equilibria obtained using the non-dissipative limit of the field equations. Working this way, we hope to also better understand the role played by the length- and time-scales of fluid elements in the large-scale behaviour of the system; in particular, how to link the micro-scale dynamics of the many particles in a fluid element with the macro-scale dynamics between the fluid elements themselves, and the role of the Equivalence Principle in setting these scales. The results presented in this chapter have been published in Celora et al. [57].

### 3.1 Main assumptions of the model

Let us get started with an introductory section in which we expand on the central assumptions behind the fluid-modelling scheme and the nature of the non-closed three-forms \(n^{\mathrm{x}}_{ABC}\) introduced in section 2.3.3--central to the dissipative model of Andersson and Comer [20].

#### Flux definition

The crux of the fluid-modelling scheme is to assume that knowledge of the total mass-energy and momentum flux obtained by tracking the worldlines of individual particles can be replaced with tracking the worldlines of fluid elements. These are defined in the following way: Take a multi-particle system at some initial time having, say, total spatial size \(V\), total number of particles \(N\), total mass-energy \(E\), and total entropy \(S\). At the same time, fill up side-to-side, top-to-bottom, and front-to-back the entire system with \(I=1,\ldots,M\) local conceptual boxes--the fluid elements. Each element has its own volume \(\delta V_{I}\), number of particles \(\delta N_{I}\), mass-energy \(\delta E_{I}\), and entropy \(\delta S_{I}\). Roughly speaking, if there are characteristic values \(\delta V_{I}\sim\delta V\), \(\delta N_{I}\sim\delta N\), etcetera, representative of the fluid elements, then \(V\sim M\,\delta V\), \(N\sim M\,\delta N\), \(E\sim M\,\delta E\), and \(S\sim M\,\delta S\). Clearly, as the number \(M\) is increased the ratios \(\delta V/V\), \(\delta N/N\), etcetera decrease, and the elements become "ultra-local", implying that the change in the spacetime metric across them is small. Now consider the \(I^{\mathrm{th}}\) fluid element. It moves through spacetime and, if the element is small enough, its trajectory can be accurately represented by a single unit four-velocity \(u^{a}_{I}\). When taken together, and in the limit \(M\to\infty\), all the \(u^{a}_{I}\) form a vector field on spacetime, and this field plays a role in the fluid system's degrees of freedom.
If a local typical scattering length \(\lambda_{I}\) between the particles exists, and the size of fluid elements is commensurate with that length (\(\delta V_{I}\sim\lambda_{I}^{3}\)), then the average four-velocity of the \(\delta N_{I}\) particles will be \(u^{a}_{I}\). In principle, we now have everything we need to define the actual fluid degrees of freedom, which are the particle fluxes \(n^{a}_{I}=(\delta N_{I}/\delta V_{I})u^{a}_{I}\). Even though this may be familiar, we went through the details of the formal process of defining fluid elements to point out the fundamental assumptions behind a fluid description. We have introduced typical scattering lengths and average velocities as part of our fluid element definition, and therefore we must assume that fluid elements contain enough particles to warrant a statistical/thermodynamical treatment. In the formal procedure there is no requirement of being close to thermodynamic equilibrium. It can be shown (see, for instance, [181]) that fluid dynamics can be obtained as a limit of kinetic theory (via a Chapman-Enskog type expansion), but the realm of hydrodynamics is potentially vaster.

#### Matter space volume forms

All dissipative terms that enter the action-based equations are obtained by assuming that the fundamental current three-forms \(n^{\mathrm{x}}_{abc}\) depend on an additional set of quantities which breaks their closure (\(\nabla_{[a}n^{\mathrm{x}}_{bcd]}\neq 0\)). We now want to expand on how this can happen, but begin by introducing a bit of notation. We need to distinguish between the Levi-Civita symbol \(\eta_{ABC}\) and a volume measure form \(\varepsilon^{\mathrm{x}}_{ABC}\) on the matter space. The Levi-Civita symbol is defined as \(\eta_{ABC}=[A\,B\,C]\) for every chosen set of coordinates (and thus is not a tensor but a tensor density), while the volume measure form \(\varepsilon^{\mathrm{x}}_{ABC}\) can be defined1 by means of the pushed-forward metric:

Footnote 1: This is tricky for a couple of reasons: It is well known from work on general relativistic elastic bodies [122] that this is not the only possible choice. Also, the projected metric \(g^{AB}_{\mathrm{x}}\) is not "fixed" in the sense that the spacetime metric \(g_{ab}\) changes, in a general curved spacetime, as a fluid element moves from point-to-point along its worldline.

\[g_{\mathrm{x}} =\frac{1}{3!}\eta_{ABC}\eta_{DEF}\,g^{AD}_{\mathrm{x}}g^{BE}_{\mathrm{x}}g^{CF}_{\mathrm{x}}=\det(g^{AB}_{\mathrm{x}})\,, \tag{3.1a}\]
\[\varepsilon^{\mathrm{x}}_{ABC} =\sqrt{g^{\mathrm{x}}}\,\eta_{ABC}=\sqrt{g^{\mathrm{x}}}\,[A\,B\,C]\,, \tag{3.1b}\]

where \(g^{\mathrm{x}}=(g_{\mathrm{x}})^{-1}\) is the determinant of the inverse matrix \(g^{\mathrm{x}}_{AB}\); i.e. \(g^{\mathrm{x}}_{AC}g^{CB}_{\mathrm{x}}=\delta^{B}_{A}\). As a result, \(\varepsilon^{\mathrm{x}}_{ABC}\) is a three-form and transforms as a tensor under coordinate transformations on the matter space. This volume measure form provides a way to measure the volume of "matter elements", infinitesimal volumes in the matter space manifold. We can relate these quantities to the current and momentum three-forms:

\[n^{\mathrm{x}}_{ABC} =\mathcal{N}_{\mathrm{x}}\,\varepsilon^{\mathrm{x}}_{ABC}=\bar{\mathcal{N}}_{\mathrm{x}}\,\eta^{\mathrm{x}}_{ABC}\,, \tag{3.2a}\]
\[\mu^{ABC}_{\mathrm{x}} =\mathcal{M}_{\mathrm{x}}\,\varepsilon^{ABC}_{\mathrm{x}}=\bar{\mathcal{M}}_{\mathrm{x}}\,\eta^{ABC}_{\mathrm{x}}. \tag{3.2b}\]

The point we want to make here is that the barred quantities look more like scalar densities on the x-matter space, while the non-barred ones look more like scalars. The relation between the two normalizations is simply

\[\mathcal{N}_{\mathrm{x}} =\sqrt{g_{\mathrm{x}}}\,\bar{\mathcal{N}}_{\mathrm{x}}\,, \tag{3.3a}\]
\[\mathcal{M}_{\mathrm{x}} =\sqrt{g^{\mathrm{x}}}\,\bar{\mathcal{M}}_{\mathrm{x}}. \tag{3.3b}\]

We can use this to expedite our use of the variational principle by focusing the additional functional dependence of \(n^{\mathrm{x}}_{ABC}\) into

\[\mathcal{N}_{\mathrm{x}}=\mathcal{N}_{\mathrm{x}}(X^{A}_{\mathrm{x}},\,X^{A}_{\mathrm{y}},\,g^{AB}_{\mathrm{x}},\,g^{AB}_{\mathrm{y}},\,g^{AB}_{\mathrm{xy}}). \tag{3.4}\]

To make contact with proper quantities measured in spacetime--that is, with the rest-frame density and rest-frame momentum for each fluid component--it is useful to introduce an appropriate tetrad \(e^{\hat{a}}_{a}\) for each species: an orthonormal basis whose timelike unit vector is \(e_{\hat{0}}=u_{\mathrm{x}}\), so that \(u^{\hat{a}}_{\mathrm{x}}=(e_{\hat{0}})^{\hat{a}}=\delta^{\hat{a}}_{\hat{0}}=(1,0,0,0)^{\top}\). The components of the spacetime measure form in this tetrad basis are2

Footnote 2: Recall that, since \(g_{ab}=e^{\hat{a}}_{a}e^{\hat{b}}_{b}\,\eta_{\hat{a}\hat{b}}\), the determinant of the tetrad is \(e=\sqrt{|g|}\).

\[\varepsilon^{\hat{a}\hat{b}\hat{c}\hat{d}}=\varepsilon^{abcd}\,e^{\hat{a}}_{a}e^{\hat{b}}_{b}e^{\hat{c}}_{c}e^{\hat{d}}_{d}=\eta^{\hat{a}\hat{b}\hat{c}\hat{d}}\,, \tag{3.5}\]

where \(\eta^{\hat{a}\hat{b}\hat{c}\hat{d}}=-[\hat{a}\,\hat{b}\,\hat{c}\,\hat{d}]\) and we have omitted the chemical index. Now, since the push-forward (and pull-back) is a linear map between vector spaces (the tangent spaces), it transforms as a linear map under coordinate changes, and we can write

\[A^{A}=\frac{\partial X^{A}_{\mathrm{x}}}{\partial x^{a}}\,A^{a}=\Psi^{A}_{\mathrm{x}\,\hat{a}}A^{\hat{a}}\,, \tag{3.6}\]

where we have introduced the short-hand notation3

Footnote 3: Following [50] we denote the inverse of the tetrad as \(e^{a}_{\hat{a}}\).

\[\Psi^{A}_{\mathrm{x}\,\hat{a}}\doteq\Psi^{A}_{\mathrm{x}\,a}\,e^{a}_{\hat{a}}=\frac{\partial X^{A}_{\mathrm{x}}}{\partial x^{a}}\,e^{a}_{\hat{a}}. \tag{3.7}\]

Making use of the fact that \(0=u^{\hat{a}}_{\mathrm{x}}\Psi^{A}_{\mathrm{x}\,\hat{a}}=\Psi^{A}_{\mathrm{x}\,\hat{0}}\) we then get4

Footnote 4: The index \(\hat{i}\) runs over the \(1,2,3\) components of the tetrad basis, and \(\hat{a}=\hat{0},\hat{i}\).

\[g^{AB}_{\mathrm{x}}=\Psi^{A}_{\mathrm{x}\,\hat{a}}\Psi^{B}_{\mathrm{x}\,\hat{b}}\,\eta^{\hat{a}\hat{b}}\Longrightarrow g_{\mathrm{x}}=\det\big(\Psi^{A}_{\mathrm{x}\,\hat{i}}\big)^{2}\,, \tag{3.8}\]

which leads to5

Footnote 5: Note that, because of the standard convention, we use \(\eta^{\hat{0}\hat{b}\hat{c}\hat{d}}=-\varepsilon^{\hat{b}\hat{c}\hat{d}}\) with \(\hat{b},\hat{c},\hat{d}=1,2,3\).

\[\mathcal{M}_{\mathrm{x}} =\frac{1}{3!}\,\mu^{ABC}_{\mathrm{x}}\,\varepsilon^{\mathrm{x}}_{ABC}=\frac{1}{3!}\sqrt{g^{\mathrm{x}}}\,\eta_{ABC}\,\Psi^{A}_{\mathrm{x}\,\hat{a}}\Psi^{B}_{\mathrm{x}\,\hat{b}}\Psi^{C}_{\mathrm{x}\,\hat{c}}\,\varepsilon^{\hat{0}\hat{a}\hat{b}\hat{c}}\mu^{\mathrm{x}}_{\hat{0}}=\mu_{\mathrm{x}}\,, \tag{3.9}\]

where we have used \(\mu_{\mathrm{x}}=-\mu^{\mathrm{x}}_{a}u^{a}_{\mathrm{x}}=-\mu^{\mathrm{x}}_{\hat{0}}\).
This fact is important because it makes clear that only the (rest-frame) energy content of the four-momentum co-vector \(\mu^{\mathrm{x}}_{a}\) is stored in the normalization of the matter space momentum three-form \(\mu^{ABC}_{\mathrm{x}}\). Similarly, one can show that \(\mathcal{N}_{\mathrm{x}}=n_{\mathrm{x}}\):

\[n_{\mathrm{x}} =-\frac{1}{3!}u^{\mathrm{x}}_{a}\,\varepsilon^{bcda}\,n^{\mathrm{x}}_{bcd}=-\frac{1}{3!}u^{\mathrm{x}}_{\hat{a}}\,\varepsilon^{\hat{b}\hat{c}\hat{d}\hat{a}}\,\Psi^{B}_{\mathrm{x}\,[\hat{b}}\Psi^{C}_{\mathrm{x}\,\hat{c}}\Psi^{D}_{\mathrm{x}\,\hat{d}]}\,\varepsilon^{\mathrm{x}}_{BCD}\,\mathcal{N}_{\mathrm{x}}\]
\[=-\frac{1}{3!}u^{\mathrm{x}}_{\hat{0}}\,\varepsilon^{\hat{b}\hat{c}\hat{d}\hat{0}}\varepsilon_{\hat{b}\hat{c}\hat{d}}\,\mathcal{N}_{\mathrm{x}}=-u^{\mathrm{x}}_{\hat{0}}\,\mathcal{N}_{\mathrm{x}}=\mathcal{N}_{\mathrm{x}}. \tag{3.10}\]

These relations are not surprising. It is, in fact, quite intuitive that the non-barred quantities are related to spacetime (rest-frame) densities, given that the three-forms \(\varepsilon^{\mathrm{x}}_{ABC}\) measure the volume of the matter space elements. We can also use the tetrad formalism to prove another result that will be needed later on: the intimate connection between a non-zero particle creation rate and an extended functional dependence of the current three-form. In fact, we have (see eq. (2.45))

\[\Gamma_{\mathrm{x}}=\nabla_{a}n_{\mathrm{x}}^{a}=\frac{1}{3!}\varepsilon^{bcda}\,\Psi^{B}_{\mathrm{x}\,[b}\,\Psi^{C}_{\mathrm{x}\,c}\,\Psi^{D}_{\mathrm{x}\,d}\nabla_{a]}n_{BCD}^{\mathrm{x}}\,, \tag{3.11}\]

where we used \(\nabla_{[a}\,\Psi^{B}_{\mathrm{x}\,b}\Psi^{C}_{\mathrm{x}\,c}\,\Psi^{D}_{\mathrm{x}\,d]}=0\). Introducing (again) a tetrad comoving with the x-species, and multiplying by \(\mu_{\mathrm{x}}\), we have

\[\mu_{\mathrm{x}}\Gamma_{\mathrm{x}}=\frac{1}{3!}\mu_{\mathrm{x}}^{ABC}\,u_{\mathrm{x}}^{a}\nabla_{a}n_{ABC}^{\mathrm{x}}\equiv\frac{1}{3!}\mu_{\mathrm{x}}^{ABC}\frac{dn_{ABC}^{\mathrm{x}}}{d\tau_{\mathrm{x}}}. \tag{3.12}\]

As explained earlier, the right-hand side of this equation vanishes identically if \(n_{ABC}^{\mathrm{x}}=n_{ABC}^{\mathrm{x}}(X_{\mathrm{x}}^{A})\), while it is in general non-zero if we assume the extended functional dependence given in eq. (3.4). We can now use the introduced normalizations to slim the notation (with respect to that used in [20]) for the various pieces of \(R_{a}^{\mathrm{x}}\) and \(D_{ab}^{\mathrm{x}}\). For instance, the "purely reactive" term from [20] becomes

\[\mathrm{R}_{a}^{\mathrm{xy}}=\frac{1}{3!}\mu_{\mathrm{x}}^{ABC}\frac{\partial n_{ABC}^{\mathrm{x}}}{\partial X_{\mathrm{y}}^{D}}\,\Psi^{D}_{\mathrm{y}\,a}=\mathcal{M}_{\mathrm{x}}\frac{\partial\mathcal{N}_{\mathrm{x}}}{\partial X_{\mathrm{y}}^{D}}\,\Psi^{D}_{\mathrm{y}\,a}\equiv\mathrm{R}_{D}^{\mathrm{xy}}\,\Psi^{D}_{\mathrm{y}\,a}. \tag{3.13}\]

Similarly, we can write

\[s_{ab}^{\mathrm{xy}} =\frac{1}{3}\mu_{\mathrm{x}}^{ABC}\frac{\partial n_{ABC}^{\mathrm{x}}}{\partial g_{\mathrm{y}}^{DE}}\,\Psi^{D}_{\mathrm{y}\,a}\,\Psi^{E}_{\mathrm{y}\,b}=2\mathcal{M}_{\mathrm{x}}\frac{\partial\mathcal{N}_{\mathrm{x}}}{\partial g_{\mathrm{y}}^{DE}}\,\Psi^{D}_{\mathrm{y}\,a}\,\Psi^{E}_{\mathrm{y}\,b}\equiv s_{DE}^{\mathrm{xy}}\,\Psi^{D}_{\mathrm{y}\,a}\,\Psi^{E}_{\mathrm{y}\,b}\,, \tag{3.14a}\]
\[\mathcal{S}_{ab}^{\mathrm{xy}} =\frac{1}{3}\mu_{\mathrm{x}}^{ABC}\frac{\partial n_{ABC}^{\mathrm{x}}}{\partial g_{\mathrm{xy}}^{DE}}\,\Psi^{D}_{\mathrm{x}\,a}\,\Psi^{E}_{\mathrm{y}\,b}=2\mathcal{M}_{\mathrm{x}}\frac{\partial\mathcal{N}_{\mathrm{x}}}{\partial g_{\mathrm{xy}}^{DE}}\,\Psi^{D}_{\mathrm{x}\,a}\,\Psi^{E}_{\mathrm{y}\,b}\equiv\mathcal{S}_{DE}^{\mathrm{xy}}\,\Psi^{D}_{\mathrm{x}\,a}\,\Psi^{E}_{\mathrm{y}\,b}\,, \tag{3.14b}\]

where we have used the fact that the partial derivatives are performed, say, with respect to the metric \(g_{\mathrm{y}}^{AB}\) keeping fixed \(g_{\mathrm{x}}^{AB}\) and \(g_{\mathrm{xy}}^{AB}\). We will consider the validity of this assumption later. The remaining viscous stress tensor, \(S_{ab}^{\mathrm{x}}\), leads to a slightly more involved expression, because of the presence of \(g^{\mathrm{x}}\) in eq. (3.3). We have

\[S_{ab}^{\mathrm{x}} =\frac{1}{3}\mu_{\mathrm{x}}^{ABC}\frac{\partial n_{ABC}^{\mathrm{x}}}{\partial g_{\mathrm{x}}^{DE}}\,\Psi^{D}_{\mathrm{x}\,a}\,\Psi^{E}_{\mathrm{x}\,b}=2\left(\frac{\mathcal{M}_{\mathrm{x}}}{\sqrt{g^{\mathrm{x}}}}\frac{\partial\big(\mathcal{N}_{\mathrm{x}}\sqrt{g^{\mathrm{x}}}\big)}{\partial g_{\mathrm{x}}^{DE}}\right)\Psi^{D}_{\mathrm{x}\,a}\,\Psi^{E}_{\mathrm{x}\,b}\]
\[=2\left(\mathcal{M}_{\mathrm{x}}\frac{\partial\mathcal{N}_{\mathrm{x}}}{\partial g_{\mathrm{x}}^{DE}}-\frac{1}{2}\mathcal{N}_{\mathrm{x}}\mathcal{M}_{\mathrm{x}}\,g_{DE}^{\mathrm{x}}\right)\Psi^{D}_{\mathrm{x}\,a}\,\Psi^{E}_{\mathrm{x}\,b}\equiv S_{DE}^{\mathrm{x}}\,\Psi^{D}_{\mathrm{x}\,a}\,\Psi^{E}_{\mathrm{x}\,b}. \tag{3.15}\]

It is also obvious, by looking at the respective definitions, that the reactive terms--which stem from the fact that \(\mathcal{N}_{\mathrm{x}}\) can depend also on \(g^{AB}_{\mathrm{y}}\) and \(g^{AB}_{\mathrm{xy}}\)--can now be written

\[r^{\mathrm{xy}}_{a}=\frac{1}{2}s^{\mathrm{xy}}_{DE}\,\nabla_{a}\big(g^{bc}\Psi^{D}_{\mathrm{y}\,b}\Psi^{E}_{\mathrm{y}\,c}\big)\,, \tag{3.16a}\]
\[\mathcal{R}^{\mathrm{xy}}_{a}=\frac{1}{2}\mathcal{S}^{\mathrm{xy}}_{DE}\,g^{bc}\Psi^{D}_{\mathrm{x}\,b}\nabla_{a}\big(\Psi^{E}_{\mathrm{y}\,c}\big). \tag{3.16b}\]

Let us conclude this introductory section with a caveat on the slimmed notation introduced so far. In order to consider all the projected metrics to be independent, we need to make sure we are not actually adding degrees of freedom to the problem. Since the various projected metrics are ultimately combinations of the maps \(\Psi^{A}_{\mathrm{x}\,b}\)--each of which has \(3\times 4=12\) independent components--we need to make sure that the number of degrees of freedom associated with the metrics is less than or equal to \(12\,l\), where \(l\) is the number of constituents. There are \(l\) projected metrics \(g^{AB}_{\mathrm{x}}\), and the number of mixed projected metrics is easily found to be \(l(l-1)/2\), for a total of \(l(l+1)/2\) symmetric \(3\times 3\) matrices with \(6\) independent components each. We therefore need

\[6\,\frac{l(l+1)}{2}\leq 12\,l\Longrightarrow l\leq 3. \tag{3.17}\]

For instance, \(l=3\) saturates the bound (\(36\leq 36\)), while \(l=4\) would give \(60>48\). Therefore, the slimmed notation introduced in this section applies to cases with at most three species moving independently. In what follows, the machinery is developed in a general setting, but we will focus on cases with at most three species for specific applications.

### 3.2 The non-dissipative limit

We now begin to develop the process for comparing standard relativistic models for dissipative fluids with that provided by the action principle. Standard approaches [118, 119, 120, 206] assume a reference equilibrium state and then build in dissipation via deviations away from this state.
The action principle formally does not require any sort of equilibrium, but provides a fully non-linear set of field equations. Obviously, our first task must be to extract from the non-linear equations a notion of equilibrium. This is not straightforward for various reasons, a key one being that an arbitrary spacetime in General Relativity does not have global temporal, spatial, and rotational invariance. As a first step, we will recall features of the typical laboratory set-ups within which the laws of chemistry, dynamics, and thermodynamics were first established.

#### Laboratory vs. general relativistic set-up

A typical laboratory set-up is essentially local in the spacetime sense, implying there is--to a great degree of precision--temporal, spatial, and rotational invariance. Noether symmetries exist, which lead to energy, momentum, and angular momentum conservation. A clean separation between internal and external influences can be made, and these influences themselves can be manipulated. The effect of long-range, non-screenable forces on the system--for example, gravity--can be ignored. Well-defined (theoretical and experimental/observational) notions of total energy and entropy can be realized. Equilibrium can be defined in the broadest sense by saying the system evolves to a state where its total energy is minimized, or, equally, its total entropy is maximized. Internal interactions are due to, say, chemical reactions, whereas external interactions are those which distort the system's volume or allow particles and heat to enter or leave through the volume's surface. If a system is in chemical equilibrium internally, we can say that the reactions inside it are running forwards and backwards at such a rate that constituent particle number ratios remain fixed in time. If the given system is in chemical equilibrium with another system, then the chemical potentials of the two will be equal. A system in dynamical equilibrium just sits there, with no temporal evolution. Any pressure acting on the system's surface will be balanced by an internal pressure of the same value. Finally, we can say that two systems are in thermal equilibrium when there is no heat flow between them, the end result being equality of their respective temperatures. Now, let us return to the problem at hand--equilibrium when General Relativity cannot be neglected. A general relativistic set-up is problematic from the get-go, because one is hard-pressed to find properties of equilibrium like those just discussed which are workable at all time- and length-scales. Broadly speaking, there seem to be no general relativistic rules on how the local thermodynamics of local (intensive) parameters--chemical potential \(\mu\), pressure \(p\), and temperature \(T\)--connects with some notion of global thermodynamics for global (extensive) parameters--such as the total energy \(E\). An unambiguous extrapolation of the standard definitions of chemical, dynamical, and thermal equilibrium given above to General Relativity is not possible, for reasons to be explained below. There is also the well-known difficulty of identifying the total energy of a region in an arbitrary spacetime, since the Equivalence Principle precludes an ultra-local definition of gravitational energy density.6

Footnote 6: Of course, for asymptotically flat spacetimes, one can define quantities like the Schwarzschild mass. Gravitational-wave energy can be defined, but only after averaging over wavelengths, see [145, 207].
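Before seeing why these laboratory rules fail in General Relativity, it may help to have them in front of us in formula form (this is textbook equilibrium thermodynamics, recorded here purely for orientation). For two weakly coupled systems A and B exchanging energy, volume, and particles at fixed totals, maximizing the total entropy \(S=S_{A}+S_{B}\) gives

\[dS=\left(\frac{1}{T_{A}}-\frac{1}{T_{B}}\right)dE_{A}+\left(\frac{p_{A}}{T_{A}}-\frac{p_{B}}{T_{B}}\right)dV_{A}-\left(\frac{\mu_{A}}{T_{A}}-\frac{\mu_{B}}{T_{B}}\right)dN_{A}=0\,,\]

so that equilibrium requires \(T_{A}=T_{B}\), \(p_{A}=p_{B}\), and \(\mu_{A}=\mu_{B}\). It is precisely these equalities that become ambiguous once gravitational red- and blue-shifts enter the game.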
The reason that the laboratory rules for chemical and thermal equilibrium are not viable in General Relativity was established long ago by Tolman and Ehrenfest [214, 215]: In General Relativity, all forms of energy react to gravity. Temperature and chemical potentials represent forms of energy and can undergo red-shift or blue-shift. There is no single temperature for an isolated system, and so saying "system A is in thermal equilibrium with system B if their temperatures are the same" becomes ambiguous; similarly for chemical equilibrium. As for dynamical equilibrium, a standard undergraduate physics calculation shows that pressure increases with depth in water which nevertheless remains at rest.7

Footnote 7: In this context, we can think of it as resulting from the breaking via gravity of the spacelike Killing vectors which lead to space-translation invariance.

Even the use of the word "equilibrium" is tricky, because it tends to imply that a system in thermal and chemical equilibrium is independent of time, since the total entropy and total particle number do not evolve. In General Relativity, a system which is independent of time occurs only for special spacetimes which have a global timelike Killing vector field. Strictly speaking, this immediately puts the non-dissipative fluid models of Cosmology--the Friedmann-Lemaître-Robertson-Walker solutions--out of the discussion, as the universe is expanding, making it time-dependent. This points to another problem with the notion of total energy in General Relativity and with arguments based on the standard understanding of energy conservation: In Special Relativity, the curvature is zero and there is a timelike Killing vector field leading directly to a Noether symmetry for the system and total energy conservation. (There are also Killing vector fields representing rotational and spatial invariance, which lead to Noether symmetries resulting in total angular and linear momentum conservation.) In an expanding universe this line of reasoning for energy conservation obviously breaks down. The main message is this: Important issues remain unsettled even after a century's worth of debate. We will not resolve these issues here; instead, what we will do is take the action-based formalism and see how its internal machinery can be manipulated to produce a self-consistent notion of the non-dissipative limit, without trying to resolve the deeper issues about the nature of equilibrium8. Our way forward is to take advantage of the fact that the action-based field equations are fully non-linear and complete.

Footnote 8: We will still use the word "equilibrium" interchangeably with the non-dissipative limit.

#### Multiple equilibrium states

The main mechanism for manipulating the machinery of the action-based field equations is to apply perturbation techniques similar to those used to determine, say, quasi-normal modes of neutron stars. The general idea for neutron stars is to analyze linear perturbations of configurations having particular symmetries generated by Killing vectors. Among the most studied neutron star "ground-states" are those having Killing vectors which generate staticity and spherical symmetry, and those with Killing vectors that generate axisymmetry and stationarity; basically, non-rotating and rotating backgrounds, respectively. In an analogous way, we can expect different options for generating the non-dissipative limit of a multi-fluid system.
For example, we can take the limit where the different dissipation coefficients (such as shear and bulk viscosities) are effectively zero. Another possibility is the limit where the dissipation coefficients are non-zero but the fluid motion itself is such that the dissipation mechanisms are not acting. The formalism developed by Onsager [166] (recall the discussion in section 2.1.1) is worthy of mention here, because the system of field equations it creates is more explicit in how the two limits can be implemented (see, for example, [18]). It is interesting also to note that the philosophy of the Onsager approach is not so much about how to expand away from an equilibrium, but rather about how a non-equilibrium system gets driven back to the equilibrium state. Here, because the field equations are fully non-linear, they can, in principle, describe systems which are being driven toward or away from equilibrium. Next, we will explore some of the different options for equilibrium states. We will use a global analysis which assumes that the second law of thermodynamics applies and that a knowledge of the fluxes throughout a region of spacetime is enough to determine whether or not dissipation is acting. A local analysis of the formalism will also be pursued, involving the field equations themselves.

#### Global analysis of the non-dissipative limit

Recall that the fundamental dynamical variables are the particle fluxes \(n^{a}_{\mathrm{x}}\) and the entropy flux \(s^{a}=n^{a}_{\mathrm{s}}\).9 The formalism's linchpin is the breaking of the closure of the particle-flux three-forms, \(n^{\mathrm{x}}_{abc}\) and \(s_{abc}\), which leads to non-zero creation rates \(\Gamma_{\mathrm{x}}\) and \(\Gamma_{\mathrm{s}}\). In turn, these non-zero creation rates lead to the resistive contribution \(R^{a}_{\mathrm{x}}\) and the dissipation tensor \(D^{\mathrm{x}}_{ab}\) terms in the equations of motion. The nice thing about fluxes, which we will exploit here, is that they can be integrated.

Footnote 9: Because we impose the second law of thermodynamics below, we are specifically separating out the entropy flux in this discussion.

When we use the Einstein equations and the field equations of a multi-fluid system, our goal is to get solutions for the metric and fluxes on a "chunk" of spacetime, for a given set of initial/boundary conditions. Suppose we pick an ad hoc region \(\mathcal{M}\) of spacetime, as illustrated in fig. 3.1. The fact that it is a region implies there is a "conceptual boundary", meaning the whole spacetime is being divided up into smaller domains. Let \(u^{a}_{\mathrm{B}}\) (collectively) denote the unit normal to the total boundary of the region, defined so that it always points "out". The boundary itself consists of two spacelike hypersurfaces \(\partial\mathcal{M}_{\pm}\) (with unit normals \(u^{a}_{\mathrm{B}_{\pm}}\), \(u^{a}_{\mathrm{B}_{\pm}}u^{\mathrm{B}_{\pm}}_{a}=-1\)), and a timelike hypersurface \(\partial\mathcal{M}_{L}\) (with unit normal \(u^{a}_{\mathrm{B}_{L}}\), \(u^{a}_{\mathrm{B}_{L}}u^{\mathrm{B}_{L}}_{a}=+1\)); in essence, think of \(\partial\mathcal{M}_{-}\) as a 3D region of characteristic volume \(\Delta L^{3}\) on an initial time-slice of \(\mathcal{M}\) and \(\partial\mathcal{M}_{+}\) as the same volume on the final time-slice, and then \(\partial\mathcal{M}_{L}\) will be similar to the union of the surfaces of the same volume on each leaf of some spacelike foliation of \(\mathcal{M}\) between \(\partial\mathcal{M}_{-}\) and \(\partial\mathcal{M}_{+}\).
The induced metric on \(\partial\mathcal{M}_{\pm}\) is \(h^{ab}_{\pm}=g^{ab}+u^{a}_{\mathrm{B}_{\pm}}u^{b}_{\mathrm{B}_{\pm}}\), and for \(\partial\mathcal{M}_{L}\) it is \(h^{ab}_{L}=g^{ab}-u^{a}_{\mathrm{B}_{L}}u^{b}_{\mathrm{B}_{L}}\). There are three contributions to the total particle number change \(\Delta N^{\mathrm{x}}\) and total entropy change \(\Delta S\): (i) the total particle number \(N^{\mathrm{x}}_{-}\) and entropy \(S_{-}\) which exist in \(\partial\mathcal{M}_{-}\); (ii) the total particle number \(N^{\mathrm{x}}_{+}\) and entropy \(S_{+}\) which exist in \(\partial\mathcal{M}_{+}\); and (iii) the number of particles \(\Delta N^{\mathrm{x}}_{L}\) and amount of entropy \(\Delta S_{L}\) which enter/leave through \(\partial\mathcal{M}_{L}\). Each contribution is obtainable from its associated flux: If \(n^{\mathrm{x}}_{\pm}\) (\(s_{\pm}\)) are the particle number (entropy) densities as measured with respect to the volumes \(\partial\mathcal{M}_{\pm}\), and \(n^{\mathrm{x}}_{L}\) (\(s_{L}\)) is the number of particles (amount of entropy) per unit area per unit time entering/leaving \(\partial\mathcal{M}_{L}\), then10

Footnote 10: We are denoting with \(h_{\pm}\) the determinant of \(h^{\pm}_{ab}\). We have also taken into account the fact that \(u^{a}_{\mathrm{B}_{-}}\) points to the past, and that the signature of the induced metric \(h^{ab}_{L}\) is \((-++)\) as \(\partial\mathcal{M}_{L}\) is timelike.

\[N^{\mathrm{x}}_{+} =\int_{\partial\mathcal{M}_{+}}\mathrm{d}^{3}x\sqrt{h_{+}}\ n^{\mathrm{x}}_{+}=\int_{\partial\mathcal{M}_{+}}\mathrm{d}^{3}x\sqrt{h_{+}}\ \left(-u^{\mathrm{B}_{+}}_{a}n^{a}_{\mathrm{x}}\right)\,, \tag{3.18a}\]
\[N^{\mathrm{x}}_{-} =\int_{\partial\mathcal{M}_{-}}\mathrm{d}^{3}x\sqrt{h_{-}}\ n^{\mathrm{x}}_{-}=\int_{\partial\mathcal{M}_{-}}\mathrm{d}^{3}x\sqrt{h_{-}}\ \left(u^{\mathrm{B}_{-}}_{a}n^{a}_{\mathrm{x}}\right)\,, \tag{3.18b}\]
\[\Delta N^{\mathrm{x}}_{L} =\int_{\partial\mathcal{M}_{L}}\mathrm{d}^{3}x\sqrt{-h_{L}}\ n^{\mathrm{x}}_{L}=\int_{\partial\mathcal{M}_{L}}\mathrm{d}^{3}x\sqrt{-h_{L}}\ \left(u^{\mathrm{B}_{L}}_{a}n^{a}_{\mathrm{x}}\right)\,, \tag{3.18c}\]

and similarly for \(\Delta S\).

Figure 3.1: A depiction of the spacetime region \(\mathcal{M}\), with one spatial axis suppressed. It has a characteristic spatial size \(\Delta L\) and temporal size \(\Delta T\). Inside \(\mathcal{M}\) is a smaller region \(\delta\mathcal{M}\) of characteristic spatial and temporal size \(\delta l\) and \(\delta t\), respectively. The boundary \(\partial\mathcal{M}\) consists of the initial and final time-slices \(\partial\mathcal{M}_{-}\), \(\partial\mathcal{M}_{+}\) and the timelike hypersurface \(\partial\mathcal{M}_{L}\).

The changes in the total x-particles \(\Delta N^{\mathrm{x}}\) and entropy \(\Delta S\) over the region \(\mathcal{M}\) are therefore

\[\Delta N^{\mathrm{x}} =N_{+}^{\mathrm{x}}-N_{-}^{\mathrm{x}}+\Delta N_{L}^{\mathrm{x}}\,, \tag{3.19a}\]
\[\Delta S =S_{+}-S_{-}+\Delta S_{L}. \tag{3.19b}\]

If the length- and time-scales of the spacetime region \(\mathcal{M}\) are those typical of terrestrial labs (read: its curvature is zero throughout), then we have great confidence in asserting the second law of thermodynamics; namely, the net change of the total entropy must satisfy \(\Delta S\geq 0\).
We could even be confident in determining the total energy \(E\) and volume \(V\) of the system, and in having a working first law of thermodynamics which connects \(\Delta E\), \(\Delta N^{\mathrm{x}}\), \(\Delta V\), and \(\Delta S\):

\[\Delta E=T\Delta S-p\Delta V+\sum_{\mathrm{x}}\mu_{\mathrm{x}}\Delta N^{\mathrm{x}}. \tag{3.20}\]

The temperature \(T\), pressure \(p\), and chemical potentials \(\mu_{\mathrm{x}}\) would be well-defined and calculable. We could even use the standard notions of chemical, dynamical, and thermal equilibrium and say that system A of spacetime region \(\mathcal{M}_{A}\) is in chemical, dynamical, and thermal equilibrium with system B of spacetime region \(\mathcal{M}_{B}\) if, respectively, their chemical potentials are equal, their pressures are equal, and their temperatures are equal. Now, let us suppose we have a region large enough that spacetime curvature can no longer be ignored. Probably, it would be a safe bet to say that the second law still applies; i.e., \(\Delta S\geq 0\). But we are hard-pressed to employ the laboratory definitions of chemical, dynamical, and thermal equilibrium. Consequently, it is difficult to imagine a global first law of thermodynamics for general relativistic multifluid systems similar to that in eq. (3.20); again, the reason being that intensive parameters are spacetime dependent, and an extensive parameter like total energy may not even be definable. Still, our task is to explore any possible link between the parameters which require scales where spacetime curvature matters (\(\Delta N^{\mathrm{x}}\) and \(\Delta S\)) and the local fluid variables (\(n_{\mathrm{x}}^{a}\) and \(s^{a}\)) which enter the fluid field equations. Fortunately, the divergence theorem provides such a link. Applying it to the divergence of both the particle and entropy fluxes gives

\[\Delta N^{\mathrm{x}} =\int_{\mathcal{M}}\mathrm{d}^{4}x\sqrt{-g}\ \nabla_{a}n_{\mathrm{x}}^{a}=\int_{\mathcal{M}}\mathrm{d}^{4}x\sqrt{-g}\ \Gamma_{\mathrm{x}}\,, \tag{3.21a}\]
\[\Delta S =\int_{\mathcal{M}}\mathrm{d}^{4}x\sqrt{-g}\ \nabla_{a}s^{a}=\int_{\mathcal{M}}\mathrm{d}^{4}x\sqrt{-g}\ \Gamma_{\mathrm{s}}. \tag{3.21b}\]

These are not new results, but they serve the purpose here of establishing a direct link between global and local variables, which we will use to formulate some aspects of the non-dissipative limit of our formalism. Consider an idealized situation of a spacetime region \(\mathcal{M}\) sub-divided into a region \(\mathcal{M}_{A}\) for which \(\Delta N_{A}^{\mathrm{x}}<0\) and \(\Delta S_{A}<0\), and another region \(\mathcal{M}_{B}\) for which \(\Delta N_{B}^{\mathrm{x}}>0\) and \(\Delta S_{B}>0\). The trick is that they are such that the total changes on \(\mathcal{M}\) vanish:

\[\Delta N^{\mathrm{x}}=\Delta N_{A}^{\mathrm{x}}+\Delta N_{B}^{\mathrm{x}}=0\,,\qquad\Delta S=\Delta S_{A}+\Delta S_{B}=0. \tag{3.22}\]

The point is that, even though \(\Gamma_{\mathrm{x}}\) and \(\Gamma_{\mathrm{s}}\) are not zero, this is an example of a global, fully general relativistic, non-dissipative system, since there is no net total particle number or total entropy change. But is this realistic? Is this the kind of definition of the non-dissipative limit we are looking for? Probably not. What is more likely is that the non-dissipative limit is better understood by breaking up \(\mathcal{M}\) into many small spacetime regions \(\delta\mathcal{M}\), with characteristic temporal and volume scales \(\delta t\) and \(\left(\delta l\right)^{3}\), respectively, as illustrated in fig. 3.1.
Once again, let us imagine that \(\delta\mathcal{M}\) is subdivided into two regions \(\delta\mathcal{M}_{A}\) and \(\delta\mathcal{M}_{B}\). It is conceivable that on these scales statistical fluctuations could lead to positive creation rates in one region and negative in the other. If the regions are small enough, we can assume that \(\Gamma_{\mathrm{x}}\) and \(\Gamma_{\mathrm{s}}\) vary slowly across them, so that we can approximate the integrals for \(\delta N_{\delta\mathcal{M}}^{\mathrm{x}}\) and \(\delta S_{\delta\mathcal{M}}\) as11

Footnote 11: We are also assuming the \(\delta\mathcal{M}\) to be small enough that we can transform away gravity by means of Riemann normal coordinates [155].

\[\delta N_{\delta\mathcal{M}}^{\mathrm{x}}\approx\Gamma_{\mathrm{x}}\,\delta t\left(\delta l\right)^{3}\,,\qquad\delta S_{\delta\mathcal{M}}\approx\Gamma_{\mathrm{s}}\,\delta t\left(\delta l\right)^{3}. \tag{3.23}\]

However, the random nature of statistical fluctuations for a system purported to be in equilibrium implies that any non-zero creation rates inside \(\delta\mathcal{M}_{A}\) and \(\delta\mathcal{M}_{B}\) must balance on average, so that

\[\delta N_{\delta\mathcal{M}}^{\mathrm{x}}=\delta N_{\delta\mathcal{M}_{A}}^{\mathrm{x}}+\delta N_{\delta\mathcal{M}_{B}}^{\mathrm{x}}\approx\left(\Gamma_{\mathrm{x}}^{A}+\Gamma_{\mathrm{x}}^{B}\right)\delta t\left(\delta l\right)^{3}=0\implies\Gamma_{\mathrm{x}}=\Gamma_{\mathrm{x}}^{A}+\Gamma_{\mathrm{x}}^{B}=0\,, \tag{3.24a}\]
\[\delta S_{\delta\mathcal{M}}=\delta S_{\delta\mathcal{M}_{A}}+\delta S_{\delta\mathcal{M}_{B}}\approx\left(\Gamma_{\mathrm{s}}^{A}+\Gamma_{\mathrm{s}}^{B}\right)\delta t\left(\delta l\right)^{3}=0\implies\Gamma_{\mathrm{s}}=\Gamma_{\mathrm{s}}^{A}+\Gamma_{\mathrm{s}}^{B}=0\,. \tag{3.24b}\]

One conclusion from this exercise is that the characteristic time and volume scales of \(\delta\mathcal{M}\) must be large enough that statistical fluctuations will, on average, balance out for a system in equilibrium. The second conclusion is that having \(\delta N_{\delta\mathcal{M}}^{\mathrm{x}}=0\) (\(\delta S_{\delta\mathcal{M}}=0\)) on the one hand means \(\Gamma_{\mathrm{x}}=0\) (\(\Gamma_{\mathrm{s}}=0\)) on the other, and vice versa. Putting both together, we will assume that the equilibrium state for multi-fluid systems must be such that regions like \(\delta\mathcal{M}\) set the scales for fluid elements, and that we have \(\Gamma_{\mathrm{x}}=0\) and \(\Gamma_{\mathrm{s}}=0\) everywhere in \(\mathcal{M}\).

#### Local analysis of the non-dissipative limit

This next step begins where the previous one left off; that is, a necessary condition for a multi-fluid system to be in equilibrium is that the flux creation rates \(\Gamma_{\mathrm{x}}\) (now including the entropy) vanish everywhere. We will use the field equations themselves to investigate three different ways for the action-based system to have zero particle creation rates: 1) the limit where the dissipation terms \(R_{a}^{\mathrm{x}}\) and \(D_{ab}^{\mathrm{x}}\) are zero; 2) the limit where the dissipation terms are non-zero but the fluid motion is such that the dissipative channels are dynamically suppressed; and 3) a combination of dynamical suppression with constraints between the dissipation terms that lead to Killing vector fields.
But before moving on with the analysis, it is advantageous to consider the simplest non-dissipative fluid model which can be derived from the action above--the ordinary perfect fluid, where all particle species and the entropy flow together and the total particle numbers and entropy are conserved individually. The calculation is straightforward [67]. All the fluxes have the same four-velocity, say \(u^{a}\), and so \(n_{\mathrm{x}}^{a}=n_{\mathrm{x}}u^{a}\). If each particle number flux is conserved individually, then

\[\nabla_{a}n_{\mathrm{x}}^{a}=\nabla_{a}\left(n_{\mathrm{x}}u^{a}\right)=u^{a}\nabla_{a}n_{\mathrm{x}}+n_{\mathrm{x}}\nabla_{a}u^{a}=0\implies u^{a}\nabla_{a}\ln n_{\mathrm{x}}=-\nabla_{a}u^{a}. \tag{3.25}\]

Obviously, the total particle flux \(n^{a}=\sum_{\mathrm{x}}n_{\mathrm{x}}^{a}\) is also conserved and hence

\[u^{a}\nabla_{a}\ln n=-\nabla_{a}u^{a}\,,\qquad n=\sum_{\mathrm{x}}n_{\mathrm{x}}. \tag{3.26}\]

Therefore, we have

\[u^{a}\nabla_{a}\ln n_{\mathrm{x}}-u^{a}\nabla_{a}\ln n=0\implies u^{a}\nabla_{a}\left(\frac{n_{\mathrm{x}}}{n}\right)=0. \tag{3.27}\]

The upshot is that each species fraction \(n_{\mathrm{x}}/n\) must also be conserved along the flow, and this includes the entropy as well. This implies that only one matter space is required. In the action principle, this means that for each x--including the entropy--we have \(\xi_{\mathrm{x}}^{a}=\xi^{a}\), and there is only one Euler equation of the form

\[\sum_{\mathrm{x}}f_{a}^{\mathrm{x}}=0\,, \tag{3.28}\]

where the \(f_{a}^{\mathrm{x}}\) are exactly as in eq. (2.44). With this example in mind, we can now proceed with the description of three different non-dissipative limits consistent with the action-based model. Note that we will also impose another condition which defines the equilibrium. We will assume that all distinct fluids are comoving, so that we are not considering systems with superfluid/superconducting phases, or a perfect heat-conducting limit [52]. This means that there is a common four-velocity for all species, \(u_{\mathrm{x}}^{a}=u^{a}\). However, it is important to point out a subtlety about this comoving limit: For a multi-fluid system each species has its own evolution equation. Even in the comoving limit there are still as many fluid equations as there are species. Now consider the field equations for a multi-species, single-fluid system--as we see from eq. (3.28), it has only one fluid evolution equation. Therefore, the comoving limit of the multi-fluid system (one equation per species) is not equal to the single-fluid system (one equation in total). This is not an error; rather, it is a consequence of the fact that the number of independent field equations of the system is fixed by the number of independent fluids chosen before the action principle is applied. Note also that we can use the common four-velocity \(u^{a}\) to introduce a spatial covariant derivative \(D_{a}\)--acting in directions perpendicular to \(u^{a}\)--and a time derivative denoted by an overdot, \(\dot{A}=u^{a}\nabla_{a}A\). For a scalar \(A\) we have

\[D_{a}A=\perp_{a}^{b}\nabla_{b}A=\left(\delta_{a}^{b}+u_{a}u^{b}\right)\nabla_{b}A=\nabla_{a}A+\dot{A}u_{a}\,, \tag{3.29}\]

and for a vector

\[D_{a}A_{b}=\perp_{a}^{c}\perp_{b}^{d}\nabla_{c}A_{d}. \tag{3.30}\]

##### Dynamical suppression of dissipation

We start by considering the consequences of the non-dissipative limit if the fluid flow is such that the dissipation mechanisms are not triggered.
If we look at each species creation rate we have

\[\mu_{\mathrm{x}}\Gamma_{\mathrm{x}}=-R_{a}^{\mathrm{x}}\,u^{a}-D_{ab}^{\mathrm{x}}\nabla^{a}u^{b}=0\,, \tag{3.31}\]

so that, summing over all species,

\[\sum_{\mathrm{x}}\mu_{\mathrm{x}}\Gamma_{\mathrm{x}}=-\Big(\sum_{\mathrm{x}}R_{a}^{\mathrm{x}}\Big)\,u^{a}-D_{ab}\nabla^{a}u^{b}=-D_{ab}D^{(a}u^{b)}=0\,, \tag{3.32}\]

where we have used the identities \(u_{\mathrm{x}}^{b}D_{ab}^{\mathrm{x}}=0\), \(\sum_{\mathrm{x}}R_{a}^{\mathrm{x}}=0\) and the fact that \(D^{ab}\) is symmetric. Using the standard decomposition of the four-velocity gradients--into the expansion \(\theta=\nabla_{a}u^{a}\), the trace-free symmetric shear \(\sigma_{ab}\), the vorticity \(\omega_{ab}=D_{[a}u_{b]}\), and the acceleration \(\dot{u}_{a}\)--it is easy to see that eq. (3.32) implies

\[D_{(a}u_{b)}=\perp_{(a}^{c}\perp_{b)}^{d}\nabla_{c}u_{d}=\nabla_{(a}u_{b)}+u_{(a}\dot{u}_{b)}=\sigma_{ab}+\frac{1}{3}\theta\perp_{ab}=0. \tag{3.33}\]

In particular, this tells us that the (dynamically suppressed) non-dissipative flow has zero expansion, \(\theta=0\), and zero shear, \(\sigma_{ab}=0\). What is left of the motion is captured by

\[\nabla_{a}u_{b}=\omega_{ab}-\dot{u}_{b}u_{a}\,, \tag{3.34}\]

which is consistent with rigid rotation. From the definition of the creation rates, we can now write

\[\Gamma_{\mathrm{x}}=\nabla_{a}n^{a}_{\mathrm{x}}=\dot{n}_{\mathrm{x}}+n_{\mathrm{x}}\theta=\dot{n}_{\mathrm{x}}=0. \tag{3.35}\]

Assuming a thermodynamical relation in the standard way, namely that the energy functional of the system is12\(\varepsilon=\varepsilon(n_{\mathrm{x}})\), we see that the chemical potential of each species is \(\mu_{\mathrm{x}}=\mu_{\mathrm{x}}(n_{\mathrm{x}})\), and likewise for the pressure \(p\). Therefore, we have \(\dot{\mu}_{\mathrm{x}}=0\) and \(\dot{p}=0\) as well. The proposed scenario is consistent with the minimum requirements for the system being non-dissipative--as explained above. This is not, however, the situation we will use as the basis for the expansion.

Footnote 12: Note that we are here using a compact notation, so that the chemical index x in \(\varepsilon(n_{\mathrm{x}})\) runs over all the species/constituents in the system under consideration.

##### The Euler limit

Later (in section 3.4) we will use thermodynamic arguments to show that the dissipative terms all vanish at equilibrium: \(D^{\mathrm{x,\,e}}_{ab}=0\) and \(R^{\mathrm{x,\,e}}_{a}=0\)--where we have introduced the superscript "e" to stress that the dissipative terms are evaluated at equilibrium, consistently with the notation used later on. We now consider the non-dissipative limit with these additional constraints and show its compatibility with the Euler equations. Since the fluids are comoving at equilibrium, we have for the fluxes \(n^{a}_{\mathrm{x}}=n_{\mathrm{x}}u^{a}_{\mathrm{e}}\), and so the four-momenta become

\[\mu^{\mathrm{x}}_{a}=\Big(\mathcal{B}_{\mathrm{x}}n_{\mathrm{x}}+\sum_{\mathrm{y}\neq\mathrm{x}}\mathcal{A}_{\mathrm{xy}}n_{\mathrm{y}}\Big)u^{\mathrm{e}}_{a}=\mu_{\mathrm{x}}u^{\mathrm{e}}_{a}\,, \tag{3.36}\]

and the equation of motion for the x-species is

\[f^{\mathrm{x}}_{a}=2n^{b}_{\mathrm{x}}\nabla_{[b}\mu^{\mathrm{x}}_{a]}=n_{\mathrm{x}}\mu_{\mathrm{x}}\dot{u}^{\mathrm{e}}_{a}+n_{\mathrm{x}}\Big(u^{b}_{\mathrm{e}}u^{\mathrm{e}}_{a}+\delta^{b}_{a}\Big)\nabla_{b}\mu_{\mathrm{x}}=n_{\mathrm{x}}\mu_{\mathrm{x}}\dot{u}^{\mathrm{e}}_{a}+n_{\mathrm{x}}D_{a}\mu_{\mathrm{x}}=0. \tag{3.37}\]

The first term in \(f^{\mathrm{x}}_{a}\) then looks like the mass/energy per volume times the acceleration, while we can show that the second is a "pressure-like" term in the sense of being the gradient of a thermodynamic scalar.
In fact, we have

\[\frac{\partial\Lambda}{\partial n_{\mathrm{x}}}=-\Bigg(\mathcal{B}_{\mathrm{x}}n_{\mathrm{x}}-\sum_{\mathrm{y}\neq\mathrm{x}}\mathcal{A}_{\mathrm{xy}}n^{a}_{\mathrm{y}}u^{\mathrm{x}}_{a}\Bigg)=-\mu_{\mathrm{x}}\,, \tag{3.38}\]

and the sum of these terms provides the derivative of the total pressure \(\Psi\):

\[\sum_{\mathrm{x}}n_{\mathrm{x}}D_{a}\mu_{\mathrm{x}}=D_{a}\Big(\sum_{\mathrm{x}}n_{\mathrm{x}}\mu_{\mathrm{x}}+\Lambda\Big)=D_{a}\Psi. \tag{3.39}\]

It is important to note that each individual term cannot (in general) be considered as the derivative of the x-species contribution to the total pressure. Partial pressures exist only when the various species do not interact. Even though the comoving limit of the multi-fluid system is not the same as the single-fluid, multi-species system, there is some overlap: Taking the sum over the chemical species of eq. (3.37) we find13 the standard relativistic Euler equation we derived in section 1.3. One can also show that eq. (3.28) can be written in this form. This is an important self-consistency check but, because the multi-fluid comoving limit is not the same as the single-fluid limit, we need to go back to the individual fluid equations of the multi-fluid system.

Footnote 13: We have used the standard Euler relation \(\sum_{\mathrm{x}}n_{\mathrm{x}}\mu_{\mathrm{x}}=p+\varepsilon\), where \(p\), \(\varepsilon\) are the equilibrium pressure and energy density, respectively.

We can rewrite the individual equations of motion as

\[\dot{u}^{\mathrm{e}}_{a}=-D_{a}(\log\mu_{\mathrm{x}})\,; \tag{3.40}\]

thus, for each combination of \(\mathrm{x}\neq\mathrm{y}\),

\[D_{a}(\log\mu_{\mathrm{x}})=D_{a}(\log\mu_{\mathrm{y}})\ \Longrightarrow\ D_{a}\left(\log\frac{\mu_{\mathrm{x}}}{\mu_{\mathrm{y}}}\right)=0. \tag{3.41}\]

This self-consistency therefore requires the various chemical potentials \(\mu_{\mathrm{x}}\) and \(\mu_{\mathrm{y}}\) (as functions on spacetime) to be proportional to each other by some factor \(C_{\mathrm{y}}^{\mathrm{x}}\), which is constant in the spatial directions; namely,

\[\mu_{\mathrm{x}}=C_{\mathrm{y}}^{\mathrm{x}}\mu_{\mathrm{y}}\,,\qquad D_{a}C_{\mathrm{y}}^{\mathrm{x}}=0. \tag{3.42}\]

This is to be contrasted with the single-fluid case, where there is no such restriction--in the sense of being forced by the evolution equations--between the chemical potentials. Usually, one must provide additional information. For example, for neutron stars one typically imposes that beta decay and inverse beta decay are in equilibrium. If we combine this with the "dynamical suppression of dissipation", the factor \(C_{\mathrm{y}}^{\mathrm{x}}\) is in fact constant in all the spacetime directions.

##### Dynamical suppression and Killing vectors

In a local region of spacetime, freely falling frames exist and the Killing equation will be satisfied approximately. In these local regions, having an equilibrium will be consistent with the existence of Killing fields. However, local regions which are far removed from each other will not be (on the relevant dynamical timescale) in equilibrium with each other. This kind of "quasi-local" regression towards equilibrium has been discussed in the work of Fukuma and Sakatani [85], who explicitly introduce two different spacetime scales to describe the evolution of general relativistic dissipative systems. The hypothesis of Local Thermodynamic Equilibrium applies on the smaller scale--which is of the size of a fluid element--while the regression (in the sense of Onsager [166]) towards equilibrium takes place on the larger one, which can still be smaller than the body size.
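Before turning to the literature, let us record the statement we are working towards (a standard result, quoted here for orientation rather than derived): global thermodynamic equilibrium in a relativistic setting corresponds to the inverse-temperature four-vector being a Killing vector field,

\[\beta^{a}=\frac{u^{a}}{T}\,,\qquad\nabla_{(a}\beta_{b)}=0\,,\]

which packages together the Tolman red-shift of the temperature and the rigidity of the flow. The combinations \(\xi^{a}_{\mathrm{x}}=\mu^{-1}_{\mathrm{x}}u^{a}_{\mathrm{e}}\) considered below play the analogous role for the chemical potentials.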
A relation between the perfect fluid four-velocity and Killing vectors, for stationary axially symmetric rotating stars,\({}^{14}\) has been discussed by Gourgoulhon [99]. A similar discussion of thermodynamic equilibrium in General Relativity and the existence of Killing vectors was provided by Becattini [37]. Specifically, he showed that there must be global Killing vector fields if the total entropy of the system is to be independent of the spacelike hypersurface over which the integration is performed. As for the work presented here, we will now show under what conditions the combination \(\xi_{\rm x}^{a}=\mu_{\rm x}^{-1}u_{\rm e}^{a}\) can be turned into Killing vector fields. Footnote 14: Note that Gourgoulhon [99] works with the enthalpy per particle instead of chemical potentials. However, this makes no difference for barotropic perfect fluids. From eq. (3.40) it can be seen that \[\nabla_{a}\xi_{b}^{\rm x}+\nabla_{b}\xi_{a}^{\rm x}=\frac{1}{\mu_{\rm x}}\Big{(}u_{(a}^{\rm e}u_{b)}^{\rm e}\,\frac{d}{d\tau}\log\mu_{\rm x}+D_{(a}u_{b)}^{\rm e}\Big{)}\, \tag{3.43}\] so that, if dynamical suppression has worked, the right-hand side becomes zero and the \(\xi_{\rm x}^{a}\) will be timelike Killing vector fields, along which the local thermodynamical parameters \(n_{\rm x}\), \(\mu^{\rm x}\), \(\varepsilon\), and \(p\) become constants of motion. Put in different words, if we want the system to be (at least quasi-locally) at equilibrium--i.e. stationary--we also need to require rigid body motion. #### A final comment on equilibrium To conclude, we come full circle and consider again the change in total entropy given by eq. (3.19). The result only references spacelike hypersurfaces as part of the (ad hoc) choice of the boundary of the spacetime region for which the entropy change is being determined. There are no restrictions placed on the spacetime geometry in this construct; in particular, no requirement of global Killing vectors. As a matter of practice, the change in entropy of a system clearly depends on the spatial size of the system and the amount of time it has had to evolve. Coupling this with the fact that a separation of space from time is always a choice--an arbitrary spacetime has no preferred directions, no natural "moments of time"--we see that the ad hoc nature of the boundary in eq. (3.19) is not a drawback: it is precisely the freedom needed to incorporate a system's spatial extent and evolution time. The main reason why this is intriguing is that the second law of thermodynamics only refers to the change in total entropy, not the value of the entropy itself at specific moments of time (i.e. on spacelike hypersurfaces). It may be that questions of equilibrium are not to be settled by the "moment-to-moment" behaviour of three-dimensional integrals, but rather by global statements of the type represented by eq. (3.19). This is something we are currently investigating and hope to be able to give more detail on in future work. ### 3.3 Perturbations with respect to equilibrium With the equations of motion obtained from an action principle, we can consider perturbations away from equilibrium configurations (of the kind described above) in a way that is closely related--at least from the formal perspective--to standard hydrodynamical perturbation theory. The general approach to Lagrangian perturbation theory is perhaps best described by Friedman and Schutz [82]. 
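Before developing the expansion, it is useful to recall--as a standard summary rather than a new ingredient of the present formalism--the first-order dictionary of Lagrangian perturbation theory: the Lagrangian variation \(\Delta\) compares the perturbed and unperturbed configurations along the displacement \(\xi^{a}\) connecting fluid elements, and is related to the Eulerian variation \(\delta\) by \[\Delta=\delta+\mathcal{L}_{\xi}\,\qquad\mbox{so that, for instance,}\qquad\Delta g_{ab}=\delta g_{ab}+\nabla_{a}\xi_{b}+\nabla_{b}\xi_{a}\,\] with \(\mathcal{L}_{\xi}\) the Lie derivative along \(\xi^{a}\) [82]. As stressed below, this simple relation between \(\Delta\) and \(\delta\) holds only at first order; at second order the construction has to be carried out explicitly. 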
Roughly speaking, the evolution equations for the perturbed fields can be obtained by perturbing the equations that follow from the action. It is also clear--at least in principle--how to construct a Lagrangian whose variation gives the perturbed equations (see §2 of [82]). However, since we are not focussing on a stability analysis of fluid oscillations, we will not consider this additional aspect here. To set the stage for the perturbative expansion, we consider the family of worldlines (not necessarily geodesics) that each constituent of a multifluid system traces out in spacetime. Our definition of equilibrium includes the assumption that all species are comoving. Therefore, our fiducial set of worldlines representing equilibrium are those the system would have followed if it were comoving throughout its history. This then allows us to view each of the "final" worldlines \(x_{\rm f}^{a}(\bar{\tau})\) as a curve in spacetime which is close to the equilibrium one \(x_{\rm e}^{a}(\tau)\), with \(\bar{\tau}\) and \(\tau\) being the proper times of the respective curves. See fig. 3.2 for an illustration of the idea. The unit four-velocities associated with the two worldlines are \[u_{\rm f}^{a}=\frac{dx_{\rm f}^{a}}{d\bar{\tau}}\,\qquad u_{\rm e}^{a}=\frac{dx_{\rm e}^{a}}{d\tau}. \tag{3.44}\] Obviously, \(u_{\rm e}^{a}\) represents the comoving frame introduced earlier. We assume another family of curves \(x_{\rm ef}^{a}(\lambda)\), where \(\lambda\) is an affine parameter (say, the proper length), that connects the equilibrium worldline to the actual one. This means that for any point \(x_{\rm e}^{a}(\tau_{\rm e})\) on the equilibrium worldline, there is a unique point \(x_{\rm f}^{a}(\bar{\tau}_{\rm f})\) on the perturbed worldline, and a unique curve \(x_{\rm ef}^{a}(\lambda)\) between them having two points \(x_{\rm ef}^{a}\left(\lambda_{\rm e}\right)\) and \(x_{\rm ef}^{a}\left(\lambda_{\rm f}\right)\) such that \[x_{\rm f}^{a}\left(\bar{\tau}_{\rm f}\right)=x_{\rm ef}^{a}\left(\lambda_{\rm f}\right)\,\qquad x_{\rm e}^{a}\left(\tau_{\rm e}\right)=x_{\rm ef}^{a}\left(\lambda_{\rm e}\right). \tag{3.45}\] Taylor expanding the perturbed worldline about the equilibrium one up to second order, we get \[x_{\rm f}^{a}(\bar{\tau}_{\rm f})=x_{\rm e}^{a}(\tau_{\rm e})+\frac{dx_{\rm ef}^{a}}{d\lambda}\Big{|}_{\lambda_{\rm e}}(\lambda_{\rm f}-\lambda_{\rm e})+\frac{1}{2}\frac{d^{2}x_{\rm ef}^{a}}{d\lambda^{2}}\Big{|}_{\lambda_{\rm e}}(\lambda_{\rm f}-\lambda_{\rm e})^{2}=x_{\rm e}^{a}(\tau_{\rm e})+\xi^{a}\Delta\lambda+\frac{1}{2}\Big{(}\xi^{b}\partial_{b}\xi^{a}\Big{)}\Delta\lambda^{2}\, \tag{3.46}\] where we introduced the tangent vector \[\frac{d}{d\lambda}=\frac{dx_{\rm ef}^{a}}{d\lambda}\Big{|}_{\lambda_{\rm e}}\frac{\partial}{\partial x^{a}}=\xi^{a}\partial_{a}. \tag{3.47}\] The first objects we want to perturb are the fluid element "names". That is, we attach a label \(X^{A}\), where the index \(A=1,2,3\), to each of the worldlines used to cover the region of spacetime occupied by the fluid. By the definition of the Lagrangian variation [82, 83] we then have \[\Delta X^{A}=\Big{(}\phi_{*}X^{A}(x_{\rm f})\Big{)}(x_{\rm e})-\bar{X}^{A}(x_{\rm e})=X^{A}(x_{\rm f})-\bar{X}^{A}(x_{\rm e})=0\, \tag{3.48}\] where \(\phi\) is the diffeomorphism that connects the perturbed and unperturbed worldlines via the flow lines \(x_{\rm ef}^{a}\), and \(\phi_{*}\) denotes the pull-back from the perturbed to the equilibrium manifold. 
The last equality then follows from the fact that the fluid label does not change as we follow it. As a result we have, to first order, \[\delta X^{A}=-\mathcal{L}_{\xi_{\rm x}}X^{A}=-\xi_{\rm x}^{a}\Psi^{A}_{{\rm e}\,a}=-\xi_{\rm x}^{A}\, \tag{3.49}\] where we introduced the Lagrangian displacement vector \(\xi^{a}_{\rm x}=x^{a}_{\rm f}-x^{a}_{\rm e}\). Figure 3.2: An illustration of worldlines associated with the fluid elements (solid vertical red lines, parameterized by \(\tau,\ \bar{\tau}\)) and "Lagrangian displacements" which connect fluid elements (dashed horizontal blue lines, parameterized by \(\lambda\)). It is important to note that these displacement vectors are different from the ones introduced when obtaining the equations of motion from the action principle (see eq. (2.56)), even though the mathematics appears the same. In the present case the displacement vector connects two configurations that are "close" in the space of physical solutions--in field-theory parlance they are both "on-shell". We also note that, to compute the second order variation, we cannot rely on the simple relation that exists between the Lagrangian and Eulerian variations (at first order). We need to perform such calculations explicitly. At this point, it is worth pausing to consider what is behind the perturbation scheme we are building. Since we assume the existence of a well defined equilibrium timelike congruence \(x^{a}_{\rm e}\) with four-velocity \(u^{a}_{\rm e}\), we may imagine riding along with the equilibrium fluid element, observing the evolution of the system (towards equilibrium) from this perspective. This means that the x-species four-velocity \(u^{a}_{\rm x}\) can be decomposed (in the usual way) as \[u^{a}_{\rm x}=\gamma_{\rm x}\Big{(}u^{a}_{\rm e}+w^{a}_{\rm x}\Big{)}\,\ \mbox{where}\quad w^{a}_{\rm x}u^{\rm e}_{a}=0\,\quad\gamma_{\rm x}=\Big{(}1-w^{a}_{\rm x}w^{\rm x}_{a}\Big{)}^{-1/2}. \tag{3.50}\] Moreover, since we are working up to first order we have \[\gamma_{\rm x}=1+\frac{1}{2}w^{2}_{\rm x}\approx 1+\mathcal{O}(w^{2}_{\rm x})\quad\Longrightarrow\quad u^{a}_{\rm x}=u^{a}_{\rm e}+w^{a}_{\rm x}. \tag{3.51}\] We note that this linear expansion in the relative velocities, although in a different spirit, has also been discussed in the context of extensions to magneto-hydrodynamics [16, 24, 25]. Also, it is interesting in itself (and necessary for perturbing the full set of fluid equations) to understand the relation between the spatial velocity \(w^{a}_{\rm x}\) as measured by the equilibrium observer and the Lagrangian displacement \(\xi^{a}_{\rm x}\). We consider the displacement to live in the local present of the equilibrium observer, i.e., to be such that \(\xi^{a}_{\rm x}u^{\rm e}_{a}=\zeta^{a}_{\rm x}u^{\rm e}_{a}=0\).\({}^{15}\) This implies that the vectors \(\xi^{a}_{\rm x}\) and \(\zeta^{a}_{\rm x}\) are spacelike non-null vector fields in spacetime. As a result, if we consider the proper time of the perturbed worldline, we have Footnote 15: This is essentially a gauge choice, see [21] for discussion. \[-d\bar{\tau}^{2}=g_{ab}\,dx^{a}_{\rm f}\,dx^{b}_{\rm f}=g_{ab}\,dx^{a}_{\rm e}\,dx^{b}_{\rm e}+g_{ab}\Big{(}dx^{a}_{\rm e}\,\zeta^{b}\Delta\lambda+dx^{b}_{\rm e}\,\zeta^{a}\Delta\lambda\Big{)}=-d\tau^{2}\, \tag{3.52}\] where we used the fact that \[x^{a}_{\rm e}=x^{a}_{\rm e}(\tau)\Longrightarrow dx^{a}_{\rm e}=u^{a}_{\rm e}\,d\tau. 
\tag{3.53}\] As a consequence, the proper times of the perturbed and equilibrium worldlines agree, so we have \[u_{\rm x}^{a}=\frac{dx_{\rm f}^{a}}{d\bar{\tau}}\approx\frac{dx_{\rm e}^{a}}{d\tau}+\frac{d}{d\tau}\xi_{\rm x}^{a}=u_{\rm e}^{a}+\dot{\xi}_{\rm x}^{a}\, \tag{3.54}\] where (again) the dot represents the covariant directional derivative in the direction of the equilibrium four-velocity.\({}^{16}\) We observe that from the construction we have \(w_{\rm x}^{a}=\dot{\xi}_{\rm x}^{a}\), and it is clear that when pushing the expansion to second order their relation will become more involved--both because the difference between the proper times (\(\bar{\tau}\) versus \(\tau\)) appears at second order and because the Taylor expansion gets more complicated. Footnote 16: To be more precise, one should distinguish between \(\frac{d}{d\tau}=u_{\rm e}^{b}\partial_{b}\) and \(\frac{D}{D\tau}=u_{\rm e}^{b}\nabla_{b}\). Since we are introducing a decomposition of a vector as a sum of two, \(\dot{\xi}_{\rm x}^{a}\) must be a vector as well, so that the dot represents a covariant directional derivative. We now aim to understand how to construct the expansion directly in matter space. We start by noting that, since we are considering each displacement \(\xi_{\rm x}^{a}\) to be orthogonal to \(u_{\rm e}^{a}\), there is no loss of information in projecting the Lagrangian displacements onto the equilibrium matter space and dealing with \(\xi_{\rm x}^{A}\). The general picture is thus as follows: in the general non-linear theory each matter space can be considered as an independent but interacting manifold, but this changes when we consider a perturbative expansion. In fact, the fundamental assumption of perturbation theory is that the two configurations (perturbed and unperturbed) are related by some diffeomorphism. This implies that the perturbed and unperturbed matter spaces\({}^{17}\) are diffeomorphic, that is, they are _the same_ abstract manifold. Therefore we can use the same chart \(X^{A}\) on the two manifolds (label the worldlines in the same way) and the difference will be only in that \(X_{\rm x}^{A}(x^{a})\neq X_{\rm e}^{A}(x^{a})\). The difference between the two will be exactly what we found above, namely \(-\xi_{\rm x}^{A}\). We also note that, by our definition of the unperturbed state, all the perturbed matter spaces are diffeomorphic to the same unperturbed one, and thus to each other. Footnote 17: Recall that the matter space is obtained by taking the quotient of the spacetime over the corresponding worldlines, i.e. identifying each worldline as a single point. Given this, we can work out how a general matter space tensor transforms under diffeomorphisms [50]. For instance, if we consider the projected metric \(g_{\rm x}^{AB}\) we have\({}^{18}\) Footnote 18: For the Lie derivative we use the formula with partial derivatives in order to avoid the possible confusion arising from the choice of the connection used on the matter space. \[\delta g_{\rm x}^{AB}=-\mathcal{L}_{-\xi_{\rm x}}g_{\rm x}^{AB}=\mathcal{L}_{\xi_{\rm x}}g_{\rm x}^{AB}=\xi_{\rm x}^{C}\partial_{C}g_{\rm e}^{AB}-g_{\rm e}^{CB}\partial_{C}\xi_{\rm x}^{A}-g_{\rm e}^{AC}\partial_{C}\xi_{\rm x}^{B}\, \tag{3.55}\] where the partial derivatives are taken with respect to the equilibrium matter space coordinates. 
We now observe that, considering \(\xi_{\rm x}^{A}\) as a scalar field in spacetime, we can write \[-g_{\rm e}^{CB}\partial_{C}\xi_{\rm x}^{A}=-g^{ab}\Psi^{C}_{{\rm e}\,a}\Psi^{B}_{{\rm e}\,b}\partial_{C}\xi_{\rm x}^{A}=-\Psi^{B}_{{\rm e}\,b}\nabla^{b}\xi_{\rm x}^{A}. \tag{3.56}\] We also note that, since\({}^{19}\) \(\partial_{C}\Psi^{A}_{e\,a}=\partial_{a}\delta^{A}_{C}=0\), we have Footnote 19: If this is not immediately convincing one can prove it by taking the explicit definition of a derivative on the coordinate functions \(X^{A}(\bar{X})=\delta^{A}_{C}\bar{X}^{C}=\bar{X}^{A}\) and using the linearity of the derivative. \[\partial_{C}g^{AB}_{\rm e}=2\,g^{ab}\bigg{(}\frac{\partial}{\partial X^{C}_{\rm e}}\Psi^{A}_{e\,a}\bigg{)}\Psi^{B}_{e\,b}=0. \tag{3.57}\] As a result, the projected metrics transform as \[\delta g^{AB}_{\rm x}=-\Psi^{B}_{e\,a}\nabla^{a}\xi^{A}_{\rm x}-\Psi^{A}_{e\,a}\nabla^{a}\xi^{B}_{\rm x}. \tag{3.58}\] This also tells us that, building the variation of the metric tensor in this way, we are only comparing the difference in the positions of the particles, keeping the spacetime metric fixed. We can now use the definition in eq. (3.29) to decompose the displacement gradients as \[\nabla_{a}\xi^{A}_{\rm x}=-w^{A}_{\rm x}u^{\rm e}_{a}+D_{a}\xi^{A}_{\rm x}\, \tag{3.59}\] and rewrite \[\delta g^{AB}_{\rm x}=\Psi^{B}_{e\,a}(w^{A}_{\rm x}u^{a}_{\rm e}-D^{a}\xi^{A}_{\rm x})+\Psi^{A}_{e\,a}(w^{B}_{\rm x}u^{a}_{\rm e}-D^{a}\xi^{B}_{\rm x})=-D^{B}\xi^{A}_{\rm x}-D^{A}\xi^{B}_{\rm x}\, \tag{3.60}\] where we introduced the short-hand notation \(D^{A}=\Psi^{A}_{e\,b}\,g^{ab}D_{a}\). It is worth noting that eq. (3.60) is not a strain-rate tensor of the type usually introduced in fluid dynamics, because it involves gradients in the displacements instead of the velocities. The usual strain-rate tensor is in fact\({}^{20}\) Footnote 20: To see this one has to use \(\mathcal{L}_{u_{\rm x}}\Psi^{A}_{\rm x}=0\). \[\dot{g}^{AB}_{\rm x}=-2\,\Psi^{A}_{{\rm x}\,(a}\Psi^{B}_{{\rm x}\,b)}\big{[}-\dot{u}^{a}_{\rm x}u^{b}_{\rm x}+\omega^{ab}_{\rm x}+\sigma^{ab}_{\rm x}+\frac{1}{3}\theta_{\rm x}\perp^{ab}_{\rm x}\big{]}=-2\,\Psi^{A}_{{\rm e}\,(a}\Psi^{B}_{{\rm e}\,b)}\big{(}\sigma^{ab}_{\rm x}+\frac{1}{3}\theta_{\rm x}\perp^{ab}_{\rm e}\big{)}+\mathcal{O}(2)=-2\big{(}\sigma^{AB}_{\rm x}+\frac{1}{3}\theta_{\rm x}g^{AB}_{\rm e}\big{)}\,. \tag{3.61}\] We will comment on the implications of this difference later. Even if it is not entirely obvious what kind of object the mixed projected metric \(g^{AB}_{\rm xy}\) is in the general non-linear case, in the context of a perturbative expansion there is no real difference between the various matter spaces (they are all diffeomorphic to the equilibrium one). This means that we can use the same fundamental formula also for \(g_{\rm xy}\) to get \[\delta g^{AB}_{\rm xy}=g^{AB}_{\rm xy}-g^{AB}_{\rm e}=g^{ab}\Big{(}\delta\Psi^{A}_{{\rm x}\,a}\Psi^{B}_{e\,b}+\delta\Psi^{B}_{{\rm y}\,b}\Psi^{A}_{e\,a}\Big{)}=-\Psi^{B}_{e\,a}\nabla^{a}\xi^{A}_{\rm x}-\Psi^{A}_{e\,a}\nabla^{a}\xi^{B}_{\rm y}. \tag{3.62}\] It is interesting to note that since \(\delta g^{ab}=0\) we have \[[\delta,\nabla_{a}]=[\delta,\partial_{a}]=0. \tag{3.63}\] That is, the variation commutes with both partial and covariant derivatives. This will become relevant when we need to work out the variation of the resistive terms that stem from the dependence of the \(\mathcal{N}_{\mathrm{x}}\) on \(g^{AB}_{\mathrm{xy}}\) and \(g^{AB}_{\mathrm{y}}\). 
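Though eq. (3.60) is not itself a strain rate, the two objects are simply related at first order: taking a time derivative along the equilibrium flow and using \(w^{A}_{\rm x}=\dot{\xi}^{A}_{\rm x}\) gives (schematically, since \(d/d\tau\) and \(D^{A}\) only commute up to higher-order terms) \[\frac{d}{d\tau}\,\delta g^{AB}_{\rm x}\simeq-D^{B}w^{A}_{\rm x}-D^{A}w^{B}_{\rm x}\,\] so the variation of the projected metric may be viewed as the time-integrated strain. This observation is what ultimately motivates the form of the kernel chosen in eq. (3.139) below, where a \(\tau\)-derivative converts displacement gradients into velocity gradients. 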
As there have been a number of recent efforts aimed at building first-order dissipative hydrodynamic models starting from a field-theory perspective (cf. section 2.4), it makes sense to point out the differences between the present expansion and the field-theory-based ones. In that context, the models are said to be of first order if the constitutive equations involve all permissible terms with just one derivative--the idea being that, when the system is close to equilibrium, one can expect the gradients in temperature, chemical potential and so on to be small, so that terms with two or more derivatives are dominated by the first-order ones. In contrast, we here assume the variables that define the physical state of the system to take values close to the equilibrium ones, and by "first order" we mean that the deviations are expanded up to \(\mathcal{O}(\xi_{\mathrm{x}})\). It is therefore clear that the present approach differs from the field-theory-based (gradient) expansions. The ultimate reason is that the action-based model provides the exact equations, which we then approximate, while in the field-theory approach one is trying to build the full equations as successive expansions. ### 3.4 Energy density is stationary at equilibrium As discussed in section 2.1.2, in order to describe out-of-equilibrium systems within the Extended Irreversible Thermodynamics (EIT) paradigm, one postulates the existence of a generalized entropy--a function of a larger set of degrees of freedom than the corresponding equilibrium ones--which is maximized at equilibrium. The starting point for the formalism used here is instead a generalized energy, where the only additional degrees of freedom are the fluxes. The action-based model provides the total stress-energy-momentum tensor \(T_{ab}\) of the system, so that we can easily extract the total energy density \(\varepsilon\) for some observer having four-velocity \(u^{a}\) via the projection \(\varepsilon=u^{a}u^{b}T_{ab}\). We will now show that requiring the local energy density to be at a minimum at equilibrium means the viscous stress tensors have to be zero. When specific modeling is carried out, such as a numerical evolution, we would need to provide an equation of state and specify values for the microphysical input parameters. From the phenomenological point of view, this corresponds to assuming the existence of a function--in our case, the energy density--defined on some "thermodynamical manifold" whose coordinates are the relevant degrees of freedom. Practically speaking, the formalism developed here suggests we may identify the thermodynamical manifold with the matter space used in the variational model. As the general discussion gets quite complex, we focus on the specific example of a two-component system, with the components representing matter and entropy (see [142, 22]). Let us first consider the non-dissipative limit. The thermodynamics of a single fluid is described by some equilibrium energy \(\varepsilon_{\rm e}(n,s)\) such that \[d\varepsilon_{\rm e}=Tds+\mu dn=\sum_{{\rm x}={\rm n},{\rm s}}\mu^{\rm x}dn_{\rm x}. \tag{3.64}\] On the other hand, the conservative variational model is built using a master function \(\Lambda(n_{\rm n}^{2},n_{\rm s}^{2},n_{\rm ns}^{2})\). Because of our assumption that all species are comoving in equilibrium, there is no heat flux relative to the matter and therefore \(n_{\rm ns}^{2}=-g_{ab}n_{\rm n}^{a}n_{\rm s}^{b}=+n_{\rm n}n_{\rm s}\), and the master function only depends on two variables, \(\Lambda_{\rm e}=\Lambda_{\rm e}(n_{\rm n},n_{\rm s})\). 
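A small dictionary is worth recording here: eq. (3.64) identifies the variational chemical potentials of the two species with the usual thermodynamic variables, \[\mu^{\rm n}=\frac{\partial\varepsilon_{\rm e}}{\partial n}=\mu\,\qquad\mu^{\rm s}=\frac{\partial\varepsilon_{\rm e}}{\partial s}=T\,\] that is, the chemical potential conjugate to the entropy density plays the role of the temperature. This identification is used repeatedly below (for instance in eq. (3.110), and in section 3.6, where \(\mathcal{M}_{\rm s}=\bar{T}\)). 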
It is indeed easy to see that the equilibrium energy density, as measured by the equilibrium observer, is \[\varepsilon_{\rm e}=T_{ab}^{\rm e}\,u_{\rm e}^{a}u_{\rm e}^{b}=\left[\Psi_{\rm e}g_{ab}+(\Psi_{\rm e}-\Lambda_{\rm e})u_{a}^{\rm e}u_{b}^{\rm e}\right]u_{\rm e}^{a}u_{\rm e}^{b}=-\Lambda_{\rm e}. \tag{3.65}\] Since we have already identified the matter space normalizations of the three-forms with the rest frame densities, \({\cal N}_{\rm x}=n_{\rm x}\), we can think of the thermodynamic energy as a function defined on the matter space, and write \[\varepsilon_{\rm e}=\varepsilon_{\rm e}({\cal N}_{\rm n},{\cal N}_{\rm s})=-\Lambda_{\rm e}({\cal N}_{\rm n},{\cal N}_{\rm s}). \tag{3.66}\] The equilibrium case suggests that we could try to extend this identification to the non-equilibrium setting, and "build" the thermodynamics on the matter space. This raises the (difficult) question of what the global matter space is in the full non-linear case. We will not address that issue here. Instead, we focus on the near-equilibrium case, where we only have to deal with the equilibrium matter space. Because of the way we have built the expansion, it is natural to project tensor quantities--fluxes, the stress-energy-momentum tensor, etcetera--into the frame of the equilibrium observer, as defined by the equilibrium worldline congruence \(u_{\rm e}^{a}\). Quantities measured in this frame will be indicated by a "hat" in the following. Objects without a hat are measured in the fluid rest frames, which are defined by the \(u_{\rm x}^{a}\). The equilibrium value of a quantity in the equilibrium frame will be indicated with a "bar". For instance, the particle density measured in the equilibrium frame is \(\hat{n}_{\rm x}=-u_{a}^{\rm e}n_{\rm x}^{a}\); in the x-fluid rest frame it is \(n_{\rm x}=-u_{a}^{\rm x}n_{\rm x}^{a}\); and the equilibrium value in the equilibrium frame is \(\bar{n}_{\rm x}=\hat{n}_{\rm x}\big{|}_{\rm e}\). The "out-of-equilibrium" energy density \(\hat{\varepsilon}_{\rm o.e.}\) of the system as determined in the equilibrium rest frame is given by \[\hat{\varepsilon}_{\rm o.e.}=\left(T_{\rm n.d.}^{ab}+\sum_{\rm x}D_{\rm x}^{ab}\right)u_{a}^{\rm e}u_{b}^{\rm e}=\hat{\varepsilon}_{\rm o.e.}^{\rm n.d.}+D^{ab}u_{a}^{\rm e}u_{b}^{\rm e}\, \tag{3.67}\] where we have separated the contribution of the viscous stress tensor \(D_{ab}\) from those having the "non-dissipative" form \[T^{ab}_{\text{n.d.}}=\Big{(}\Lambda-\sum_{\text{x}}n^{c}_{\text{x}}\mu^{\rm x}_{c}\Big{)}g^{ab}+\sum_{\text{x}}n^{a}_{\text{x}}\mu^{b}_{\text{x}}=\Psi\,g^{ab}+\sum_{\text{x}}n^{a}_{\text{x}}\mu^{b}_{\text{x}}. \tag{3.68}\] The expression for \(\hat{\varepsilon}_{\text{o.e.}}\) can be made more explicit by means of eq. (3.38), which leads to \(\Psi=\Lambda+\sum_{\text{x}}n_{\text{x}}\mu_{\text{x}}\) and \[\hat{\varepsilon}^{\text{n.d.}}_{\text{o.e.}}=u^{\text{e}}_{a}u^{\text{e}}_{b}T^{ab}_{\text{n.d.}}=-\Lambda-\sum_{{\rm x}={\rm n},{\rm s}}\big{(}n_{\text{x}}\mu_{\text{x}}-\hat{n}_{\text{x}}\hat{\mu}_{\text{x}}\big{)}. \tag{3.69}\] Because the flux is a vector, the two densities \(\hat{n}_{\text{x}}\) and \(n_{\text{x}}\) are easily shown to be related by \[\hat{n}_{\text{x}}=-n^{a}_{\text{x}}u^{\rm e}_{a}=-n_{\text{x}}u^{a}_{\text{x}}u^{\rm e}_{a}=(1-w^{a}_{\text{x}}w^{\text{x}}_{a})^{-1/2}n_{\text{x}}=\Big{(}1+\frac{1}{2}w^{2}_{\text{x}}\Big{)}n_{\text{x}}+\mathcal{O}(w^{3}_{\text{x}}). 
\tag{3.70}\] Meanwhile, the corresponding momentum relation is a bit more involved because of entrainment: \[\mu_{\text{x}}=-\mu^{\rm x}_{b}u^{b}_{\text{x}}=-\gamma_{\text{x}}(u^{b}_{\rm e}+w^{b}_{\text{x}})\big{(}\mathcal{B}_{\text{x}}n_{\text{x}}u^{\text{x}}_{b}+\sum_{\text{y}\neq\text{x}}\mathcal{A}_{\text{xy}}n_{\text{y}}u^{\text{y}}_{b}\big{)}=\gamma_{\text{x}}\Big{(}\hat{\mu}_{\text{x}}-\mathcal{B}_{\text{x}}n_{\text{x}}\gamma_{\text{x}}w^{2}_{\text{x}}-\sum_{\text{y}\neq\text{x}}\mathcal{A}_{\text{xy}}n_{\text{y}}\gamma_{\text{y}}w^{a}_{\text{x}}w^{\text{y}}_{a}\Big{)}. \tag{3.71}\] We can rearrange this as \[\hat{\mu}_{\text{x}}=\mu_{\text{x}}-\frac{1}{2}\bar{\mu}_{\text{x}}w^{2}_{\text{x}}+\bar{\mathcal{B}}_{\text{x}}\bar{n}_{\text{x}}w^{2}_{\text{x}}+\sum_{\text{y}\neq\text{x}}\bar{\mathcal{A}}_{\text{xy}}\bar{n}_{\text{y}}w^{a}_{\text{x}}w^{\text{y}}_{a}\, \tag{3.72}\] and, wrapping up, we get \[\hat{\varepsilon}^{\rm n.d.}_{\text{o.e.}}=-\Lambda+\bar{\mathcal{B}}_{\text{n}}\bar{n}^{2}_{\text{n}}w^{2}_{\text{n}}+\bar{\mathcal{B}}_{\text{s}}\bar{n}^{2}_{\text{s}}w^{2}_{\text{s}}+2\bar{\mathcal{A}}_{\text{ns}}\bar{n}_{\text{s}}\bar{n}_{\text{n}}w^{a}_{\text{n}}w^{\text{s}}_{a}=-\Lambda+\bar{\mu}_{\text{n}}\bar{n}_{\text{n}}w^{2}_{\text{n}}+\bar{\mu}_{\text{s}}\bar{n}_{\text{s}}w^{2}_{\text{s}}-\bar{\mathcal{A}}_{\text{ns}}\bar{n}_{\text{n}}\bar{n}_{\text{s}}w^{2}_{\text{ns}}\, \tag{3.73}\] where \[w^{2}_{\text{xy}}=g_{ab}\big{(}w^{a}_{\text{x}}-w^{a}_{\text{y}}\big{)}\left(w^{b}_{\text{x}}-w^{b}_{\text{y}}\right). \tag{3.74}\] It is now clear that, in order to proceed, we need an expansion for the master function \(\Lambda\). Note that the dissipative action model assumes \(\Lambda\) depends on \((X^{A}_{\text{n}},X^{A}_{\text{s}},g^{AB}_{\text{n}},g^{AB}_{\text{s}},g^{AB}_{\text{ns}})\) through the scalar products of the fluxes \(n^{2}_{\text{n}},n^{2}_{\text{s}},n^{2}_{\text{ns}}\). Therefore, we can expand \(\Lambda\) up to second order in the standard way (see [21]). We thus have \[\Lambda=\Lambda_{\rm e}-\frac{1}{2}\sum_{\rm x=n,s}\mathcal{B}_{\rm x}\delta n_{\rm x}^{2}-\mathcal{A}_{\rm ns}\delta n_{\rm ns}^{2}-\frac{1}{4}\sum_{\rm x=n,s}\frac{\partial\mathcal{B}_{\rm x}}{\partial n_{\rm x}^{2}}(\delta n_{\rm x}^{2})^{2}-\frac{1}{2}\frac{\partial\mathcal{A}_{\rm ns}}{\partial n_{\rm ns}^{2}}(\delta n_{\rm ns}^{2})^{2}-\frac{1}{2}\frac{\partial\mathcal{B}_{\rm n}}{\partial n_{\rm s}^{2}}(\delta n_{\rm n}^{2})(\delta n_{\rm s}^{2})-\frac{\partial\mathcal{A}_{\rm ns}}{\partial n_{\rm n}^{2}}(\delta n_{\rm n}^{2})(\delta n_{\rm ns}^{2})-\frac{\partial\mathcal{A}_{\rm ns}}{\partial n_{\rm s}^{2}}(\delta n_{\rm s}^{2})(\delta n_{\rm ns}^{2}). \tag{3.75}\] To make contact with the previous expansion on the matter space we need explicit expressions for \(\delta n_{\rm x}^{2}\) and all other similar terms that appear in this expression. For the four-current we have \[\delta n_{\rm x}^{a}=n_{\rm x}^{a}-\bar{n}_{\rm x}^{a}=(\bar{n}_{\rm x}+\delta n_{\rm x})\Big{[}\big{(}1+\frac{1}{2}w_{\rm x}^{2}\big{)}u^{a}_{\rm e}+w_{\rm x}^{a}\Big{]}-\bar{n}_{\rm x}u^{a}_{\rm e}=\frac{1}{2}\bar{n}_{\rm x}w_{\rm x}^{2}u^{a}_{\rm e}+\bar{n}_{\rm x}w_{\rm x}^{a}+\delta n_{\rm x}u^{a}_{\rm e}+\delta n_{\rm x}w_{\rm x}^{a}\, \tag{3.76}\] and we see that it--quite intuitively--changes both as the density and the four-velocity change. By means of eq. 
(3.76) we get \[\delta n_{\rm x}^{2}=-\big{(}2\bar{n}_{\rm x}^{a}\delta n_{a}^{\rm x}+\delta n_{\rm x}^{a}\delta n_{a}^{\rm x}\big{)}=2\bar{n}_{\rm x}\delta n_{\rm x}+(\delta n_{\rm x})^{2}. \tag{3.77}\] Similarly, we have \[\delta n_{\rm xy}^{2}=-\Big{(}\bar{n}_{\rm x}^{a}\delta n_{a}^{\rm y}+\bar{n}_{\rm y}^{a}\delta n_{a}^{\rm x}+\delta n_{\rm x}^{a}\delta n_{a}^{\rm y}\Big{)}=\bar{n}_{\rm x}\delta n_{\rm y}+\bar{n}_{\rm y}\delta n_{\rm x}+\delta n_{\rm x}\delta n_{\rm y}+\frac{1}{2}\bar{n}_{\rm x}\bar{n}_{\rm y}w_{\rm xy}^{2}. \tag{3.78}\] In order to complete the second order expansion of \(\Lambda\) we also need the products (for every possible combination) of eq. (3.77) and eq. (3.78). These are found to be \[\big{(}\delta n_{\rm x}^{2}\big{)}^{2}=4\bar{n}_{\rm x}^{2}(\delta n_{\rm x})^{2}\, \tag{3.79a}\] \[(\delta n_{\rm xy}^{2})^{2}=\bar{n}_{\rm x}^{2}(\delta n_{\rm y})^{2}+\bar{n}_{\rm y}^{2}(\delta n_{\rm x})^{2}+2\bar{n}_{\rm x}\bar{n}_{\rm y}\delta n_{\rm x}\delta n_{\rm y}\, \tag{3.79b}\] \[(\delta n_{\rm x}^{2})(\delta n_{\rm y}^{2})=4\bar{n}_{\rm x}\bar{n}_{\rm y}\delta n_{\rm x}\delta n_{\rm y}\, \tag{3.79c}\] \[(\delta n_{\rm xy}^{2})(\delta n_{\rm x}^{2})=2\bar{n}_{\rm x}(\delta n_{\rm x})\big{(}\bar{n}_{\rm y}\delta n_{\rm x}+\bar{n}_{\rm x}\delta n_{\rm y}\big{)}. \tag{3.79d}\] Plugging these expressions into eq. (3.75) we find (up to second order) \[\hat{\varepsilon}_{\rm o.e.}^{\rm n.d.}=\varepsilon_{\rm e}(\bar{n}_{\rm n},\bar{n}_{\rm s})+\bar{\mu}_{\rm n}\delta n_{\rm n}+\bar{\mu}_{\rm s}\delta n_{\rm s}+\frac{1}{2}\big{(}\bar{\mathcal{B}}_{\rm n}\bar{c}_{\rm n}^{2}-\bar{\mathcal{A}}_{uu}^{\rm nn}\big{)}(\delta n_{\rm n})^{2}+\frac{1}{2}\big{(}\bar{\mathcal{B}}_{\rm s}\bar{c}_{\rm s}^{2}-\bar{\mathcal{A}}_{uu}^{\rm ss}\big{)}(\delta n_{\rm s})^{2}-\big{(}\bar{\mathcal{X}}_{uu}^{\rm ns}+\bar{\mathcal{A}}_{uu}^{\rm ns}\big{)}(\delta n_{\rm n})(\delta n_{\rm s})+\bar{\mu}_{\rm n}\bar{n}_{\rm n}w_{\rm n}^{2}+\bar{\mu}_{\rm s}\bar{n}_{\rm s}w_{\rm s}^{2}-\frac{1}{2}\bar{\mathcal{A}}_{\rm ns}\bar{n}_{\rm n}\bar{n}_{\rm s}w_{\rm ns}^{2}\, \tag{3.80}\] where we have made use of eq. (3.73) and defined (see Andersson and Comer [21]) \[\bar{c}_{\rm x}^{2}=1+2\frac{\bar{n}_{\rm x}^{2}}{\bar{\cal B}_{\rm x}}\frac{\partial\bar{\cal B}_{\rm x}}{\partial n_{\rm x}^{2}}\, \tag{3.81a}\] \[{\cal A}_{ab}^{\rm xx}=-\Big{(}\bar{n}_{\rm y}^{2}\frac{\partial\bar{\cal A}_{\rm xy}}{\partial n_{\rm xy}^{2}}+4\bar{n}_{\rm x}\bar{n}_{\rm y}\frac{\partial\bar{\cal A}_{\rm xy}}{\partial n_{\rm x}^{2}}\Big{)}u_{a}^{\rm e}u_{b}^{\rm e}\doteq\bar{\cal A}_{uu}^{\rm xx}u_{a}^{\rm e}u_{b}^{\rm e}\, \tag{3.81b}\] \[{\cal A}_{ab}^{\rm ns}=\bar{\cal A}^{\rm ns}\perp_{ab}-\Big{(}\bar{\cal A}_{\rm ns}+2\bar{n}_{\rm n}^{2}\frac{\partial\bar{\cal A}^{\rm ns}}{\partial n_{\rm n}^{2}}+2\bar{n}_{\rm s}^{2}\frac{\partial\bar{\cal A}^{\rm ns}}{\partial n_{\rm s}^{2}}+\bar{n}_{\rm n}\bar{n}_{\rm s}\frac{\partial\bar{\cal A}^{\rm ns}}{\partial n_{\rm ns}^{2}}\Big{)}u_{a}^{\rm e}u_{b}^{\rm e}\doteq\bar{\cal A}^{\rm ns}\perp_{ab}+\bar{\cal A}_{uu}^{\rm ns}u_{a}^{\rm e}u_{b}^{\rm e}\, \tag{3.81c}\] \[\bar{\cal X}_{uu}^{\rm ns}=-2\bar{n}_{\rm n}\bar{n}_{\rm s}\frac{\partial\bar{\cal B}_{\rm n}}{\partial n_{\rm s}^{2}}=-2\bar{n}_{\rm n}\bar{n}_{\rm s}\frac{\partial\bar{\cal B}_{\rm s}}{\partial n_{\rm n}^{2}}. 
\tag{3.81d}\] Noting that the quantity \(\delta n_{\rm x}\) is the variation of the rest frame density, we can relate it to a variation of \({\cal N}_{\rm x}\) and "close the loop". Since the \({\cal N}_{\rm x}\) are functions on matter space of the variables \((X_{\rm n},\ X_{\rm s},\ g_{\rm n}^{AB},\ g_{\rm s}^{AB},\ g_{\rm ns}^{AB})\), the expression for the energy is actually a second order expansion in terms of those variables. We note also that, because of the "two-layer structure", the \(\delta n_{\rm x}\) above contain second-order terms. A priori, the expression in eq. (3.80) does not provide the total out-of-equilibrium energy, because we also need to account for the dissipative terms. However, we will now show that these actually do not contribute. To do this, we assume an expansion for all the viscous stress tensors of the form \[S_{AB}=S_{AB}^{\rm e}+S_{AB}^{1}+S_{AB}^{2}+{\cal O}(\xi^{3})\, \tag{3.82}\] without providing (for now) the explicit expressions. Recalling \(\Psi^{A}_{{\rm e}\,a}u_{\rm e}^{a}=0\), we can write \[S_{ab}u_{\rm e}^{a}u_{\rm e}^{b}=S_{AB}(X_{\rm e}^{A}+\delta X^{A})_{,a}(X_{\rm e}^{B}+\delta X^{B})_{,b}u_{\rm e}^{a}u_{\rm e}^{b}=S_{AB}^{\rm e}\,\delta X_{\,a}^{A}\,\delta X_{\,b}^{B}u_{\rm e}^{a}u_{\rm e}^{b}\, \tag{3.83}\] where the expansion is up to second order. It is clear that this argument is valid for each viscous stress tensor, and for \(D_{ab}u_{\rm e}^{a}u_{\rm e}^{b}\) as well, so that the dissipative contributions to the off-equilibrium energy are, at least, of second order. Assuming that the energy is stationary, that is \[\hat{\varepsilon}_{\rm o.e.}^{\rm n.d.}-\varepsilon_{\rm e}(\bar{n}_{\rm n},\bar{n}_{\rm s})={\cal O}(\xi^{2})\, \tag{3.84}\] we then have \[\bar{\mu}_{\rm n}\delta n_{\rm n}+\bar{\mu}_{\rm s}\delta n_{\rm s}={\cal O}(\xi^{2})\, \tag{3.85}\] which has a clear thermodynamical interpretation and is consistent with the EIT picture since, up to first order, the generalized energy is a function of the \(n_{\rm x}\) only. We want to translate the above result into conditions on the matter space functions \({\cal N}_{\rm x}\). We start by observing that in the conservative case the three-form \(n_{ABC}^{\rm x}\) is a function of the \(X_{\rm x}^{A}\) coordinates only. Therefore \(\tilde{\cal N}_{\rm x}\) is just a function of \(X_{\rm x}^{A}\) while, because \({\cal N}_{\rm x}=\tilde{\cal N}_{\rm x}\sqrt{g^{\rm x}}\), the latter depends also on the projected metric: \[\frac{\partial\mathcal{N}_{\rm x}}{\partial g^{AB}_{\rm x}}=\frac{1}{2}\sqrt{g^{\rm x}}\,\tilde{\mathcal{N}}_{\rm x}\,g^{\rm x}_{AB}=\frac{1}{2}\mathcal{N}_{\rm x}g^{\rm x}_{AB}. \tag{3.86}\] When considering the expansion of \(n_{\rm x}\) (and hence \(\mathcal{N}_{\rm x}\)) we assume that we can write \[\mathcal{N}_{\rm x}=\mathcal{N}_{\rm x}^{\rm e}+\mathcal{N}_{\rm x}^{\rm d}\, \tag{3.87}\] where \(\mathcal{N}_{\rm x}^{\rm e}\) is the same as in the non-dissipative limit, while the dissipative contribution \(\mathcal{N}_{\rm x}^{\rm d}\) is a function also of the additional variables that encode the dissipation. Given the separation of \(\mathcal{N}_{\rm x}\) into two pieces, it is natural to assume that \(\mathcal{N}_{\rm x}^{\rm d}\), but not its derivatives, vanishes at equilibrium. 
Since the equilibrium evolves in a conservative fashion, we can write \[\delta n_{\rm x}\equiv\mathcal{N}_{\rm x}-\mathcal{N}_{\rm x}^{\rm e}=\mathcal{N}_{\rm x}^{\rm d}=\mathcal{N}_{\rm x}^{\rm d}-\mathcal{N}_{\rm x}^{\rm d}\Big{|}_{\rm e}=\frac{\partial\mathcal{N}_{\rm x}^{\rm d}}{\partial X_{\rm x}^{A}}\delta X_{\rm x}^{A}+\frac{\partial\mathcal{N}_{\rm x}^{\rm d}}{\partial X_{\rm y}^{A}}\delta X_{\rm y}^{A}+\frac{\partial\mathcal{N}_{\rm x}^{\rm d}}{\partial g^{AB}_{\rm x}}\delta g^{AB}_{\rm x}+\frac{\partial\mathcal{N}_{\rm x}^{\rm d}}{\partial g^{AB}_{\rm y}}\delta g^{AB}_{\rm y}+\frac{\partial\mathcal{N}_{\rm x}^{\rm d}}{\partial g^{AB}_{\rm xy}}\delta g^{AB}_{\rm xy}+\mathcal{O}(2)\, \tag{3.88}\] where here, and in similar expansions below, each quantity is to be evaluated at equilibrium. With this assumption it is easy to read off from eq. (3.85) the first order relation \[\mathcal{M}_{\rm n}\delta\mathcal{N}_{\rm n}^{\rm d}+\mathcal{M}_{\rm s}\delta\mathcal{N}_{\rm s}^{\rm d}=0. \tag{3.89}\] This leads to \[\mathcal{M}_{\rm n}\frac{\partial\mathcal{N}_{\rm n}^{\rm d}}{\partial X_{\rm n}^{A}}+\mathcal{M}_{\rm s}\frac{\partial\mathcal{N}_{\rm s}^{\rm d}}{\partial X_{\rm n}^{A}}=0\, \tag{3.90}\] and analogous results for variations with respect to \(X_{\rm s}^{A}\), \(g_{\rm n}^{AB}\), \(g_{\rm s}^{AB}\) and \(g_{\rm ns}^{AB}\) follow immediately. In particular, this shows that the total viscous stress tensor acting on each component, \(D_{ab}^{\rm x}\), vanishes when the energy is stationary. To see this explicitly we note that (see eqs. (3.14) and (3.15)) \[\mathcal{S}_{AB}^{\rm xy,e}\equiv 2\mathcal{M}_{\rm x}\frac{\partial\mathcal{N}_{\rm x}}{\partial g^{AB}_{\rm xy}}=-2\mathcal{M}_{\rm y}\frac{\partial\mathcal{N}_{\rm y}}{\partial g^{AB}_{\rm xy}}=-\mathcal{S}_{BA}^{\rm yx,e}\, \tag{3.91}\] where we made use of the symmetry property of the mixed metric, namely \(g^{AB}_{\rm xy}=g^{BA}_{\rm yx}\). Similarly, \[S_{AB}^{\rm x,e}\equiv 2\mathcal{M}_{\rm x}\Big{(}\frac{\partial\mathcal{N}_{\rm x}}{\partial g^{AB}_{\rm x}}-\frac{1}{2}\mathcal{N}_{\rm x}g^{\rm x}_{AB}\Big{)}=2\mathcal{M}_{\rm x}\Big{[}\frac{\partial\mathcal{N}_{\rm x}^{\rm d}}{\partial g^{AB}_{\rm x}}-\frac{1}{2}\big{(}\mathcal{N}_{\rm x}-\mathcal{N}_{\rm x}^{\rm e}\big{)}g^{\rm x}_{AB}\Big{]}=2\mathcal{M}_{\rm x}\frac{\partial\mathcal{N}_{\rm x}^{\rm d}}{\partial g^{AB}_{\rm x}}=-2\mathcal{M}_{\rm y}\frac{\partial\mathcal{N}_{\rm y}^{\rm d}}{\partial g^{AB}_{\rm x}}=-s^{\rm yx,e}_{AB}. \tag{3.92}\] It is now clear that, by means of eq. (3.91) and eq. (3.92), the x-species viscous stress tensor vanishes: \[D^{\rm x,e}_{AB}=S^{\rm x,e}_{AB}+s^{\rm yx,e}_{AB}+\frac{1}{2}(\mathcal{S}^{\rm xy,e}_{AB}+\mathcal{S}^{\rm yx,e}_{BA})=0. \tag{3.93}\] We have considered the fully general case, with all the additional dependences in \(\mathcal{N}_{\rm x}\) and all the viscous tensors \(S^{\rm x}_{ab}\), \(\mathcal{S}^{\rm xy}_{ab}\) and \(s^{\rm xy}_{ab}\). The same result--that each \(D^{\rm x,e}_{ab}\) vanishes--holds even in a less rich situation where the model is built from fewer viscous tensors. In that case we have to go back to eq. (3.89) and modify the argument accordingly. It is important to stress that we have shown that the full stress-energy-momentum tensor at equilibrium is made out of just the non-dissipative part, and that the dissipative parts of the total stress-energy-momentum tensor do not contribute to the total energy density at second order. 
However, we note that the energy minimum conditions in eq. (3.89) do not set the purely resistive terms (eq. (3.13)) to zero. In fact, they only lead to \[\mathcal{M}_{\rm n}\frac{\partial\mathcal{N}_{\rm n}^{\rm d}}{\partial X_{\rm n}^{A}}=-\mathrm{R}^{\rm sn,e}_{A}\, \tag{3.94a}\] \[\mathcal{M}_{\rm s}\frac{\partial\mathcal{N}_{\rm s}^{\rm d}}{\partial X_{\rm s}^{A}}=-\mathrm{R}^{\rm ns,e}_{A}. \tag{3.94b}\] The reason for this is pretty clear, as these terms do not enter the expression for the energy density. We nonetheless might want to consider the case where the equilibrium equations are exactly the conservative ones. The motivation for this can be found in the derivation of the purely resistive terms: if the different species are comoving at the action level, there is no distinction between the different \(X_{\rm x}^{A}\) and no resistive term of this form would appear. We can enforce consistency with this observation in two ways: either we assume that we use the complete dependence on \(X_{\rm x}^{A}\) in the conservative part, in which case \[\frac{\partial\mathcal{N}_{\rm x}^{\rm d}}{\partial X_{\rm x}^{A}}\Big{|}_{\rm e}=0\Longrightarrow\mathrm{R}^{\rm xy,e}_{A}=0\, \tag{3.95}\] or we just set the terms \(\mathrm{R}^{\rm x}_{a}\) to zero, so that \[\mathcal{M}_{\rm n}\frac{\partial\mathcal{N}_{\rm n}^{\rm d}}{\partial X_{\rm n}^{A}}\Big{|}_{\rm e}=\mathcal{M}_{\rm s}\frac{\partial\mathcal{N}_{\rm s}^{\rm d}}{\partial X_{\rm s}^{A}}\Big{|}_{\rm e}. \tag{3.96}\] The latter, less restrictive assumption reminds us of the dynamical nature of chemical equilibrium in nature: reactions happen also at equilibrium, although they do so in such a way that there is no net particle production. Finally, it is quite easy to see that if we choose a different observer, such as the ones associated with the Eckart or Landau frames, the differences in the energy density will be of second order. Crucially, the equilibrium conditions in eq. (3.89) do not depend on the choice of frame. ### 3.5 The last piece of the puzzle In order to work out the perturbative expressions we need to expand the various dissipative terms. It should now be clear that for the viscous stress tensors we can write\({}^{21}\) Footnote 21: All the derivatives are intended to be evaluated at equilibrium. 
\[\delta s^{\rm xy}_{AB}=2\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{AB}_{\rm y}}\delta\mathcal{M}_{\rm x}+2\mathcal{M}_{\rm x}\delta\left(\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{AB}_{\rm y}}\right)\,, \tag{3.97a}\] \[\delta\mathcal{S}^{\rm xy}_{AB}=2\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{AB}_{\rm xy}}\delta\mathcal{M}_{\rm x}+2\mathcal{M}_{\rm x}\delta\left(\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{AB}_{\rm xy}}\right)\,, \tag{3.97b}\] \[\delta S^{\rm x}_{AB}=2\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{AB}_{\rm x}}\delta\mathcal{M}_{\rm x}+2\mathcal{M}_{\rm x}\delta\left(\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{AB}_{\rm x}}\right)-\mathcal{M}_{\rm x}(\delta\mathcal{N}^{\rm d}_{\rm x})g^{\rm e}_{AB}\,, \tag{3.97c}\] where we recall that \[s^{\rm xy}_{AB}=2\mathcal{M}_{\rm x}\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{AB}_{\rm y}}\,, \tag{3.98a}\] \[\mathcal{S}^{\rm xy}_{AB}=2\mathcal{M}_{\rm x}\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{AB}_{\rm xy}}\,, \tag{3.98b}\] \[S^{\rm x}_{AB}=2\left(\mathcal{M}_{\rm x}\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{AB}_{\rm x}}-\frac{1}{2}\mathcal{M}_{\rm x}\mathcal{N}^{\rm d}_{\rm x}\,g^{\rm x}_{AB}\right)\,. \tag{3.98c}\] Similarly, for the "purely resistive" terms we have \[\delta\mathcal{R}^{\rm xy}_{A}=\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial X^{A}_{\rm y}}\delta\mathcal{M}_{\rm x}+\mathcal{M}_{\rm x}\delta\left(\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial X^{A}_{\rm y}}\right)\,. \tag{3.99}\] Since \(\mathcal{N}^{\rm d}_{\rm x}\) is a function of \((X_{\rm x},\ X_{\rm y},\ g^{AB}_{\rm x},\ g^{AB}_{\rm y},\ g^{AB}_{\rm xy})\), its derivatives are as well, so that we have \[\delta\left(\frac{\partial\mathcal{N}^{\rm d}_{\rm x}}{\partial X^{A}_{\rm y}}\right)=\frac{\partial^{2}\mathcal{N}^{\rm d}_{\rm x}}{\partial X^{B}_{\rm x}\partial X^{A}_{\rm y}}\delta X^{B}_{\rm x}+\frac{\partial^{2}\mathcal{N}^{\rm d}_{\rm x}}{\partial X^{B}_{\rm y}\partial X^{A}_{\rm y}}\delta X^{B}_{\rm y}+\frac{\partial^{2}\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{BC}_{\rm x}\partial X^{A}_{\rm y}}\delta g^{BC}_{\rm x}+\frac{\partial^{2}\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{BC}_{\rm y}\partial X^{A}_{\rm y}}\delta g^{BC}_{\rm y}+\frac{\partial^{2}\mathcal{N}^{\rm d}_{\rm x}}{\partial g^{BC}_{\rm xy}\partial X^{A}_{\rm y}}\delta g^{BC}_{\rm xy}\,. \tag{3.100}\] Similar results hold for the other variations that were not explicitly written in eq. (3.97) and eq. (3.99). Concerning the purely resistive term, we note that \(u^{a}_{\rm x}R^{\rm yx}_{a}=0\) by construction. Because we are doing an expansion with undetermined coefficients, we need to impose this by hand at every order; specifically, at the linear level. This then leads to \[\delta\Big{(}u^{a}_{\rm x}R^{\rm yx}_{a}\Big{)}=R^{\rm yx,\,e}_{A}\Big{(}w^{A}_{\rm x}-\dot{\xi}^{A}_{\rm x}\Big{)}=0\, \tag{3.101}\] so that not only do we have \(w^{a}_{\rm x}=\dot{\xi}^{a}_{\rm x}\) but also \(w^{A}_{\rm x}=\dot{\xi}^{A}_{\rm x}\). This then means that we must have\({}^{22}\) \(u^{a}_{\rm e}\xi^{b}_{\rm x}X^{D}_{{\rm e};ba}=0\), which in turn implies that the orthogonality conditions for the viscous stress tensors Footnote 22: Here the semicolon is, as usual, a short-hand notation for the covariant derivative, \(A_{;a}=\nabla_{a}A\). 
\[S^{\rm x}_{ab}u^{a}_{\rm x}=\mathcal{S}^{\rm xy}_{ab}u^{a}_{\rm x}=s^{\rm xy}_{ab}u^{a}_{\rm y}=0\, \tag{3.102}\] are automatically satisfied at linear order. From eq. (3.97) we can find the expansion for the spacetime viscous tensors through \[\delta S^{\rm x}_{ab}=\delta S^{\rm x}_{DE}\Psi^{D}_{{\rm e}\,a}\Psi^{E}_{{\rm e}\,b}-S^{\rm x}_{DE}\Big{(}\xi^{D}_{{\rm x},a}\Psi^{E}_{{\rm e}\,b}+\Psi^{D}_{{\rm e}\,a}\xi^{E}_{{\rm x},b}\Big{)}\, \tag{3.103a}\] \[\delta s^{\rm xy}_{ab}=\delta s^{\rm xy}_{DE}\Psi^{D}_{{\rm e}\,a}\Psi^{E}_{{\rm e}\,b}-s^{\rm xy}_{DE}\Big{(}\xi^{D}_{{\rm y},a}\Psi^{E}_{{\rm e}\,b}+\Psi^{D}_{{\rm e}\,a}\xi^{E}_{{\rm y},b}\Big{)}\, \tag{3.103b}\] \[\delta\mathcal{S}^{\rm xy}_{ab}=\delta\mathcal{S}^{\rm xy}_{DE}\Psi^{D}_{{\rm e}\,a}\Psi^{E}_{{\rm e}\,b}-\mathcal{S}^{\rm xy}_{DE}\Big{(}\xi^{D}_{{\rm x},a}\Psi^{E}_{{\rm e}\,b}+\Psi^{D}_{{\rm e}\,a}\xi^{E}_{{\rm y},b}\Big{)}\, \tag{3.103c}\] while for the resistive terms associated with \(s^{\rm xy}_{ab}\) and \(\mathcal{S}^{\rm xy}_{ab}\) we have \[\delta r^{\rm xy}_{a}=\frac{1}{2}\delta s^{\rm xy}_{DE}\nabla_{a}g^{DE}_{\rm e}-\frac{1}{2}s^{\rm xy}_{DE}\partial_{a}\big{[}g^{bc}\big{(}\xi^{D}_{{\rm y},b}\Psi^{E}_{{\rm e}\,c}+\Psi^{D}_{{\rm e}\,b}\xi^{E}_{{\rm y},c}\big{)}\big{]}\, \tag{3.104a}\] \[\delta\mathcal{R}^{\rm xy}_{a}=\frac{1}{4}\delta\mathcal{S}^{\rm xy}_{DE}\nabla_{a}g^{DE}_{\rm e}-\frac{1}{2}\mathcal{S}^{\rm xy}_{DE}g^{bc}\Big{(}\xi^{D}_{{\rm x},b}\nabla_{a}\Psi^{E}_{{\rm e}\,c}+\Psi^{D}_{{\rm e}\,b}\nabla_{a}\xi^{E}_{{\rm y},c}\Big{)}\, \tag{3.104b}\] where we made use of the fact that \([\delta,\nabla_{a}]=0\) because \(\delta g^{ab}=0\) (see the discussion at the end of section 3.3). Having "understood" how we may perturb the terms \(R^{\rm x}_{a}\) and \(D^{\rm x}_{ab}\), let us focus on the remaining pieces of the equations of motion. A quick look back at eq. (2.70) reveals that the only terms we still have to discuss are \(\delta\Gamma_{\rm x}\) and \(\delta\mu^{\rm x}_{a}\). For the particle creation rate we have (see eq. (3.76)) \[\delta\Gamma_{\rm x}=\nabla_{a}\delta n^{a}_{\rm x}=\delta\dot{n}_{\rm x}+\nabla_{a}(\bar{n}_{\rm x}w^{a}_{\rm x})\, \tag{3.105}\] while for the x-species momentum we get \[\delta\mu^{\rm x}_{a}=\delta(\mathcal{B}_{\rm x}n_{\rm x})u^{\rm e}_{a}+\bar{\mathcal{B}}_{\rm x}\bar{n}_{\rm x}w^{\rm x}_{a}+\sum_{\rm y\neq x}\Big{[}\delta(\mathcal{A}_{\rm xy}n_{\rm y})u^{\rm e}_{a}+\bar{\mathcal{A}}_{\rm xy}\bar{n}_{\rm y}w^{\rm y}_{a}\Big{]}. 
\tag{3.106}\] Using the fact that we identified \(\mathcal{M}_{\rm x}\) with \(\mu_{\rm x}\), we have \[\delta\mathcal{M}_{\rm x}=\delta\big{(}-\mu^{\rm x}_{a}u^{a}_{\rm x}\big{)}=-\big{(}\bar{\mu}^{\rm x}_{a}w^{a}_{\rm x}+\delta\mu^{\rm x}_{a}u^{a}_{\rm e}\big{)}=\delta\Big{(}\mathcal{B}_{\rm x}n_{\rm x}+\sum_{\rm y\neq x}\mathcal{A}_{\rm xy}n_{\rm y}\Big{)}\, \tag{3.107}\] and since \(\mathcal{B}_{\rm x}\) and \(\mathcal{A}_{\rm xy}\) are ultimately functions of \(n_{\rm x}^{2}\), \(n_{\rm y}^{2}\) and \(n_{\rm xy}^{2}\), we may use \[\delta\mathcal{B}_{\rm x}=\left(2n_{\rm x}\frac{\partial\mathcal{B}_{\rm x}}{\partial n_{\rm x}^{2}}+n_{\rm y}\frac{\partial\mathcal{B}_{\rm x}}{\partial n_{\rm xy}^{2}}\right)\delta n_{\rm x}+\left(2n_{\rm y}\frac{\partial\mathcal{B}_{\rm x}}{\partial n_{\rm y}^{2}}+n_{\rm x}\frac{\partial\mathcal{B}_{\rm x}}{\partial n_{\rm xy}^{2}}\right)\delta n_{\rm y}\, \tag{3.108a}\] \[\delta\mathcal{A}_{\rm xy}=\left(2n_{\rm x}\frac{\partial\mathcal{A}_{\rm xy}}{\partial n_{\rm x}^{2}}+n_{\rm y}\frac{\partial\mathcal{A}_{\rm xy}}{\partial n_{\rm xy}^{2}}\right)\delta n_{\rm x}+\left(2n_{\rm y}\frac{\partial\mathcal{A}_{\rm xy}}{\partial n_{\rm y}^{2}}+n_{\rm x}\frac{\partial\mathcal{A}_{\rm xy}}{\partial n_{\rm xy}^{2}}\right)\delta n_{\rm y}\, \tag{3.108b}\] in eq. (3.107). This way, making use of the definitions in eq. (3.81), we arrive at \[\delta\mathcal{M}_{\rm x}=\big{(}\bar{\mathcal{B}}_{\rm x}\bar{c}_{\rm x}^{2}-\bar{\mathcal{A}}_{uu}^{\rm xx}\big{)}\delta n_{\rm x}-\big{(}\bar{\mathcal{X}}_{uu}^{\rm ns}+\bar{\mathcal{A}}_{uu}^{\rm ns}\big{)}\delta n_{\rm y}\, \tag{3.109}\] and we see that the parameters that enter the dissipative fluid equations are the entrainment coefficients (and their first derivatives; that is, second order derivatives of \(\Lambda(n_{\rm x}^{2},n_{\rm xy}^{2})\)) and the (up to second order) derivatives of the function \(\mathcal{N}_{\rm x}(X_{\rm x},\,X_{\rm y},\,g_{\rm x}^{AB},\,g_{\rm y}^{AB},\,g_{\rm xy}^{AB})\). Having outlined the perturbative framework, it is natural to ask how many dissipative channels the (general) model contains. Or, to be more specific, how many "dissipation coefficients" would have to be determined from microphysics? According to the expansion scheme we have developed so far, the perturbative expressions for the dissipative terms will ultimately involve all second and first order derivatives of the \(\mathcal{N}_{\rm x}^{\rm d}\) when considered as functions of \(X_{\rm x},\,X_{\rm y},\,g_{\rm x}^{AB},\,g_{\rm y}^{AB},\,g_{\rm xy}^{AB}\). Also, to make use of the model we need to specify the entrainment coefficients and their derivatives in the combinations from eq. (3.81). This means that the most general model one can think of contains a large number of coefficients. However, these should, in general, be known once a specific model is chosen; that is, once the explicit functional forms of \(\Lambda\) and the \(\mathcal{N}_{\rm x}^{\rm d}\) have been provided. For example, if nuclear physics calculations are used to determine these explicit forms, they must be done in such a way that the constraints which arise from requiring a meaningful equilibrium configuration are taken into account, and they must ensure that the second law of thermodynamics is obeyed. 
If Onsager-type reasoning is invoked to ensure that \(\Gamma_{\rm s}\) is positive (up to second order), then explicit use of \[T\Gamma_{\rm s}=-D_{ba}^{\rm s}\nabla^{b}u_{\rm s}^{a}-u_{\rm s}^{a}R_{a}^{\rm s}\, \tag{3.110}\] where \(T=-u_{\rm s}^{a}\mu_{a}^{\rm s}\) is the temperature, would have to be made. ### 3.6 Model comparison As an intuitive application of the formalism we have developed, it is useful to make contact with existing models for general relativistic dissipative fluids; in particular, the classic work of Landau-Lifschitz and Eckart and the second-order Müller-Israel-Stewart model. Specifically, we want to understand how standard quantities (like the shear and bulk viscosities) enter the present formalism. Therefore, we need to see if the dissipative terms of the existing models can be matched with terms in the action-based description. This procedure is fairly straightforward. The action-based model provides the total fluid stress-energy-momentum tensor, so we only have to decompose it in the usual way (cf. eq. (2.15)): \[T^{ab}=(\bar{p}+\chi)\perp^{ab}+\varepsilon u^{a}u^{b}+2q^{(a}u^{b)}+\chi^{ab}. \tag{3.111}\] In this expression, the fluxes are defined with respect to some observer with four-velocity \(u^{a}\). In order to be consistent with the perturbative expansion outlined above, we take this observer to be associated with the thermodynamical equilibrium, i.e. \(u^{a}=u^{a}_{\rm e}\). Finally, we have split the isotropic pressure into an equilibrium contribution (denoted \(\bar{p}\), as discussed above) and a non-equilibrium one. #### Equating the flux currents Let us first consider the heat. We can read off the heat flux from the total stress-energy-momentum tensor as \[q^{a}=-\varepsilon u^{a}_{\rm e}-T^{ab}u^{\rm e}_{b}=-\perp^{a}_{b}T^{bc}u^{\rm e}_{c}. \tag{3.112}\] First, let us note that there is no contribution at linear order coming from the dissipative part of the stress-energy-momentum tensor, \(D^{ab}\). In fact, making use of eqs. (3.91), (3.92) and (3.103), it is easy to show that \(D^{ab}u^{\rm e}_{a}=(\delta D^{ab})u^{\rm e}_{a}={\cal O}(2)\). Let us therefore consider the non-dissipative part of \(T^{ab}\). For the generalized pressure we have, to first order, \[\Psi=\Lambda+\sum_{\rm x}n_{\rm x}\mu_{\rm x}=-\bar{\varepsilon}_{\rm e}+\bar{\mu}\bar{n}+\bar{T}\bar{s}+\sum_{{\rm x}={\rm n},{\rm s}}\bar{n}_{\rm x}\delta\mu_{\rm x}=\bar{p}+\sum_{{\rm x}={\rm n},{\rm s}}\bar{n}_{\rm x}\delta\mu_{\rm x}\, \tag{3.113}\] where we have used the minimum energy condition (eq. (3.85)) and the equilibrium Euler relation. Using \[\sum_{\rm x}n^{a}_{\rm x}\hat{\mu}_{\rm x}=\sum_{\rm x}n^{a}_{\rm x}\mu_{\rm x}+{\cal O}(2)\approx\sum_{\rm x}\left[\bar{n}_{\rm x}\bar{\mu}_{\rm x}u^{a}_{\rm e}+\bar{n}_{\rm x}\delta\mu_{\rm x}u^{a}_{\rm e}+\bar{\mu}_{\rm x}\big{(}\delta n_{\rm x}u^{a}_{\rm e}+\bar{n}_{\rm x}w^{a}_{\rm x}\big{)}\right]\,, \tag{3.114}\] we then identify the heat flux as \[q^{a}=\sum_{\rm x}\bar{\mu}_{\rm x}\perp^{a}_{b}\delta n^{b}_{\rm x}=\bar{\mu}\bar{n}\,w^{a}_{\rm n}+\bar{T}\bar{s}\,w^{a}_{\rm s}. \tag{3.115}\] Here, we have repeatedly used the Euler relation and the minimum energy condition, eq. (3.85). We note that this quantity is consistent with the definition used in the classic models, see [21]. Let us now move on to the other fluxes and, as before, first focus on the non-dissipative contribution. 
It is easy to check that \[T^{ab}_{\rm n.d.}=\left(\bar{p}+\sum_{\rm x}\bar{n}_{\rm x}\delta\mu_{\rm x}\right)g^{ab}+(\bar{p}+\bar{\varepsilon}_{\rm e})u^{a}_{\rm e}u^{b}_{\rm e}+\sum_{\rm x}\left[\bar{\mu}_{\rm x}\bar{n}_{\rm x}u^{b}_{\rm e}w^{a}_{\rm x}+\bar{n}_{\rm x}u^{a}_{\rm e}\big{(}\delta\mu_{\rm x}u^{b}_{\rm e}+\bar{\mathcal{B}}_{\rm x}\bar{n}_{\rm x}w^{b}_{\rm x}+\sum_{\rm y\neq x}\bar{\mathcal{A}}_{\rm xy}\bar{n}_{\rm y}w^{b}_{\rm y}\big{)}\right]\,, \tag{3.116}\] so that, using the standard decomposition above, one arrives at \[(\bar{p}+\chi)\perp^{ab}+\chi^{ab}=\perp^{a}_{c}\perp^{b}_{d}T^{cd}=T^{ab}+T^{ad}u^{\rm e}_{d}u^{b}_{\rm e}+T^{cb}u^{\rm e}_{c}u^{a}_{\rm e}+\varepsilon u^{a}_{\rm e}u^{b}_{\rm e}. \tag{3.117}\] If we now use the non-dissipative contribution \(T^{ab}_{\rm n.d.}\) in this equation, we get \[(\bar{p}+\chi)\perp^{ab}+\chi^{ab}=\Big{(}\bar{p}+\sum_{\rm x}\bar{n}_{\rm x}\,\delta\mu_{\rm x}\Big{)}\perp^{ab}=(\bar{p}+\delta\Psi)\perp^{ab}. \tag{3.118}\] That is, there may be a first-order correction to the pressure coming from \(T^{ab}_{\rm n.d.}\). Next, let us consider the contribution due to the dissipative part. From eq. (3.103) we see that \[\perp^{a}_{c}\perp^{b}_{d}D^{cd}=D^{ab}=\delta D^{ab}. \tag{3.119}\] Putting everything together, we have identified \[\hat{\chi}=\delta\Psi+\frac{1}{3}g_{ab}\delta D^{ab}\, \tag{3.120a}\] \[\hat{\chi}^{ab}=\delta D^{\langle ab\rangle}\, \tag{3.120b}\] \[\hat{q}^{a}=\bar{\mu}\bar{n}w^{a}_{\rm n}+\bar{T}\bar{s}w^{a}_{\rm s}\, \tag{3.120c}\] where we reintroduced the "hat" to stress that these fluxes are measured by the equilibrium observer, while the angle brackets mean that we take the trace-free symmetric part of the tensor. #### Example: A viscous single fluid We now consider the specific example of a two-component, single viscous fluid. The two species are matter, with non-equilibrium flux \(n^{a}=nu^{a}_{\rm f}\), and entropy, with non-equilibrium flux \(s^{a}=su^{a}_{\rm f}\). In this simple case, we assume that the non-equilibrium fluxes remain parallel,\({}^{23}\) meaning \(w^{a}_{\rm n}=w^{a}_{\rm s}=w^{a}\) and therefore Footnote 23: We note that a "real" two-fluid model would involve two independent fluid degrees of freedom, \(n^{a}_{\rm n}\) and \(n^{a}_{\rm s}\). By forcing them to move together we are imposing quite strong constraints on the model. Basically, we are assuming that the timescale over which the entropy current relaxes to the particle flow is short enough that it may be neglected. \[n^{a}_{\rm n}=nu^{a}_{\rm f}=n(u^{a}_{\rm e}+w^{a})\, \tag{3.121a}\] \[n^{a}_{\rm s}=su^{a}_{\rm f}=s(u^{a}_{\rm e}+w^{a})\, \tag{3.121b}\] where again \(u^{a}_{\rm e}\) is the equilibrium flow. In this case we do not have resistive terms, because the two fluids are locked together from the beginning. Dissipation enters by assuming both currents depend on the (single) projected metric \[{\cal N}_{\rm n}={\cal N}_{\rm n}(X^{A},\,g^{AB})\, \tag{3.122a}\] \[{\cal N}_{\rm s}={\cal N}_{\rm s}(X^{A},\,g^{AB}). \tag{3.122b}\] In practice, this means that we will have additional terms due to \(S^{\rm s}_{ab}\) and \(S^{\rm n}_{ab}\) in the equations of motion. Also, the creation rate \(\Gamma_{\rm n}\) has to vanish;\({}^{24}\) this implies Footnote 24: The matter particle flux \(n^{a}_{\rm n}\) is conserved as it is identified with the baryon current. 
\[\Gamma_{\rm n}=-\frac{1}{\mu_{\rm n}}S^{\rm n}_{ab}\nabla^{a}u^{b}_{\rm f}=0\Longrightarrow S^{\rm n}_{ab}=0\, \tag{3.123}\] as, by construction, the viscous stress tensor \(S^{\rm n}_{ab}\) must be orthogonal to \(u^{a}_{\rm f}\) (see eq. (3.15) and [20] for further details). As a result, the final form of the non-linear equation of motion is \[2n^{a}_{\rm n}\nabla_{[a}\mu^{\rm n}_{b]}+2n^{a}_{\rm s}\nabla_{[a}\mu^{\rm s}_{b]}+\Gamma_{\rm s}\mu^{\rm s}_{b}=-\nabla^{a}S^{\rm s}_{ab}. \tag{3.124}\] Note that, when we linearize, the term involving \(\Gamma_{\rm s}\) will not appear in the equations, because \(\Gamma_{\rm s}\) has no linear contributions--the entropy is expanded around a maximum, leaving only second-order terms. Our next step is to use the expansion formalism developed in the previous sections to determine the explicit form of the viscous stress tensor \(S^{\rm s}_{ab}\). Let us start by considering the equilibrium (minimum energy) conditions. Clearly, we should have \[S^{\rm s,e}_{AB}=2{\cal M}_{\rm s}\frac{\partial{\cal N}^{\rm d}_{\rm s}}{\partial g^{AB}}=0\Longrightarrow\frac{\partial{\cal N}^{\rm d}_{\rm s}}{\partial g^{AB}}=0. \tag{3.125}\] It also makes sense to assume \(\partial\mathcal{N}_{\rm s}^{\rm d}/\partial X^{A}=0\). To see why, let us forget for the moment that the two species are locked together and consider \[R_{A}^{\rm s}=\mathcal{M}_{\rm n}\frac{\partial\mathcal{N}_{\rm n}^{\rm d}}{\partial X_{\rm s}^{A}}-\mathcal{M}_{\rm s}\frac{\partial\mathcal{N}_{\rm s}^{\rm d}}{\partial X_{\rm n}^{A}}=-\mathcal{M}_{\rm s}\frac{\partial\mathcal{N}_{\rm s}^{\rm d}}{\partial X_{\rm s}^{A}}-\mathcal{M}_{\rm s}\frac{\partial\mathcal{N}_{\rm s}^{\rm d}}{\partial X_{\rm n}^{A}}=-2\mathcal{M}_{\rm s}\frac{\partial\mathcal{N}_{\rm s}^{\rm d}}{\partial X^{A}}=0\, \tag{3.126}\] where we initially distinguished between the two constituents' matter-space coordinates, and used the equilibrium condition. The condition \(\partial\mathcal{N}_{\rm x}^{\rm d}/\partial X^{A}=0\) is thus motivated by the fact that the resistive term vanishes (because the two currents are effectively locked). As a result of these constraints we have \(\delta\mathcal{N}_{\rm x}^{\rm d}=\mathcal{O}(2)\), \(\delta\Psi=\mathcal{O}(2)\), and the viscous stress tensor becomes (see eqs. (3.97) and (3.103)) \[\delta S_{ab}^{\rm s}=\Bigg{[}2\mathcal{M}_{\rm s}\delta\bigg{(}\frac{\partial\mathcal{N}_{\rm s}^{\rm d}}{\partial g^{AB}}\bigg{)}\Bigg{]}\Psi_{{\rm e}\,a}^{A}\Psi_{{\rm e}\,b}^{B}=2\bar{T}\Bigg{[}\frac{\partial^{2}\mathcal{N}_{\rm s}^{\rm d}}{\partial X^{C}\partial g^{AB}}\delta X^{C}+2\frac{\partial^{2}\mathcal{N}_{\rm s}^{\rm d}}{\partial g^{DE}\partial g^{AB}}\delta g^{DE}\Bigg{]}\Psi_{{\rm e}\,a}^{A}\Psi_{{\rm e}\,b}^{B}\,. \tag{3.127}\] Let us now rewrite the terms within the square brackets using \[A_{CAB}=2\frac{\partial^{2}\mathcal{N}_{\rm s}^{\rm d}}{\partial X^{C}\partial g^{AB}}\, \tag{3.128a}\] \[\Sigma_{DEAB}=4\frac{\partial^{2}\mathcal{N}_{\rm s}^{\rm d}}{\partial g^{DE}\partial g^{AB}}. \tag{3.128b}\] We further assume that \(A_{CAB}\) is zero, as this term involves degrees of freedom we do not need in order to recover the Navier-Stokes equations. Before moving on with the model specification, let us introduce a slight generalization of the original model from [20]. The idea is to take the normalizations \(\mathcal{N}_{\rm x}^{\rm d}\) as functionals--instead of functions--of the additional variables. 
This does not constitute a major difference as the equations of motion, and particle production rate formulae, remain unchanged. Still, the step can be taken subject to the following caveat: The (functional) integration should extend (at most) to the spacetime region that is causally connected with each point. Here, we will assume that the analysis is done locally in space but not necessarily in time, i.e. on the world-tube formed by the spatial part of the region \(\delta\mathcal{M}\) in fig. 3.1. In the present example this would mean \[\mathcal{N}_{\rm x}^{\rm d}[g^{AB}]= \mathcal{N}_{\rm x}^{\rm d}[g_{\rm e}^{AB}]+\int\frac{\delta\mathcal{N}_{\rm x}^{\rm d}}{\delta g^{AB}(x)}\delta g^{AB}(x)\mathrm{d}^{4}x+\] \[+\frac{1}{2}\int\frac{\delta^{2}\mathcal{N}_{\rm x}^{\rm d}}{\delta g^{AB}(x)\delta g^{CD}(y)}\delta g^{AB}(x)\delta g^{CD}(y)\mathrm{d}^{4}x\mathrm{d}^{4}y \tag{3.129}\] where the first two terms vanish because (i) \({\cal N}_{\rm x}^{\rm d}\) vanishes at equilibrium and (ii) the minimum energy condition holds. The key step then is to replace the ordinary partial derivatives with functional derivatives in the various expressions we have discussed, so that the viscous stress tensor will be \[S_{AB}^{\rm s}(x)=2\bar{T}\delta\Bigg{(}\frac{\delta{\cal N}_{\rm s}^{\rm d}}{\delta g^{AB}}\Bigg{)}(x)=2\bar{T}\int\frac{\delta^{2}{\cal N}_{\rm s}^{\rm d}}{\delta g^{AB}(x)\delta g^{CD}(y)}\delta g^{CD}(y){\rm d}^{4}y. \tag{3.130}\] We can now formally introduce a set of spatial coordinates \(\bar{x}\) comoving with the equilibrium observer and attached to the world-tube, and take the time coordinate to be the equilibrium worldline's proper time \(\tau\). Also, to enforce locality in space we let \[\frac{\delta^{2}{\cal N}_{\rm s}^{\rm d}}{\delta g^{AB}(x)\delta g^{CD}(y)}=\frac{1}{4}\Sigma_{ABCD}(\bar{x},\tau_{x}-\tau_{y})\,\delta^{3}(\bar{x}-\bar{y})\, \tag{3.131}\] where the causality condition \(\tau_{x}-\tau_{y}\geq 0\) is assumed to be encoded within \(\Sigma_{ABCD}\). Let us first of all, as a consistency check, show that the formula for the particle production rate remains unaltered by these modifications. We have (see eq. (3.12)) \[\mu_{\rm x}\Gamma_{\rm x} =\frac{1}{3!}\mu_{\rm x}^{ABC}\frac{dn_{ABC}^{\rm x}}{d\tau_{\rm x}}=\bar{\cal M}_{\rm x}\frac{d}{d\tau_{\rm x}}\big{(}\bar{\cal N}_{\rm x}^{\rm e}+\bar{\cal N}_{\rm x}^{\rm d}\big{)}\] \[={\cal M}_{\rm x}\Bigg{(}\frac{d{\cal N}_{\rm x}^{\rm d}}{d\tau_{\rm x}}+\frac{1}{2}{\cal N}_{\rm x}^{\rm d}g^{\rm x}_{AB}\frac{dg^{AB}_{\rm x}}{d\tau_{\rm x}}\Bigg{)}\, \tag{3.132}\] which, for a single viscous fluid, simplifies to \[\Gamma_{\rm x}=\frac{d{\cal N}_{\rm x}^{\rm d}}{d\tau}+{\cal O}(3). \tag{3.133}\] If in particular we consider the entropy production rate \(\Gamma_{\rm s}\) we have \[{\cal N}_{\rm s}^{\rm d}=\frac{1}{8}\int\Sigma_{ABCD}(\bar{x},\tau-\tau^{\prime})\delta g^{AB}(\bar{x},\tau)\delta g^{CD}(\bar{x},\tau^{\prime}){\rm d}^{3}\bar{x}\,{\rm d}\tau\,{\rm d}\tau^{\prime}. \tag{3.134}\] To compute the entropy creation rate we have to use the chain rule (generalized to functionals) on \({\cal N}_{\rm s}^{\rm d}[g^{AB}(x)]\) \[\frac{d{\cal N}_{\rm s}^{\rm d}}{d\tau}=\int\frac{\delta{\cal N}_{\rm s}^{\rm d}}{\delta g^{AB}(y)}\frac{\delta g^{AB}(y)}{\delta\tau(x)}{\rm d}^{4}y.
\tag{3.135}\] But, because \(g^{AB}\) is a "normal" function of the spacetime coordinates \[\frac{\delta g^{AB}(y)}{\delta\tau(x)}=-2\delta^{4}(x-y)D^{(A}w^{B)}(x)\, \tag{3.136}\] so that we are left with \[\Gamma_{\rm s}=-\frac{1}{\bar{T}}S_{AB}(\bar{x},\tau)D^{(A}w^{B)}(\bar{x},\tau). \tag{3.137}\] We can now make use of this "functional generalization" to recover the Navier-Stokes model for a bulk- and shear viscous fluid. To focus on the key point, let us consider first the purely bulk-viscous case \[S(\tau)=\int K(\tau-\tau^{\prime})A(\tau^{\prime}){\rm d}\tau^{\prime}\, \tag{3.138}\] where \(S\) represents the trace of the viscous-stress tensor while \(A\) stands for the trace \(\mbox{tr}\,\delta g^{AB}\). Because of the difference between \(\delta g^{AB}\) and \(\dot{g}^{AB}\)--the former involves gradients in the displacement while the latter depends on the velocity--in order to recover a Navier-Stokes model we need to take \[K(\tau-\tau^{\prime})=-T\zeta\partial_{\tau^{\prime}}\delta(\tau-\tau^{\prime})\, \tag{3.139}\] since this would give \[S(\tau)=T\zeta\int[-\partial_{\tau^{\prime}}\delta(\tau-\tau^{\prime})]A(\tau^{\prime}){\rm d}\tau^{\prime}=T\zeta\int\delta(\tau-\tau^{\prime})\partial_{\tau^{\prime}}A(\tau^{\prime}){\rm d}\tau^{\prime}=T\zeta\frac{dA}{{\rm d}\tau}\, \tag{3.140}\] as desired. We now implement this for the bulk- and shear-viscous model. We can do this using the standard decomposition of the bulk and shear response as \[\Sigma_{ABCD}=\Sigma^{\rm b}_{ABCD}+\Sigma^{\rm s}_{ABCD}\, \tag{3.141}\] with \[\Sigma^{\rm b}_{ABCD} =\frac{\zeta(x)}{\bar{T}}\,g^{\rm e}_{AB}g^{\rm e}_{CD}\delta^{3}(\bar{x}-\bar{y})\,q_{\rm b}(\tau_{x}-\tau_{y})\, \tag{3.142a}\] \[\Sigma^{\rm s}_{ABCD} =2\frac{\eta(x)}{\bar{T}}\,\bigg{(}g^{\rm e}_{A(C}g^{\rm e}_{B)D}-\frac{2}{3}g^{\rm e}_{AB}g^{\rm e}_{CD}\bigg{)}\delta^{3}(\bar{x}-\bar{y})\,q_{\rm s}(\tau_{x}-\tau_{y})\, \tag{3.142b}\] where the two kernels would be25 the same, \(q_{\rm b}=q_{\rm s}=-\partial_{\tau_{y}}\delta(\tau_{x}-\tau_{y})\). It then follows that the viscous stress tensor of the model is Footnote 25: Note that we have chosen to separate the bulk- and shear channels as usual, even though the present construction allows for an anisotropic response in the relation between velocity gradients and viscous stresses. We have also introduced two independent kernels to allow for different responses to bulk and shear strain rates. \[S^{\rm s}_{ab}=\chi_{ab}+\chi\perp_{ab}=\frac{1}{3}\zeta\,\theta\perp_{ab}+\eta\,\sigma_{ab}. \tag{3.143}\] It is also easy to check that we can enforce compatibility with the second law by fixing the sign of the bulk- and shear-viscosity coefficients. In fact, \[\Gamma_{\rm s}=\frac{\zeta}{\bar{T}}\,\theta^{2}+\frac{\eta}{\bar{T}}\,\sigma^{ab}\sigma_{ab}\geq 0\, \tag{3.144}\] provided that \(\zeta,\eta\geq 0\). With these relations we have recovered the usual relativistic Navier-Stokes equations (the Landau-Lifshitz-Eckart model for a viscous fluid). Let us conclude by pointing out that, to write down the full set of equations, one should also expand the "Euler part" of the equation of motion, i.e. the left-hand-side of eq. (3.124). We have provided all the ingredients necessary for the explicit calculation, but leave it out here--and point to [21] for further details--as the result is not new and not key to the present discussion.
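As a concrete (if simple-minded) check of the algebra, eqs. (3.143) and (3.144) can be verified numerically. The following sketch--in Python, assuming flat space, an equilibrium observer at rest, and a randomly generated spatial velocity-gradient matrix with placeholder viscosity coefficients (none of these values come from the text)--assembles the Navier-Stokes viscous stress and confirms that the entropy production rate is non-negative term by term.

```python
import numpy as np

# Minimal sketch of eqs. (3.143)-(3.144): build the Navier-Stokes viscous
# stress from a spatial velocity-gradient matrix and check Gamma_s >= 0.
# zeta, eta, Tbar and the gradient below are arbitrary placeholder values.
zeta, eta, Tbar = 0.1, 0.25, 1.0
rng = np.random.default_rng(42)
grad_v = rng.normal(size=(3, 3))            # made-up gradients d_i v_j

theta = np.trace(grad_v)                    # expansion rate
sym = 0.5 * (grad_v + grad_v.T)             # symmetrized gradient
sigma = sym - (theta / 3.0) * np.eye(3)     # trace-free shear sigma_ab

# Viscous stress, eq. (3.143): (1/3) zeta theta perp_ab + eta sigma_ab
S = (zeta * theta / 3.0) * np.eye(3) + eta * sigma

# Entropy production, eq. (3.144): non-negative whenever zeta, eta >= 0
Gamma_s = (zeta * theta**2 + eta * np.sum(sigma * sigma)) / Tbar
assert Gamma_s >= 0.0
print(f"theta = {theta:.3f}, Gamma_s = {Gamma_s:.4f}")
```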
### 3.7 Cattaneo-type equations

As a practical example of the first-order expansion we outlined the model for a single (bulk and shear) viscous fluid, and showed how this leads to the expected form of the relativistic Navier-Stokes equations. The derivation shows that the action-based formalism encodes the previous models. It is also clear that the formalism allows us to consider much more complicated settings, should we need to do so. However, the discussion of the first-order results is clearly not complete, because the final set of equations is widely known to suffer from causality/stability issues (see discussion in section 2.2), and it is then natural to wonder how we may fix this. When it comes to heat-conducting systems, the way forward has been discussed in [142, 22], where it is demonstrated that one can resolve the stability/causality issues at first order by properly accounting for the entrainment between matter and entropy currents--retaining the compatibility with the second law. However, for the single viscous fluid under consideration the problem must be addressed in a different way: the key ingredient used to resolve the heat-flux case--accounting for the inertia of the entropy--will not work here, as our model setting does not involve relative flows. We now show how we can make progress in the single viscous fluid case by using a functional form that is different from eq. (3.142). Notably, the argument stresses the importance of the "principle of memory or heredity" (see [121]). Let us first focus on the bulk viscosity case, and then extend the results to the bulk- and shear viscous model. Recalling eq. (3.138), the first step would again be to assume \[K(\tau-\tau^{\prime})=-\partial_{\tau^{\prime}}g(\tau-\tau^{\prime})\, \tag{3.145}\] so that \[S(\tau)=\int K(\tau-\tau^{\prime})A(\tau^{\prime})\mathrm{d}\tau^{\prime}=\int g(\tau-\tau^{\prime})\frac{dA(\tau^{\prime})}{\mathrm{d}\tau^{\prime}}\mathrm{d}\tau^{\prime}. \tag{3.146}\] We can then look for the convolution kernel \(g\) such that the bulk-viscous scalar \(S\) satisfies an equation of the Cattaneo type \[t_{\rm b}\dot{S}=-S-\zeta\frac{dA}{d\tau}\, \tag{3.147}\] where \(t_{\rm b}\) is the relaxation time-scale of the bulk-viscosity response. In terms of the convolution kernel \(g\) this would mean \[g(\tau-\tau^{\prime})=-\frac{\zeta}{t_{\rm b}}e^{-(\tau-\tau^{\prime})/t_{\rm b}}\theta(\tau-\tau^{\prime})\, \tag{3.148}\] (here \(\theta\) denotes the Heaviside step function, not the expansion rate), and one can check by direct computation that this leads to eq. (3.147), as \[t_{\rm b}\partial_{\tau}g(\tau-\tau^{\prime})+g(\tau-\tau^{\prime})=-\zeta\delta(\tau-\tau^{\prime}). \tag{3.149}\] By inspecting the last expression, we also see that in the "fast relaxation limit" \(t_{\rm b}\to 0\) we recover a Navier-Stokes-type response, as we would intuitively expect. We also point to chapter 6, where this fast relaxation limit is discussed in light of the (perhaps inevitable) resolution limitations faced in numerical implementations. We are now ready to go back to the full bulk- and shear viscous model. We will retain the structure and symmetries from before (see eqs. (3.141) and (3.142)), but introduce two different convolutions \(q_{\rm b}\) and \(q_{\rm s}\) to account for retarded responses to bulk and shear strain rates. In essence, we have shown how we can implement a retarded response of the Cattaneo type in the action-based model by assuming that \(S_{AB}\) (and therefore \(D_{ab}\) as well) is an integral function of \(g^{AB}\).
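As a sanity check on this construction, the retarded convolution can also be evaluated numerically. The sketch below--in Python; the source term \(A(\tau)\), the grid, and the parameter values are arbitrary choices for illustration only--evaluates eq. (3.146) with the exponential kernel (3.148) and verifies that the resulting flux satisfies the Cattaneo relation (3.147) up to discretization error.

```python
import numpy as np

# Sketch: the exponential memory kernel of eq. (3.148) should return a flux
# obeying t_b dS/dtau = -S - zeta dA/dtau, eq. (3.147). Values are arbitrary.
zeta, t_b = 0.5, 0.2
dtau = 0.005
tau = np.arange(-20.0, 10.0, dtau)
dA = np.cos(tau)                              # dA/dtau for A(tau) = sin(tau)

s = np.arange(0.0, 10.0, dtau)                # tau - tau' >= 0 (retarded)
kern = -(zeta / t_b) * np.exp(-s / t_b)       # eq. (3.148); Heaviside via s >= 0

# Causal convolution approximating eq. (3.146): S(tau) = int g dA/dtau' dtau'
S = np.convolve(dA, kern)[:tau.size] * dtau

dS = np.gradient(S, dtau)
residual = t_b * dS + S + zeta * dA           # should vanish, cf. eq. (3.147)
print(np.abs(residual[tau > -10.0]).max())    # ~0 up to discretization error
```

Shrinking \(t_{\rm b}\) in this script also illustrates the fast relaxation limit mentioned above, with \(S\to-\zeta\,dA/d\tau\).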
The question then is: does this mean that the final fluid equations are integro-differential equations? Fortunately, the answer is no. In fact, we have shown that, by a suitable choice of the response function \(q(\tau-\tau^{\prime})\), the fluxes satisfy an equation of the Cattaneo type. Therefore, instead of solving an integro-differential equation, one should treat \(S_{ab}^{\rm s}=\chi\perp_{ab}+\chi_{ab}\) as an unknown in eq. (3.124), and add two equations to the system \[\chi+t_{\rm b}\dot{\chi} =-\zeta\,\theta\, \tag{3.150a}\] \[\chi_{ab}+t_{\rm s}\,\dot{\chi}_{ab} =-\eta\,\sigma_{ab}. \tag{3.150b}\] This means that, at the end of the day, in order to solve a purely differential set of dissipative equations at first order, we have to treat the fluxes as additional unknowns, for which we must provide equations that are not given by the stress-energy-momentum conservation law \(\nabla_{a}T^{ab}=0\). This is reminiscent of the EIT paradigm, where one postulates from the beginning an entropy function that depends on an additional set of quantities--the thermodynamical fluxes. The difference is that the microphysical origin of the equations for the fluxes is now clearer. It is also worth noting that equations of the Cattaneo type for the fluxes cannot be obtained in the field-theory-based models, as the constitutive equations are given in terms of the usual equilibrium variables (like \(\mu,\,T\)) and their derivatives--so that terms with derivatives of the fluxes (like \(\dot{\chi}\)) do not appear. Equations (3.150) are (formally) the same as in the linearized version of the Müller-Israel-Stewart model, which has been shown to be stable and causal. In theory, nothing prevents us from choosing a different form for the retarded response \(q\), which could lead to acausal/unstable behaviour. However, the form of \(q\) suggested above has a clear physical interpretation and is microphysically motivated. If one wants to come up with an alternative, this would need to be motivated by microphysical arguments as well. Let us now consider the implications of the Cattaneo laws for the entropy production rate. As in the Navier-Stokes model sketched above, we have \[\Gamma_{\rm s}=\int\Sigma_{ABCD}(x,\tau-\tau^{\prime})D^{(C}w^{D)}(\bar{x},\tau^{\prime})D^{(A}w^{B)}(\bar{x},\tau)\,{\rm d}\tau^{\prime}=\frac{1}{\bar{T}}S_{AB}(\bar{x},\tau)D^{(A}w^{B)}(\bar{x},\tau). \tag{3.151}\] Again, let us use the bulk-viscous case to highlight the relevant features. We then have \[\Gamma_{\rm s}=\frac{\zeta}{t_{\rm b}}\int_{-\infty}^{\tau}e^{-(\tau-\tau^{\prime})/t_{\rm b}}\theta(\tau)\theta(\tau^{\prime})\,{\rm d}\tau^{\prime}\, \tag{3.152}\] where \(\theta\) is the expansion rate. It is clear that, because the expansion rate is evaluated at different times, we cannot guarantee the positivity of the entropy production rate just by fixing the sign of the bulk-viscosity coefficient as in the previous Navier-Stokes case. However, we will argue this result is not as dramatic as it may appear at first sight--at least not in the regimes relevant for physical predictions. Because of the exponential in the integral, the values of the expansion at times \(\tau^{\prime}\) more than a few \(t_{\rm b}\) away from \(\tau\) can be neglected.
As a result, we can expand \(\theta(\tau^{\prime})\) as \[\theta(\tau^{\prime})=\theta(\tau)+\frac{d\theta(\tau)}{{\rm d}\tau}(\tau^{\prime}-\tau)=\theta(\tau)+\dot{\theta}(\tau)(\tau^{\prime}-\tau)\, \tag{3.153}\] so that we obtain \[\Gamma_{\rm s}=\zeta\big{[}\theta^{2}(\tau)-t_{\rm b}\theta(\tau)\dot{\theta}(\tau)\big{]}. \tag{3.154}\] Again, because of the product of the expansion rate with its time derivative, the entropy production rate cannot be made generally positive simply by fixing the sign of the bulk-viscosity coefficient \(\zeta\). However, as discussed in [90, 140], physical fluid states relax--on a timescale characteristic of the microscopic particle interactions--to ones that are essentially indistinguishable from the simple relativistic Navier-Stokes description. Translated to the present context this would mean \[t_{\rm b}\dot{\theta}(\tau)\approx t_{\rm b}\frac{\Delta\theta}{\tau_{\rm hydro}}\to 0\, \tag{3.155}\] because the ratio between the relaxation timescale \(t_{\rm b}\) and the hydrodynamical timescale \(\tau_{\rm hydro}\) is effectively negligible. As a result, for actual physical applications we can neglect the second term in the entropy production rate, and compatibility with the second law of thermodynamics is restored. In a way, this would mean that for bulk- and shear viscous fluids we can introduce Cattaneo laws for the fluxes--to fix the causality and stability issues of the Navier-Stokes model--while the physical content will be precisely that of Navier-Stokes.

### 3.8 Summary and outlook

We have considered the close-to-equilibrium regime of the action-based model of Andersson and Comer [20] for dissipative multi-fluid systems. In particular, we have shown that, starting from a set of fully non-linear dynamical equations with only the fluxes as the degrees of freedom, an expansion with respect to (a self-consistently defined) equilibrium can be introduced in a clear fashion, with the line of reasoning being similar to that of usual hydrodynamical perturbation theory. After discussing the aspects of equilibrium which can be inferred from the action-based model itself, we established how to construct an expansion in deviations away from equilibrium in a general setting, so that the framework is of wider relevance. In the process we demonstrated the importance of the frame of reference of the equilibrium observer. We also noted that the construction promotes the role of the matter space: Instead of it being a mathematical "trick" to facilitate a constrained variation, it might well be the arena where the microphysical details are encoded. This is a novel perspective that needs further discussion and consideration. We then focused on a particular first-order viscous fluid model, with shear- and bulk-viscosity, paying particular attention to the key causality issues. We showed that causal behaviour can be linked to a retarded response function that keeps track of a system's history. The specific form of the response function can be modelled in a phenomenological way--as we did--but should ideally be provided by specific microphysical calculations, for instance by means of the fluctuation-dissipation theorem (see [181] for a general discussion and [121] for comments on its role from the EIT perspective).26 In a sense, the action-based model provides the "context", determining the geometric structure and form of the equations of motion, while the detailed microphysics is encoded in the specific response function.
Footnote 26: We also note that there have been recent efforts to make explicit use of the fluctuation-dissipation theorem to compute response coefficients through Green-Kubo-like formulae, see [156, 157]--although in a special relativistic setting. Nevertheless, there are similarities with the way we deal with causality issues here. In building the first-order expansion we made this connection clear, and showed how and where the microphysics enters the discussion. An interesting outcome of this analysis is that there is no need to go to second order in deviations from equilibrium to implement a causal response in the model. This has already been demonstrated for the heat-flux problem (see [22, 142]), where the Cattaneo-type equation for the heat flux is ultimately related to the multi-fluid nature of the problem. The entrainment effect (through which the entropy current gains an effective mass [19]) results in an inertial heat response. The case of a single viscous fluid is different, since its retarded response cannot be associated with the multi-fluid nature of the problem. As the variational model is designed for dealing with multi-fluids, the route to further extensions is--at least at the formal level--quite clear. A natural next step would be the modelling of a viscous fluid allowing for the heat to flow differently from the matter. This application should be fairly straightforward since the two main issues of the problem have now been studied separately. A more challenging step will be the inclusion of superfluidity. The presence of currents that persist for very long times drastically changes the non-dissipative limit. The model would require the use of more than one equilibrium worldline congruence [20, 87], one for each "superfluid condensate" and one for all the remaining constituents.

**Part II**

**A covariant approach to large-eddy filtering**

## Chapter 4 Filtering relativistic hydrodynamics

In the first part of this thesis we focused on modelling dissipative fluids in relativity, presenting the different ideas/schemes currently on the market, and then focusing on the only scheme that naturally lends itself to multi-fluid extensions. However, any discussion of dissipative relativistic fluids would not be complete unless it touches upon (at the very least) the issues that arise when modelling turbulent flows. Turbulent flows are not only ubiquitous in the real world--as turbulence is a manifestation of highly non-linear behaviour intrinsic to the hydrodynamic equations--but also known to transport quantities like energy and momentum at a much faster rate than we would expect from microphysical transport mechanisms. We start with a brief introduction to hydrodynamic turbulence in section 4.1, whilst the rest of this chapter is devoted to addressing some of the issues that arise when modelling turbulent flows in relativity. We focus on the formal aspects associated with averaging/filtering the fluid dynamics, which enter most of the recent developments/discussions. To make progress, we develop a new covariant framework for filtering/averaging based on the fibration of spacetime associated with fluid elements and the use of Fermi coordinates to facilitate a meaningful local analysis. We demonstrate how "effective" dissipative terms arise because of the coarse-graining, paying particular attention to the thermodynamical interpretation of the resolved quantities.
In particular, as the smoothing of the fluid dynamics inevitably leads to a closure problem, we discuss a new closure scheme inspired by the recent progress in modelling dissipative relativistic fluids that we briefly touched upon in section 2.4. The results presented in this chapter have been published in Celora et al. [58]. We continue in chapter 5 by discussing the first steps towards extending the framework to charged multi-fluid mixtures. In particular, we argue it is somewhat natural to begin with a discussion of magneto-hydrodynamics (MHD). We will do so after having introduced and discussed the main differences between hydrodynamic and MHD turbulence. There we will also derive the relativistic MHD equations, given that electromagnetic aspects have not been discussed in the earlier parts of this thesis.

### 4.1 A brief introduction to hydrodynamic turbulence

As a warm-up, we begin with a very brief introduction to hydrodynamic turbulence, while referring to monographs such as [138] for an exhaustive discussion. Everyday life gives an intuitive understanding of hydrodynamic turbulence: the flow of a river downstream of an obstacle or atmospheric/oceanic currents are just two examples. Turbulence typically involves an abrupt change in, say, the velocity field (or better, changes over very small scales) and is often driven by the development of fluid instabilities. A turbulent flow is also often associated with chaotic behaviour and a lack of predictability. While "chaos" can be precisely defined for simple mechanical systems--and turbulent flows are not necessarily chaotic in this sense--it is true nonetheless that uncertainties in the observation of the initial state forbid a precise prediction at later times. This is, in fact, the reason why turbulent flows appear chaotic. On the other hand, there is ample evidence that the Navier-Stokes equations describe turbulent flows well. An important quantity in the characterization of turbulence is the Reynolds number \[\mathrm{Re}=\frac{\rho\,L^{2}/\eta}{L/V}=\frac{L\,V\,\rho}{\eta}\, \tag{4.1}\] where \(L\) and \(V\) are the characteristic length and velocity of the flow, while \(\rho\), \(\eta\) are the mass density and (dynamic) shear-viscosity coefficient--which has units (in cgs) of g cm\({}^{-1}\)s\({}^{-1}\). The Reynolds number is formed as the ratio of the timescales over which flow properties are transferred by molecular diffusion as compared to macroscopic convection. It then quantifies the importance of inertial over viscous effects1. As viscosity tends to make neighbouring fluid elements move together, it is intuitively clear that for high Reynolds numbers we expect the fluid flow to change over very small scales. Footnote 1: Similarly one can define the Péclet number \(\mathrm{Pe}=L\,V/\kappa\) for turbulent heat diffusion, where \(\kappa\) is the heat conductivity. From these considerations, we can conclude that fluid turbulence is (or can be described as) a deterministic phenomenon, although the evolution in time is very much complicated by the non-linearities in the fluid equations. Having said that, while a precise definition of turbulence does not exist, we will try to characterize it anyway by pointing out some of the common properties of turbulent flows. First, a turbulent flow is disordered (in space and time) and often presents well-organized structures such as vortices. Second, turbulent flows are able to mix transported quantities--like energy and momentum--much faster than if only molecular diffusion processes were involved.
Thirdly, it involves a wide range of spatial wavelengths. The latter are typically divided into the following broad ranges (see fig. 4.1):

* the _large scale_ is defined by the problem domain geometry.
* the _integral scale_ is a fraction of the large scale, often associated with a single wavelength \(k_{I}\). It is defined as the maximum of the energy spectrum, and often associated with the scale at which energy is put into the system to sustain turbulence.
* the _inertial range_, which covers a wider range of length scales, is characterized by the fact that viscous effects are negligible.
* the _dissipation range_, typically associated with the Kolmogorov scale \(k_{K}\), where viscosity effects are dominant over inertial ones.

Because of the complexity of turbulent flows, a deterministic analytical description is practically excluded. One option, not further developed in the present work, is to resort to statistical analysis. A very important result in this direction was obtained by Kolmogorov in 1941 (see [128, 129, 130] and the more recent discussion by Frisch [84]). Assuming statistical isotropy and homogeneity, Kolmogorov derived a simple set of scaling laws that are in very good agreement with observations--the most famous being the "\(5/3\) law". Using simple dimensional arguments he showed that \[E(k)=C_{K}\varepsilon^{2/3}k^{-5/3}\, \tag{4.2}\] where \(E(k)\) is the energy spectrum, \(\varepsilon\) is the rate with which energy is pumped into the system at large scales, and \(C_{K}\) is a constant--the Kolmogorov constant.

Figure 4.1: Cartoon of the scales of the turbulent energy spectrum. Figure adapted from McDonough [151].

For stationary, homogeneous and isotropic turbulence \(\varepsilon\) is also the constant energy flux from scale to scale, and hence the rate at which energy is dissipated once we hit the Kolmogorov scale. An alternative option, which is often preferred, especially with the increase of computing power, is to rely on numerical simulations. For relatively simple flows one can try to evolve the Navier-Stokes equations directly. However, it has been shown that the number of grid points needed to fully resolve the flow scales as2 \(\mathrm{Re}^{9/4}\). This means that most of the interesting turbulent flows, characterized by very high Reynolds numbers of the order of \(10^{4}\) or bigger, are in fact out of reach for direct numerical simulations--now and in the not-so-near future. Then, what do we do? The strategy, which motivates the analysis in the rest of this chapter, is to use mean-flow models or large-eddy simulations. This boils down to averaging/smoothing the fluid dynamics, and resolving scales only up to some value in the inertial sub-range, while taking into account the smaller scales through sub-grid models. Footnote 2: This is for the three-dimensional case; the two-dimensional scaling is \(\mathrm{Re}^{2}\) [136].

### 4.2 Averaging turbulent flows

Fluid models inevitably involve aspects of averaging--we have to average over a large number of particles in order to describe a fluid system in terms of a small number of macroscopic quantities (in the thermodynamic sense). However, in the simplest settings we do not have to worry (too much) about the actual process of averaging. For example, the transition from particle kinetic theory to a fluid model follows intuitively when the momentum distribution develops a well-defined peak. Similarly, the notion of a fluid element enters naturally on scales much larger than the individual particle mean-free paths (cf.
discussion in section 3.1.1). However, the story changes when we turn to dynamical simulations and problems involving, for example, turbulence. When we consider the problem from a simulation point of view, we have to consider the scale associated with the numerical resolution. This numerical scale tends to be vast compared to (say) the size of a fluid element. For example, in the high-density core of a neutron star we would typically deal with mean-free paths of a fraction of a millimeter, while the best current large-scale numerical simulations of neutron-star mergers involve a resolution of order 10 meters (see [126]). This scale discrepancy has "uncomfortable" implications. In a highly dynamical situation we may not be able to resolve the full range of scales involved. Quite a lot of action can be hidden inside each computational cell. As we discussed in section 4.1, this is a well-known fact that motivates the (considerable) effort going into developing "large-eddy" simulation schemes in computational fluid dynamics (the subject of numerous textbooks, see for example [136, 151, 138]). The astrophysical significance of the problem is obvious given that many relevant situations involve/require the modelling of hydro(-magnetic) turbulence. Again, the dynamics of neutron stars comes to mind. A topical example is the turbulent flow caused by the Kelvin-Helmholtz instability, which in turn drives the amplification of the magnetic field (through the dynamo effect) in binary neutron star mergers ([173, 125, 15, 229]). Traditionally--in the context of Newtonian theory--turbulent flows have been studied in terms of Reynolds-averaged equations, which, roughly speaking, are obtained via time-averaging the fluid dynamics. This smoothing requires the modelling of features that are not captured by the resolved flow, bringing in the need to introduce suitable "closure conditions" (necessary as the averaged scheme introduces more degrees of freedom than there are equations of motion). Most recent work replaces the averaging with spatial filtering, leading to what is known as Large Eddy Simulations (LES). This strategy is often preferred because it involves less modelling [151], although Reynolds-type averaging is still widely used in the context of magnetic dynamos, see the reviews by Brandenburg and Subramanian [47] and Rincon [187]. Because of the relevance for, in particular, binary neutron-star merger simulations, there have been several recent efforts to extend the "familiar logic" from the Newtonian setting to relativity. These range from the more formal discussion in [77] to the actual simulations in [176, 177]. Also worth noting is the recent work in [72] in which the results from [176] are contrasted with those obtained modelling the turbulent flow as effectively viscous on a larger scale (see e.g. [202]). The general relativistic magneto-hydrodynamics (MHD) merger simulations of [94] are another relevant example. Most of these results are based on spatial averaging, with subgrid models tailored to account for small-scale dynamo action. Recently, a more refined gradient subgrid-scale model for general relativistic simulations was developed in [219, 49, 220] and applied to binary neutron-star mergers with impressive results [4, 169, 5]. In short, while there has been notable effort to carry out large-eddy simulations in relativity, the formal underpinnings for this effort are not as firmly established as one might like--the exception being the discussion in [77].
This is the gap we are trying to bridge here. Starting from the beginning, we bring to the fore the fundamental issues associated with any effort to "average" or "filter" in a curved spacetime. We want to consider the problem from a covariant spacetime point of view, a key point being that the underlying principles--for both time-averaging and space filtering--should be the same (or at least "similar"). Both strategies combine "smoothing" with suitable closure relations to determine contributions that may not be "directly" calculable (i.e. represented on the resolved scale). The issues we are interested in can be approached at the level of an "effective theory" based on fairly simple rules, avoiding a detailed discussion of the underlying averaging/filtering process. This is a useful strategy as it leads to a relatively straightforward derivation of the dynamical equations. At the same time, one has to pay attention to the details as a number of issues come into play when we consider the problem from the covariant perspective of General Relativity. In essence, we want to establish what a consistent spacetime scheme for averaging or filtering should look like, and highlight issues relating to the formulation of such a scheme. As the final aim is to develop a consistent set-up for simulations, there are important numerical issues to be discussed; e.g. the implicit filtering associated with numerical discretisation. In order not to confuse these with the foundational issues, we leave them out of the present discussion. As we need to keep track of the relevant scales and quantities associated with different "observers", the notation easily gets somewhat messy. This may be inevitable, but let us try to ease the pain by explaining the notation used in this chapter from the outset. First of all, we need to distinguish between fine-scale and coarse-scale quantities. To do so, we use bars and angle brackets--i.e. \(\overline{A}\) and \(\langle A\rangle\)--for averaged and filtered quantities (respectively), obtained from the fine-scale one, \(A\). However, as we will see, these quantities are not necessarily the most natural to evolve. Thus, we use tildes, e.g. \(\tilde{A}\), to identify the evolved/resolved quantities. Finally, while discussing the linear stability of the proposed closure scheme (section 4.8.1), we drop the tilde notation--as all the quantities are then assumed to be evolved and there is no need to make the distinction--and use instead sub/superscripts to represent quantities evaluated on the background, like \(A_{0}\). This subscript should not be confused with the spacetime indices, which are represented by Latin letters \(a,b,c\ldots=0,1,2,3\) throughout.

### 4.3 Averaging vs filtering

In order to provide the appropriate context and establish the general strategy, it is useful to briefly summarise the standard approach for (typically incompressible) fluid dynamics in Newtonian gravity. Traditionally, small-scale fluctuations are considered in terms of averaging, following the pioneering work of Reynolds and others (see [136]). In effect, this means that we have \(A=\overline{A}+\delta A\), with the fluctuations represented by \(\delta A\) at each spacetime point. Introducing this formal split has the advantage of providing a straightforward derivation of the dynamical equations and a relatively clear interpretation of the involved quantities. One may also resort to an expansion for "small" \(\delta A\) (see [72] for a relevant example of this).
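Before setting out the formal rules, the practical difference between the two strategies is easy to see on a toy one-dimensional signal. In the sketch below--Python, with a made-up signal and an arbitrary top-hat filter width--the full-record time average is idempotent and kills the fluctuations, while a finite-width filter is not idempotent and leaves a non-vanishing filtered fluctuation (anticipating the rules and eqs. (4.5)-(4.8) discussed next).

```python
import numpy as np

# Toy 1D signal: slow "mean flow" plus fast fluctuations (made-up numbers)
t = np.linspace(0.0, 200.0, 20001)
A = 1.0 + 0.3 * np.sin(0.2 * t) + 0.05 * np.sin(7.0 * t)

# Averaging over the whole record gives a constant, so the fluctuations
# average to zero and the operation is idempotent, cf. eqs. (4.5)-(4.6) below.
A_bar = A.mean()
print((A - A_bar).mean())                     # = 0 identically

# A finite-width top-hat filter: <A> still varies, filtering is no longer
# idempotent, and the filtered fluctuations <dA> need not vanish, cf. (4.8).
w = 501                                       # filter width (arbitrary)
filt = lambda f: np.convolve(f, np.ones(w) / w, mode='same')
A_f = filt(A)
dA = A - A_f
inner = slice(w, -w)                          # discard edge effects
print(np.abs(filt(dA)[inner]).max())          # > 0: <dA> != 0
print(np.abs(filt(A_f)[inner] - A_f[inner]).max())  # > 0: <<A>> != <A>
```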
Typically, progress is made by assuming that the average of the linear fluctuations vanishes, which may not be a faithful representation of the physics the model aims to describe (see [151] for a more extensive discussion). Noting this, the typical strategy for spatial filtering--forming the basis for modern large-eddy simulations--is different. In particular, the filtered fluctuations are not taken to vanish, nor does the argument involve expanding in the fluctuations. Instead, one typically proceeds by introducing a new set of variables to represent the filtered dynamics. From the conceptual point of view, each of the two strategies has attractive features and--as we are interested in the formal aspects of the problem--we will consider both of them in the following. Let us first consider the standard averaging problem. The standard strategy is to derive the averaged equations by applying a simple set of rules. In essence, one would _assume_ that (using a bar over quantities to represent averaging) \[\overline{c}=c\, \text{for constants}\, \tag{4.3a}\] \[\overline{A+B}=\overline{A}+\overline{B}\, \text{linearity of the procedure}\, \tag{4.3b}\] \[\overline{\partial_{a}A}=\partial_{a}\overline{A}\, \text{averaging commutes with derivatives}. \tag{4.3c}\] It is immediately clear that, while the last of these relations is intuitive for a time average in Newtonian physics, we need to tread carefully when we turn to the relativistic setting. First of all, we need to face the fact that we do not have an observer-independent space-time split. Secondly, the derivatives we need to consider will be covariant, and hence we must consider the spacetime curvature. Simply noting these reservations for the moment (they will be discussed in section 4.4), the stated rules imply that \[\overline{cA}=c\,\overline{A}. \tag{4.4}\] Moreover--and this is where the main distinction from large-eddy models comes in--it is common to further assume that the average of the fluctuations vanishes so we have \[\overline{\delta A}=0. \tag{4.5}\] It then follows that \[\overline{\overline{A}}=\overline{A}\, \tag{4.6}\] which means that the field \(\overline{A}\) remains unchanged after the averaging. In effect, this additional rule leads to \[\overline{\overline{A}B}=\overline{A}\,\overline{B}. \tag{4.7}\] This simplifies the discussion considerably as we can ignore all linear fluctuation terms in the averaged equations. Time averaging is the (conceptually) simplest approach to the problem, but (strictly speaking) it removes dynamical features associated with the fluctuations3, which is unlikely to be realistic. A faithful representation of the physics may require a different prescription. One option would be to not introduce the assumption from (4.5). The typical description then involves (spatial) filtering, using some specified kernel to define the separation of scales (see [136]). This (effectively) leads to the same kind of rules as before--with the exception of eqs. (4.5) and (4.6)--although we now have (indicating filtering by angle brackets) \[\langle\langle A\rangle B\rangle\neq\langle A\rangle\langle B\rangle\, \tag{4.8}\] which means that filtered fluctuations do not have to vanish. That is, in general we have \(\langle\delta A\rangle\neq 0\).

### 4.4 The spacetime view: Fermi Coordinates

As a first--and essential--step towards a relativistic model for averaging/filtering, we have to consider the spacetime aspects of the problem.
In particular, we need to introduce an unambiguous space-time decomposition--otherwise we cannot meaningfully consider "time" averages or "space" filtering. This is more than semantics [77]. An interesting discussion of the problem (mainly from the cosmology perspective) has been provided by Ellis [75], and it is evident that the issue is conceptually problematic since the notions of time and space are observer dependent. The problem is particularly vexing for a foliation based approach to spacetime (as assumed in numerical relativity [72], where the spacetime foliation is manifestly gauge dependent). However, for a fluid there does exist a natural fibration of spacetime [21]. If we take the associated fluid frame as our starting point, we can introduce a meaningful "local analysis" which allows us to make progress. Moreover, it is natural to use the fluid frame to make the all-important connection with the microphysics and the equation of state [21]. The strategy also allows us to consider thermodynamical aspects of the averaging/filtering scheme. Let us explore the steps involved in an averaging/filtering procedure based on a spacetime fibration. In particular, we want to establish under which conditions we may assume that the covariant derivative commutes with the averaging/filtering procedure. Intuitively, we need to assume from the outset that there is a separation of scales between the metric fluctuations and the fluid fluctuations. The natural approach to the problem then involves Fermi-type coordinates (cf. appendix A). In order to establish the logic, consider the following situation: The fluid four-velocity (and other physical properties) varies over a resolved spacetime region (this can be thought of as a numerical cell, even though such numerical cells would typically be defined in terms of a foliation). However, we assume that it is still possible to identify a family of observers associated with a four-velocity vector field \(U^{a}\) which can be taken to be constant over the resolved region and which is "close enough" to the actual fluid four-velocity. (The latter assumption is not strictly required for the definition of an averaging procedure, nor for the spacetime decomposition, but it helps develop the logic). Then we can use the worldlines with tangent \(U^{a}\) to construct Fermi-type coordinates and explore the details of a given averaging/filtering procedure. Fermi coordinates were first introduced by Fermi in 1922 [78, 79] and then developed by, in particular, Manasse and Misner [148] (see also, for example, [208, 179]). We will not dwell on the construction itself here as this is not a new result (we point to appendix A for more details). Instead, we focus on the properties and region of validity of the associated coordinate system. The set of coordinates is essentially built from a spacetime tetrad transported along a central worldline (naturally taken to be timelike in our case). This is convenient because the metric and the Christoffel symbols take a very simple form along the central curve. 
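As a concrete illustration of why this is convenient, the coordinate expansion quoted below (eqs. (4.9) and (4.10)) can be checked symbolically. The following sketch--Python with sympy, taking for simplicity a uniform acceleration \(a\) along the first spatial direction, a purely illustrative choice--computes the Christoffel symbols of the first-order Fermi metric and confirms that, on the central worldline, only the acceleration terms survive.

```python
import sympy as sp

# First-order Fermi metric for a non-rotating observer accelerating along x,
# cf. eqs. (4.9)-(4.10) below: g_00 = -(1 + 2 a x), g_0i = 0, g_ij = delta_ij.
t, x, y, z, a = sp.symbols('t x y z a', real=True)
coords = [t, x, y, z]
g = sp.diag(-(1 + 2 * a * x), 1, 1, 1)
ginv = g.inv()

def christoffel(l, i, j):
    # Gamma^l_ij = (1/2) g^{lm} (g_mi,j + g_mj,i - g_ij,m)
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[l, m] *
        (sp.diff(g[m, i], coords[j]) + sp.diff(g[m, j], coords[i])
         - sp.diff(g[i, j], coords[m]))
        for m in range(4)))

on_G = {x: 0, y: 0, z: 0}   # evaluate on the central worldline G
nonzero = {(l, i, j): christoffel(l, i, j).subs(on_G)
           for l in range(4) for i in range(4) for j in range(4)
           if christoffel(l, i, j).subs(on_G) != 0}
print(nonzero)              # only Gamma^x_tt = Gamma^t_tx = Gamma^t_xt = a
```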
Let us introduce Fermi coordinates \(x^{\hat{a}}=\{x^{\hat{0}},\,x^{\hat{1}},\,x^{\hat{2}},\,x^{\hat{3}}\}\) (distinguished by hats on the indices) such that, on the central worldline \(G\), the metric reduces to the Minkowski form \(g_{\hat{a}\hat{b}}=\eta_{\hat{a}\hat{b}}\) while its first derivatives can be obtained from the Christoffel symbols (see [155]) \[g_{\hat{a}\hat{b},\hat{0}} =0\, \tag{4.9a}\] \[g_{\hat{0}\hat{0},\hat{j}} =-2a_{\hat{j}}\, \tag{4.9b}\] \[g_{\hat{0}\hat{j},\hat{k}} =0\, \tag{4.9c}\] \[g_{\hat{j}\hat{k},\hat{m}} =0\, \tag{4.9d}\] where the commas represent partial derivatives. We have introduced the non-vanishing piece of the four acceleration, \(a_{\hat{j}}\), of the worldline4 and chosen to construct the tetrad in such a way that the associated observer is non-rotating (which seems natural). With this construction we can formulate an expansion of the metric in the neighbourhood of the worldline. This leads to Footnote 4: That is, the four acceleration is \(a_{b}=U^{a}\nabla_{a}U_{b}\) here. \[g_{\hat{0}\hat{0}} =g_{\hat{0}\hat{0}}\big{|}_{G}+g_{\hat{0}\hat{0},\hat{a}}x^{\hat{a}}=-(1+2a_{\hat{j}}x^{\hat{j}})+\mathcal{O}(x^{\hat{j}})^{2}\, \tag{4.10a}\] \[g_{\hat{0}\hat{j}} =g_{\hat{0}\hat{j}}\big{|}_{G}+g_{\hat{0}\hat{j},\hat{a}}x^{\hat{a}}=\mathcal{O}(x^{\hat{j}})^{2}\, \tag{4.10b}\] \[g_{\hat{i}\hat{j}} =g_{\hat{i}\hat{j}}\big{|}_{G}+g_{\hat{i}\hat{j},\hat{a}}x^{\hat{a}}=\eta_{\hat{i}\hat{j}}+\mathcal{O}(x^{\hat{j}})^{2}\, \tag{4.10c}\] where \(|_{G}\) indicates that the quantity is evaluated on the worldline. This is just a Taylor expansion for the metric where the "small parameter", \(s\) (say), is taken to be the proper distance from the central curve. That is, we have \(s^{2}=(x^{\hat{1}})^{2}+(x^{\hat{2}})^{2}+(x^{\hat{3}})^{2}\). We see that, if the worldline is a geodesic then \(a^{\hat{j}}=0\) and there are no corrections up to second order in the metric. However, there will always be corrections at second order due to the spacetime curvature. These corrections can be expressed in terms of the Riemann tensor (again evaluated on the worldline \(G\)), but we will not need the explicit results here. Because we are assuming that the metric fluctuations happen on a larger scale (with respect to the fluid variations), we can make use of these expansions in the following. Next, we can use the coordinates we have introduced to define a formal averaging or filtering procedure. Focusing on time-averaging first, we may use the spacetime split associated with the coordinates and define the procedure as \[\overline{A}(\hat{x})=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}d\hat{\tau}A(\hat{x},\hat{\tau}). \tag{4.11}\] That is, given a point on the fluid element trajectory we average in the proper time associated with the worldline. In terms of the Fermi coordinates, the time coordinate is exactly the proper time of the central curve. Note that there is no problem in taking the limit \(T\to\infty\) since the Fermi coordinates are formally defined over the entire worldline. The region of validity is only limited in the spatial directions orthogonal to the central curve. From the definition, it is clear that time-averaged quantities must be time-independent, and it immediately follows that \(\overline{\overline{A}}=\overline{A}\) (making contact with (4.6)). We stress that this property follows if and only if we take the limit \(T\to\infty\) in the definition, exactly as in the Newtonian case.
As such, this implies that we should (strictly) neglect time derivatives in the averaged equations. The upshot is that the time-averaging strategy is (formally) valid for stationary flows only. The same is true in the Newtonian context--even though time derivatives are typically retained in the equations5. In fact, this is one of the main motivations in favour of spatial filtering and large-eddy models. In the following, we follow the "tradition" and retain terms involving time derivatives, as the main point of our discussion of the time-averaging case is pedagogical. Footnote 5: See [151] for a more extensive discussion on this. As an application, let us consider the averaged metric. From the metric expansion above, we immediately see that the "time-time" component gains a correction due to the acceleration, which does depend on the proper time. However, we only need to integrate over points on the central worldline, where we have the Minkowski metric (in Cartesian coordinates) by construction. The situation is similar for all the remaining components. As a result, each component of the averaged metric takes the non-averaged value from the central worldline. That is, the metric is constant (in the sense described in section 4.3), and we have \[\overline{g}_{\hat{a}\hat{b}}=g_{\hat{a}\hat{b}}. \tag{4.12}\] Similarly, \(U^{a}\) is constant under averaging. To see this it is sufficient to note that (in terms of the Fermi coordinates) we have \(U^{\hat{a}}=(1,0,0,0)^{\top}\) so that \(\overline{U}^{\hat{a}}=U^{\hat{a}}\). Analogously, we can use the spacetime split to define a space-filtering. We first have to assume that the width \(L\) of the region over which we are filtering--the "resolved box"--is small enough (in terms of the distance \(\lambda\)) that the Fermi coordinates are well defined on it. For instance, one such condition is \(L<1/a\) where \(a\) is the magnitude of \(a^{\hat{j}}\) (see [163] for a detailed discussion). In this case, the filtering procedure may be defined through \[\langle A\rangle(\hat{x},\hat{t})=\int dV\,A(\hat{x}+\hat{y},\hat{t})f(\hat{y})\, \tag{4.13}\] where we introduced the filter \(f\) (normalised over the resolved box). The expression simplifies further if we note that, by construction, the spacetime point \((\hat{x},\hat{t})\) lies on the central worldline \((x^{\hat{0}}=\hat{t}=\tau\), \(x^{\hat{i}}=0)\) where \(\tau\) is the relevant proper time. Also, in terms of the Fermi coordinates, the volume element is \[dV=U^{\hat{0}}\sqrt{-g}dy^{\hat{1}}dy^{\hat{2}}dy^{\hat{3}}=(1+2a_{\hat{i}}y^{\hat{i}})^{1/2}dy^{\hat{1}}dy^{\hat{2}}dy^{\hat{3}}\approx(1+a_{\hat{i}}y^{\hat{i}})d^{3}\hat{y}. \tag{4.14}\] Note that we have not specified the exact filter to be used. This is not required at this stage, but we will assume the filter to be an even function (noting that this is the case for the three most common filters used in large-eddy models, see [136] for instance) and normalized over the resolved box such that we have6 Footnote 6: If the filter has a sharp boundary—i.e. vanishes at the boundary of the resolved box—the argument does not involve extending the spatial integral to infinity. However, one can think of filters with no sharp boundary, like a Gaussian filter (see [151]), which may give rise to formal issues. In practice though, the exponential tail of the Gaussian should suppress anything beyond the Fermi coordinate boundary. \[\int d^{3}\hat{y}\,f(\hat{y})=1\quad,\quad\int d^{3}\hat{y}\,y^{\hat{i}}f(\hat{y})=0.
\tag{4.15}\] Again, let us first apply this to the filtered metric. Since there are no first-order corrections in the expansion in eq. (4.10), \(g_{\hat{0}\hat{i}}\) and \(g_{\hat{i}\hat{j}}\) are constant over the box, and we have \[\langle g_{\hat{0}\hat{i}}\rangle =0=g_{\hat{0}\hat{i}}\big{|}_{G}\, \tag{4.16a}\] \[\langle g_{\hat{i}\hat{j}}\rangle =\int d^{3}\hat{y}(1+a_{\hat{k}}y^{\hat{k}})f(\hat{y})\eta_{\hat{i}\hat{j}}=\eta_{\hat{i}\hat{j}}=g_{\hat{i}\hat{j}}\big{|}_{G}. \tag{4.16b}\] We also have \[\langle g_{\hat{0}\hat{0}}\rangle =-\int d^{3}\hat{y}(1+2a_{\hat{i}}y^{\hat{i}})^{3/2}f(\hat{y})\] \[=-1-3a_{\hat{i}}\int d^{3}\hat{y}\,y^{\hat{i}}f(\hat{y})=-1=g_{\hat{0}\hat{0}}\big{|}_{G}\, \tag{4.17}\] where the last integral vanishes because of the assumed symmetry of the kernel. Once again, each component of the filtered metric takes the non-averaged value from the central worldline throughout the region under consideration. That is, the metric is constant \[\langle g_{\hat{a}\hat{b}}\rangle=g_{\hat{a}\hat{b}}. \tag{4.18}\] We also note that, by construction \(U^{a}\) is constant over the box so we have \(\langle U^{a}\rangle=U^{a}\). Finally, since we have shown that the metric can (effectively) be taken to be constant under both averaging and filtering, it is easy to show that partial derivatives commute with each procedure. That is (obviously connecting with (4.3)) \[\partial_{a}\langle A\rangle=\langle\partial_{a}A\rangle\quad\text{and}\quad\partial_{a}\overline{A}=\overline{\partial_{a}A}. \tag{4.19}\] We now show that these relations hold given the definitions of the average/filter above. Let us start with the time-averaging case. When we consider the partial derivative with respect to the spatial coordinates the argument is straightforward, as the partial derivative (in the spatial direction) can be brought inside the integral. For the time derivative, we have \[\overline{\partial_{\hat{t}}A}(\hat{x})=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}d\hat{\tau}\,\partial_{\hat{\tau}}A(\hat{x},\hat{\tau})=\lim_{T\to\infty}\frac{A(T)-A(0)}{T}=0. \tag{4.20}\] On the other hand, if we take the time derivative of the averaged quantity, this trivially vanishes as it is time-independent. The argument for the time derivative in the filtering case is similarly straightforward. For spatial derivatives, we have on the one hand \[\langle\partial_{\hat{i}}A\rangle(\hat{x},\hat{t})=\int dV\left(\partial_{\hat{i}}A\right)(\hat{x}+\hat{y},\hat{t})f(\hat{y})\, \tag{4.21}\] while, on the other hand, \[\frac{\partial}{\partial x^{\hat{i}}}\langle A\rangle(\hat{x},\hat{t})\big{|}_{\hat{x}_{0}}=\int dV\frac{\partial}{\partial x^{\hat{i}}}A(\hat{x}+\hat{y},\hat{t})\big{|}_{\hat{x}_{0}}f(\hat{y}). \tag{4.22}\] Using the chain-rule in the last equation, we see that \[\frac{\partial}{\partial x^{\hat{i}}}A(\hat{x}+\hat{y})\big{|}_{\hat{x}_{0}}=\frac{\partial A(\hat{z})}{\partial z^{\hat{i}}}\big{|}_{\hat{z}=\hat{x}_{0}+\hat{y}}\, \tag{4.23}\] and it is clear that the two relations lead to the same result. We now have all the ingredients we need to prove that covariant derivatives commute with the averaging/filtering procedure. In fact, given that the metric is constant (in the sense of eq. (4.3)), we have \[\langle\partial_{c}g_{ab}\rangle =\partial_{c}\langle g_{ab}\rangle=\partial_{c}g_{ab}\, \tag{4.24a}\] \[\overline{\partial_{c}g_{ab}} =\partial_{c}\overline{g_{ab}}=\partial_{c}g_{ab}.
\tag{4.24b}\] As a result, the Christoffel symbols--which are obtained from combinations of first derivatives of the metric--are (locally) constant under the procedure, as well. Therefore, we have \[\langle\nabla_{a}A^{b}\rangle=\partial_{a}\langle A^{b}\rangle+\Gamma^{b}_{ac}\langle A^{c}\rangle=\nabla_{a}\langle A^{b}\rangle\, \tag{4.25}\] with an analogous result for the time-averaging case. At the end of the day, the argument is quite intuitive.

#### On covariance and the Einstein equations

It makes sense now, before we move on, to spell out the covariance of the proposed averaging/filtering procedure, and comment on its compatibility with the field equations of General Relativity. It is, in fact, clear that the integrals we used to define the averaging/filtering procedure are not of the usual type. We have to define each procedure in such a way that the integration preserves the tensorial nature of the input7. Given this, we define the procedure for scalar quantities and then apply it to each component8 of, say, the metric tensor. We then require the averaged/filtered quantity to transform as a tensor on the resolved scale. The proposed definition reduces to that of [77] in the special relativity context, and it also leads back to the Newtonian ones [136, 151]. The main difference is that, in special relativity such integrals are chart independent--as long as the integration is performed on each component using a fixed basis (see [101])--while this is not the case in General Relativity. However, even though the special relativistic integrals are chart independent, the results are not, because the notions of length- and time-scales are observer dependent. We also note that, as shown in [77], the observer-dependence cannot be resolved by the introduction of some kind of "spacetime" filter. However, we are setting up the averaging/filtering using Fermi coordinates defined from the fibration. As this is naturally associated with the fluid motion, the "gauge" dependence of the procedure is more physical. We execute the smoothing in "some" local frame \(U^{a}\) which we can choose to "associate" to the "micro-scale" fluid motion. Footnote 7: Recall that the usual integral on a (sub-)manifold of dimension \(p\) takes a \(p\)-form as input and outputs a scalar [101]. Footnote 8: Intuitively, because the procedures are based on the Fermi-coordinates construction in terms of a non-rotating tetrad with respect to \(U^{a}\), we can effectively think of the components as scalars. Let us now discuss the averaging/filtering of the Einstein equations, focusing on the geometry. First of all, consider the Einstein tensor \(G^{ab}\). From eqs. (4.19) and (4.24) it follows that \[\langle g_{ab,cd}\rangle =g_{ab,cd}\, \tag{4.26a}\] \[\overline{g_{ab,cd}} =g_{ab,cd}. \tag{4.26b}\] Since the Einstein tensor is ultimately a combination of the metric and its (up-to-second order) derivatives \(\mathbf{G}=\mathbf{G}(g,\,\partial g,\,\partial^{2}g)\), this implies that we must have \[\overline{\mathbf{G}(g,\,\partial g,\,\partial^{2}g)}=\mathbf{G}(\overline{g},\,\partial\overline{g},\,\partial^{2}\overline{g})=\mathbf{G}(g,\,\partial g,\,\partial^{2}g)\, \tag{4.27}\] and analogously for the filtering case. The net result is that the coarse-grained theory remains consistent with General Relativity. In particular, the Einstein equations become \[G^{ab} =8\pi\overline{T^{ab}}\, \tag{4.28a}\] \[G^{ab} =8\pi\langle T^{ab}\rangle.
\tag{4.28b}\] These results follow from the Fermi-coordinate construction, and the assumed separation of scales in the metric fluctuations with respect to the fluid variables. This should be a safe assumption for binary neutron star merger applications, but not necessarily for problems relating to the very early universe (where quantum fluctuations in the gravitational field may play an important role). In this sense the Fermi-coordinate construction should be regarded as a pragmatic argument rather than a mathematical proof. Having said that, we can now focus the discussion on the matter side, i.e. the stress-energy-momentum tensor \(T^{ab}\).

### 4.5 Averaging in the fluid frame

Backed up by the Fermi-coordinate argument, let us first explore the problem of spacetime averaging. We start by introducing a fine-grained congruence of worldlines with tangent vector field \(u^{a}\), e.g. associated with individual fluid elements. Then, working on a slightly larger (coarse-grained) scale, we introduce another vector field \(U^{a}\)--the one used to define Fermi coordinates--such that small-scale features are smoothed. In effect, we can then use the decomposition \[u^{a}=\gamma(U^{a}+\delta v^{a})\, \tag{4.29}\] with \[U_{a}\delta v^{a}=0\, \tag{4.30}\] and \[\gamma=\left(1-\delta v^{2}\right)^{-1/2}\approx 1+\frac{1}{2}g_{ab}\delta v^{a}\delta v^{b}. \tag{4.31}\] At this point we take the view that it is natural to assume \(\delta v\ll 1\) (the speed of the fluctuations is well below that of light) as this should be a safe assumption for the problems we are interested in [72]. This allows us to develop the logic more explicitly, even though we will drop this assumption later. One may view the linear assumption as an additional constraint (alongside the assumptions of the Fermi frame) on the size of the region we average over, although we will not try to make this statement precise. Finally, let us assume that it makes sense to work with an ordered expansion in the fluctuations. Working to second order--throughout the discussion of averaging but not in the filtering case that follows, where the expressions are not expanded in this sense--we then have \[u^{a}\approx\left(1+\frac{1}{2}g_{bc}\delta v^{b}\delta v^{c}\right)U^{a}+\delta v^{a}. \tag{4.32}\] Let us now consider the average of this four velocity. Given the set-up we have \(\overline{U}^{a}=U^{a}\) and one might expect to have \(\overline{u}^{a}=U^{a}\), as well. However, the problem turns out to be a little bit more intricate than that. First of all, from the discussion of time-averaging in section 4.3 we assume \[\overline{\delta v^{a}}=0. \tag{4.33}\] It is also worth noting that the averaging procedure preserves directionality; that is \[U_{a}\delta v^{a}=0\implies U_{a}\overline{\delta v^{a}}=0. \tag{4.34}\] This holds as long as we satisfy the conditions laid out in section 4.4--not because of eq. (4.33). We also get (to second order) \[\overline{\gamma}=1+\frac{1}{2}g_{ab}\overline{\delta v^{a}\delta v^{b}}\, \tag{4.35}\] and it follows that \[\overline{u}^{a}=\left(1+\frac{1}{2}g_{bc}\overline{\delta v^{b}\delta v^{c}}\right)U^{a}=\overline{\gamma}U^{a}. \tag{4.36}\] At this point we reach an impasse. It is clear that \(\overline{u}^{a}\) is not (automatically) normalised and therefore cannot serve as a four velocity. We would have to re-calibrate co-moving clocks to depend on the averaged fluctuations.
This is problematic as we need a projection to effect the space-time split and one would expect this to involve the fluid four velocity. There seem to be two ways to proceed. First, we could (perhaps pragmatically) opt to work with \(U^{a}\) as the variable representing the flow. Alternatively, we may constrain the fluctuations to ensure that the averaging procedure returns \(\overline{u}^{a}=U^{a}\). This would follow if we were to assume the fluctuations to be such that \[g_{ab}\overline{\delta v^{a}\delta v^{b}}=0\implies\overline{\gamma}=1. \tag{4.37}\] This allows us to move on, working with \(\overline{u}^{a}\) to represent the flow, which might seem the natural generalisation of the Newtonian logic. However, considering the expected nature of small-scale turbulence, condition (4.37) seems too restrictive. By assuming that the variance of the velocity fluctuations vanishes, we effectively remove the small scale kinetic energy that links to large-scale features in standard eddy-based models for turbulence. The kinetic energy (per particle) of the fluctuations is defined as \[k=\frac{1}{2}g_{ab}\overline{\delta v^{a}\delta v^{b}}\, \tag{4.38}\] which clearly vanishes if we impose (4.37). The second of the suggested approaches thus seems unattractive and we will not pursue it further. A third--indeed, likely preferred--possibility will become apparent when we consider the equation for the conserved matter flux.

#### Baryon number conservation

Having discussed the issues associated with averaging the four-velocity, the natural next step is to consider baryon number conservation. Letting \(n=\overline{n}+\delta n\) represent the baryon number density, the matter flux takes the form \[n^{a}=nu^{a}\approx(\overline{n}+\delta n)\left(\gamma U^{a}+\delta v^{a}\right)\, \tag{4.39}\] such that (since \(\overline{\delta n}=0\) from the averaging) \[\overline{n}^{a}=\overline{n}\,\overline{\gamma}\,U^{a}+\overline{\delta n\delta v^{a}}\,. \tag{4.40}\] The number density measured by an observer moving along with \(U^{a}\) is then given by \[n_{0}=-U_{a}\overline{n}^{a}=\overline{n}\,\overline{\gamma}\, \tag{4.41}\] and we can write the averaged flux as \[\overline{n}^{a}=n_{0}U^{a}+\overline{\delta n\delta v^{a}}. \tag{4.42}\] The continuity equation then becomes \[\overline{\nabla_{a}n^{a}}=\nabla_{a}\overline{n}^{a}=\nabla_{a}(n_{0}U^{a})+\nabla_{a}\left(\overline{\delta n\delta v^{a}}\right)=0\, \tag{4.43}\] or \[U^{a}\nabla_{a}n_{0}+n_{0}\nabla_{a}U^{a}=-\nabla_{a}\overline{\delta n\delta v^{a}}. \tag{4.44}\] This last equation shows that there is particle diffusion at second order (relative to \(U^{a}\)). Fluctuations lead to drift from large scale elements to their neighbours. While nothing prevents us from taking this as given and moving on to the stress-energy-momentum tensor, the other equations of motion and the equation of state, it is useful to consider a density-weighted velocity, now starting from the flux \(n^{a}\). The advantage of this is that we can arrange things in such a way that the new four velocity is normalised, while the non-linear fluctuations are hidden in its definition. In essence, we use the weighting with the number density to adjust the co-moving clocks in the desired way. Suppose we define \[\overline{n}^{a}=\tilde{n}\tilde{u}^{a}\, \tag{4.45}\] while insisting that \(\tilde{u}_{a}\tilde{u}^{a}=-1\). This immediately leads to9 Footnote 9: From eqs.
(4.45) and (4.46) we see that this density-weighted average corresponds to the Favre-type averaging often used in the Newtonian context (see, for instance, [138, 193, 70]). \[\tilde{n}=\overline{n}\,\overline{\gamma}\, \tag{4.46}\] and \[\tilde{u}^{a}=U^{a}+\frac{1}{\tilde{n}}\overline{\delta n\delta v^{a}}. \tag{4.47}\] It is easy to see that this retains the required normalisation as long as we ignore terms beyond second order. Crucially, this would remain true (by construction) if we did not expand in small fluctuations. We can now meaningfully introduce the projection with respect to the "Favre"-filtered observer \(\tilde{u}^{a}\), namely \[\tilde{\perp}^{a}_{b}=\delta^{a}_{b}+\tilde{u}^{a}\tilde{u}_{b}. \tag{4.48}\] As anticipated, it also follows that \[\tilde{n}=-\tilde{u}_{a}\overline{n}^{a}\, \tag{4.49}\] and the continuity equation takes the form \[\nabla_{a}(\tilde{n}\tilde{u}^{a})=\dot{\tilde{n}}+\tilde{n}\nabla_{a}\tilde{u}^{a}=0\, \tag{4.50}\] where the "dot" represents the covariant derivative with respect to \(\tilde{u}^{a}\), i.e. \(\dot{\tilde{n}}=\tilde{u}^{a}\nabla_{a}\tilde{n}\). This is attractive because--as the fluctuations are "hidden"--we are left with a conservation law of the pre-averaged form. Note that we can remove the need for a closure in this equation, but this forces us to use the \(\tilde{u}^{a}\) observer. One can imagine making a different choice, which would lead to drift terms entering the continuity equation, as in (4.44).

#### Averaged matter dynamics

Next, consider the perfect fluid stress-energy-momentum tensor (derived in section 1.3). Starting from \[T^{ab}=(p+\varepsilon)u^{a}u^{b}+pg^{ab}\, \tag{4.51}\] we make use of (4.32) (noting that, to second order in the fluctuations, we have \(\overline{\gamma^{2}}=\overline{\gamma}^{2}\)), then after averaging we introduce \(\tilde{u}^{a}\) according to (4.47). This leads to \[\overline{T}^{ab}=(\overline{p}+\overline{\varepsilon})\overline{\gamma}^{2}\tilde{u}^{a}\tilde{u}^{b}+\overline{p}g^{ab}+2\tilde{u}^{(a}q^{b)}+s^{ab}\, \tag{4.52}\] with \[q^{a}=-\frac{\overline{p}+\overline{\varepsilon}}{\overline{n}}\overline{\delta n\delta v^{a}}+\overline{(\delta p+\delta\varepsilon)\delta v^{a}}\, \tag{4.53}\] and \[s^{ab}=(\overline{p}+\overline{\varepsilon})\overline{\delta v^{a}\delta v^{b}}. \tag{4.54}\] It is convenient to work with the energy density measured by an observer moving along with \(\tilde{u}^{a}\). This follows from \[\tilde{\varepsilon}=\tilde{u}_{a}\tilde{u}_{b}\overline{T}^{ab}=\overline{\gamma}^{2}\overline{\varepsilon}+\left(\overline{\gamma}^{2}-1\right)\overline{p}. \tag{4.55}\] As a result, we have \[\overline{p}+\tilde{\varepsilon}=(\overline{p}+\overline{\varepsilon})\overline{\gamma}^{2}\, \tag{4.56}\] which means that we can rewrite the stress-energy-momentum tensor as \[\overline{T}^{ab}=(\overline{p}+\tilde{\varepsilon})\tilde{u}^{a}\tilde{u}^{b}+\overline{p}g^{ab}+2\tilde{u}^{(a}q^{b)}+s^{ab}.
\tag{4.57}\] The equations of motion \[\nabla_{a}\overline{T}^{ab}=0\, \tag{4.58}\] then lead to the energy equation; \[\dot{\tilde{\varepsilon}}+(\overline{p}+\tilde{\varepsilon})\nabla_{a}\tilde{u}^{a}=\tilde{u}_{b}\tilde{u}^{a}\nabla_{a}q^{b}-\nabla_{a}q^{a}+\tilde{u}_{b}\nabla_{a}s^{ab}\, \tag{4.59}\] and the momentum equation; \[(\overline{p}+\tilde{\varepsilon})\,\tilde{a}^{b}+\tilde{\perp}^{ab}\nabla_{a}\overline{p}=-\tilde{\perp}^{b}_{\ c}\tilde{u}^{a}\nabla_{a}q^{c}-q^{b}\nabla_{a}\tilde{u}^{a}-q^{a}\nabla_{a}\tilde{u}^{b}-\tilde{\perp}^{b}_{\ c}\nabla_{a}s^{ac}\, \tag{4.60}\] where \(\tilde{a}^{b}=\tilde{u}^{a}\nabla_{a}\tilde{u}^{b}\). It is worth remarking that with this choice of resolved variables, the final equations of motion resemble those of a general dissipative fluid with viscosity and heat-flux (cf. discussion in chapter 2).

#### The equation of state

A key step of any fluid model involves the connection to the microphysics as represented by the equation of state. As a first stab at this, let us outline the logic for the simple barotropic case (and then return to the issue in section 4.7, with a more realistic model in mind). In a barotropic model the starting point is a one-parameter energy density \(\varepsilon=\varepsilon(n)\), which leads to the thermodynamical (Gibbs) relation \[p+\varepsilon=n\mu\, \tag{4.61}\] where the chemical potential is defined as \[\mu=\frac{d\varepsilon}{dn}. \tag{4.62}\] Introducing fluctuations in all the scalars, as before (e.g. \(p=\overline{p}+\delta p\)), we have \[\overline{p}+\delta p+\overline{\varepsilon}+\delta\varepsilon=(\overline{n}+\delta n)(\overline{\mu}+\delta\mu)=\overline{n}\ \overline{\mu}+\overline{n}\delta\mu+\overline{\mu}\delta n+\delta n\delta\mu. \tag{4.63}\] Since the averages of linear fluctuations are all taken to vanish in the present case, this leads to \[\overline{p}+\overline{\varepsilon}=\overline{n}\ \overline{\mu}+\overline{\delta n\delta\mu}\, \tag{4.64}\] which shows that the number density fluctuations impact on the equation of state inversion required to evolve the system. In order to proceed, we need to provide a closure relation for \(\overline{\delta n\delta\mu}\). Noting that the energy was assumed to depend only on the number density, the fluctuations in (4.64) should not be independent. In order to take a closer look at this, we may Taylor expand for small fluctuations \(\delta n\) (recalling that we have already considered expanding the Lorentz factor in this way). This then leads to \[\mu\approx\mu(\overline{n})+\mu^{\prime}(\overline{n})\delta n+\frac{1}{2}\mu^{\prime\prime}(\overline{n})\delta n^{2}. \tag{4.65}\] That is, we identify \[\overline{\mu}=\mu(\overline{n})+\frac{1}{2}\mu^{\prime\prime}(\overline{n})\overline{\delta n^{2}}\, \tag{4.66}\] and \[\delta\mu=\mu^{\prime}(\overline{n})\delta n. \tag{4.67}\] The averaged Gibbs relation then becomes \[\overline{p}+\overline{\varepsilon}=\overline{n}\mu(\overline{n})+\left[\frac{\overline{n}}{2}\mu^{\prime\prime}(\overline{n})+\mu^{\prime}(\overline{n})\right]\overline{\delta n^{2}}\, \tag{4.68}\] showing that--in addition to the derivatives of the chemical potential--we need to provide a (closure) relation for \(\overline{\delta n^{2}}\). The other fluctuations are similarly slaved to \(\delta n\).
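Before expanding the remaining fluctuations, note that the averaged Gibbs relation (4.68) is easy to verify symbolically. A minimal sympy sketch (our addition, assuming only that averages of odd powers of \(\delta n\) vanish):

```python
import sympy as sp

# Symbolic check of the averaged Gibbs relation, eq. (4.68):
# expand p + eps = n*mu(n) about n_bar and average with <dn> = 0.
n_bar, dn = sp.symbols('n_bar delta_n')
dn2 = sp.Symbol('<dn^2>')        # closure term: the variance of delta n
mu = sp.Function('mu')

# Taylor expansion of mu(n_bar + dn) to second order, eq. (4.65)
mu_exp = (mu(n_bar) + mu(n_bar).diff(n_bar) * dn
          + sp.Rational(1, 2) * mu(n_bar).diff(n_bar, 2) * dn**2)

gibbs = sp.expand((n_bar + dn) * mu_exp)   # (n_bar + dn)*(mu_bar + delta mu)

# Averaging kills odd powers of dn; dn^2 averages to the closure term.
averaged = gibbs.coeff(dn, 0) + gibbs.coeff(dn, 2) * dn2
print(sp.collect(averaged, dn2))
# -> n_bar*mu(n_bar) + (n_bar*mu''(n_bar)/2 + mu'(n_bar))*<dn^2>,
#    i.e. exactly the bracket of eq. (4.68).
```

With this check in hand, the energy can be expanded in the same way.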
We get \[\varepsilon(n)\approx\varepsilon(\overline{n})+\varepsilon^{\prime}(\overline {n})\delta n+\frac{1}{2}\varepsilon^{\prime\prime}(\overline{n})\delta n^{2}\, \tag{4.69}\] which leads to \[\overline{\varepsilon}=\varepsilon(\overline{n})+\frac{1}{2}\varepsilon^{ \prime\prime}(\overline{n})\overline{\delta n^{2}}\, \tag{4.70}\] and \[\delta\varepsilon=\varepsilon^{\prime}(\overline{n})\delta n=\mu(\overline{n })\delta n. \tag{4.71}\] In summary, we have equations describing the (proper time) evolution of \(\tilde{n}\), \(\tilde{\varepsilon}\) and \(\tilde{u}^{a}\), represented by equations (4.50), (4.59) and (4.60). In order to be able to solve these equations, we need to provide closure relations for the fluctuations involved in \(\overline{\gamma}\), \(q^{a}\) and \(s^{ab}\), that is \(\overline{\delta n\delta v^{a}}\) and \(\overline{\delta v^{a}\delta v^{b}}\). Once these, and \(\overline{\delta n^{2}}\), are provided, we can work out \(\overline{n}\) from \(\tilde{n}\) and--assuming that we have access to the derivatives of the chemical potential--then we have \(\overline{\varepsilon}\), as well. Using either (4.68) or (4.56) we can then rewrite \(\overline{p}\) in terms of resolved variables and the closure terms, which completes the set of quantities we need to close the system and carry on the evolution. In principle, the fibration based averaging model is complete. ### 4.6 Fluid element filtering Having established a workable procedure for averaging in the spacetime setting, let us turn to the issue of filtering. This is important because most actual numerical simulations assume filtering rather than averaging. Formally, we expect the two problems to be similar, but we know from section 4.3 that the filtering problem involves a slightly different logic. In particular, it is less natural--and also not desirable since the filtered fluctuations are unlikely to vanish--to consider an expansion in terms of the fluctuations. This is an important difference, so we need to carefully consider the steps in the analysis. The natural way to proceed is to use a weighted average, in the spirit of (4.47). We would then start from10 Footnote 10: It is important to note that quantities like \(\tilde{n}\) are not the same in the averaging and filtering cases. Still, we are using the same notation because they play the same role in each evolution scheme. Note also that our definition does not mean that \(\langle\tilde{n}^{a}\rangle=\tilde{n}^{a}\). \[\langle n^{a}\rangle\equiv\tilde{n}\tilde{u}^{a}\, \tag{4.72}\] with (by construction) \(\tilde{u}_{a}\tilde{u}^{a}=-1\), leading to \[\tilde{n}=-\tilde{u}_{a}\langle n^{a}\rangle. \tag{4.73}\] As in the averaging case, it is easy to see that the continuity equation then becomes \[\langle\nabla_{a}n^{a}\rangle=\nabla_{a}\langle n^{a}\rangle=0\quad \Longrightarrow\quad\dot{\tilde{n}}+\tilde{n}\nabla_{a}\tilde{u}^{a}=0. \tag{4.74}\] The main lesson is that, formally, the equation we arrive at takes the same form as in the averaging case; equation (4.50). Still, there are differences relating to i) the nonlinear quantities that have to be provided by a closure model and ii) the interpretation of the resolved/evolved variables. Deferring the discussion of more general aspects of the equation of state for a moment (these will be discussed in section 4.7), let us move on to write down the equations of motion consistent with (4.74). 
The filtered version of the perfect fluid stress-energy-momentum tensor can be written as \[\langle T^{ab}\rangle=(\langle p\rangle+\langle\varepsilon\rangle)\tilde{u}^{a}\tilde{u}^{b}+\langle p\rangle g^{ab}+\tau^{ab}\, \tag{4.75}\] where \[\tau^{ab}=\langle(p+\varepsilon)u^{a}u^{b}\rangle-(\langle p\rangle+\langle\varepsilon\rangle)\tilde{u}^{a}\tilde{u}^{b}\, \tag{4.76}\] requires a closure relation. Introducing, as before, the energy density measured by \(\tilde{u}^{a}\): \[\tilde{\varepsilon}=\tilde{u}_{a}\tilde{u}_{b}\langle T^{ab}\rangle=\langle\varepsilon\rangle+\tilde{u}_{a}\tilde{u}_{b}\tau^{ab}\, \tag{4.77}\] we can rewrite the filtered stress-energy-momentum tensor as \[\langle T^{ab}\rangle=(\langle p\rangle+\tilde{\varepsilon})\tilde{u}^{a}\tilde{u}^{b}+\langle p\rangle g^{ab}+2\tilde{u}^{(a}q^{b)}+s^{ab}\, \tag{4.78}\] with \[q^{a}=-\,\tilde{\perp}^{a}_{b}\tilde{u}_{c}\tau^{cb}=-\,\tilde{\perp}^{a}_{b}\tilde{u}_{c}\,\langle(p+\varepsilon)u^{c}u^{b}\rangle\, \tag{4.79}\] and \[s^{ab}=\,\tilde{\perp}^{a}_{c}\,\tilde{\perp}^{b}_{d}\tau^{cd}=\,\tilde{\perp}^{a}_{c}\,\tilde{\perp}^{b}_{d}\,\langle(p+\varepsilon)u^{c}u^{d}\rangle. \tag{4.80}\] It is easy to see that the energy and momentum equations take exactly the same form (once we change \(\langle p\rangle\to\overline{p}\)) as in the averaged case (cf. equations (4.59) and (4.60)). As in that case, the equations of motion provide all the information we need to solve the system once the relevant closure relations are provided. The key to a workable model is to provide appropriate closure relations, so it is important to understand what this involves.

### 4.7 Filtered Thermodynamics

So far we have only outlined the argument for the simple case of a barotropic fluid. As the equation of state is a central issue for any realistic model--and, obviously, any numerical simulation--let us rethink this. A quick look back at eq. (4.64) shows that, even if we start from a barotropic Gibbs relation at the fine scale, the averaged/filtered result is effectively "non-barotropic". In fact, the \(\overline{\delta n\delta\mu}\) closure term could be interpreted as an "entropy-like" contribution associated with the fluctuations. This suggests that it makes sense to start straight away from a non-barotropic model, i.e. with a Gibbs relation of the form \[p+\varepsilon=\mu n+Ts\, \tag{4.81}\] where \(s\) is the entropy density and \(T\) is the associated temperature. The barotropic example also shows that the "effective" term that stems from the averaging or filtering procedure does not relate to the actual entropy, in the sense that it is not associated with some dissipative process and/or entropy production rate. This is evident from the fact that we have freedom in the choice of the averaging/filtering observer, and therefore in the variables to be evolved, and one might choose to frame the model in such a way that the fluctuation terms are reabsorbed in the definition of the variables themselves, just like we did for the weighted four-velocity \(\tilde{u}^{a}\) in eq. (4.47). Let us try to make these points more concrete, by considering a model that is non-barotropic from the get-go. Because there is no formal difference in the resulting equations, we will do this without distinguishing between the averaging and filtering cases (although, as we are now familiar with the logic, using the slightly more abstract notation from the latter). We start by discussing some subtleties of the barotropic example that we previously left aside.
The energy density of a barotropic fluid is a function of the matter density only, which means there is no need to evolve both quantities separately. On the fine-scale, the energy equation contains the same information as the continuity equation: \[\frac{d\varepsilon}{d\tau}+(p+\varepsilon)\nabla_{a}u^{a}=\mu\frac{dn}{d\tau}+\mu n\nabla_{a}u^{a}=\mu(\nabla_{a}n^{a})=0\, \tag{4.82}\] where \(\tau\) is the proper time associated with the "actual" fluid worldlines with tangent \(u^{a}\). Note that the Gibbs relation eq. (4.61) is crucial for this argument. However, the situation changes on the coarse-grained scale. The link between the micro-scale equation of state and the resolved energy is not trivial--it may have to be established by a set of high resolution simulations, even though this may not be practical/feasible. In effect, the equation of state we are working with here is not the one you get from nuclear physics. This is, in fact, true also in the simpler case where we set to zero the contribution coming from the \(\tau^{ab}\) residual, as the evolved density is not obtained by averaging the fine-scale one, \(\tilde{n}\neq\langle n\rangle\). The net result is that we have to treat the resolved energy \(\tilde{\varepsilon}\) and the resolved density \(\tilde{n}\) as independent variables, and evolve both of them.

#### The effective entropy

This subtle difference in the counting of independent variables between the fine- and coarse scale models obviously no longer exists for a non-barotropic fluid. For a two-parameter equation of state, the energy and particle density can be taken as independent variables already at the fine scale. This is, indeed, standard practice in numerical relativity simulations. Let us also recall that, if the fluid is ideal there is no additional information gained from evolving the entropy current. In fact, we have seen in section 1.3 that the entropy current is automatically advected as a consequence of the perfect fluid equations, provided this is a function of the energy and particle number densities, \(s=s(n,\varepsilon)\). To complete the model set-up, however, we still have to clarify how the filtered pressure relates to the evolved variables. The barotropic model has been discussed in section 4.5.3 for the averaging case, and the filtering case would work analogously. We now look at the non-barotropic fluid case. We can work with a resolved entropy defined as the usual thermodynamic potential \(\tilde{s}\doteq s(\tilde{\varepsilon},\,\tilde{n})\). The resolved temperature and chemical potential then follow from the standard definitions \[\frac{1}{\tilde{T}}\doteq\left(\frac{\partial\tilde{s}}{\partial\tilde{\varepsilon}}\right)_{\tilde{n}}(\tilde{\varepsilon},\,\tilde{n})\, \tag{4.83a}\] \[-\frac{\tilde{\mu}}{\tilde{T}}\doteq\left(\frac{\partial\tilde{s}}{\partial\tilde{n}}\right)_{\tilde{\varepsilon}}(\tilde{\varepsilon},\,\tilde{n}). \tag{4.83b}\] We stress that we chose to use as thermodynamic potential the entropy, as it is a function of the chosen independent variables in the equations of motion, \(\tilde{n}\) and \(\tilde{\varepsilon}\). Note also that \(\tilde{s}\) does not represent the true entropy and \(\tilde{T}\) is not the actual temperature, either. We are simply assuming that the usual thermodynamical definitions "make sense" at the filtering scale. Then, following the same logic we used for the barotropic case (see section 4.5.3) we filter eq.
(4.81) and rewrite it as \[\langle p\rangle=-\tilde{\varepsilon}+\tilde{\mu}\tilde{n}+\tilde{T}\tilde{s}+M\, \tag{4.84}\] with \[M=\left(\langle Ts\rangle-\tilde{T}\tilde{s}\right)+\left(\langle\mu n\rangle-\tilde{\mu}\tilde{n}\right)-\left(\langle\varepsilon\rangle-\tilde{\varepsilon}\right). \tag{4.85}\] The argument is now complete. We have explained how to express the averaged/filtered pressure that enters the equations of motion in terms of the resolved variables \(\tilde{\varepsilon},\,\tilde{n}\) and the (new) residual \(M\). Having considered the resolved thermodynamics, we can turn to the "entropy production" associated with the averaging/filtering procedure. The final equations (i.e. eqs. (4.59) and (4.60)) clearly remind us of the result for a dissipative fluid (see [21]) so we are motivated to consider possible constraints stemming from the second law of thermodynamics. To do this we can work through steps analogous to eq. (1.12) to establish the impact of the averaging/filtering procedure. Because the resolved entropy \(\tilde{s}\) is taken to be a function of the resolved energy \(\tilde{\varepsilon}\) and the number density \(\tilde{n}\) we have \[\tilde{T}\nabla_{a}(\tilde{s}\tilde{u}^{a})=\tilde{T}\tilde{s}\nabla_{a}\tilde{u}^{a}+\tilde{T}\dot{\tilde{s}}=\tilde{T}\tilde{s}\nabla_{a}\tilde{u}^{a}+\dot{\tilde{\varepsilon}}-\tilde{\mu}\dot{\tilde{n}}. \tag{4.86}\] Now, by means of eqs. (4.50) and (4.59) we obtain \[\tilde{T}\nabla_{a}(\tilde{s}\tilde{u}^{a})=\left(\tilde{T}\tilde{s}+\tilde{\mu}\tilde{n}-\langle p\rangle-\tilde{\varepsilon}\right)\nabla_{a}\tilde{u}^{a}-q^{a}\tilde{a}_{a}-\nabla_{a}q^{a}-s^{ab}\nabla_{a}\tilde{u}_{b}=-M\nabla_{a}\tilde{u}^{a}-q^{a}\tilde{a}_{a}-\nabla_{a}q^{a}-s^{ab}\nabla_{a}\tilde{u}_{b}\. \tag{4.87}\] This shows that the entropy is no longer advected at the coarse scale, as a result of the averaging/filtering procedure. However, the fine scale ("exact") theory is ideal, so the actual entropy is advected. For this reason, the model is not constrained by the second law at the coarse-grained scale. This is a very important point as it impacts on the closure relations (see below), which (evidently) can be discussed without considering the thermodynamical restrictions for "real" dissipative fluids. Effectively, the heat-flux term in eq. (4.53) or eq. (4.79) is associated with energy transfer from large eddies to small ones (or vice versa) rather than being a faithful heat transfer. We also note that this would not change even if we started from a fluid that is dissipative already at the fine scale. Restrictions stemming from the second law of thermodynamics apply only at the fine-scale level, not at the coarse one.

#### Energy cascade argument

Having discussed the thermodynamical interpretation of the quantities that enter the equations of motion, it makes sense to consider the involved energy cascade. This is relevant because an analogous argument is used in standard work on turbulence (see, for instance, [139, 137]) to motivate the closure of the fluid equations. It is useful to spell out the relativistic analogue of the classical argument. The starting point is the energy equation (see eq. (4.59)) rewritten as \[\underbrace{\dot{\tilde{\varepsilon}}+(\langle p\rangle+\tilde{\varepsilon})\nabla_{a}\tilde{u}^{a}}_{\rm macro}=\overbrace{-q^{b}\tilde{a}_{b}+\nabla_{a}q^{a}+\tilde{u}_{b}\nabla_{a}s^{ab}}^{\rm mixed}.
\tag{4.88}\] Here, we have highlighted that the terms on the left-hand side can be considered as macroscopic, in the sense that they involve only resolved quantities and describe an ideal evolution--intended to correctly capture the large-scale dynamics. In contrast, the terms on the right-hand side are "mixed" as they involve unresolved quantities--the residuals--and couple macro- and micro-scale terms. In effect, they can be thought of as transferring energy from one scale to another. To see this, we may, for a moment, assume a steady state evolution. As a consequence of the matter continuity equation, we then have \(\nabla_{a}\tilde{u}^{a}=0\) and therefore rewrite the energy equation as (setting to zero terms involving time derivatives with respect to \(\tilde{u}^{a}\)) \[\nabla_{a}q^{a}=s^{ab}\nabla_{a}\tilde{u}_{b}. \tag{4.89}\] In this relation, the term on the left-hand side should represent the energy sink (source) due to the (inverse) energy cascade--subtracting energy from the macro-scale into the micro one (or vice versa). In analogy with Newton's law of viscosity, Boussinesq suggested that one should relate the turbulent stress to the mean shear flow (see, e.g., [151]). In our case, this leads to \[s^{ab}\propto\tilde{\sigma}^{ab}\, \tag{4.90}\] where the shear rate \(\tilde{\sigma}^{ab}\) is defined in the usual way but in terms of the filtered four velocity, namely \[\tilde{\sigma}_{ab}=\left[\tilde{\perp}^{c}_{(a}\tilde{\perp}^{d}_{b)}-\frac{1}{3}\tilde{\perp}^{cd}\tilde{\perp}_{ab}\right]\nabla_{c}\tilde{u}_{d}. \tag{4.91}\] Motivated by this argument we move on to develop a closure scheme to complete the "fibration framework" we are proposing.

### 4.8 An explicit closure model

Our ultimate aim is to develop a consistent scheme for large-eddy simulations in relativity. Even though this involves numerical aspects which we will not touch upon here, we need to provide a strategy for closing the system of equations already at the fibration level. This is the problem we focus on now. As we have seen, in order to carry out an evolution we need to provide some prescription for the residual terms, that is \(\tau^{ab}\), \(M\) in the filtering case or, equivalently, \(\overline{\delta n^{2}}\), \(\overline{\delta n\delta v^{a}}\), \(\overline{\delta v^{a}\delta v^{b}}\) in the averaging one. In classical computational fluid dynamics, one of the earliest closures proposed--still widely used--is due to Smagorinsky [205]. This model effectively boils down to retaining only the \(s^{ab}\) term and modelling it as a traceless tensor proportional to the (resolved) shear-flow. Such a closure is motivated by arguments of the kind we provided in the previous section. However, this model may be too simplistic to capture all relevant features of a turbulent flow11. In view of this, we aim to set up a scheme that can be used to describe turbulent flows for which the Smagorinsky model gives unsatisfactory results. Finally, it is important to note that the Smagorinsky model is typically implemented--both in recent relativistic numerical work as well as in the Newtonian context--in the Eulerian frame associated with a foliation. The simple fact that the translation between fibration and foliation leads to a "mixing" of the different terms in the stress-energy-momentum tensor, suggests that we need to consider a more general closure model. Footnote 11: We do not want to comment on the validity of the Boussinesq hypothesis here, so simply refer to [151] where it is discussed, albeit in a non-relativistic setting.
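For orientation, it may help to recall what this closure looks like in practice in the classical setting. The sketch below implements the standard Newtonian Smagorinsky model (our illustration; the filter width and the constant \(C_{s}=0.17\) are conventional choices, not values taken from this chapter):

```python
import numpy as np

def smagorinsky_stress(grad_v, delta, C_s=0.17):
    """Classical (Newtonian) Smagorinsky closure: tau_ij = -2 nu_t S_ij,
    with eddy viscosity nu_t = (C_s * delta)**2 * |S|."""
    S = 0.5 * (grad_v + grad_v.T)            # resolved rate of strain
    S -= (np.trace(S) / 3.0) * np.eye(3)     # keep the trace-free (shear) part
    S_norm = np.sqrt(2.0 * np.sum(S * S))    # |S| = sqrt(2 S_ij S_ij)
    nu_t = (C_s * delta) ** 2 * S_norm       # eddy viscosity
    return -2.0 * nu_t * S

# Example: a simple shear flow v = (a*y, 0, 0), filter width delta = 0.01
a = 10.0
grad_v = np.array([[0.0, a, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])         # dv_i/dx_j
print(smagorinsky_stress(grad_v, delta=0.01))
```

The relativistic analogue, eq. (4.90), replaces \(S_{ij}\) with the filtered shear \(\tilde{\sigma}^{ab}\); the more general gradient expansion proposed next goes beyond this single-parameter form.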
In effect, we propose to model the residuals in terms of a general expansion in derivatives of the resolved variables, \(\tilde{n}\), \(\tilde{u}^{a}\) and \(\tilde{\varepsilon}\). For practical reasons we halt the derivative expansion at first order, and decompose the gradients of the resolved quantities as \[\nabla_{a}\tilde{n}=\tilde{\perp}_{a}^{b}\nabla_{b}\tilde{n}-\tilde{u}_{a}\dot{\tilde{n}}\, \tag{4.92a}\] \[\nabla_{a}\tilde{\varepsilon}=\tilde{\perp}_{a}^{b}\nabla_{b}\tilde{\varepsilon}-\tilde{u}_{a}\dot{\tilde{\varepsilon}}\, \tag{4.92b}\] \[\nabla_{a}\tilde{u}_{b}=-\tilde{u}_{a}\tilde{a}_{b}+\tilde{\omega}_{ab}+\tilde{\sigma}_{ab}+\frac{1}{3}\tilde{\theta}\tilde{\perp}_{ab}\, \tag{4.92c}\] where, as before, \(\dot{\tilde{n}}=\tilde{u}^{a}\nabla_{a}\tilde{n}\) (similarly for \(\dot{\tilde{\varepsilon}}\)) and the filtered four velocity gradients \(\nabla_{a}\tilde{u}_{b}\) are decomposed as usual. This closure scheme is analogous, although in a different spirit, to the most general constitutive relations discussed for dissipative hydrodynamics (at the linear level), see section 2.4. Because there is no formal difference in the modelling of the sub-filter scale terms between the averaging and filtering cases, let us set up the closure scheme for the filtering case. We also immediately consider the case of a two-parameter equation of state, as the barotropic limit can be easily recovered from the more general results. We then have to model the residuals \(q^{a}\) and \(s^{ab}\). Recalling the definitions eqs. (4.79) and (4.80) we express these as \[s^{ab}=-\eta\tilde{\sigma}^{ab}+(\pi_{1}\tilde{\theta}+\pi_{2}\dot{\tilde{n}}+\pi_{3}\dot{\tilde{\varepsilon}})\,\tilde{\perp}^{ab}\, \tag{4.93a}\] \[q^{a}=\theta_{1}\tilde{a}^{a}+\theta_{2}\,\tilde{\perp}^{a}_{b}\nabla^{b}\tilde{n}+\theta_{3}\,\tilde{\perp}^{a}_{b}\nabla^{b}\tilde{\varepsilon}. \tag{4.93b}\] In order to evolve the system, we also need to express \(\langle p\rangle\) in terms of the resolved variables. To do so, we have to provide \(M\). As this is a scalar, we model it as (see eq. (4.85)) \[M=\chi_{1}\tilde{\theta}+\chi_{2}\dot{\tilde{n}}+\chi_{3}\dot{\tilde{\varepsilon}}\, \tag{4.94}\] and the filtered pressure then takes the form \[\langle p\rangle=-\tilde{\varepsilon}+\tilde{T}\tilde{s}+\tilde{\mu}\tilde{n}+M. \tag{4.95}\] We have now introduced a total of 10 parameters to be used in the actual large-eddy model. These parameters--potentially validated/calibrated through high-resolution simulations--can be considered as functions of the resolved energy, density etcetera. Therefore, when we focus on a small region of the fluid they can be treated as simple constants.

#### Stability Analysis

Let us turn to the issue of linear stability, as this is a necessary condition for the system of equations to be (numerically) solved. Moreover, the fact that the "effective" theory we arrive at on the resolved scale resembles that of a dissipative fluid further motivates this analysis. After all, it is well known that the standard/textbook relativistic viscous hydrodynamics equations are unstable (cf. chapter 2). The linear stability of the effective theory obviously depends on the closure used, so let us focus on the specific relations proposed above. However, the aim is not to discuss the stability of the closure model in full generality, only to provide a "proof of principle" argument.
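Before setting up the perturbation equations, it may help to see the bookkeeping of eqs. (4.92)-(4.94) spelled out concretely. The following flat-space sketch assembles the ten-parameter closure at a single point (our illustration; the index conventions, parameter names and sample values are ours):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])    # flat metric, signature (-+++)

def closure(u, grad_u, grad_n, grad_e, p):
    """Assemble s^ab, q^a and M from eqs. (4.93)-(4.94) at a point.
    u      : resolved four-velocity u^a (with u.u = -1)
    grad_u : nabla_a u^b, index order [a, b]
    grad_n, grad_e : covariant gradients nabla_a n, nabla_a eps
    p      : dict holding the ten closure parameters."""
    u_lo = g @ u                                   # u_a
    P = np.eye(4) + np.outer(u, u_lo)              # projector perp^a_b
    P_up = g + np.outer(u, u)                      # perp^ab (inverse metric = g here)
    theta = np.trace(grad_u)                       # expansion nabla_a u^a
    acc = u @ grad_u                               # a^b = u^a nabla_a u^b
    ndot, edot = u @ grad_n, u @ grad_e            # comoving time derivatives

    # shear: project nabla_a u_b, symmetrise, remove the trace (eq. (4.92c))
    G_lo = grad_u @ g                              # nabla_a u_b
    Gp = P.T @ G_lo @ P                            # fully projected gradient
    sigma_lo = 0.5 * (Gp + Gp.T) - (theta / 3.0) * (g + np.outer(u_lo, u_lo))
    sigma_up = g @ sigma_lo @ g                    # raise both indices

    s = -p['eta'] * sigma_up + (p['pi1']*theta + p['pi2']*ndot
                                + p['pi3']*edot) * P_up
    q = (p['th1'] * acc + p['th2'] * (P @ (g @ grad_n))
         + p['th3'] * (P @ (g @ grad_e)))
    M = p['chi1'] * theta + p['chi2'] * ndot + p['chi3'] * edot
    return s, q, M

pars = dict(eta=1.2, pi1=0.1, pi2=0.0, pi3=0.0,
            th1=0.5, th2=0.0, th3=0.0, chi1=0.0, chi2=0.0, chi3=0.0)
u0 = np.array([1.0, 0.0, 0.0, 0.0])
grad_u = np.zeros((4, 4)); grad_u[2, 1] = 10.0     # simple shear: nabla_y u^x
s, q, M = closure(u0, grad_u, np.zeros(4), np.zeros(4), pars)
print(np.round(s, 3), q, M)
```

With only \(\eta\) non-zero, the sketch reduces to the Smagorinsky-type shear term.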
As the averaging/filtering residuals have been expressed in terms of gradients of the evolved variables and we are considering a local region, it makes sense to assume that the background configuration--the stability of which we want to study--is that of a homogeneous fluid at rest. We will also consider, as usual, a flat background spacetime and ignore metric perturbations. This is justified--even in the general relativistic context--since the stability analysis is intended to be local, so that the effects of gravity can be transformed away (using a local inertial frame argument, in the spirit of the Fermi frame logic). Finally, we simplify the notation in order not to clutter up the equations. We drop the "tildes" used to identify the resolved variables, as we no longer need to make the distinction. Instead, we identify background quantities with a subscript (0). For instance, we write the background four-velocity as \(u_{0}^{a}\) and the chemical potential in the background configuration as \(\mu_{0}\). Let us start by expanding the perturbed fields (indicated by a \(\delta\)) in Fourier modes \[\delta u^{a}=\mathcal{B}^{a}e^{ik_{d}x^{d}}\, \tag{4.96a}\] \[\delta n=\mathcal{A}e^{ik_{d}x^{d}}\, \tag{4.96b}\] \[\delta\varepsilon=\mathcal{E}e^{ik_{d}x^{d}}\, \tag{4.96c}\] where \(\mathcal{B}^{a}\) is orthogonal to \(u_{0}^{a}\) because \(u_{0}^{a}\delta u_{a}=0\) as a result of the four-velocity normalization. We also decompose the wave-vector \(k^{a}\) as \[k^{a}=\omega u_{0}^{a}+k\hat{k}^{a}\, \tag{4.97}\] where \(\omega\) is the frequency, \(k\) is the wavenumber and \(\hat{k}^{a}\) is a unit four-vector orthogonal to \(u_{0}^{a}\) which describes each mode's direction. Because of the metric signature convention (+2), the system will be linearly stable (in time) if all solutions to the dispersion relation--written as \(\omega=\omega(k)\)--have a negative (or vanishing) imaginary part. We will also use \(\hat{k}^{a}\hat{k}^{b}\) and12 \(\delta^{ab}-\hat{k}^{a}\hat{k}^{b}\) to decompose the momentum equation as well as \(\mathcal{B}^{a}=(0,\,\mathcal{B}_{L},\,\mathcal{B}_{T1},\,\mathcal{B}_{T2})^{\top}\) into its longitudinal and transverse part (with respect to wave direction). Footnote 12: Recall the definition of orthogonal projection with respect to a time-like vector. The sign difference stems from \(\hat{k}^{a}\) being a unit space-like vector. In order to write the linearized equations in terms of the perturbed fields, we have to clarify how to perturb the pressure. As can be seen from eq. (4.95), its explicit expression depends on \(M\). Let us first focus on the non-residual contribution and come back to \(M\) later: \[\delta p=(\mathcal{C}\mathcal{A}+\mathcal{D}\mathcal{E})e^{ik_{d}x^{d}}+M\, \tag{4.98}\] where we have defined \[\mathcal{C}=\left(\frac{\partial p}{\partial n}\right)_{\varepsilon}(n_{0},\,\varepsilon_{0})\, \tag{4.99a}\] \[\mathcal{D}=\left(\frac{\partial p}{\partial\varepsilon}\right)_{n}(n_{0},\,\varepsilon_{0})\, \tag{4.99b}\] to simplify the expressions that follow. Let us start by linearizing first the non-residual part of the equations of motion. The result is \[-i\omega{\cal A}+ikn_{0}{\cal B}_{L}=0\, \tag{4.100a}\] \[-i\omega{\cal E}+ih_{0}k{\cal B}_{L}=0\, \tag{4.100b}\] \[-ih_{0}\omega{\cal B}_{L}+ik({\cal C}{\cal A}+{\cal D}{\cal E})=0\, \tag{4.100c}\] \[-ih_{0}\omega{\cal B}_{T1}=0\, \tag{4.100d}\] \[-ih_{0}\omega{\cal B}_{T2}=0\, \tag{4.100e}\] where we have introduced the usual enthalpy density, \(h_{0}=p_{0}+\varepsilon_{0}\).
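Before adding the residuals, it is easy to verify where this ideal block leads. The following sympy sketch (our addition) takes the determinant of the longitudinal part of eqs. (4.100a)-(4.100c) and recovers the sound speed of eq. (4.114):

```python
import sympy as sp

w, k, n0, h0, C, D = sp.symbols('omega k n_0 h_0 C D', positive=True)

# Ideal (non-residual) longitudinal block, eqs. (4.100a)-(4.100c),
# acting on the amplitudes (A, E, B_L):
L0 = sp.Matrix([[-sp.I*w,  0,        sp.I*n0*k],
                [0,        -sp.I*w,  sp.I*h0*k],
                [sp.I*k*C, sp.I*k*D, -sp.I*h0*w]])

disp = sp.factor(L0.det())
print(disp)                         # ~ i*w*(h0*w**2 - (C*n0 + D*h0)*k**2)
print(sp.solve(sp.Eq(disp, 0), w))
# -> w = 0 and w = ±k*sqrt((C*n0 + D*h0)/h0) = ±c_s*k: undamped sound waves
#    with c_s^2 = D + C*n0/h0, matching eq. (4.114).
```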
To work out the full linearized system of equations, let us consider each residual at a time. We start by looking at the trace-free part of \(s^{ab}\), as this would correspond to the (fibration version of the) model proposed by Smagorinsky in the Newtonian context. A straightforward calculation leads to \[\delta\sigma^{ab}=\delta\big{(}\perp^{ac}_{0}\perp^{bd}_{0}\partial_{(c}u_{d)}-\frac{1}{3}(\partial_{c}u^{c})\perp^{ab}_{0}\big{)}=ik\big{(}\hat{k}^{(a}{\cal B}^{b)}-\frac{1}{3}{\cal B}_{L}\perp^{ab}_{0}\big{)}e^{ik_{d}x^{d}}\, \tag{4.101}\] where \(\perp^{ab}_{0}=\eta^{ab}+u^{a}_{0}u^{b}_{0}\) is the projection orthogonal to the background velocity and \(\eta^{ab}\) is the Minkowski metric. As for the trace part of \(s^{ab}\) we have \[\delta\Big{[}\big{(}\pi_{1}\partial_{c}u^{c}+\pi_{2}u^{c}\partial_{c}n+\pi_{3}u^{c}\partial_{c}\varepsilon\big{)}\perp^{ab}_{0}\Big{]}=\big{(}i\pi_{1}k{\cal B}_{L}-i\pi_{2}\omega{\cal A}-i\pi_{3}\omega{\cal E}\big{)}\perp^{ab}_{0}\, \tag{4.102}\] and it is easy to see that these additional terms only affect the longitudinal projection of the momentum equation. Next we have the heat-flux \(q^{a}\). It is fairly easy to see that only two (out of five) terms will contribute to the linearized equations. These terms lead to \[\delta(\partial_{a}q^{a})=\theta_{1}\omega k{\cal B}_{L}-\theta_{2}k^{2}{\cal A}-\theta_{3}k^{2}{\cal E}\, \tag{4.103}\] which enters the energy equation, while \[\delta(\perp^{b}_{c}u^{a}\partial_{a}q^{c})=-\theta_{1}\omega^{2}{\cal B}^{b}+\theta_{2}\omega k\hat{k}^{b}{\cal A}+\theta_{3}\omega k\hat{k}^{b}{\cal E}\, \tag{4.104}\] affects the momentum equation. We note that the last two terms in the expression above affect only the longitudinal projection, while the first term modifies both the longitudinal and transverse components. Last but not least, we consider the residuals that arise from the Gibbs relation: \[M=\chi_{1}\theta+\chi_{2}\dot{n}+\chi_{3}\dot{\varepsilon}. \tag{4.105}\] It is easy to see that this residual will not affect the (linearized) energy equation, while it contributes to the longitudinal momentum equation as \[\delta\big{(}\perp^{ab}\partial_{a}M\big{)}=\big{[}-\chi_{1}k^{2}\mathcal{B}_{L}+\chi_{2}k\omega\mathcal{A}+\chi_{3}k\omega\mathcal{E}\big{]}\hat{k}^{b}. \tag{4.106}\] Collecting everything together, we can write the linearized equations13 as Footnote 13: Note that the coefficient matrix depends only on \(\omega=-u^{a}k_{a}\) and \(k^{2}=k^{a}k_{a}+\omega^{2}\), in accordance with Lorentz invariance. \[\begin{pmatrix}\mathbf{L}&\mathbf{0}\\ \mathbf{0}&\mathbf{T}\end{pmatrix}\cdot\begin{pmatrix}\mathcal{A}&\mathcal{E}&\mathcal{B}_{L}&\mathcal{B}_{T1}&\mathcal{B}_{T2}\end{pmatrix}^{\top}=0\, \tag{4.107}\] where \[\mathbf{L}=\begin{pmatrix}-i\omega&0&in_{0}k\\ -\theta_{2}k^{2}&-i\omega-\theta_{3}k^{2}&ih_{0}k+\theta_{1}k\omega\\ ik\mathcal{C}+(\zeta_{2}+\theta_{2})k\omega&i\mathcal{D}k+(\zeta_{3}+\theta_{3})k\omega&-\big{(}ih_{0}\omega-\frac{2}{3}\eta k^{2}+\zeta_{1}k^{2}+\theta_{1}\omega^{2}\big{)}\end{pmatrix}\, \tag{4.108}\] and \[\mathbf{T}=\begin{pmatrix}-\big{(}ih_{0}\omega-\frac{\eta}{2}k^{2}+\theta_{1}\omega^{2}\big{)}&0\\ 0&-\big{(}ih_{0}\omega-\frac{\eta}{2}k^{2}+\theta_{1}\omega^{2}\big{)}\end{pmatrix}\, \tag{4.109}\] and we have introduced \(\zeta_{i}=\chi_{i}+\pi_{i}\) with \(i=1,2,3\). A similar analysis is carried out, for instance, in [133]. However, the different gradient expansions (cf. eq. (2.77) and eqs. (4.93) and (4.94)) lead to slightly different results.
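The same computation can be repeated with the residuals switched on. A short sympy sketch (ours) builds the longitudinal block of eq. (4.108) and extracts the dispersion polynomial; keeping only \(\eta\) and \(\theta_{1}\) reproduces eq. (4.127) below:

```python
import sympy as sp

w, k = sp.symbols('omega k')
n0, h0, C, D = sp.symbols('n_0 h_0 C D', positive=True)
et, th1, th2, th3, z1, z2, z3 = sp.symbols(
    'eta theta_1 theta_2 theta_3 zeta_1 zeta_2 zeta_3')
I = sp.I

# Longitudinal block L of eq. (4.108), acting on (A, E, B_L)
L = sp.Matrix([
    [-I*w,                   0,                    I*n0*k],
    [-th2*k**2,              -I*w - th3*k**2,      I*h0*k + th1*k*w],
    [I*k*C + (z2+th2)*k*w,   I*D*k + (z3+th3)*k*w,
     -(I*h0*w - sp.Rational(2, 3)*et*k**2 + z1*k**2 + th1*w**2)]])

# Keep only eta and theta_1, as in the text:
disp = L.det().subs({th2: 0, th3: 0, z1: 0, z2: 0, z3: 0})
print(sp.factor(sp.expand(disp)))
# -> w*(th1*w**3 + I*h0*w**2 - (2*eta/3 + th1*D)*k**2*w - I*(C*n0 + D*h0)*k**2),
#    i.e. eq. (4.127) with h0*c_s^2 = C*n0 + D*h0.
```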
The stability analysis for the general case is perhaps best considered numerically. We also stress (again) that, because the entropy \(\tilde{s}\) does not represent the true one, we are allowed to violate the second law of thermodynamics. This will give us more freedom (with respect to faithful dissipative fluids) to control the stability of the closure.

#### Smagorinsky model

As a first step, let us consider the simple case where the only non-vanishing parameter in eqs. (4.93) and (4.94) is \(\eta\). This would correspond to the (fibration version of the) model proposed by Smagorinsky in the Newtonian context. Starting from eq. (4.107) we easily obtain the linearized equations for this case \[\begin{pmatrix}-i\omega&0&in_{0}k&0&0\\ 0&-i\omega&ih_{0}k&0&0\\ ik\mathcal{C}&i\mathcal{D}k&-ih_{0}\omega+\frac{2}{3}\eta k^{2}&0&0\\ 0&0&0&-ih_{0}\omega+\frac{\eta}{2}k^{2}&0\\ 0&0&0&0&-ih_{0}\omega+\frac{\eta}{2}k^{2}\end{pmatrix}\cdot\begin{pmatrix}\mathcal{A}\\ \mathcal{E}\\ \mathcal{B}_{L}\\ \mathcal{B}_{T1}\\ \mathcal{B}_{T2}\end{pmatrix}=0. \tag{4.110}\] The required dispersion relations are obtained by setting to zero the determinant of the coefficient matrix. Working this out, we find that the transverse modes decouple, and the corresponding dispersion relation is \[\omega=-i\frac{\eta}{2h_{0}}k^{2}. \tag{4.111}\] The other non-trivial modes are longitudinal, with dispersion relation \[h_{0}\omega^{2}+i\frac{2}{3}\eta k^{2}\omega-(h_{0}\mathcal{D}+\mathcal{C}n_{0})k^{2}=0. \tag{4.112}\] Solving this equation we obtain \[\omega=\pm c_{s}k-i\frac{\eta}{3h_{0}}k^{2}+\mathcal{O}(k^{3})\, \tag{4.113}\] where \[c_{s}^{2}=(\mathcal{D}+\mathcal{C}n_{0}/h_{0})\, \tag{4.114}\] is the usual sound speed. The longitudinal modes represent sound waves, while the transverse modes are not propagating. Both sets of modes are stable for \(\eta>0\). As a simple consistency check, it is easy to see that in the ideal limit, when \(\eta=0\), one obtains a single non-trivial solution representing an undamped sound wave. The result demonstrates that the simple Smagorinsky model is stable according to the fibration observer. This is a key conclusion for cosmological applications, as these tend to involve a cosmological time associated with a co-moving observer; in essence, a fibration. The situation is different for numerical relativity simulations, which tend to be based on a spacetime foliation. The matter description (formally) involves a fibration associated with fluid element worldlines, but the evolution is carried out in a different frame. Our stability demonstration does not (yet) cover this case. In order to complete the argument, we have to consider the stability issue in a different frame. We therefore introduce (as usual) the Eulerian observer \(N^{a}\) as \[u^{a}=W\left(N^{a}+v^{a}\right)\,\quad\text{with}\quad W=(1-v^{2})^{-1/2}\, \tag{4.115}\] and note that the two frames are related by a Lorentz boost. This turns out to cause trouble. The simple Smagorinsky model, while stable in the fibration frame, becomes unstable in the boosted frame. In order to demonstrate this result, we start by noting that (using primes to indicate boosted quantities) \[\partial^{\prime}_{a}\varepsilon^{\prime}=\Lambda^{b}_{a}\partial_{b}\varepsilon=0\, \tag{4.116a}\] \[\partial^{\prime}_{a}n^{\prime}=\Lambda^{b}_{a}\partial_{b}n=0\, \tag{4.116b}\] \[\partial^{\prime}_{a}u^{\prime}_{b}=\Lambda^{c}_{a}\Lambda^{d}_{b}\partial_{c}u_{d}=0\, \tag{4.116c}\] where \(\Lambda\) is the Lorentz boost matrix.
As we are linearizing with respect to a homogeneous (in spacetime) background this confirms that the gradient-based closure scheme we are proposing still makes sense. We have to work out the dispersion relations in a non-comoving frame, but because these are expressed in terms of \(\omega=-k^{a}u_{a}\) and \(k^{2}=k^{a}k_{a}+\omega^{2}\), we just have to boost these quantities. We can then take (without loss of generality) \(v^{a}\) to be in the \(x\)-direction, while \(\hat{k}\) lies in the \(x-y\) plane. Then we introduce the angle \(\phi\) between the wave-vector and \(v^{a}\) as \(\hat{k}^{a}v_{a}=v\,\cos\phi\) and write the Lorentz boost as: \[\omega =W(\omega^{\prime}-vk^{\prime}\cos\phi)\, \tag{4.117a}\] \[k_{x} =W(k^{\prime}\cos\phi-v\omega^{\prime})\,\] (4.117b) \[k_{y} =k^{\prime}\sin\phi=k^{\prime}_{y}\,\] (4.117c) \[k_{z} =0=k^{\prime}_{z}. \tag{4.117d}\] Applying this to the transverse dispersion relation in eq. (4.111) we obtain (dropping the primes for clarity) \[\left(\eta W^{2}v^{2}\right)\omega^{2}-2\left(ih_{0}W-\eta W^{2} vk\cos\phi\right)\omega-\\ -\left(\eta W^{2}k^{2}\cos^{2}\phi+\eta k^{2}\sin^{2}\phi-2ih_{0} \cos\phi Wvk\right)=0. \tag{4.118}\] We note that eq. (4.111) was a first order polynomial, while the boost made it second order, thus generating an additional solution. For long wavelengths, the two solutions are \[\omega =vk\cos\phi-i\frac{\eta}{2h_{0}W^{3}}\left(\cos^{2}\phi+W^{-2} \sin^{2}\phi\right)k^{2}+\mathcal{O}(k^{3}) \tag{4.119a}\] \[\omega =i\frac{2h_{0}}{\eta Wv^{2}}+\mathcal{O}(k). \tag{4.119b}\] The first solution is the boosted version of the mode we obtained in the fibration. It is stable for \(\eta>0\) (as in the comoving frame), propagating with phase velocity \(v\cos\phi\) and the decay rate reduces to the original value as \(v\to 0\), \(W\to 1\). There is, however, an additional solution which is non-vanishing for \(k=0\) (in [133, 117] these are referred to as "gapped" modes). This second mode is evidently unstable for \(\eta>0\). This result demonstrates that the simple Smagorinsky model is unstable when "observed" from a non-comoving frame. This is a well-known problem of Eckart-Landau models for dissipative fluids (cf. chapter 2). As we are not dealing with a dissipative model that describes linear deviation from a thermodynamical equilibrium state, this is not intrinsically problematic. For real dissipative systems, stability of equilibrium is not only required for numerical implementations, but also guided by intrinsic consistency. A system slightly out of equilibrium must evolve, "by definition," toward thermodynamical equilibrium, no matter if the fluid in equilibrium is at rest or not. Our case is different. However, because we are setting up the filtering scheme in the fibration--in order to retain consistency with the covariance of General Relativity, as discussed in section 4.4.1--while the simulations will be carried out in the foliation, we have to ensure that the model is "covariantly" stable. The LES model of [176] gets away with a simple Smagorinsky closure because it is directly implemented in the foliation frame, where the simulation is then performed. #### Fixing the Smagorinsky instability The aim now is to show how we can fix the instability problem (in the boosted frame) by introducing more parameters in the closure. Focusing first on the transverse modes, we see from eq. (4.109) that the only way to fix the problem is by considering a non-zero \(\theta_{1}\). 
The co-moving transverse dispersion relation then becomes \[2ih_{0}\omega-\eta k^{2}+\theta_{1}\omega^{2}=0\, \tag{4.120}\] with solutions for long wavelengths (i.e. small \(k\)): \[\omega_{+}=-i\frac{\eta}{2h_{0}}k^{2}+\mathcal{O}(k^{4})\, \tag{4.121a}\] \[\omega_{-}=-2i\frac{h_{0}}{\theta_{1}}+\mathcal{O}(k^{2}). \tag{4.121b}\] Because the dispersion relation is now quadratic we obtain two solutions: the "un-gapped" mode from the Smagorinsky model, and an additional gapped mode that appears already in the co-moving frame. The (long wavelength) stability in the unboosted frame is guaranteed by taking \(\eta>0\) (as before) alongside \(\theta_{1}>0\). We can further check the stability in the co-moving frame at all wavelengths by means of the Routh-Hurwitz criterion (cf. appendix E and [131]). In order to do so we introduce \(\Delta=-i\omega\) (to deal with real algebraic equations) and rewrite the dispersion relation as \[\theta_{1}\Delta^{2}+2h_{0}\Delta+\eta k^{2}=0. \tag{4.122}\] Stability requires the solutions to have negative real part \(\text{Re}\Delta<0\). The Routh-Hurwitz criterion then guarantees the stability (at all wavelengths) as long as \(\theta_{1}>0\) and \(\eta>0\). These conditions are identical to the ones obtained at long wavelengths. In order to check the stability in the boosted frame, we boost the transverse modes dispersion relation (as before) to get \[\left(\theta_{1}-\eta v^{2}\right)W^{2}\omega^{2}+2\left(ih_{0}W+(\eta+\theta_{1})W^{2}vk\cos\phi\right)\omega-\\ -\left(\eta W^{2}k^{2}\cos^{2}\phi+\eta k^{2}\sin^{2}\phi+2ih_{0}Wvk-\theta_{1}W^{2}v^{2}k^{2}\cos^{2}\phi\right)=0. \tag{4.123}\] To work out the long wavelength stability conditions, we may solve this equation perturbatively. This means that we introduce \[\omega=\omega_{0}+\omega_{1}k+\omega_{2}k^{2}+\omega_{3}k^{3}\, \tag{4.124a}\] \[\omega^{2}=\omega_{0}^{2}+2\omega_{0}\omega_{1}k+(\omega_{1}^{2}+2\omega_{0}\omega_{2})k^{2}+(2\omega_{1}\omega_{2}+2\omega_{0}\omega_{3})k^{3}\, \tag{4.124b}\] \[\omega^{3}=\omega_{0}^{3}+3\omega_{0}^{2}\omega_{1}k+(3\omega_{0}\omega_{1}^{2}+3\omega_{0}^{2}\omega_{2})k^{2}+(\omega_{1}^{3}+6\omega_{0}\omega_{1}\omega_{2}+3\omega_{0}^{2}\omega_{3})k^{3}\, \tag{4.124c}\] and solve order by order. Solving eq. (4.123) to lowest order we find two solutions: the first is the un-gapped mode (\(\omega_{0}=0\)) while the second is given by \[\omega_{0}=-i\frac{2h_{0}W^{-1}}{\theta_{1}-\eta v^{2}}. \tag{4.125}\] We may focus on the un-gapped mode as we already have the imaginary part (to lowest order) for the gapped mode. Working to first order we obtain the phase velocity, while at second order we get the damping rate. Collecting the results, the small \(k\) solutions to the boosted dispersion relation can be written \[\omega=-i\frac{2h_{0}W^{-1}}{\theta_{1}-\eta v^{2}}+\mathcal{O}(k)\, \tag{4.126a}\] \[\omega=vk\cos\phi-i\frac{\eta}{2h_{0}W^{3}}\left(\cos^{2}\phi+W^{-2}\sin^{2}\phi\right)k^{2}+\mathcal{O}(k^{3}). \tag{4.126b}\] We see that stability in the boosted frame requires \(\eta>0\) and \(\theta_{1}>\eta v^{2}\). To the best of our knowledge, stability at all wavelengths cannot be studied analytically in the boosted case. This is because the Routh-Hurwitz criterion applies to real polynomials only. The stability is then perhaps best studied numerically, once a specific equation of state model has been chosen. However, our demonstration shows that the LES model passes the key stability tests.
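To back this up, one can check the transverse modes numerically by boosting the invariants directly, as described around eq. (4.117). The sketch below (our addition; the sample values \(h_{0}=1\), \(v=0.4\), \(\eta=1.2\), \(\theta_{1}=0.5\) mirror those used for fig. 4.2) confirms the instability of the pure Smagorinsky closure and its cure for \(\theta_{1}>\eta v^{2}\):

```python
import numpy as np
import sympy as sp

# Boosted transverse modes: write the co-moving relation (4.120) in terms of
# the invariants w_co = -k_a u^a and k_co^2 = k_a k^a + w_co^2, then solve
# for the frame frequency at fixed wavenumber. Values are illustrative only.
h0, v, phi = 1.0, 0.4, 0.3
W = 1.0 / np.sqrt(1.0 - v**2)
wp, kp = sp.symbols('omega k')

def boosted_transverse_roots(eta_s, th1, k_val):
    w_co = W * (wp - v * kp * sp.cos(phi))        # eq. (4.117a)
    k2_co = kp**2 - wp**2 + w_co**2               # from k_a k^a = k'^2 - w'^2
    disp = 2*sp.I*h0*w_co - eta_s*k2_co + th1*w_co**2   # eq. (4.120)
    poly = sp.Poly(sp.expand(disp.subs(kp, k_val)), wp)
    return np.roots([complex(sp.N(c)) for c in poly.all_coeffs()])

for th1 in (0.0, 0.5):
    roots = boosted_transverse_roots(eta_s=1.2, th1=th1, k_val=0.1)
    print(f"theta_1 = {th1}: Im(omega) =", np.round(np.imag(roots), 4))
# theta_1 = 0   -> a gapped mode with Im(omega) > 0: unstable, cf. eq. (4.119b);
# theta_1 = 0.5 -> theta_1 > eta*v^2 = 0.192 and both modes decay.
```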
Next, we turn our attention to the longitudinal modes, assuming again that the only non-vanishing parameters in eq. (4.108) are \(\eta\) and \(\theta_{1}\). The (comoving) longitudinal modes dispersion relation is then \[\omega\left[\theta_{1}\omega^{3}+ih_{0}\omega^{2}-\left(\frac{2\eta}{3}+\theta_{1}\mathcal{D}\right)k^{2}\omega-ih_{0}c_{s}^{2}k^{2}\right]=0. \tag{4.127}\] As before, we can work out the non-trivial longitudinal modes using eq. (4.124). Again, working to lowest order we find that one mode is "gapped" and two are not. These long wavelength modes are \[\omega=-i\frac{h_{0}}{\theta_{1}}+\mathcal{O}(k^{2})\, \tag{4.128a}\] \[\omega=\pm c_{s}k-\frac{i}{3h_{0}}\left(\eta-\frac{3}{2}(c_{s}^{2}-\mathcal{D})\theta_{1}\right)k^{2}+\mathcal{O}(k^{3}). \tag{4.128b}\] In order to make sure the longitudinal modes are also stable we have to take \(\theta_{1}>0\), \(\eta>0\) and \[\frac{3}{2}(c_{s}^{2}-\mathcal{D})<\frac{\eta}{\theta_{1}}<\frac{1}{v^{2}}. \tag{4.129}\] We can then study the stability condition at all wavelengths using the Routh-Hurwitz criterion. To do so, we again introduce \(\Delta=-i\omega\) in eq. (4.127) to make it a real algebraic equation: \[\theta_{1}\Delta^{3}+h_{0}\Delta^{2}+A\Delta k^{2}+h_{0}c_{s}^{2}k^{2}=0\, \tag{4.130}\] where \(A=\frac{2}{3}\eta+\theta_{1}\mathcal{D}\). From this it is easy to see that the Routh-Hurwitz criterion guarantees stability for eq. (4.129). Again, the general \(k\) case does not change the stability requirements. As in the case of transverse modes, the story does not end here. We still have to establish the stability in the boosted frame. To do so, we "boost" eq. (4.127) using eq. (4.117) to obtain \[a\,\omega^{3}+b\,\omega^{2}+c\,\omega+d=0\, \tag{4.131}\] where \[a=W^{3}\left(\theta_{1}-A\,v^{2}\right)\, \tag{4.132a}\] \[b=-W^{2}\left[(3\theta_{1}-2A)Wvk\cos\phi-ih_{0}-WAv^{3}k\cos\phi+ih_{0}c_{s}^{2}v^{2}\right]\, \tag{4.132b}\] \[c=W\Big{[}(3\theta_{1}-2A)W^{2}v^{2}k^{2}\cos^{2}\phi-Ak^{2}\left(1+W^{2}v^{2}\cos^{2}\phi\right)-2ih_{0}Wvk(1-c_{s}^{2})\cos\phi\Big{]}\, \tag{4.132c}\] \[d=-\left[\theta_{1}v^{3}W^{3}k^{3}\cos^{3}\phi-WAvk^{3}\cos^{3}\phi-W^{3}v^{3}Ak^{3}\cos^{3}\phi-ih_{0}k^{2}\left(W^{2}(v^{2}-c_{s}^{2})\cos^{2}\phi-c_{s}^{2}(1-\cos^{2}\phi)\right)\right]\. \tag{4.132d}\] Again we solve the problem using the expansion in eq. (4.124). At lowest order we find two "un-gapped" modes and one "gapped" solution. The latter is given by \[\omega=-i\frac{h_{0}(1-c_{s}^{2}v^{2})}{(\theta_{1}-Av^{2})W}+{\cal O}(k)\, \tag{4.133}\] which is stable for \[\eta<\frac{3}{2}\theta_{1}\frac{1-{\cal D}v^{2}}{v^{2}}. \tag{4.134}\] As in the case of the "un-gapped" modes, working to \({\cal O}(k^{2})\) (as the \({\cal O}(k)\) problem is trivial) we obtain the boosted sound speed \(\omega_{1}={\cal C}_{s}\). This is found by solving \[W^{2}(1-v^{2}c_{s}^{2}){\cal C}_{s}^{2}-2W^{2}v\cos\phi(1-c_{s}^{2}){\cal C}_{s}+W^{2}(v^{2}-c_{s}^{2})\cos^{2}\phi-c_{s}^{2}\sin^{2}\phi=0. \tag{4.135}\] To see that this result actually makes sense, we provide the solution for the two cases where \(v^{a}\) is parallel/orthogonal to \(\hat{k}^{a}\) (respectively) \[\mathcal{C}_{s}=\frac{v\pm c_{s}}{1\pm v\,c_{s}}\, \tag{4.136a}\] \[\mathcal{C}_{s}=\pm\frac{\sqrt{1-v^{2}}}{\sqrt{1-v^{2}c_{s}^{2}}}c_{s}. \tag{4.136b}\] The solution for different values of \(\phi\) is best understood by considering specific examples, see fig.
4.2, noting that it only depends on the thermodynamic speed of sound \(c_{s}\) and the relative velocity \(v\). In order to work out the longitudinal mode damping we work at \(\mathcal{O}(k^{3})\). This leads to a purely imaginary \(\omega_{2}=\Gamma(\phi)\). The solution involves the boosted sound speed \(\mathcal{C}_{s}\), and it is not particularly illuminating, so it is also best understood by specific examples, see fig. 4.2. As we see from the illustrations in fig. 4.2, there are regions of the \((\eta,\theta_{1})\) parameter space where all modes are stable. As we are only aiming at a proof of principle (not a comprehensive stability analysis) this concludes the argument. The stability conditions depend on the equation of state (which enters through \(c_{s},\,\mathcal{D},\,\mathcal{C}\)) and the relative velocity \(v\). Therefore, an exhaustive study of the stability in the general case is best done once a specific equation of state has been chosen, following the logic outlined here.

Figure 4.2: Left: shaded region resulting from combining the stability constraints obtained for the i) transverse modes (gapped, un-gapped, boosted, un-boosted), ii) longitudinal un-boosted modes (gapped and un-gapped), and iii) longitudinal boosted gapped modes. Right: sound speeds (solid) and damping rates (dashed) for the un-gapped longitudinal modes in the boosted frame. Colours match sound speeds with the corresponding damping rates. For illustrative purposes we used: \(c_{s}=0.16\), \(v=0.4\), \(\eta=1.2\), \(\theta_{1}=0.5\) and a barotropic equation of state (both panels).

### Summary

Averaging and filtering are the standard strategies for dealing with the problem of simulating (computationally demanding) turbulent flows. Both approaches are complicated by the covariance of General Relativity, where the split between space and time is an observer-dependent notion. As the problem is beginning to be considered from the point of view of numerical relativity (relevant examples are [94, 176, 177, 4, 220, 72]), it is important to understand the underpinning theory. Hence, we decided to start from the beginning and considered how the different strategies should be implemented in the curved spacetime setting relevant for, say, binary neutron-star mergers. After clarifying the sense in which consistency with the principles of General Relativity poses interesting foundational questions, we argued that it is natural to set up the analysis in the "fibration" associated with individual fluid elements. This then allowed us to introduce a meaningful local analysis via the use of Fermi coordinates ([78, 79]), which defines the covariant averaging/filtering procedure. Building on this, we worked out the coarse-grained fluid dynamics, and considered the impact averaging/filtering has on the (thermodynamical) interpretation of the resolved variables. Finally, because smoothing the fluid dynamics inevitably introduces a closure issue, we proposed a closure scheme and discussed its linear stability. This completed the formal development of the fibration-based model. In order for this work to have practical relevance, however, we need to make contact with actual simulations. Whilst this goes beyond the scope of this thesis, we will go back to (briefly) comment on this at the end of chapter 5.

## Chapter 5 Filtering relativistic magneto-hydrodynamics

In this chapter we take the first steps towards extending the covariant filtering scheme discussed in chapter 4 to charged multi-fluids.
The relevance of this effort is clear in the context of neutron star astrophysics, as the discussion in the previous chapter left out aspects that are important for (if not crucial to) neutron star modelling--from electromagnetism to superfluidity/superconductivity. Keeping in mind the bigger picture and final aim, it is natural to start by focusing on magneto-hydrodynamics (MHD), which can be derived as the single-fluid limit of an underlying two-fluid plasma model. We will start in section 5.1 with a brief introduction to magneto-hydrodynamics turbulence, mainly focusing on the differences with hydrodynamics. The discussion is based on the recent reviews by Schekochihin [192] and Beresnyak [44], which we refer to for more details (and references to the relevant original papers). We then introduce the relativistic magneto-hydrodynamics equations in section 5.2. As anticipated, these can be derived starting from a two-fluid plasma model, but we here take a shortcut, namely we start from the Euler-Maxwell system instead. We conclude this chapter (and this part) by discussing in section 5.3 the first steps towards a covariant filtering scheme for magneto-hydrodynamics based, as the approach discussed in chapter 4, on the fibration associated with individual fluid elements.

### 5.1 A brief introduction to magneto-hydrodynamic turbulence

In a similar fashion as for hydrodynamic turbulence--discussed in section 4.1--useful quantities in the characterization of MHD turbulence are the magnetic Reynolds number \(\mathrm{Re}_{m}\) and the Lundquist number \(S\). These are defined as \[\mathrm{Re}_{m}=\frac{\rho VL}{\beta}\,\quad S=\frac{LB}{\sqrt{\mu_{0}\rho}\beta}=\frac{Lv_{A}}{\beta}\, \tag{5.1}\] where \(\beta\) is the magnetic diffusivity (or resistivity, proportional to the inverse of the conductivity) and \(v_{A}\) is the Alfven velocity, whilst \(\rho,\,V,\,L\) are characteristic values for the density, velocity and lengthscale of the flow (as in section 4.1). The magnetic Reynolds number measures the importance of fluid convection over resistive diffusion, while the Lundquist number has a similar interpretation, but compares the Alfven wave crossing time to the timescale of resistive diffusion. When these numbers1 (together with the Reynolds number defined in eq. (4.1)) are very large, we expect the MHD flow to be turbulent. Footnote 1: One can also define the magnetic Prandtl number \(\mathrm{Pm}=\mathrm{Re}_{m}/\mathrm{Re}=\eta/\beta\), quantifying the importance of fluid viscosity over magnetic diffusivity. MHD turbulence, however, is expected to be substantially different from hydrodynamics. This is intuitive, as we can eliminate a mean flow by a proper choice of frame, while a mean magnetic field cannot be removed by such a transformation. In essence, we may say that the magnetic field is the only large-scale feature that does not "go away" at small scales. This simple fact is what makes MHD turbulence _a priori_ different from hydrodynamics: the large-scale mean magnetic field makes the system anisotropic. As a result, we should expect MHD turbulence to be "more complicated", as it would not be possible to, for example, justify scaling laws of the form in eq. (4.2)--which assume (statistical) homogeneity and isotropy. Dynamically, we expect transport along the magnetic field lines (on a scale \(l_{//}\)) to be associated with Alfven waves, while transport across the field lines (on a scale \(\lambda\)) is expected to be associated with non-linear interactions.
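To get a feel for the magnitudes involved in eq. (5.1), here is a back-of-the-envelope sketch (round-number placeholder values, ours, not astrophysical data):

```python
# Illustrative magnitudes for the dimensionless numbers of eq. (5.1).
rho, V, L = 1.0, 0.1, 1.0      # characteristic density, velocity, lengthscale
eta_v = 1e-6                   # shear viscosity (as in eq. (4.1))
beta = 1e-8                    # magnetic diffusivity (resistivity)
v_A = 0.05                     # Alfven velocity

Re = rho * V * L / eta_v       # kinetic Reynolds number
Re_m = rho * V * L / beta      # magnetic Reynolds number
S = L * v_A / beta             # Lundquist number
Pm = Re_m / Re                 # magnetic Prandtl number (see footnote 1)

print(f"Re = {Re:.1e}, Re_m = {Re_m:.1e}, S = {S:.1e}, Pm = {Pm:.1e}")
# All large -> we expect the MHD flow to be turbulent.
```

With these orders of magnitude in mind, we can return to the anisotropy induced by transport along versus across the field lines.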
MHD turbulence, however, is expected to be substantially different from hydrodynamics. This is intuitive, as we can eliminate a mean flow by a proper choice of frame, while a mean magnetic field cannot be removed by such a transformation. In essence, we may say that the magnetic field is the only large-scale feature that does not "go away" at small scales. This simple fact is what makes MHD turbulence _a priori_ different from hydrodynamics: the large-scale mean magnetic field makes the system anisotropic. As a consequence, we should expect MHD turbulence to be "more complicated", as it would not be possible to, for example, justify scaling laws of the form in eq. (4.2)--which assume (statistical) homogeneity and isotropy. Dynamically, we expect transport along the magnetic field lines (on a scale \(l_{//}\)) to be associated with Alfven waves, while transport across the field lines (on a scale \(\lambda\)) is associated with non-linear interactions.

This leads us to a key feature of MHD turbulence, the so-called _critical balance_. Originally conjectured by Goldreich and Sridhar [96, 97], critical balance boils down to postulating a balance between parallel and perpendicular transport in the inertial range. To see where this brings us, we introduce the Alfven time and the non-linear time \[\tau_{A}=\frac{l_{//}}{v_{A}}\,\qquad\tau_{nl}=\frac{\lambda}{\delta u_{\lambda}}\, \tag{5.2}\] where \(\delta u_{\lambda}\) is the typical velocity increment between points separated by \(\lambda\), and assume the two timescales are of the same order \(\tau_{A}\sim\tau_{nl}\sim\tau_{c}\)--where \(\tau_{c}\) is the cascade time, that is the typical time it takes to transfer energy from one (perpendicular) scale to the next. Considering perpendicular transport first, and noting that on dimensional grounds we expect the energy spectrum \(E\sim\delta u_{\lambda}^{2}\lambda\), we have \[\varepsilon\sim\frac{\delta u_{\lambda}^{2}}{\tau_{c}}\sim\frac{\delta u_{\lambda}^{2}}{\tau_{nl}}\Longrightarrow\delta u_{\lambda}\sim(\varepsilon\lambda)^{1/3}\Longrightarrow E(k_{\perp})\sim\varepsilon^{2/3}k_{\perp}^{-5/3}\, \tag{5.3}\] that is, the Kolmogorov spectrum. If we focus instead on transport along the magnetic field, a similar logic leads to \[\varepsilon\sim\frac{\delta u_{l_{//}}^{2}}{\tau_{c}}\sim\frac{\delta u_{l_{//}}^{2}}{\tau_{A}}\Longrightarrow\delta u_{l_{//}}\sim(\varepsilon\tau_{A})^{1/2}\Longrightarrow E(k_{//})\sim\frac{\varepsilon}{v_{A}}k_{//}^{-2}\;. \tag{5.4}\] In essence, we find that critical balance leads to an anisotropic spectrum, with a Kolmogorov-type spectrum in the perpendicular directions.
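To make the critical balance scalings concrete, the sketch below evaluates the velocity increment of eq. (5.3) and the parallel coherence length implied by \(\tau_{A}\sim\tau_{nl}\) for made-up values of the energy flux and Alfven velocity, exhibiting the anisotropy \(l_{//}\propto\lambda^{2/3}\).

```python
import numpy as np

# Made-up, dimensionless reference values for this sketch.
eps_flux = 1.0   # energy flux per unit mass through the cascade
v_A      = 2.0   # Alfven velocity

lam = np.logspace(-3, 0, 7)              # perpendicular scales in the inertial range

du     = (eps_flux * lam) ** (1.0 / 3.0) # velocity increment, eq. (5.3)
tau_nl = lam / du                        # non-linear time, eq. (5.2)
l_par  = v_A * tau_nl                    # critical balance: tau_A ~ tau_nl => l_// ~ v_A tau_nl

for l, lp in zip(lam, l_par):
    print(f"lambda = {l:8.4f}   l_// = {lp:8.4f}   l_// / lambda^(2/3) = {lp / l**(2/3):.4f}")

# The last column is constant: l_// ~ v_A eps^(-1/3) lambda^(2/3), i.e. turbulent
# 'eddies' become progressively more elongated along the magnetic field at
# smaller perpendicular scales -- the anisotropy discussed in the text.
```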
Critical balance can be justified as follows. On the one hand we have the observation (based on theoretical arguments and numerical evidence) that weak MHD turbulence--where perturbation amplitudes are small enough that non-linear interactions are negligible (\(\tau_{nl}\gg\tau_{A}\)) and the perpendicular spectrum presents the same scaling as the parallel one--leads naturally to the strong turbulence regime where \(\tau_{nl}\sim\tau_{A}\). On the other hand, the opposite regime with \(\tau_{A}\gg\tau_{nl}\) is unsustainable: information in MHD propagates predominantly along the magnetic field lines with velocity \(v_{A}\) (the Alfven waves), and hence no structure with \(l_{//}\) larger than \(v_{A}\tau_{nl}\) can be kept coherent and will break up. In fact, critical balance appears to be a robust property of MHD turbulence.

Whilst the parallel spectrum, with scaling \(k_{//}^{-2}\), appears to be robust, there is currently no definitive consensus in the community with regards to the perpendicular spectrum. Initially, in fact, solar wind observations favoured the \(-5/3\) Kolmogorov scaling, but then sets of (direct) numerical simulations started to suggest a different scaling \(E(k_{\perp})\sim k_{\perp}^{-3/2}\)--although issues with such results have been raised and some authors claim to see a better convergence with the Kolmogorov scaling (e.g. Beresnyak [43]). This leads us to another important feature of MHD turbulence, _dynamic alignment_, according to which the velocity and magnetic field tend to shear each other into alignment (in the plane perpendicular to the magnetic field) and hence form sheet-like structures as we approach smaller scales.

Phenomenological models based on dynamic alignment (and critical balance) can, in fact, reproduce both the parallel spectrum we found above and the \(-3/2\) scaling in the perpendicular spectrum. The key point, however, is that such sheet-like configurations are not sustainable asymptotically at ever smaller scales due to different known instabilities--including, for example, a magnetized version of the Kelvin-Helmholtz instability (cf. chapter 7)--and other non-ideal processes like magnetic reconnection, thus eventually leading to the break up of these sheets into islands. When enough of these islands form the flow becomes isotropic, and the cascade starts up again in Kolmogorov form. This isotropic-to-sheet-to-islands transition is expected to repeat, giving a periodically interrupted, or intermittent, turbulent cascade (cf. fig. 5.1). In essence, it appears that a complete theory of MHD turbulence should contain (to some degree) a theory of reconnection, thus making the story even more complicated2. As for what matters for the present discussion, it is fair to say that sub-grid models of MHD turbulence constitute a very challenging problem because of the local anisotropy and complicated dissipative processes like reconnection [193].

Footnote 2: Not to mention the fact that we have here discussed the so-called balanced MHD turbulence regime--where the averaged/total cross helicity \(\mathbf{v}\cdot\mathbf{B}\), where \(\mathbf{v}\) is the velocity and \(\mathbf{B}\) is the magnetic field, vanishes--and did not touch upon the link between turbulence and dynamo processes leading to the amplification of the magnetic field [47, 187].

Figure 5.1: Cartoon of the perpendicular MHD turbulence spectrum. At scales larger than critical balance (i.e. \(k<k_{CB}\)) the spectrum shows the same scaling as in weak turbulence (i.e. \(\propto k_{\perp}^{-2}\)). At scales smaller than critical balance the aligned cascade (i.e. \(\propto k_{\perp}^{-3/2}\)) is shown, periodically interrupted at \(k_{1},k_{2}\) and so on, while \(k_{D}\) represents the scale at which dissipative/resistive effects begin to prevail over inertial ones. Figure adapted from Schekochihin [192].

### 5.2 Magneto-hydrodynamics in the fibration: a shortcut

We argued that, from a theory perspective, it would be natural to start the discussion at the level of a charged multi-fluid model, and derive the magneto-hydrodynamics equations as the single-fluid limit of a two-fluid plasma model. The underlying multi-fluid nature is, in fact, also important for the development of consistent models beyond ideal. This motivates several discussions in the literature, both connecting to the variational (dissipative) multi-fluid framework discussed in section 2.3 (see [16, 23, 24]) as well as linking to "3+1 formulations" geared towards numerical implementations (see [25, 26]). Our aim here is, however, less ambitious. We aim to provide a streamlined derivation of the magneto-hydrodynamics equations to set the stage for discussing extensions of the covariant filtering scheme developed in chapter 4. With this aim in mind, we will start from the Euler-Maxwell system as a proxy for magneto-hydrodynamics. We will mainly focus on the electromagnetic degrees of freedom, while the fluid equations are obtained from the stress-energy-momentum conservation law (as before).
The total stress-energy-momentum tensor is written as the sum of a perfect fluid part \[T^{ab}_{\text{p.f.}}=\varepsilon u^{a}u^{b}+p(g^{ab}+u^{a}u^{b})\, \tag{5.5}\] where \(u^{a}\) is the fluid four-velocity, \(p\) is the pressure and \(\varepsilon\) is the energy density (not to be confused with the energy flux from scale to scale in the previous section), augmented with the Maxwell stress-energy-momentum tensor \[T^{ab}_{EM}=\frac{1}{\mu_{0}}\left[F^{ad}F^{b}_{\ d}-\frac{1}{4}g^{ab}F^{cd}F_{cd}\right]\, \tag{5.6}\] where \(F^{ab}\) is the Faraday tensor. The fluid equations are then obtained from3

Footnote 3: The last step here involves using the Maxwell equations, eq. (5.9).

\[\nabla_{a}T^{ab}_{\text{p.f.}}=-\nabla_{a}T^{ab}_{EM}=-j_{a}F^{ab}\, \tag{5.7}\] where the term on the right-hand side is the Lorentz four-force and \(j^{a}\) is the charge four-current. We consider the pressure to be a function of two thermodynamic variables--recall the discussion in section 4.7 to see why barotropic models are of lesser interest for the present discussion--which we conveniently take as the energy \(\varepsilon\) and the baryon density \(n\). As such, we also need an equation for \(n\), given by the continuity equation \[u^{a}\nabla_{a}n+n\nabla_{a}u^{a}=0. \tag{5.8}\]

Before we move on to focus on the electromagnetic degrees of freedom, let us stress that there is a formal inconsistency in the fluid equations we wrote down. Deriving the equations of magneto-hydrodynamics as the single-fluid limit of a two-fluid plasma model, one can show that a perfect fluid form for the matter stress-energy-momentum tensor holds in the "centre of momentum" frame--the analogue of the Landau frame in this context--but this would lead to a continuity equation for the baryon current with drift terms4. As discussed in detail in [26], working with eqs. (5.7) and (5.8) involves additional steps such as assuming that the electron mass is much smaller than the baryon mass. This assumption may or may not apply depending on the physical system under consideration. It is certainly well motivated for non-relativistic systems, but less obvious for a neutron star core model where the electron effective mass may be up to ten per cent of the baryon rest mass [66]. Let us further note that an evolution equation (or a constraint) for the charge four-current is missing from the fluid equations obtained--although the charge current is intuitively associated with a drift velocity between the two species constituting the plasma. Simply noting these reservations here, we will work with eqs. (5.7) and (5.8) as the fluid equations, and move on to focus on the electromagnetic degrees of freedom.

Footnote 4: To get rid of drift terms we would have to work in the (analogue of) Eckart frame [16, 24], but this in turn would lead to momentum flux terms in the fluid stress-energy-momentum tensor.
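As a simple sanity check on eqs. (5.5) and (5.6), the following sketch (flat spacetime, illustrative made-up field values) assembles both contributions with numpy and verifies the four-velocity normalization together with the tracelessness of the Maxwell stress-energy-momentum tensor.

```python
import numpy as np

g  = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric (geometric units, c = 1)
gi = np.linalg.inv(g)
mu0 = 1.0

# Fluid state (illustrative numbers)
eps_e, p = 1.0, 0.3                      # energy density and pressure
v = np.array([0.2, 0.1, 0.0])            # three-velocity
W = 1.0 / np.sqrt(1.0 - v @ v)           # Lorentz factor
u = W * np.array([1.0, *v])              # four-velocity u^a

# Faraday tensor F_{ab} from lab-frame fields: F_{i0} = E_i, F_{ij} = eps_{ijk} B^k
E = np.array([0.0, 0.1, 0.0])
B = np.array([0.0, 0.0, 0.5])
F = np.zeros((4, 4))
F[1:, 0], F[0, 1:] = E, -E
F[1, 2], F[2, 1] = B[2], -B[2]
F[1, 3], F[3, 1] = -B[1], B[1]
F[2, 3], F[3, 2] = B[0], -B[0]
Fu = gi @ F @ gi                         # raise both indices: F^{ab}

T_pf = eps_e * np.outer(u, u) + p * (gi + np.outer(u, u))            # eq. (5.5)
T_em = (np.einsum('ad,bc,cd->ab', Fu, Fu, g)
        - 0.25 * gi * np.einsum('cd,cd->', Fu, F)) / mu0             # eq. (5.6)

assert np.isclose(np.einsum('a,b,ab->', u, u, g), -1.0)   # u_a u^a = -1
assert np.isclose(np.einsum('ab,ab->', g, T_em), 0.0)     # g_{ab} T^{ab}_EM = 0
print("trace of T_EM:", np.einsum('ab,ab->', g, T_em))
```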
Assuming we want to work with the electric and magnetic fields, and because the filtering scheme of chapter 4 is tied to the fluid, it is natural to write down the Maxwell equations in the "fluid frame" [21]--leading to a (fibration) formulation commonly used in cosmology [35]. We start from the Maxwell equations in covariant form5

Footnote 5: Here we are working using a "mixture" of SI units and geometric units. In the SI system, the electric constant (or vacuum permittivity) is given by \(\varepsilon_{0}=1/\mu_{0}c^{2}\)--where \(c\) is the speed of light and \(\mu_{0}\) is the magnetic constant (or vacuum permeability)--and the electric and magnetic field have different units. This would appear to be in contrast with eq. (5.10) as \(e^{a}\) and \(b^{a}\) seem to have the same dimension. However, this is due to the fact that we are also using (at the same time) geometric units, where the speed of light \(c=1\) and suppressed, so that \(\varepsilon_{0}=1/\mu_{0}\). This detail is however irrelevant for the present discussion.

\[\nabla_{a}F^{ba}=\mu_{0}j^{b}\,\qquad\nabla_{[a}F_{bc]}=0\, \tag{5.9}\] and introduce a four-velocity \(U^{a}\) associated with a generic observer. We decompose the Faraday tensor and the charge current as \[F_{ab}=2U_{[a}e_{b]}+\varepsilon_{abc}b^{c}\,\qquad j^{a}=\sigma U^{a}+J^{a}\, \tag{5.10}\] where the electric and magnetic field (as measured by \(U^{a}\)) are defined as \[e_{a}=F_{ab}U^{b}\,\quad b_{a}=\frac{1}{2}\varepsilon_{abc}F^{bc}\,\quad\text{and}\quad\varepsilon_{abc}=\varepsilon_{dabc}U^{d}. \tag{5.11}\]

With these definitions we can work out the parallel and orthogonal projections--with respect to \(U^{a}\)--of eq. (5.9) and rewrite the Maxwell equations in the fibration form of eqs. (5.12a)-(5.12d) [74, 16, 21], where dots--which stand for co-moving time derivatives--and the projection operator refer to the observer6 \(U^{a}\). In eq. (5.12), the terms on the left-hand side should be familiar, while those on the right-hand side are associated with gradients of the observer four-velocity. As such, they vanish identically for an inertial observer, and hence do not appear in most textbook discussions. We also note that the system of eqs. (5.7), (5.8) and (5.12) is not closed. We need an additional equation linking the charge four-current to the other quantities. This can be derived starting from a two-fluid model [16, 24, 26]. For ideal models, however, this additional equation is often given by a phenomenological Ohm's law based on the standard argument that in a perfect conductor--where charges easily flow--one would expect the electric field to "short out" as the matter becomes locally charge neutral [38]. Although this is sufficient for getting to a workable (i.e. closed) set of equations [93, 65], we here want to tread a bit more carefully. We do so for two reasons: First, our discussion will allow us to better appreciate the assumptions involved as it connects directly to the Newtonian argument for dropping the displacement current. Second, we will derive the induction equation according to a general (i.e. non-inertial) observer, which will be used later in chapter 7.

Footnote 6: The same is true for the observer four-velocity gradients, and we also note for clarity that \(W^{a}\) is the vorticity vector, defined as \(W^{a}=\frac{1}{2}\varepsilon^{abc}\nabla_{b}U_{c}\).

While both the electric and magnetic fields are dynamical degrees of freedom at the level of the Maxwell equations, in textbook magneto-hydrodynamics the electric field is demoted to a lesser role and the magnetic field evolution is described by the induction equation. The Newtonian argument (see, for example, [39]) for deriving the induction equation from Maxwell involves assuming that the dynamics is associated with characteristic length- and timescales, leading to an associated velocity which is much smaller than the speed of light. With this in mind, we can "massage" the Faraday equation into the form of eq. (5.13) to see that the electric field is smaller than the magnetic one by a factor of order the fluid velocity.
As long as the electric and magnetic fields are slowly evolving, a similar dimensional analysis then leads us to neglecting terms involving the electric field (i.e. the displacement current) in the Ampere law, so that \[J^{a}=\frac{1}{\mu_{0}}\left(\varepsilon_{abc}\nabla^{b}b^{c}+\varepsilon_{abc}a^{b}b^{c}\right). \tag{5.14}\] We then immediately see where this is going to take us. By effectively working with the pre-Maxwell form of the Ampere law (leaving out the displacement current) the charge current is slaved to the magnetic field. At the same time, we no longer have an evolution equation for the electric field \(e^{a}\), so we need an additional relation between the electric and magnetic fields. This is where the issue of (infinite) conductivity enters the discussion. Connecting the local fluid four-velocity \(u^{a}\) to the observer \(U^{a}\) as7

Footnote 7: It would seem natural, in writing down the Maxwell equations in the fibration formulation, to identify the fibration observer as the local fluid four-velocity [35, 21]. However, as we want here to make contact with the discussion in chapter 4, it is natural to distinguish the two (cf. discussion in section 4.5).

\[u^{a}=\gamma(U^{a}+v^{a})\,\quad U^{a}v_{a}=0\,\quad\gamma=(1-v^{a}v_{a})^{-1/2}\, \tag{5.15}\] where \(v^{a}\) is the spatial fluid velocity as measured by the observer \(U^{a}\), we see that the electric field as measured by the fluid is linked to \(e^{a}\), \(b^{a}\) (measured by \(U^{a}\)) as \[F_{ab}u^{b}=\gamma\left[e_{a}+\varepsilon_{abc}v^{b}b^{c}+U_{a}(v^{b}e_{b})\right]. \tag{5.16}\] Assuming this vanishes yields \[e_{a}+\varepsilon_{abc}v^{b}b^{c}=0. \tag{5.17}\] With this constraint, we can derive the induction equation from Faraday's law. To do so, note that \[\varepsilon_{abc}\varepsilon^{cde}=U^{f}U_{g}\varepsilon_{fabc}\varepsilon^{gdce}=-3U^{f}U_{g}\delta_{f}^{[g}\delta_{a}^{d}\delta_{b}^{e]}=\left(\delta_{a}^{d}\delta_{b}^{e}-\delta_{a}^{e}\delta_{b}^{d}\right)-\left(\parallel_{a}^{d}\delta_{b}^{e}-\parallel_{a}^{e}\delta_{b}^{d}\right)-\left(\delta_{a}^{d}\parallel_{b}^{e}-\delta_{a}^{e}\parallel_{b}^{d}\right)\, \tag{5.18}\] where we introduced the parallel projection \(\parallel_{b}^{a}=-U^{a}U_{b}\). When this is contracted with a spatial tensor (with respect to \(U^{a}\)) the last two terms in eq. (5.18) can be dropped. It follows that \[\varepsilon_{abc}\varepsilon^{cde}\left(a^{b}v_{d}b_{e}\right)=\left(a^{b}b_{b}\right)v_{a}-\left(a^{b}v_{b}\right)b_{a}. \tag{5.19}\] We also need to take care of the curl of \(e^{a}\) term in the Faraday equation. This can be written \[-\varepsilon_{abc}\nabla^{b}\left(\varepsilon^{cde}v_{d}b_{e}\right)=\left[-\varepsilon_{abc}\left(\nabla^{b}\varepsilon^{cde}\right)v_{d}b_{e}\right]-\left[\varepsilon_{abc}\varepsilon^{cde}\nabla^{b}\left(v_{d}b_{e}\right)\right]\, \tag{5.20}\] where it is convenient to consider the two terms separately.
We start from the second term and, even if \(U^{a}\) is not necessarily surface forming (i.e. has non-vanishing vorticity), we introduce a "spatial" covariant derivative \(D\) in the usual way (projecting each index in the sub-space orthogonal to \(U^{a}\)). Then, it is easy to see that \[\varepsilon_{abc}\varepsilon^{cde}D^{b}\left(v_{d}b_{e}\right)=\varepsilon_{abc}\varepsilon^{cde}\perp^{b}_{\ f}\perp_{d}^{\ g}\perp_{e}^{\ h}\nabla^{f}\left(v_{g}b_{h}\right)=\varepsilon_{abc}\varepsilon^{cde}\nabla^{b}\left(v_{d}b_{e}\right)\, \tag{5.21}\] and hence \[-\varepsilon_{abc}\varepsilon^{cde}\nabla^{b}\left(v_{d}b_{e}\right)=D^{b}\left(v_{b}b_{a}\right)-D^{b}\left(v_{a}b_{b}\right). \tag{5.22}\] As for the other term, writing it as \[-\varepsilon_{abc}\left(\nabla^{b}\varepsilon^{cde}\right)v_{d}b_{e}=-U^{g}\varepsilon_{gabc}\varepsilon^{fcde}\left(\nabla^{b}U_{f}\right)v_{d}b_{e}=-U^{g}\delta_{g}^{[f}\delta_{a}^{d}\delta_{b}^{e]}\,g_{fh}\left(-U^{b}a^{h}+\omega^{bh}+\sigma^{bh}+\frac{1}{3}\theta\perp^{bh}\right)v_{d}b_{e}\, \tag{5.23}\] we see that--given the anti-symmetrization--it vanishes identically. In summary, the induction equation according to a generic observer8 can be written as

Footnote 8: The worldlines of the generic observer \(U^{a}\) constitute a fibration of the spacetime, hence we may call this the ideal induction equation in the fibration framework, as opposed to the corresponding "3+1 form" derived by, for example, Andersson et al. [26].

\[\perp^{ab}\dot{b}_{b}+D_{b}(v^{b}b^{a})-D_{b}(v^{a}b^{b})=\left(\sigma^{ab}-\omega^{ab}-\frac{2}{3}\theta\perp^{ab}\right)b_{b}+v^{a}(a_{b}b^{b})-b^{a}(a_{b}v^{b})\, \tag{5.24}\] where the terms on the left should be familiar, while those on the right vanish for an inertial observer.

Let us now pause for a second, and ponder the implications of the argument we put forward--which is the intuitive extension to the curved spacetime setting of the Newtonian one. Clearly, the argument is non-controversial at the Newtonian/non-relativistic level. At the relativistic level, however, dropping the displacement current might not be fully justified, at least not in general. Whilst this may be seen as a flaw in the logic, it is actually the reason why we derived the induction equation this way. We could have, in fact, argued for a relation of the form in eq. (5.17) from the beginning, thus arriving at the same induction equation but "sweeping under the rug" the controversial issue of dropping the displacement current. The derivation provided suggests that, in a sense, magneto-hydrodynamics is intrinsically--with a slight abuse of nomenclature--a "post-Newtonian" theory as it necessarily involves a low-frequency/low-velocity approximation (the timescale over which the local electric field shorts out is tiny but not zero). In essence, we have to apply the magneto-hydrodynamics approximation with some level of caution. We also point to the discussion in [26] where the same argument is detailed in the so-called "3+1 formulation"--thus showing in which sense the approximation may still be used (on a case by case basis) and how gauge issues (in the choice of lapse and shift) play a key role for the validity of the approximation.

We conclude this section by showing how the derived equation further simplifies in the Newtonian limit. On dimensional grounds, we observe that the last two terms on the right-hand side of eq. (5.24) contain an extra factor of \(1/c^{2}\) (with respect to the rest, where \(c\) is the speed of light), and will as a result be negligible in the non-relativistic limit (\(c^{2}\to\infty\)).
Similarly, let us consider the absence of monopoles constraint. From eq. (5.12) and eq. (5.17) we immediately obtain \[\perp_{b}^{a}\nabla_{a}b^{b}=2W^{a}\varepsilon_{abc}v^{b}b^{c}\, \tag{5.25}\] and we observe that the term on the left-hand side is \(\sim b/L\) while that on the right is \(\sim bL/T^{2}\). Dimensional consistency implies the term on the right-hand side contains an extra factor of \(1/c^{2}\) and should be neglected in the Newtonian limit. In essence, non-inertial effects do not affect the absence of monopoles constraint at the Newtonian level.

When it comes to the Lorentz force, we expect it not to change at the Newtonian level, but let us nonetheless check this for consistency. The Lorentz four-force can be written \[-j_{b}F^{ba}=-\left(\sigma U_{b}+J_{b}\right)\left(U^{b}e^{a}+U^{a}e^{b}+\varepsilon^{bacd}U_{c}b_{d}\right)=-U^{a}\left(J_{b}\varepsilon^{bcd}v_{c}b_{d}\right)+\varepsilon^{abc}\left(J_{b}-\sigma v_{b}\right)b_{c}\, \tag{5.26}\] where we used the ideal magneto-hydrodynamics relation (5.17) in the last step. The second term corresponds to the Lorentz three-force; as the charge density entering it is the one measured by the observer, it does not (in general) vanish. However, if we insist that the local charge density be zero (consistently with (5.17)), then we have \[-u^{a}j_{a}=\gamma(\sigma-v_{a}J^{a})=0\Longrightarrow J_{b}-\sigma v_{b}=\left(g_{b}^{a}-v^{a}v_{b}\right)J_{a}. \tag{5.27}\] Re-inserting the factor of \(1/c^{2}\) we see that the second term is negligible with respect to the first. As the second term in eq. (5.14) is also negligible in the Newtonian limit, we see that the Lorentz force in the Euler equation is unchanged (as expected).
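Before turning to filtering, it may be useful to verify the fibration decomposition numerically. The sketch below (flat spacetime, made-up numbers, observer at rest for simplicity) builds the Faraday tensor from eq. (5.10) with the electric field fixed by the ideal relation (5.17), checks that the projections in eq. (5.11) recover the fields, and confirms that the electric field measured by the fluid, eq. (5.16), vanishes.

```python
import numpy as np
from itertools import permutations

g  = np.diag([-1.0, 1.0, 1.0, 1.0])          # Minkowski metric, c = 1
gi = np.linalg.inv(g)

# Levi-Civita tensor eps_{dabc} with eps_{0123} = +1 (flat spacetime)
eps4 = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    sign, q = 1, list(perm)
    for i in range(4):                        # count inversions to get the parity
        for j in range(i + 1, 4):
            if q[i] > q[j]:
                sign = -sign
    eps4[perm] = sign

U = np.array([1.0, 0.0, 0.0, 0.0])           # observer at rest, for simplicity
b = np.array([0.0, 0.1, -0.2, 0.3])          # magnetic field measured by U (b^0 = 0)
v = np.array([0.0, 0.3, 0.0, 0.1])           # spatial fluid velocity (U_a v^a = 0)

eps3 = np.einsum('d,dabc->abc', U, eps4)     # eps_{abc} = eps_{dabc} U^d, eq. (5.11)
e = -np.einsum('abc,b,c->a', eps3, v, b)     # ideal MHD: e_a = -eps_{abc} v^b b^c, eq. (5.17)

U_lo, b_lo = g @ U, g @ b
F = (np.outer(U_lo, e) - np.outer(e, U_lo)   # F_{ab} = 2 U_[a e_b] + eps_{abc} b^c, eq. (5.10)
     + np.einsum('abc,c->ab', eps3, b))

assert np.allclose(F @ U, e)                 # e_a = F_{ab} U^b, eq. (5.11)
assert np.allclose(0.5 * np.einsum('abc,bd,ce,de->a', eps3, gi, gi, F), b_lo)

gam = 1.0 / np.sqrt(1.0 - v @ (g @ v))       # Lorentz factor, eq. (5.15)
u = gam * (U + v)
print("electric field in the fluid frame:", F @ u)   # ~0, cf. eq. (5.16)
```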
### 5.3 MHD covariant filtering: first steps

We now want to make contact with the filtering strategy developed in chapter 4. Whilst this is very much unfinished business, we discuss some of the issues that arise along the way, and what seems to be the right strategy to make progress. First of all, we note that it does not seem to make much sense to take the MHD-approximation--that is, neglecting the displacement current--first and then applying the filtering procedure. The main reason for this has to do with the choice of filtering observer--recall the discussion of the normalization issue in section 4.5--and the simple fact that electric and magnetic fields are observer-dependent quantities. That such a strategy would be problematic can also be appreciated by considering the filtered charge current, noting that the issue of local charge neutrality and vanishing electric field are intrinsically linked [26]. Using the filtering notation introduced in chapter 4, we can consider the filtered charge four-current and decompose it as \[\langle j^{a}\rangle=\bar{\sigma}\tilde{u}^{a}+\bar{J}^{a}\,\quad\bar{\sigma}=\tilde{u}_{a}\langle j^{a}\rangle\ \text{and}\ \bar{J}^{a}=\tilde{\perp}_{b}^{\,a}\langle j^{b}\rangle\, \tag{5.28}\] where \(\tilde{u}^{a}\) is the filtered four-velocity (say, the Favre-filtered velocity). Using an analogous decomposition of the fine-scale charge-current with respect to the fine-scale fluid velocity we get \[\bar{\sigma}=\tilde{u}_{a}\left(\langle\sigma u^{a}\rangle+\langle J^{a}\rangle\right). \tag{5.29}\] The upshot is that even if we assume the system to be charge neutral at the fine scale (i.e. \(\sigma=0\)), we are left with \[\bar{\sigma}=\tilde{u}_{a}\langle J^{a}\rangle\, \tag{5.30}\] which clearly does not have to vanish. Conversely, if we impose charge neutrality on the filtered flow then this condition may not hold on the small scale (which might be an issue for equation of state inversion in numerical relativity simulations). This suggests that the correct strategy involves a filtering at the level of the Maxwell equations. We can then use the fact that the filtering procedure defined in chapter 4 commutes with partial derivatives to arrive at \[\nabla_{a}\langle F^{ba}\rangle=\mu_{0}\langle j^{b}\rangle. \tag{5.31}\] These equations can then be written in terms of the coarse-grained electromagnetic fields--defined decomposing \(\langle F^{ab}\rangle\) as in eq. (5.10), now in terms of the filtered velocity \(\tilde{u}^{a}\). The net result is that the filtered Maxwell equations retain the usual form (as the Maxwell system is linear). In particular, the filtered electric and magnetic field are given in terms of the fine-scale ones as \[\tilde{E}^{a}=\tilde{u}^{b}\langle u_{a}e_{b}\rangle-\tilde{u}^{b}\langle u_{b}e_{a}\rangle-\tilde{\varepsilon}_{abc}\langle u^{b}b^{c}\rangle\, \tag{5.32}\] and \[\tilde{B}^{a}=\tilde{u}^{b}\langle u_{a}b_{b}\rangle-\tilde{u}^{b}\langle u_{b}b_{a}\rangle+\tilde{\varepsilon}_{abc}\langle u^{b}e^{c}\rangle. \tag{5.33}\] From this, we immediately see that even if we take the fine-scale electric field to vanish (due to the assumption on local charge neutrality) we would still have a non-vanishing coarse-grained electric field: \[\tilde{E}^{a}=-\tilde{\varepsilon}_{abc}\langle u^{b}b^{c}\rangle\,\qquad\tilde{B}^{a}=\tilde{u}^{b}\left(\langle u_{a}b_{b}\rangle-\langle u_{b}b_{a}\rangle\right). \tag{5.34}\] Nonetheless, we may, pragmatically, move on and take the MHD-approximation at the filtered level (i.e. neglecting the displacement current in the filtered equations) and provide a closure relation for the filtered electric field. For example, we could model the filtered electric field using an algebraic decomposition in terms of the filtered variables as \[\tilde{E}^{a}=\gamma_{1}\tilde{B}^{a}+\gamma_{2}\tilde{J}^{a}+\gamma_{3}\tilde{\varepsilon}^{abc}\tilde{B}_{b}\tilde{J}_{c}. \tag{5.35}\] Whilst this is a very simple closure scheme, it is attractive as it allows for an immediate physical interpretation of the expansion coefficients. It is, in fact, natural to interpret the second term as an effective resistivity and the third one as an effective Hall term. The first term instead reminds us of the "classic" alpha-dynamo term, which enters many mean-field-dynamo models [48, 47, 187].

We conclude this section by commenting on the filtering of the fluid equations. When it comes to the baryon continuity equation, it is natural to work with the Favre-filtered four-velocity as in section 4.6 so that it retains the pre-filter form. As for the (fluid part of the) stress-energy-momentum tensor, we know already from the hydrodynamic analysis in chapter 4 that filtering will introduce additional terms in the equations that can be interpreted as effective dissipative terms. In addition to these, we also have terms coming from filtering the Lorentz four-force \[\mathcal{F}^{a}=\langle j_{b}F^{ba}\rangle-\langle j_{b}\rangle\langle F^{ba}\rangle. \tag{5.36}\] This contribution also requires modelling.
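The need to model a term like eq. (5.36) boils down to the fact that filtering does not commute with products. The one-dimensional toy below (made-up periodic signals, Gaussian kernel) illustrates this: the residual \(\langle jF\rangle-\langle j\rangle\langle F\rangle\) is generically non-zero.

```python
import numpy as np

def gaussian_filter_periodic(f, x, width):
    """Convolve a periodic signal with a normalized Gaussian kernel (spectral method)."""
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    return np.fft.ifft(np.fft.fft(f) * np.exp(-0.5 * (k * width) ** 2)).real

# Made-up fine-scale 'current' j(x) and 'field' F(x) with small-scale fluctuations
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
rng = np.random.default_rng(42)
j = 1.0 + 0.5 * np.sin(3 * x) + 0.3 * rng.standard_normal(x.size)
F = 0.5 + 0.4 * np.cos(5 * x) + 0.3 * rng.standard_normal(x.size)

width = 0.3
jF_bar   = gaussian_filter_periodic(j * F, x, width)   # <jF>
j_bar    = gaussian_filter_periodic(j, x, width)       # <j>
F_bar    = gaussian_filter_periodic(F, x, width)       # <F>
residual = jF_bar - j_bar * F_bar                      # analogue of eq. (5.36)

print("max |<jF> - <j><F>| =", np.abs(residual).max())  # generically non-zero
```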
### 5.4 Outlook

As anticipated at the end of the previous chapter, an important point missing from the current discussion is the link with actual numerical relativity simulations. These tend to involve a foliation of spacetime and the associated "3+1" spacetime split [100, 184], thus adding a new observer (the "Eulerian" observer) to the game. In principle, one might make progress simply by "translating" our results to the foliation picture, but it is by no means clear that this is the most sensible way to proceed. Moreover, any discussion of numerical simulations should consider a number of additional issues, such as the role of numerical discretization errors. Models developed directly in the foliation--such as the gradient sub-grid model developed in [219, 49]--appear to be more suited for dealing with issues like the lack of numerical resolution. At the same time, the fibration scheme we developed provides a clear link with the underlying thermodynamics and the equation of state information we would like to extract from numerical simulations. Moreover, the covariance of our framework, as well as the fact that the filtering observer acquires a clear physical meaning, allows us to "lift" the results of box simulations to any spacetime (at least in principle). In essence, the scheme we have discussed suggests that the large-eddy strategy can be enhanced to a tool for linking models valid at "mesoscopic" and "macroscopic" scales. This is relevant as it appears that large-eddy simulations based on gradient sub-grid models need to minimally resolve the relevant physical effects in order to describe turbulence at even smaller scales--as demonstrated by Miravet-Tenes et al. [154] for the specific case of the magneto-rotational instability (cf. chapter 7). The discussion provided here, although minimal and incomplete, is a first step in this direction.

**Part III**

**Binary neutron-star merger applications**

## Chapter 6 Formulating bulk-viscosity for neutron star simulations

In the previous parts of this thesis, we focused on modelling dissipation and turbulence in general relativistic fluids. This part is instead dedicated to neutron star merger applications. In particular, we will consider in this chapter the role of bulk viscosity associated with nuclear reactions--which may, or may not, leave an observable imprint on (say) the gravitational-wave signal [158, 105, 159, 178, 227]--and ask how this mechanism can be implemented in nonlinear simulations. We highlight the formal aspects of the problem and establish how the inevitable "limitations" of a numerical simulation (in terms of resolution) enter the discussion. The aim is to establish to what extent simulations based on an effective bulk viscosity are viable and (perhaps more importantly) when they are not. This understanding will be crucial for future numerical implementations. The results discussed in this chapter have been published in [59].

In the following chapter we discuss the magneto-rotational instability (MRI) using a (Newtonian) local analysis. We do so as the MRI is thought to play a key role in the development of (MHD) turbulence in the outer envelope of a merger remnant. In particular, the discussion we provide is suited for highly-dynamical systems such as mergers, highlighting the importance of global properties for the standard results (and criteria) to be valid.
### 6.1 Simplifications must be made

The underlying physical model of a neutron star merger is expected to be a system of multiple interacting "fluids" of different charged particle species, coupled to an electromagnetic field and radiation through, for example, neutrinos, all evolving on a dynamical relativistic spacetime [21]. The complexity of this model makes it impractical for use in either theoretical calculations or numerical simulations. Instead, simplifications must be made, with heuristic arguments needed to justify each assumption that is introduced.

To illustrate the key argument with a simple toy model, consider the problem of heat propagation. When the underlying model is required to be _causal_, the starting point is often taken to be the Cattaneo equation (see section 2.1.2) \[\tau\frac{\partial q}{\partial t}+q=-\kappa\frac{\partial T}{\partial x}. \tag{6.1}\] In the "fast relaxation limit", when the relaxation time \(\tau\to 0\), we recover Fourier's law, relating the heat flux to temperature gradients, and leading to the familiar heat equation. While the underlying model is hyperbolic, the fast relaxation limit is parabolic and hence not causal. Specifically, working out the characteristic velocities in the problem1 one finds that the Cattaneo equation (6.1) is causal with finite propagation speeds bounded by \(\pm(\kappa/\tau)^{1/2}\). At the same time, there is a critical wavenumber \(\gtrsim(\kappa\tau)^{-1/2}\) below which the behaviour is purely parabolic and the solution is diffused away. An illustration of the transition from second sound to diffusion can be found in Figure 16 of [21].

Footnote 1: Computed, for example, taking the large wavenumber limit of the (real part of the) phase speed.

From a theory point of view it would be natural to argue that we should base our models on the Cattaneo formulation, but from a numerical perspective this may be problematic. We would need to resolve the (presumably fast) relaxation towards equilibrium and this may not be possible/practical. In this sense, the parabolic prescription may be preferable. A heuristic argument for using the parabolic heat equation within a relativistic model--for which causality would be a prerequisite--would be as follows. We assume that on the length scales \(L\) relevant for our model we have \(\tau\sim L/c\), where \(c\) is the speed of light. By causality there can be no (propagating!) scales of physical relevance faster than \(c\), hence with timescales smaller than \(\tau\) or (equivalently) frequency scales larger than \(\tau^{-1}\). Therefore the only relevant behaviour for heat propagation is the purely parabolic case where heat fluctuations are rapidly damped, which is well modelled by the standard heat equation. It is possible to use the internal consistency of the underlying model to check when this heuristic argument is valid. For example, to be consistent with causality the dispersion relation at low frequencies requires that \(\kappa\leq\tau c^{2}\sim cL\).
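The behaviour of this toy model is easy to check explicitly. Closing eq. (6.1) with the usual energy balance \(\partial_{t}T+\partial_{x}q=0\) (a standard assumption, not spelled out here) gives the telegraph equation \(\tau\partial_{t}^{2}T+\partial_{t}T=\kappa\partial_{x}^{2}T\); the sketch below solves the resulting dispersion relation \(\tau\omega^{2}+i\omega-\kappa k^{2}=0\) for made-up values of \(\kappa\) and \(\tau\), exhibiting the critical wavenumber and the limiting propagation speed \((\kappa/\tau)^{1/2}\).

```python
import numpy as np

kappa, tau = 1.0, 0.25                       # made-up conductivity and relaxation time
k_crit = 1.0 / np.sqrt(4.0 * kappa * tau)    # below this wavenumber: purely diffusive

for k in [0.2 * k_crit, 0.9 * k_crit, 2.0 * k_crit, 50.0 * k_crit]:
    # Modes T ~ exp(i k x - i w t):  tau w^2 + i w - kappa k^2 = 0
    roots = np.roots([tau, 1j, -kappa * k**2])
    speed = np.abs(roots.real).max() / k     # phase speed of the propagating branch
    print(f"k/k_crit = {k / k_crit:6.2f}   Re(w) = {roots.real}   speed = {speed:.4f}")

# Below k_crit both roots are purely imaginary (diffusion, no propagation);
# above it the modes propagate, with speed -> sqrt(kappa/tau) at large k.
print("causal bound sqrt(kappa/tau) =", np.sqrt(kappa / tau))
```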
The key point here is the existence of a single scale (length, time, or frequency) at which some physical effect acts or changes type. This issue is often considered for turbulence (for example, are the length scales probed sufficient to trigger the magneto-rotational instability [71, 204, 124, 102]), for reactions (are physical regions of parameter space probed so that out-of-equilibrium physics, such as a bulk viscosity, has to be accounted for [13]), and for radiation (are neutrinos propagating or trapped, see e.g. [104]). When this key scale is outside that which can be included in the model then the physical effect is either ignored (by using a purely ideal model, or by assuming instantaneous relaxation to an equilibrium) or modelled (by approximating the additional physics through a closure term, such as an effective equation of state, or a large-eddy approach [104, 94, 176, 49]).

The fundamental issue regards how the physics, and hence the model, should behave when key scales cross or overlap in different parts of the required parameter space. This is particularly relevant for nonlinear numerical simulations, where the discretization length introduces a scale (or multiple scales with uneven grids or mesh refinement), and nonlinearity can lead to the relevant physical scales varying over many orders of magnitude. The interaction between the different scales makes heuristic simplifications dubious.

With these issues in mind we will study, analytically and numerically, issues relating to bulk viscosity in reactive fluids. Here the underlying model involves nuclear reactions, specifically the direct and modified Urca reactions (although the analytical calculations presented apply more generally). These microphysical reactions can, in some regimes, be approximated as a (resonant) bulk viscosity [194]. However, the timescales on which the reactions take place strongly depend on (for example) the temperature and may as a result be close to those that can be captured in numerical calculations. The timescales may also interact with, for example, large-eddy closure terms required for turbulent regions (cf. section 6.4.3).

### 6.2 The reactive system

We want to consider the hydrodynamics of an isotropic reactive system consisting of comoving neutrons, electrons and protons, the simplest meaningful matter composition for a neutron star core.2 Further, because we assume charge neutrality, there are only two independent number densities (or, equivalently, one density and one species fraction). These are conveniently taken as \(n\), representing the baryon number density, and \(Y_{\rm e}=n_{\rm e}/n\) the electron (lepton) fraction. The proton fraction then follows as \(Y_{\rm p}=n_{\rm p}/n=Y_{\rm e}\) while the neutron fraction is given by \(Y_{\rm n}=n_{\rm n}/n=1-Y_{\rm e}\). Because we assume isotropy, the stress-energy-momentum tensor takes the perfect fluid form (cf. section 1.3)

Footnote 2: This may seem somewhat reductionist, given that the high density region is likely to bring other matter constituents into play, but our main interest is to establish the principles involved. If richer matter models are required then the extension of our discussion will be conceptually straightforward.

\[T^{ab}=\varepsilon u^{a}u^{b}+p(g^{ab}+u^{a}u^{b})\, \tag{6.2}\] where the four-velocity \(u^{a}\) is uniquely defined as the one associated with the flow of all particle species. The pressure and energy are identified with the corresponding thermodynamical quantities. As we will not impose that the system is in chemical equilibrium (we allow reactions) we assume the equation of state to involve three parameters. That is, the pressure follows from \(p=p(n,\varepsilon,Y_{\mathrm{e}})\). This relation is assumed to be provided in tabulated form (suitable for a numerical simulation, see appendix C and [210]). The energy-momentum conservation laws and baryon continuity are the same as in section 1.3.
Nonetheless, we repeat them here for convenience: \[u^{b}\nabla_{b}\varepsilon =-(p+\varepsilon)\theta\, \tag{6.3a}\] \[(p+\varepsilon)a_{b} =-\,\perp_{b}^{\,\,c}\nabla_{c}p\, \tag{6.3b}\] and \[u^{a}\nabla_{a}n+n\theta=0. \tag{6.4}\] As we are presently working with a three-parameter equation of state, we need an evolution equation for the electron fraction. We write this as \[u^{a}\nabla_{a}Y_{\mathrm{e}}=\frac{\Gamma_{\mathrm{e}}}{n}\, \tag{6.5}\] where the rate \(\Gamma_{\mathrm{e}}\) is generally non-vanishing as we are considering a reactive system. Once the reaction rate is provided by the microphysics (and tabulated as a function of the other variables), these equations constitute a closed system.

From the perspective of thermodynamics, it is natural to introduce the affinity \[\beta=\mu_{\mathrm{n}}-\mu_{\mathrm{p}}-\mu_{\mathrm{e}}\, \tag{6.6}\] as it quantifies how far the system is out of cold beta equilibrium. This has the advantage that we can work within the so-called Fermi surface approximation [9] and express relevant quantities, like the reaction rates, as expansions for small values of \(\beta\) (with the coefficients in the expansion evaluated at equilibrium, \(\beta=0\)). In the following, and notably in the illustrations we provide, we assume that this strategy is appropriate. Pragmatically, this makes sense as we are only aiming to establish a proof of principle and these assumptions allow us to work out all required parameters for the model from a standard tabulated equation of state. However, it is important to keep in mind that the assumptions will not be appropriate for much of the parameter space (temperature and density) explored in the binary neutron star merger/post-merger phase, and they completely exclude any neutrino effects. At finite temperatures, the true notion of beta equilibrium is more complex, and may require the addition of an isospin chemical potential in the definition of \(\beta\) (see [9, 10, 14, 6, 8, 104]). A complete model should account for the correct notion of equilibrium, but this will require the equation of state table to be extended to include all necessary information. As such data is not yet available for simulations, we are (pragmatically) doing the best we can given the information at hand.

As we will see in section 6.2.1, the affinity is (thermodynamically) conjugate to \(Y_{\rm e}\), meaning that either of the two variables can be used "equivalently" in the discussion. This is important because state-of-the-art simulations tend to involve \(Y_{\rm e}\) while the theory is somewhat more transparent when expressed in terms of \(\beta\). In the following we will develop the model both in terms of \(\beta\) and \(Y_{\rm e}\), showing the expected consistency, and emphasising the different perspectives the two complementary approaches bring on the problem. The evolution equation for \(\beta\) is easily obtained by considering it as a function of \((\varepsilon,n,Y_{\rm e})\)--which follows from \(\beta\) and \(Y_{\rm e}\) being thermodynamically conjugate. We arrive at \[u^{a}\nabla_{a}\beta=\left(\frac{\partial\beta}{\partial Y_{\rm e}}\right)_{n,\varepsilon}\frac{\Gamma_{\rm e}}{n}-n\mathcal{B}\theta\, \tag{6.7}\] with \[\mathcal{B}=\left(\frac{\partial\beta}{\partial n}\right)_{\varepsilon,Y_{\rm e}}+\frac{p+\varepsilon}{n}\left(\frac{\partial\beta}{\partial\varepsilon}\right)_{n,Y_{\rm e}}. \tag{6.8}\]
We again see that the system of equations is closed once the reaction rate (as well as the relevant thermodynamical coefficients) is provided. A useful simplification occurs when the system is sub-thermal, when3 \(\beta/T\ll 1\). Then we can expand the rate with respect to chemical equilibrium \(\beta=0\) to write it as4 \(\Gamma_{\rm e}=-\gamma\beta\). The evolution equation for the affinity \(\beta\) then simplifies to

Footnote 3: Let us note for clarity that we use units where the Boltzmann constant \(k_{B}=1\).

Footnote 4: There is a sign convention here, and we are following [11]. The logic is, if \(\beta>0\) then \(\mu_{\rm n}>\mu_{\rm e}+\mu_{\rm p}\) and neutron decay is favoured (over electron capture) as this will release energy. Therefore, we want the electron rate to be positive when \(\beta\) is positive and vice versa. The sign of \(\gamma\) should then be negative.

\[u^{a}\nabla_{a}\beta=-\mathcal{A}\beta-n\mathcal{B}\theta\, \tag{6.9}\] where we introduced the new coefficient (with units of inverse time) \[\mathcal{A}=\frac{\gamma}{n}\left(\frac{\partial\beta}{\partial Y_{\rm e}}\right)_{n,\varepsilon}. \tag{6.10}\] The information encoded in the reaction rate \(\Gamma_{\rm e}\) is now "stored" in \(\gamma\). We can then make progress and compute \(\gamma\) from the equation of state tables provided in the compOSE database [210] _as long as_ we ignore finite temperature effects [9, 10, 14, 6, 8] (see appendix C and [106] for more details). While the coefficient \(\mathcal{B}\) can be introduced without reference to an expansion around equilibrium, this is not the case for \(\mathcal{A}\). In the sub-thermal limit we retain only terms linear in \(\beta\), so that \(\mathcal{A}\) must be evaluated at \(\beta=0\).

For completeness, let us also comment on the entropy density, viewed as a function of \((\varepsilon,n,Y_{\rm e})\). Using the equations of motion for these quantities (as well as the generalized Gibbs relation provided below) we arrive at \[T\nabla_{a}\left(su^{a}\right)=\beta\Gamma_{\rm e}. \tag{6.11}\] As long as \(\beta\) has the same sign as \(\Gamma_{\rm e}\) the entropy increases. This is guaranteed to be the case in the sub-thermal limit if \(\gamma<0\). Note that this assumes that a negligible amount of energy is deposited in neutrinos by the reactions, which will be a poor approximation at high temperatures. This important caveat will quantitatively affect our results without changing the qualitative conclusions.
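As a minimal illustration of the fast-relaxation behaviour exploited later in this chapter, the sketch below integrates eq. (6.9) along a fluid worldline for a slowly varying, made-up expansion rate \(\theta(t)\) and constant illustrative coefficients, and compares the result to the quasi-stationary value \(\beta\approx-n\mathcal{B}\theta/\mathcal{A}\).

```python
import numpy as np

# Illustrative constant coefficients (in reality A, B depend on the equation of state)
A, B, n = 50.0, 0.2, 1.0                           # relaxation rate, coupling, density
theta = lambda t: 0.3 * np.sin(2.0 * np.pi * t)    # slowly varying expansion rate

dt, T = 1e-4, 2.0
ts = np.arange(0.0, T, dt)
beta = np.zeros_like(ts)
for i in range(ts.size - 1):                       # forward-Euler integration of eq. (6.9)
    beta[i + 1] = beta[i] + dt * (-A * beta[i] - n * B * theta(ts[i]))

beta_qs = -n * B * theta(ts) / A                   # quasi-stationary value
err = np.abs(beta[ts.size // 2:] - beta_qs[ts.size // 2:]).max()
print(f"max |beta - beta_qs| after transients: {err:.2e}")
print(f"typical |beta_qs|: {np.abs(beta_qs).max():.2e}")
# When A is much larger than the driving frequency, beta tracks -nB*theta/A closely,
# so the out-of-equilibrium pressure contribution behaves like a bulk viscosity.
```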
#### Thermodynamics of a reactive system

Having discussed the hydrodynamics, let us turn to the associated thermodynamics. Because the underlying model is required to be causal, and hence based on Cattaneo-type laws, it is natural to set the discussion within the Extended Irreversible Thermodynamics (EIT) framework (cf. section 2.1.2). We here provide a streamlined discussion, and refer to the monograph [121] and references therein (see also [89] for a recent analysis focused specifically on bulk viscosity). The first step is to assume that the Gibbs relation takes the usual form--noting that the various quantities may not be in thermodynamical equilibrium \[p+\varepsilon=\sum_{{\rm x}={\rm n},{\rm p},{\rm s},{\rm e}}n_{\rm x}\mu_{\rm x}=n\mu_{\rm n}-n_{\rm e}\beta+Ts\, \tag{6.12}\] and \[dp=\sum_{{\rm x}={\rm n},{\rm p},{\rm s},{\rm e}}n_{\rm x}d\mu_{\rm x}=nd\mu_{\rm n}-n_{\rm e}d\beta+sdT. \tag{6.13}\]

As we will be working at the fluid level with either \((n,\varepsilon,Y_{\rm e})\) or \((n,\varepsilon,\beta)\), we can use the entropy as a thermodynamical potential. This is convenient because if we also assume that the system is close to thermodynamical equilibrium, we can expand the entropy as \[s=s^{\rm eq}(n,\varepsilon)+\frac{1}{2}s_{2}(n,\varepsilon)\beta^{2}\,\quad\mbox{where}\ s_{2}=\left(\left.\frac{\partial^{2}s}{\partial\beta^{2}}\right|_{\beta=0}\right)_{n,\varepsilon}. \tag{6.14}\] From this we can compute the out-of-equilibrium expansion of the thermodynamical variables. Linearizing in deviations from equilibrium, and assuming that the equation of state is expressed in terms of \((n,\varepsilon,\beta)\) rather than \((n,\varepsilon,Y_{\rm e})\), we obtain the corresponding expansions in eqs. (6.15a)-(6.15c). Note that the thermodynamical requirement that the entropy reaches a maximum at equilibrium implies that \(s_{2}\) must be negative. Recalling eqs. (6.9) and (6.10), and the fact that \(-\mathcal{A}\beta\) acts as a restoring term, we see that \(\mathcal{A}>0\) and therefore plays the role of an (inverse) relaxation rate. Now that we have expansions (in \(\beta\)) of the thermodynamical variables, we can use the Gibbs relation to work out the pressure. To linear order in the deviation from equilibrium we then have eq. (6.16) or, explicitly, eqs. (6.17) and (6.18). In essence, the thermodynamical expansion provides us with an expression for the out-of-equilibrium contribution to the pressure, which would naturally be interpreted as a bulk viscosity. We identify this contribution, \(\chi=p_{1}\beta\), in eq. (6.19). Note that, even though we have outlined the derivation in the simplest case (where the system is subthermal and close to equilibrium), the argument applies more generally. A broader discussion would rely on a detailed description of the out-of-equilibrium physics, which is not included in equation of state tables currently used for numerical simulations. Specifically, as described in appendix C, starting from a three-parameter equation of state from the compOSE database, the derivatives we need can be worked out before carrying out a simulation. We are relying on this in the specific example discussed later.

#### Thermodynamics working with the equilibrium electron fraction

As mentioned earlier, we may equivalently work with the electron fraction \(Y_{\rm e}\). Given this, we now revisit the path we just followed, working instead with the electron fraction. As we will see, this also results in a correction term to the pressure, which will be naturally expressed in terms of derivatives involving a notion of equilibrium electron fraction. In order to define the equilibrium electron fraction we consider the following thermodynamical potential \[g=s-n_{\rm e}\frac{\beta}{T}\, \tag{6.20}\] such that \[dg=\frac{1}{T}d\varepsilon-\frac{\mu_{\rm n}}{T}dn-n_{\rm e}d\left(\frac{\beta}{T}\right). \tag{6.21}\] Exactly as we did for the entropy, we can then expand \(g\) around equilibrium. There will now be a first order term in \(\beta\), which provides the "formal definition" of the equilibrium electron number density, \(n_{\rm e}^{\rm eq}\). We get \[g(n,\varepsilon,\beta/T)=s^{\rm eq}(n,\varepsilon)-n_{\rm e}^{\rm eq}\frac{\beta}{T}+\frac{1}{2}g_{2}\left(\frac{\beta}{T}\right)^{2}\, \tag{6.22}\] where \[n_{\rm e}^{\rm eq}=\left(\left.\frac{\partial g}{\partial(\beta/T)}\right|_{\beta=0}\right)_{n,\varepsilon}\,\quad{\rm and}\quad g_{2}=\left(\left.\frac{\partial^{2}g}{\partial\left(\beta/T\right)^{2}}\right|_{\beta=0}\right)_{n,\varepsilon}. \tag{6.23}\]
We can then work out the expansion (to first order in \(\beta\)) for \(n_{\rm e}\) as \[n_{\rm e}(n,\varepsilon,\beta)=n_{\rm e}^{\rm eq}(n,\varepsilon)-g_{2}\frac{\beta}{T}\, \tag{6.24}\] and use it in the definition of \(g\) to arrive at \[g=s^{\rm eq}+\frac{1}{2}s_{2}\beta^{2}-n_{\rm e}^{\rm eq}\frac{\beta}{T}+g_{2}\left(\frac{\beta}{T}\right)^{2}. \tag{6.25}\] Comparing this with the expansion above we identify \[g_{2}(n,\varepsilon)=-T_{\rm eq}^{2}s_{2}=-T_{\rm eq}\left(\frac{\partial\beta}{\partial n_{\rm e}}\right)_{n,\varepsilon}^{-1}. \tag{6.26}\] Using this result and working out the expansion for the thermodynamical variables from \(g\), we obtain (after linearizing in \(\beta\) and introducing \(Y_{\rm e}^{\rm eq}=n_{\rm e}^{\rm eq}/n\)) \[T=T^{\rm eq}+n\left(\frac{\partial Y_{\rm e}^{\rm eq}}{\partial\varepsilon}\right)_{n}T^{\rm eq}\beta\, \tag{6.27a}\] \[\mu_{\rm n}=\mu^{\rm eq}+\left[\mu^{\rm eq}n\left(\frac{\partial Y_{\rm e}^{\rm eq}}{\partial\varepsilon}\right)_{n}+n\left(\frac{\partial Y_{\rm e}^{\rm eq}}{\partial n}\right)_{\varepsilon}-Y_{\rm e}^{\rm eq}\right]\beta\, \tag{6.27b}\] \[Y_{\rm e}=Y_{\rm e}^{\rm eq}-\left(\frac{\partial\beta}{\partial Y_{\rm e}}\right)_{n,\varepsilon}^{-1}\beta. \tag{6.27c}\] Combining the result with the Gibbs relation we can work out the pressure (and the "thermodynamical" bulk viscosity) \[p(n,\varepsilon,\beta)=p^{\rm eq}(n,\varepsilon)+p_{1}\beta=p^{\rm eq}+\chi_{t}\, \tag{6.28}\] with \[p_{1}=n\left[(p^{\rm eq}+\varepsilon)\left(\frac{\partial Y_{\rm e}^{\rm eq}}{\partial\varepsilon}\right)_{n}+n\left(\frac{\partial Y_{\rm e}^{\rm eq}}{\partial n}\right)_{\varepsilon}\right]. \tag{6.29}\] The take-home message is that we have two equivalent expressions for the bulk viscosity purely from thermodynamical arguments: this is always written as \(\chi_{t}=p_{1}\beta\), where \(p_{1}\) can be written either as in eq. (6.18) or as in eq. (6.29). These results are thermodynamically correct as long as the linearization in \(\beta\) is valid (i.e., the system is close to chemical equilibrium), independently of the modelling of the relaxation towards equilibrium.

### 6.3 Approximating the reactive system

At this point we have formulated the relaxation problem for the reactive system. The equations we have written down are, in principle, all we need to evolve the system. However, it may not be numerically practical to solve the full nonlinear system, for example when the physical reactions are fast compared to numerically resolvable timescales. Given this, it is natural to consider approximations. To set up the discussion let us introduce the (proper) time derivative \(d/dt=u^{a}\nabla_{a}\).
We can then write the hydrodynamical equations in non-dimensional form (assuming for a moment that we work with the lepton fraction instead of \(\beta\), an assumption that makes no practical difference here) \[\frac{d\epsilon}{dt}=-\frac{1}{\epsilon_{St}}(\epsilon+c_{r}^{2}p)\theta\, \tag{6.30a}\] \[a_{b}=-\frac{1}{\epsilon_{St}}\frac{1}{\epsilon_{Ma}^{2}}\frac{1}{\epsilon+c_{r}^{2}p}\ \perp_{b}^{c}\nabla_{c}p\, \tag{6.30b}\] \[\frac{dn}{dt}=-\frac{1}{\epsilon_{St}}n\theta\, \tag{6.30c}\] \[\frac{dY_{\rm e}}{dt}=-\frac{1}{\epsilon_{A}}(Y_{\rm e}-Y_{\rm e}^{\rm eq})\, \tag{6.30d}\] where we have defined the dimensionless parameters \[\epsilon_{St}=\frac{l_{r}}{u_{r}t_{r}}\,\quad\epsilon_{Ma}=\frac{u_{r}}{c_{r}}\,\quad\epsilon_{A}=\frac{1}{\mathcal{A}t_{r}}\, \tag{6.31}\] and introduced a reference sound speed \(c_{r}\) as well as \(l_{r},t_{r},u_{r}\) as reference lengthscale, timescale and fluid velocity--so that \(a_{b}\) has dimensions \(u_{r}/t_{r}\). To write the equations in non-dimensional form as above, we have first taken the sub-thermal limit \(\Gamma_{\rm e}=-\gamma\beta\) and then expanded around equilibrium5. From this we see that \(n,\epsilon,u^{a}\) evolve on similar timescales (in terms of the proper time associated with \(u^{a}\)), while the electron fraction evolution timescale is given by \(\epsilon_{A}\). Assuming--as expected for large regions in neutron star merger simulations--that reactions occur on a fast timescale (so that \(\epsilon_{A}\ll 1\)), we may consider three different regimes: i) The expansion rate6 \(\theta\) varies only (or primarily) on slow timescales--which makes sense for numerical simulations where the spatial dynamics are resolvable but the reaction timescale is not; ii) The expansion \(\theta\) varies on the fast timescale and hence this must be resolved in a simulation; iii) The flow is turbulent and therefore all scales are coupled. In the last two cases, we cannot analytically simplify the problem much--expensive direct numerical simulations are required, although the large-eddy strategy [151, 136, 49, 58] (see also section 6.4.3) may provide a useful alternative. As our main interest here is to consider the regime where progress can be made through approximations, we focus on the first case, where we can use standard multi-scale methods (see, e.g. [223]) to "integrate out" the fast behaviour.

Footnote 5: Let us note that, as is clear from the last of eq. (6.27), \(Y_{\rm e}=Y_{\rm e}^{\rm eq}\) if and only if \(\beta=0\), and that in the sub-thermal limit this means \(\Gamma_{\rm e}(Y_{\rm e}=Y_{\rm e}^{\rm eq})=0\).

Footnote 6: As can be seen from e.g. eq. (6.7), the expansion rate is a "source-term" in the affinity equation.

#### Multi-scale arguments and the reactive system

Bringing the multiscale argument from [170] to bear on the problem (see appendix B for more details on the results used throughout the rest of this chapter) and assuming that we continue to work with \(\beta\), we have to compute the late-time behaviour for the affinity by integrating the \(\beta\) equation considering the other variables as parameters, and then taking the limit \(t\to\infty\). The approximated equations for the remaining variables are then unchanged to lowest order, but we have to evaluate every function of \(\beta\) using the late-time result. This is intuitively motivated by the underlying assumption that the affinity evolves on a faster timescale, so that the remaining degrees of freedom are approximately frozen on short timescales.
The inclusion of the first order corrections then guarantees that the approximated equations remain accurate over correspondingly longer times. The equations of motion using the affinity can be written as in eq. (6.32). The non-dimensional form in eq. (6.30) indicates that the reaction timescale enters only through \(\mathcal{A}\)--and it can be easily seen that \(\mathcal{A}\) relates to how quickly the equilibrium electron fraction adjusts to a change in number and energy density. Assuming only \(\mathcal{A}\) to be fast, and using the results from appendix B.4, we split the affinity into fast and slow pieces as in eqs. (6.33a) and (6.33b). By construction, the slow piece is independent of the fast dynamics, so the latter are now linear and the results of appendix B.3 apply. We see that the invariant manifold--describing the "late-time behaviour"--is given by the vanishing of the fast piece, which is equivalent to saying that \(\beta=0\) on the invariant manifold. That is, the invariant manifold corresponds to beta equilibrium, as expected. The reduced system is given in eq. (6.34), where all terms (the pressure and its derivatives in particular) have to be evaluated at beta equilibrium. The final equation decouples, as all quantities in the first two equations depend on quantities evaluated at equilibrium, and hence on \((n,\varepsilon)\) only. We see from the energy equation that the pressure correction appears as in eq. (6.35), where in the second equality we have re-absorbed the scaling parameter into the reaction rate.

If we instead make the assumption that \(\theta\) is also a fast parameter, then it is easy to see that the affinity would be an entirely fast variable. Repeating the analysis, the invariant manifold is modified accordingly and, as a consequence, the pressure entering the energy (as well as Euler) equation takes the form of eq. (6.36). This shows that, in the case when \(\theta\) is fast, the bulk-viscous correction enters already at lowest order. The corrections to this are second order in \(\beta\), beyond the regime of validity of the theory (linear in \(\beta\)), and hence cannot be trusted. In essence, whatever assumption we make for \(\theta\) (being fast or slow), the result is the same. The "interpretation" of eq. (6.35) as the Navier-Stokes bulk viscosity is supported by the analysis in section 6.2 (cf. eq. (6.19))--recall also the argument around eq. (6.1), now specified to eq. (6.9). This result, while in line with "expectations", is non-trivial. In fact, common arguments in favour of a representation of the net effect of under-resolved reactions via a bulk-viscous pressure are typically perturbative in nature [11, 7].
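A toy version of this reduction is easy to test. The sketch below (made-up relaxation rates and a made-up "equilibrium" function, not a real equation of state) integrates the full stiff system with scipy and compares it against the lowest-order reduced model in which the fast fraction is slaved to its equilibrium value; the deviation shrinks as the rate is increased, as the multi-scale argument predicts.

```python
import numpy as np
from scipy.integrate import solve_ivp

Y_eq = lambda n: 0.05 + 0.1 * n          # toy equilibrium fraction (made up)
theta = lambda t: 0.5 * np.cos(t)        # prescribed slow expansion rate

def full_rhs(t, y, rate):
    n, Y = y
    dn = -n * theta(t)                   # continuity, cf. eq. (6.30c)
    dY = -rate * (Y - Y_eq(n))           # fast relaxation, cf. eq. (6.30d)
    return [dn, dY]

def reduced_rhs(t, y):
    return [-y[0] * theta(t)]            # fast variable slaved: Y = Y_eq(n)

t_span, y0 = (0.0, 5.0), [1.0, 0.2]
ts = np.linspace(*t_span, 200)

for rate in [10.0, 100.0, 1000.0]:
    full = solve_ivp(full_rhs, t_span, y0, args=(rate,), method='Radau',
                     t_eval=ts, rtol=1e-8, atol=1e-10)
    red = solve_ivp(reduced_rhs, t_span, [y0[0]], t_eval=ts, rtol=1e-8, atol=1e-10)
    err = np.abs(full.y[1] - Y_eq(red.y[0]))[20:].max()   # compare after transients
    print(f"rate = {rate:7.1f}   max |Y - Y_eq(n)| = {err:.2e}")
# The deviation scales like 1/rate: the lowest-order reduced model is exact in the
# limit of instantaneous relaxation, and the O(1/rate) remainder is the analogue of
# the correction recast as a bulk-viscous pressure in eq. (6.35).
```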
Including the first order corrections to the approximated equations--simply drawing on the results from [170] and appendix B.3--we obtain

\[\frac{d}{dt}\begin{pmatrix}n\\ \varepsilon\end{pmatrix}=\begin{pmatrix}-n\theta\\ -(p^{\rm eq}+\varepsilon+\chi_{d})\theta\end{pmatrix}\, \tag{6.38a}\]

where

\[\chi_{d}=\frac{n}{\gamma}\left(\frac{\partial\beta}{\partial Y_{\rm e}}\right)_{n,\varepsilon}^{-1}\left(\frac{\partial p}{\partial Y_{\rm e}}\right)_{n,\varepsilon}\left[(p^{\rm eq}+\varepsilon)\left(\frac{\partial Y_{\rm e}^{\rm eq}}{\partial\varepsilon}\right)_{n}+n\left(\frac{\partial Y_{\rm e}^{\rm eq}}{\partial n}\right)_{\varepsilon}\right]\theta. \tag{6.39}\]

Now, using the fact that

\[\left(\frac{\partial p}{\partial\beta}\right)_{n,\varepsilon}=\left(\frac{\partial p}{\partial Y_{\rm e}}\right)_{n,\varepsilon}\left(\frac{\partial\beta}{\partial Y_{\rm e}}\right)_{n,\varepsilon}^{-1}\, \tag{6.40}\]

along with the two alternative formulae for \(p_{1}\), eqs. (6.18) and (6.29), we observe the (pleasing) consistency with the result in eq. (6.35). Exactly as when working with the affinity, by integrating out the fast variable we pick up an additional contribution to the pressure that corresponds to a bulk-viscous response.

#### Partially resolved reactions and double counting

Let us now consider the situation where some part of the reactions are slow enough that we can capture them, and the rest is not. We consider the case where we can split the reactions in two families: fast ones (representative of, say, direct Urca processes) that cannot be captured by the numerics, and slow ones (representative of, say, modified Urca processes) that can be resolved. Even though this might be somewhat artificial, the key point is that we assume a clear scale-separation between two types of reactions. We do so as we are only interested in a proof-of-principle argument here. We want to show the problems in constructing a consistent bulk viscous approximation in this case. It would seem reasonable, after the discussion we just had, to model the impact of unresolved reactions as a bulk-viscosity. Whilst this may be a valid strategy, one has to be careful because the multi-scale methods suggest that the resolvable reaction rates should pick up a correction term as well. This also provides a proof-of-principle argument for the "double counting" issue raised by Hammond et al. [104]. The discussion in [104] points out that the effects of under-resolved reactions are already accounted for in schemes aimed at modelling neutrinos, and hence adding a bulk-viscous pressure on top of that may lead to a double-counting. Let us start from the equation for the electron fraction, written in terms of the reaction rate. We define the fast/slow part of the total creation rate as discussed in appendix B.4

\[\Gamma_{f}=\lim_{\epsilon\to 0}(\epsilon\Gamma_{\rm e})\,\qquad\Gamma_{s}=\Gamma_{\rm e}-\frac{1}{\epsilon}\Gamma_{f}\, \tag{6.41}\]

and hence split the electron fraction into its fast and slow contributions according to

\[\frac{d}{dt}Y_{s}=\frac{\Gamma_{s}}{n}\,\quad\text{and}\quad\frac{d}{dt}Y_{f}=\frac{1}{\epsilon}\frac{\Gamma_{f}}{n}. \tag{6.42}\]

We can think of \(\Gamma_{f}\) as a function of \((n,\varepsilon,Y_{s}+Y_{f})\) and define a fast equilibrium fraction \(Y_{f}^{\text{eq}}\) such that \(\Gamma_{f}(n,\varepsilon,Y_{s}+Y_{f}^{\text{eq}})=0\). It is then possible to expand the equation for the fast variable as

\[\frac{d}{dt}Y_{f}=\frac{1}{\epsilon}\frac{\partial\Gamma_{f}}{\partial Y_{\text{e}}}\Big{|}_{Y_{\rm e}=Y_{s}+Y_{f}^{\text{eq}}}(Y_{f}-Y_{f}^{\text{eq}}).
\tag{6.43}\]

Note that, because the two fractions must add up to the total electron fraction, we have \(Y_{f}^{\text{eq}}=Y_{\text{e}}-Y_{s}\), namely the equilibrium fast fraction \(Y_{f}^{\text{eq}}=Y_{f}^{\text{eq}}(n,\varepsilon,Y_{s})\). As for the slow reaction rate, we do not need to expand it around equilibrium because this is assumed to be resolved in the simulation. Applying the results of appendix B (including the first order corrections) we obtain

\[\frac{d}{dt}\begin{pmatrix}n\\ \varepsilon\\ Y_{s}\end{pmatrix}=\begin{pmatrix}-n\theta\\ -(p+\varepsilon+\chi_{d})\theta\\ \frac{\Gamma_{s}}{n}-\frac{1}{n}\left(\frac{\partial\Gamma_{s}}{\partial Y_{\text{e}}}\right)_{n,\varepsilon}\left(\frac{\partial p}{\partial Y_{\text{e}}}\right)_{n,\varepsilon}^{-1}\chi_{d}\end{pmatrix}\, \tag{6.44}\]

with

\[\chi_{d}=\left\{\left[(p+\varepsilon)\left(\frac{\partial Y_{f}^{\text{eq}}}{\partial\varepsilon}\right)_{n,Y_{s}}+n\left(\frac{\partial Y_{f}^{\text{eq}}}{\partial n}\right)_{\varepsilon,Y_{s}}\right]\theta-\left(\frac{\partial Y_{f}^{\text{eq}}}{\partial Y_{s}}\right)_{n,\varepsilon}\frac{\Gamma_{s}}{n}\right\}\left(\frac{\partial\Gamma_{f}}{\partial Y_{\text{e}}}\right)_{n,\varepsilon}^{-1}\left(\frac{\partial p}{\partial Y_{\text{e}}}\right)_{n,\varepsilon}\, \tag{6.45}\]

and everything evaluated at \(Y_{f}=Y_{f}^{\text{eq}}\). This argument then shows that, if there are both fast and slow reactions in the system, and we are trying to capture the effect of the fast/unresolved ones via a bulk-viscosity like contribution, we need to tread carefully, as the introduction of the bulk viscosity also impacts on the resolved reaction rates, and the rates impact on the definition of equilibrium.

### 6.4 Making contact with simulations

Having discussed the approximate equations we obtain from the multi-scales approach, it makes sense to "step back" and ask to what extent we expect this approximation to make sense for numerical simulations. To set the stage for the discussion, let us rewrite eq. (6.9) as

\[\frac{d\beta}{dt}=-\mathcal{A}\beta+\mathcal{B}\frac{dn}{dt}. \tag{6.46}\]

The parabolic limit--which corresponds to neglecting the time derivative of the affinity \(d\beta/dt\)--in the Fourier domain is

\[\beta_{NS}(\omega)=\frac{\mathcal{B}}{\mathcal{A}}n\omega\, \tag{6.47}\]

while the extended irreversible thermodynamics (EIT) result takes the form

\[\beta_{EIT}(\omega)=n\mathcal{BA}\frac{\omega}{\omega^{2}+\mathcal{A}^{2}}=\beta_{NS}(\omega)\frac{\mathcal{A}^{2}}{\mathcal{A}^{2}+\omega^{2}}. \tag{6.48}\]

The two results are compared in fig. 6.1, from which we see that the EIT behaviour shows the "expected" resonance feature [191, 11, 10, 7, 6, 8]. We note, however, that this is not the usual figure (see, for example, figure 2 in [10]), as we are plotting the affinity \(\beta\) and not the bulk-viscosity coefficient. To link the results, we need to recall that \(\chi=p_{1}\beta\) and then define \(\zeta\) via \(\chi=-\zeta\theta\). Then we can use eq. (6.4) to write \(\theta=-\dot{n}/n\) to obtain

\[\zeta_{NS}=p_{1}\frac{\mathcal{B}}{\mathcal{A}}n\,\quad\text{and}\quad\zeta_{EIT}=\zeta_{NS}\frac{\mathcal{A}^{2}}{\omega^{2}+\mathcal{A}^{2}}. \tag{6.49}\]

The difference is subtle as both \(\beta_{EIT}\) and \(\zeta_{EIT}\) present resonant features. However, \(\zeta_{EIT}\) does so when the frequency \(\omega\) is kept fixed and \(\mathcal{A}\) is varied, while \(\beta_{EIT}\) exhibits the resonance even if we fix \(\mathcal{A}\) and vary the frequency.
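This distinction is easy to explore numerically; the short sketch below evaluates the two responses in eqs. (6.47)-(6.48) in arbitrary units (with \(n\mathcal{B}=1\), an assumption made purely for illustration) and locates the resonance at \(\omega=\mathcal{A}\):

```python
# Sketch of the two frequency responses in eqs. (6.47)-(6.48), in arbitrary
# units with n*B = 1, illustrating the resonance feature shown in fig. 6.1.
import numpy as np

A = 1.0                                   # relaxation rate \mathcal{A}
omega = np.logspace(-2, 2, 500) * A       # frequencies around the peak

beta_NS = omega / A                       # parabolic / Navier-Stokes limit
beta_EIT = A * omega / (omega**2 + A**2)  # Cattaneo / EIT response

# The two agree for omega << A; the EIT result peaks at omega = A (value 1/2
# here) and decays beyond, while the Navier-Stokes result keeps growing.
i_pk = np.argmax(beta_EIT)
print(f"peak at omega/A = {omega[i_pk]/A:.2f}, beta_EIT = {beta_EIT[i_pk]:.3f}")
```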
Figure 6.1: Illustrating the behaviour of \(\beta(\omega)\) in two cases: the solution to the Cattaneo equation (i.e. the full extended irreversible thermodynamics (EIT) relaxation towards the Navier-Stokes limit, blue solid curve) and the parabolic case (the Navier-Stokes limit, orange dashed line). The shaded region indicates frequencies we assume we may not have “access” to numerically (this region moves towards higher frequencies when the numerical resolution is increased). The difference may not seem particularly relevant, but the illustration in fig. 6.1 allows us to draw useful conclusions. The figure shows that the parabolic limit (i.e. the limit we represent with the multi-scale argument) is a "good" approximation at low frequencies (we provide a quantitative argument in the next section 6.4.1), but becomes less accurate at higher frequencies7. Keeping in mind that we consider the problem in the context of numerical simulations, we may also note that there will inevitably be frequencies that we do not have access to. This high-frequency cut-off is (schematically) represented by the shaded region in fig. 6.1. Specifically, the resolution is limited by \(\omega\sim\Delta t^{-1}\) where \(\Delta t\) is the numerical timestep. Now, because the resonance frequency is given by \(\omega={\cal A}\), we are left with two options: either the numerical timestep is large enough (\(\Delta t^{-1}<{\cal A}\)) that the peak is not resolved, or the simulation is precise enough that the region to the right of the resonant peak is (at least partially) resolved. In the first case, our simulation cannot resolve the (fast) relaxation towards the Navier-Stokes limit (and it is also likely that the expected instability [115] associated with the Navier-Stokes behaviour will be suppressed). In the second case, the relaxation towards Navier-Stokes can be resolved by the numerics, so working with the Navier-Stokes approximation is just wrong. Footnote 7: The illustration also provides an intuitive demonstration of why the high-frequency behaviour is non-causal and associated with a linear instability [115]. #### What bulk viscous pressure approximation is suitable? In the regime where the reactions need to be approximated as a bulk viscous pressure there are many possible ways in which such an approximation can be constructed. The standard first order form could be used, where the bulk pressure depends on a (data dependent) coefficient multiplied by the expansion associated with the fluid motion [158]. Alternatively, in frequency space the bulk pressure can be written as a term depending on the thermodynamics multiplied by a function of frequency. In this case the "true" (Cattaneo) result differs from the first order ("Navier Stokes") result only in the form of the frequency term. Intuitively we have argued that these two different bulk viscous approximations should be close to each other, as long as we are considering frequencies below the resonant peak. Here we make that argument more quantitative. We are interested in the low frequency part of the pressure that can be captured in a numerical simulation. We will therefore assume that we are only interested in frequencies \(\hat{\omega}=\omega/{\cal A}<1\) ("to the left of the peak"), and that there is a hard cut-off at \(\hat{\omega}_{\Delta}\sim 2\pi/\left({\cal A}\,\Delta t\right)\) (the numerical scheme is spectral like and captures all frequencies available on the grid). 
Therefore \[\chi=\int_{-\infty}^{\infty}{\rm d}\hat{\omega}\,H\left(1-\left|\frac{\hat{ \omega}}{\hat{\omega}_{\Delta}}\right|\right)\,p_{1}\beta=2\int_{0}^{\hat{ \omega}_{\Delta}}{\rm d}\hat{\omega}\,p_{1}\beta. \tag{6.50}\] We want the relative difference between the two bulk viscous pressure approximations, which is \[\mathcal{E}=\frac{|\chi_{\rm Cattaneo}-\chi_{\rm NS}|}{|\chi_{\rm NS}|}. \tag{6.51}\] To compute this we _assume_ that we can bound the correction terms as powers of the frequency, as \[C_{-}\hat{\omega}^{a}<n\mathcal{B}p_{1}<C_{+}\hat{\omega}^{b}\, \tag{6.52}\] where \(0>a>b>-2\) is needed for the results to converge. This seems reasonable for Kolmogorov turbulence, but the range of these coefficients \((C_{\pm},a,b)\) has an impact on the result. From this assumption we have \[2\int_{0}^{\hat{\omega}_{\Delta}}\mathrm{d}\hat{\omega}\,C_{-}\hat{\omega}^{1 +a}<|\chi_{\rm NS}|<2\int_{0}^{\hat{\omega}_{\Delta}}\mathrm{d}\hat{\omega}\,C _{+}\hat{\omega}^{1+b}\, \tag{6.53}\] giving \[\frac{2C_{-}}{2+a}\hat{\omega}_{\Delta}^{2+a}<|\chi_{\rm NS}|<\frac{2C_{+}}{2 +b}\hat{\omega}_{\Delta}^{2+b}. \tag{6.54}\] To bound the pressure difference, first write \[|\chi_{\rm Cattaneo}-\chi_{\rm NS}| =\left|n\mathcal{B}p_{1}\hat{\omega}\left(\frac{\hat{\omega}^{2} }{1+\hat{\omega}^{2}}\right)\right|\] \[=\left|n\mathcal{B}p_{1}\hat{\omega}^{3}\right|+\mathcal{O}(\hat {\omega}^{5}). \tag{6.55}\] From this we get \[\frac{2C_{-}}{4+a}\hat{\omega}_{\Delta}^{4+a}<|\chi_{\rm Cattaneo}-\chi_{\rm NS }|<\frac{2C_{+}}{4+b}\hat{\omega}_{\Delta}^{4+b}. \tag{6.56}\] Using the appropriate bound for numerator and denominator in \(\mathcal{E}\), it follows that \[\mathcal{E}<\frac{C_{+}}{C_{-}}\frac{2+a}{4+b}\hat{\omega}_{\Delta}^{2+b-a}. \tag{6.57}\] As \(a>b\) and \(a-b<2\) we see the difference between the two approximations is, in the frequency range (\(\hat{\omega}_{\Delta}<1\)) of interest, of order \(\mathcal{E}<\mathcal{O}(\hat{\omega}_{\Delta}^{c})\) where \(c\in(0,2)\). If we expect \(a\simeq b\) then \(\mathcal{E}<\mathcal{O}(\hat{\omega}_{\Delta}^{2})\). Therefore the difference between the two approximations will be small until we are close to \(\hat{\omega}_{\Delta}=1\), which is the resonant frequency peak. This supports the arguments made in connection with the results in fig. 6.1. #### How relevant is bulk viscosity in mergers? In the previous sections we have described how a reactive system near equilibrium can be approximated as a single fluid model with a bulk viscous pressure. One question to tackle is whether this approximation is of any relevance for, for instance, a neutron star merger simulation. In order for the argument to be of interest we need multiple criteria to be met. First, the timescales that we can resolve in the simulation must not include the key timescales for the reactions. If we can resolve the reactions then we should, as the bulk viscosity approximation (as well as the Cattaneo-type law eq. (6.9)) only includes the linear effects (in deviations from thermodynamic equilibrium), whilst the reactions account for the full nonlinear behaviour. Besides, the illustration in fig. 6.1 shows that the approximation would simply break down in a resolved simulation. Second, we would require that the correction to the total pressure from the bulk viscous approximation should not be negligible. If the bulk viscous term due to the reactions is tiny in comparison to the standard fluid pressure then it is pointless to include the reactions in the numerical simulation. 
Instead we should impose chemical equilibrium directly, and solve without reactions or a bulk viscosity approximation. Figure 6.2 combines both these criteria in a single plot (based on data for the APR equation of state [195, 210] used in the simulations discussed in [104]). In order for the bulk viscous approximation to be useful the timescale from the simulation must be slower, or at a lower angular frequency, than that given by the peak frequency of the resonance. This peak is defined by \(\omega=\mathcal{A}\). Therefore every point in the density-temperature plane _below_ a contour of fixed \(\mathcal{A}\) has reactions acting on time or frequency scales that can (and hence should) be resolved by the numerical simulation. The contours given show that as the numerical grid resolution is improved, more of the plane should be modelled by solving directly for the reactions. The bottom two contours in the figure bracket frequencies relevant for bulk gravitational wave generation (\(\mathcal{A}=10^{3},10^{4}\) s\({}^{-1}\)), and so must be captured by any numerical simulation. The top contour lines are indicative of the grid frequencies achievable by current simulations (an angular frequency of roughly \(10^{7}\) s\({}^{-1}\) corresponds to a grid spacing of 200m) and the state of the art in maybe ten years (an angular frequency of roughly \(10^{9}\) s\({}^{-1}\) corresponds to a grid spacing of 2m). However, there remain points at densities above \(10^{-2}n_{\rm sat}\) and temperatures above 10MeV where reactions can only be modelled using the bulk viscous approximation, even with the best resolution available in the near future. Current simulations such as [104] do see substantial amounts of matter in the post-merger remnant within this part of phase space, meaning that the bulk viscous approximation will remain necessary for the foreseeable future. In addition, fig. 6.2 shows the maximum relative magnitude of the bulk viscosity pressure, again in the density-temperature plane, with contours of fixed \(\mathcal{A}\) overlaid. In this calculation the bulk viscous approximation requires a range of additional approximations, including assuming the Fermi surface approximation is valid in order to compute the rates. These approximations are laid out in detail in appendix C. However, for the purposes of our argument, the key is to note that we are interested in regions of phase space where the bulk viscosity approximation may be valid (above the contours), and where the bulk viscosity makes a significant contribution to the pressure. We see that this holds again in the region with densities above \(10^{-2}n_{\text{sat}}\) and temperatures above 10MeV. Figure 6.2 also shows a number of additional features. First, on the right hand side we see a vertical line. As this corresponds to a density just above saturation, it is natural to associate it with the APR phase transition. To the left of this there are two features that start vertically at similar densities. The first one goes up until about 10MeV before turning left and going down to just below 1MeV. This is expected to be an artefact due to the way the equation of state is constructed, since different techniques are used in different parts of the parameter space and then cobbled together. The second feature also turns left, but then goes down to about 2MeV before turning right again and going to the top right of the figure. Even though we have not explored this feature in detail, we believe this has to do with points in parameter space where \(\partial\beta/\partial Y_{\rm e}\) goes to zero. We also stress that we see similar features when analogous figures are produced using different (but similarly constructed) equations of state.
As a final important point we note that there is no sharp transition between regions where the effects of the bulk viscous approximation are sizeable and where they are not, when considering contours of fixed \(\mathcal{A}\) (which can be linked to the numerical resolution). Unless the numerical resolution can be increased by many orders of magnitude beyond the current state of the art, there will always be regions in spacetime where the bulk viscosity is significant but the approximation itself is debatable. Therefore we have to consider a scheme that is able to model reactions by directly evolving, for example, the species fractions in some parts of spacetime, that makes the bulk viscous pressure approximation in other parts of the spacetime, and that transitions between the two appropriately. This is difficult to do correctly.

#### The impact of large-eddy filtering

We have demonstrated how the Navier-Stokes limit for bulk viscosity due to reactions can be obtained through a multi-scale argument, effectively integrating out the fast timescales of the problem. The argument is quite intuitive. However, the discussion is not yet complete. In the context of numerical simulations we also have to consider other "filtering" aspects. In particular, we need to explore the link to (or conflict with) the large-eddy strategy. This problem is not straightforward. In a sense, the large-eddy approach is "complementary" to the approximate scheme as it aims to represent the regimes where multi-scale arguments do not apply--namely turbulent flows and when all dynamics take place on fast timescales. One may be tempted to view the multi-scale method and large-eddy space-filtering as "orthogonal". A space filter cuts off short length-scales, while the invariant manifold method integrates out the fast dynamics (removing short timescales). However, the two issues are linked. An actual numerical implementation introduces an implicit filtering associated with the discretized numerical grid. On a grid with fixed spacing \(\Delta x\), the implicit filtering means that any spatial feature on shorter lengthscales cannot be captured and must be modelled by some closure relation, as discussed in [176, 177, 49, 220, 72]. Equally, the CFL bound (linked to causality) imposes that the timestep \(\Delta t\propto\Delta x\), and so any physical feature happening on shorter timescales cannot be captured. Thus, increasing the accuracy by modifying the grid spacing automatically means the \(\Delta t\) in fig. 6.1 decreases, and therefore the amount of the fast reactions that cannot be captured, shown by the grey area, decreases.

Figure 6.2: The maximum (for each point we assume that \(\omega=\mathcal{A}\)) potential relative contribution of the bulk viscous approximation \(\chi^{\rm max}/p^{\rm eq}\) at each point in phase space using the APR [210, 195] equation of state. We see that the bulk viscous pressure contribution can be large for most temperatures when \(n\gtrsim 10^{-3}n_{\rm sat}\). However, the bulk viscous approximation should only be used where the reaction rate cannot be resolved by the numerical simulation, which is where the grid frequency is greater than \(\mathcal{A}\). Also shown are contours at \(\mathcal{A}=\{10^{3},10^{4},10^{7},10^{9}\}\) s\({}^{-1}\) (solid, dashed, dot-dash, dot). For current simulations, frequencies of \(\sim 10^{6}\) s\({}^{-1}\) are resolvable. This shows that the bulk viscous approximation should only be used for \(T\gtrsim 10\)MeV, and becomes less necessary as the grid resolution improves.
There is a direct link between the amount of physics that must be modelled via a spatial filtering and via a time filtering or multi-scale argument. In order to examine how the filtering associated with the large-eddy strategy impacts on the discussion of bulk viscosity, let us frame the argument using the "fibration" framework developed in chapter 4. Let us first consider what happens when filtering is applied to the continuity equations for baryons and electrons, eqs. (6.4) and (6.5). As discussed in section 4.5.1, the equation for the baryons remains unchanged if (and only if!) we choose to work with the density-weighted four-velocity:

\[\nabla_{a}n^{a}=0\Longrightarrow\bar{u}^{a}\nabla_{a}\bar{n}+\bar{n}\nabla_{a}\bar{u}^{a}=0\, \tag{6.58}\]

where \(\langle n^{a}\rangle=\bar{n}\bar{u}^{a}\) defines the filtered baryon number density current (indicated with a bar). In the case of the electrons, we need to introduce the "electron fraction residual"

\[\tau^{a}_{Y_{\rm e}}=\langle Y_{\rm e}n^{a}\rangle-\langle Y_{\rm e}\rangle\langle n^{a}\rangle\, \tag{6.59}\]

and work with a coarse-grained electron fraction, defined by \(\bar{n}_{\rm e}=-\bar{u}_{a}\langle n_{\rm e}u^{a}\rangle\) and leading to

\[\bar{Y}_{\rm e}=\frac{\bar{n}_{\rm e}}{\bar{n}}=-\frac{\bar{u}_{a}}{\bar{n}}\langle n_{\rm e}u^{a}\rangle. \tag{6.60}\]

The filtered equation for the electron fraction then becomes8

Footnote 8: Note that we could choose a different coarse-grained observer in such a way that the effective creation rate is re-absorbed in the macroscopic fluid four-velocity. However, this would come at a cost since the equation for the filtered baryon current would then have an additional diffusion term on the right-hand-side.

\[\bar{n}\bar{u}^{a}\nabla_{a}\bar{Y}_{\rm e}=\langle\Gamma_{\rm e}\rangle-\nabla_{a}\bar{v}^{a}_{\rm e}\, \tag{6.61}\]

where

\[\bar{Y}_{\rm e}=\langle Y_{\rm e}\rangle-\frac{1}{\bar{n}}\bar{u}_{b}\tau^{b}_{Y_{\rm e}}\,\qquad\text{and}\ \bar{v}^{a}_{\rm e}=\perp^{a}_{b}\langle n_{\rm e}u^{b}\rangle=\perp^{a}_{b}\tau^{b}_{Y_{\rm e}}. \tag{6.62}\]

To avoid confusion, let us explicitly state that the orthogonal projection is here defined with respect to the filtered velocity \(\bar{u}^{a}\). The take home message is simple. The large-eddy filtering introduces an effective creation rate in addition to the faithful microphysical one. As the effect of the reactions can be modelled as a bulk-viscosity this may clearly have an impact on the analysis (cf. eq. (6.7)). Let us now turn to the remaining equations of motion, e.g. eq. (6.3), which follow from the conservation of the stress-energy-momentum tensor. As this retains exactly the same form as in the perfect fluid case, we can simply draw on the results from sections 4.5.2 and 4.6. The only difference is in the pressure and the Gibbs relation, as we are now considering a reactive system:

\[\langle p\rangle=\langle\,-\,\varepsilon+Ts+\mu_{\rm n}n-n_{\rm e}\beta\rangle. \tag{6.63}\]

The \(n_{\rm e}\beta\) term was not considered in section 4.7 as the fine-scale model considered there did not account for reactions.
Nonetheless, we may simply adapt the same strategy: introduce an effective three-parameter equation of state at the resolved scale, and use it to define the macroscopic thermodynamic variables

\[\frac{1}{\bar{T}}\doteq\left(\frac{\partial\bar{s}}{\partial\bar{\varepsilon}}\right)_{\bar{n},\bar{Y}_{\rm e}}\, \tag{6.64a}\]
\[-\frac{\bar{\mu}_{\rm n}}{\bar{T}}\doteq\left(\frac{\partial\bar{s}}{\partial\bar{n}}\right)_{\bar{\varepsilon},\bar{Y}_{\rm e}}-\frac{\bar{Y}_{\rm e}}{\bar{n}}\left(\frac{\partial\bar{s}}{\partial\bar{Y}_{\rm e}}\right)_{\bar{n},\bar{\varepsilon}}\, \tag{6.64b}\]
\[\frac{\bar{\beta}}{\bar{T}}\doteq\frac{1}{\bar{n}}\left(\frac{\partial\bar{s}}{\partial\bar{Y}_{\rm e}}\right)_{\bar{n},\bar{\varepsilon}}. \tag{6.64c}\]

With these definitions in hand, we can rewrite the filtered Gibbs relation as

\[\langle p\rangle=-\bar{\varepsilon}+\bar{n}\bar{\mu}_{\rm n}+\bar{T}\bar{s}-\bar{n}\bar{Y}_{\rm e}\bar{\beta}+M\, \tag{6.65}\]

with the enhanced closure term

\[M=\left[\left(\langle n\mu_{\rm n}\rangle-\bar{n}\bar{\mu}_{\rm n}\right)+\left(\langle Ts\rangle-\bar{T}\bar{s}\right)-\left(\langle\varepsilon\rangle-\bar{\varepsilon}\right)-\left(\langle n_{\rm e}\beta\rangle-\bar{n}\bar{Y}_{\rm e}\bar{\beta}\right)\right]. \tag{6.66}\]

It makes sense to introduce the effective pressure \(\bar{p}=\langle p\rangle-M\) as this will satisfy a Gibbs relation of the pre-filtered form (but now in terms of the coarse-grained equation of state and the associated variables). Then, because the filtered energy and Euler equations will contain \(\langle p\rangle\), the \(M\) term will enter the final equations as a correction to the pressure--effectively providing a bulk-viscous-like contribution. Let us stress that while we are only providing a minimal discussion of a large-eddy model, this is everything we need here. To make real progress we would need to introduce an explicit closure scheme and perform numerical experiments, both of which go beyond the scope of this thesis. The essence of the argument is that the large-eddy filtering introduces an "effective bulk-viscosity" contribution to the coarse-grained equations. This happens in (at least) two ways: i) through the residual term \(M\) stemming from filtering the Gibbs relation; and ii) by adding an effective creation rate. The two effects are not (necessarily) linked as they depend on the introduced closure relations. In particular, the effective creation rate (which follows from the four-divergence of \(\bar{v}_{\rm e}^{a}\)) affects the effective restoring term \(\gamma\), and in turn \(\mathcal{A}\)--as is evident from inspection of eq. (6.10). As the resonance frequency in fig. 6.1 is given by \(\omega=\mathcal{A}\), and this essentially dictates whether or not the Navier-Stokes approximation is applicable, it makes sense to consider applying the filtering and the multi-scale approach at the same time. We can intuitively see (and check explicitly) that the coarse-grained equations will have a bulk-viscous contribution stemming from having integrated out the electron fraction degrees of freedom, and one from the filtering. The analytic expressions of these two terms depend on the order with which we take the steps: either we apply the multi-scale methods first and then filter, or the other way around. The results are unlikely to be the same. To see this, simply note that the multi-scale/invariant manifold method essentially boils down to an approximation of the equations around the equilibrium surface \(Y_{\rm e}=Y_{\rm e}^{\rm eq}\).
If we take the filtering step first, the notion of equilibrium changes--both because the (potentially different) equation of state is evaluated in terms of the coarse-grained variables, and because of the effective rate. This highlights the importance of including all the relevant physics when constructing the closure terms in a large-eddy model. This is problematic as the best closure terms require direct fine-scale numerical simulations, and the analysis in this chapter shows these expensive simulations need repeating each time additional physics is added. It is bound to be an expensive endeavour.

### 6.5 Summary and Outlook

Binary neutron star mergers offer a unique opportunity to explore several extremes of physics, but we need to improve our numerical simulation technology if we want to realize the discovery potential of future gravitational-wave instruments (required to catch the high-frequency dynamics of the merger events). In particular, we need to make sure that efforts to infer the detailed physics are not stumped by systematic errors associated with the numerical implementation. An important step towards realism involves dealing with nuclear reactions. Motivated by this, we have considered the issue of reactions from the perspective of numerical simulations (and the associated limited resolution), aiming to provide "new insights on an old problem". Specifically, we have discussed to what extent it makes sense to capture the net effect of reactions via a bulk viscosity prescription, taking explicitly into account the issue of resolution limitations. In essence, we assessed the impact of the reaction timescales on the way we should frame the modelling, and represent the net effect of reactions. Our key messages link the reaction timescales to the numerical (grid) timescales. When the reactions are slow the ground truth result is found by evolving the reactions directly. When the timescales are comparable, particularly when the physical timescales are (slightly) faster, this is not numerically practical. Instead the evolution system must be approximated. To leading order the reactions relax the system to equilibrium instantly. However, the error incurred is proportional to the ratio of the timescales. The first order (in the ratio of scales) approximation introduces correction terms that act as a bulk viscosity. This demonstrates how a multi-component reactive system can be approximated as a dissipative single fluid. This bulk viscous structure emerges regardless of whether we formulate the problem in terms of \(\beta\) (which would be natural from a thermodynamics perspective) or \(Y_{\rm e}\) (to connect more directly with simulations). In a neutron star merger simulation the ratio of scales covers all ranges, with the scales being comparable (and hence a bulk viscosity approximation necessary) particularly in the core shortly after merger (see [12, 10] and cf. fig. 6.2). We also showed that the prescription for the bulk viscosity--either algebraically in a "Navier-Stokes" like fashion, or by providing an equation of motion in a "Cattaneo" or "Israel-Stewart" like fashion--is irrelevant, as they agree (to the order of the approximation) in the regime where the approximation is useful (at low frequencies where the ratio of scales is comparable). Finally, we demonstrated explicitly how the equations of motion are modified when some, but not all, of the scales are comparable.
This flags up how the introduction of a bulk viscosity (either directly through approximating reactions, or indirectly through using large-eddy simulation techniques) can conflict with the definition of equilibrium. Consistently accounting for these issues to avoid "double-counting" is possible, as outlined in section 6.3.3, but difficult. Our main conclusions are intuitive but this is the first time that they have been spelled out in detail. Contrasting theoretical work against the simulations in [104], we find that different regions of the parameter space relevant for mergers would require different prescriptions. In particular, there are regions of the density-temperature phase space where reactions are slow enough that they can (and hence, should) be captured directly, and other regions where they are not--even with the best resolution available, now and in the foreseeable future--with no sharp boundaries between the two regimes. This indicates that, to properly account for reactions, we need to develop numerical codes capable of handling both regimes (reactions fast/slow compared to the resolved timescales), and the transition between them on the fly. The issue of bulk viscosity is also closely linked to the role of large-eddy filtering--which enters the discussion implicitly or explicitly. Noting this, we provided general filtering arguments that help set the stage for further work, and highlighted the coupling between bulk viscosity and large-eddy modelling. A "definitive" prescription for reactive systems will require explicit numerical experiments and the introduction of an appropriate closure model. As a final comment, let us note that there have been recent papers supporting the idea that bulk viscosity effects can be important for gravitational waves [105, 159], and also papers suggesting it has no impact at all [178, 227]. In particular, different approaches are used to tackle the stiffness issue, where reactions happen faster than the numerical scheme can stably capture them. Even though we would have to know the fingerprints of the different numerical implementations to fully understand the origin of such discrepancies, we can add a final comment that may be part of the story. By checking numerically the formal accuracy of the (bulk viscous) approximation detailed above, we found that the expected accuracy results are recovered when the system is started close to the equilibrium manifold. However, for the accuracy result to hold, additional boundary layer effects need to be accounted for if the system is kicked far out of equilibrium--as suggested, for example, by the simulations of [104].

## Chapter 7 Magneto-rotational instability in mergers: a local Newtonian analysis

In the previous chapter we focused on modelling bulk viscosity from the perspective of numerical simulations. We also touched upon the links to turbulence and filtered models. In this chapter, instead, we will focus on the magneto-rotational instability or, as will become clear as we proceed, on more general magneto-shear instabilities. We do so as we often see this kind of instability in action--the Kelvin-Helmholtz instability is, for example, responsible for the formation of billow clouds--and because such instabilities provide one of the most important mechanisms for developing and sustaining turbulence. Binary neutron star mergers are no exception. Before we march on, let us point out that the analysis in this chapter is _Newtonian_, in marked contrast with the rest of this work.
The reason, quite naturally, is that the magneto-rotational instability is typically discussed in a Newtonian setting, with almost no exception to the best of our knowledge. The magneto-rotational instability was discovered by Balbus and Hawley in the early 1990s [31, 108, 33] (linking to earlier ideas from, for example, Chandrasekhar [63] and Velikhov [217]). Due to the fast instability growth rate, this mechanism is considered the most promising candidate for developing/sustaining magneto-hydrodynamic turbulence in accretion disks as well as explaining enhanced angular momentum transfer [33, 198]. The instability is due to an interplay between the magnetic field and a sheared background flow. With few exceptions (see for example [147] and [197]) and due to its "local" nature, the magneto-rotational instability is discussed in the so-called "shearing sheet approximation" [95, 113]. That is, the instability is established in a frame that corotates with a fiducial point in the mid-plane of the undisturbed disk (see also [98]). This is convenient both for analytical studies as well as numerical analysis since local simulations can reach much higher resolution than global ones (see [109, 228] and references therein). Although originally discussed in the context of accretion disks, the magneto-rotational instability is expected to play a role also in neutron-star mergers [71, 204, 169, 150, 112, 127], especially for sustaining a magneto-turbulent state in the outer envelope of the remnant, where the Kelvin-Helmholtz instability is less significant or, indeed, not active [126]. To assess whether or not the magneto-rotational instability is active and resolved in merger simulations, criteria discussed/established in the context of accretion disks [110, 111, 200] are often used. However, because binary neutron star mergers are highly dynamical environments, framing a discussion of the magneto-rotational instability using criteria that exploit restrictive symmetry conditions might be misleading. Motivated by this, we aim to discuss the impact of relaxing common assumptions--well-motivated in the accretion disk scenario, like an axi-symmetric and circular background flow--on the magneto-shear instability. The results presented in this chapter will be published in [60].

### 7.1 Background gradients and plane-wave expansion

Let us begin by observing that the magneto-rotational instability is, in some sense, a "global instability analyzed with local tools". The local nature is evident since the instability is established by means of a dispersion relation (hence involves a plane-wave expansion and, by assumption, a short-wavelength approximation). At the same time, one may appreciate the "global nature" of the instability by recalling the key aspects of the instability: the addition of a weak magnetic field turns axisymmetric modes (which would otherwise be hydrodynamically stable) unstable. The global axisymmetry of the background, then, plays a crucial role as the relevant hydrodynamic stability criterion--the Rayleigh criterion [180]--applies to axisymmetric modes only. Although the standard derivation of the instability does not highlight this subtlety, this aspect becomes apparent if we formulate the problem using a co-rotating local frame (cf. the discussion in appendix D). With these points in mind, let us spell out how we intend to discuss the magneto-shear instability without referring to a given axisymmetric and circular background.
Consistently with the shearing box idea [95, 113], the strategy is to zoom in on a small region of fluid--small enough for the analysis to be local but large enough to allow for a meaningful hydrodynamic description. We then set up a local Cartesian frame comoving with the background flow--so that the background velocity vanishes at the origin of the local box. As this frame moves around with the flow--and hence cannot be expected to be inertial--we need to consider the (at this point Newtonian) ideal magneto-hydrodynamics equations in a non-inertial frame. The non-inertial equations will then be perturbed--retaining gradients in the background quantities as explained below--and a local WKB-type dispersion relation will be derived and studied. This way we can account for the effects of a sheared background and its interplay with the magnetic field in a general setting. Strictly speaking, the plane-wave expansion only makes sense for a homogeneous background --that is, the plane-wave amplitude is assumed to vary over the same scales as the background. At the same time, we know that a sheared background is key to the magneto-rotational instability. Therefore, given any quantity/field \(a\), we first write it as a sum of background plus perturbations \[a=A+\delta A\, \tag{7.1}\] and then introduce a WKB-type expansion of the form [213, 28] \[\delta A=\bar{\delta}\left(\sum_{q=0}\epsilon^{q}\bar{A}_{q}\right)e^{i\theta /\epsilon}\, \tag{7.2}\] with book-keeping parameters \(\bar{\delta}\) and \(\epsilon\) (see also [168]). The former (\(\bar{\delta}\)) is introduced to measure the relative magnitude of background vs. perturbations, while the latter (\(\epsilon\)) is given by \(\epsilon\approx\lambda/L\) where \(\lambda\) is the typical wavelength of the waves and \(L\) is the typical lengthscale over which the wave amplitude, polarization and wavelength vary. Having split the perturbations into amplitude and phase, we follow the standard convention [155] and stick all "post-geometric optics" corrections into the amplitude \(\bar{A}_{q}\). With this Ansatz, the background equations are obtained by collecting all terms of order \(\mathcal{O}(\bar{\delta}^{0},\epsilon^{0})\), while the perturbation equations are obtained collecting terms of order \(\mathcal{O}(\bar{\delta}^{1},\epsilon^{0})\). Terms of higher order in \(\epsilon\) correspond to post-geometric optics, while those of higher order in \(\bar{\delta}\) represent non-linear perturbations. Along with this WKB-type Ansatz, we need to introduce the concept of fast and slowly varying quantities. Given a specific choice of coordinates, a quantity is slow in the variable \(x\) if \(A=A(X)\) where \(X=\epsilon x\) while it is fast if \(A=A(x)\). Deciding which quantities are fast or slow corresponds to specifying (in a qualitative manner) the background configuration. As an illustration, consider the simple toy problem \[a(\partial_{x}b+\partial_{x}c)=0\, \tag{7.3}\] together with the Ansatz from eq. (7.2). Let us first assume that both \(B\) and \(C\) are fast, so that \(\partial_{x}B\approx\mathcal{O}(\bar{\delta}^{0},\epsilon^{0})\) and similarly for \(C\). The background equation is then \[A\left(\partial_{x}B+\partial_{x}C\right)=0. \tag{7.4}\] If we instead assume that, say, \(B\) is fast while \(C\) is slow, then \(\partial_{x}B\approx\mathcal{O}(\bar{\delta}^{0},\epsilon^{0})\) while \(\partial_{x}C\approx\mathcal{O}(\bar{\delta}^{0},\epsilon)\) and the background equation becomes \[A\partial_{x}B=0. 
\tag{7.5}\]

Clearly, the two problems are different already at the background level. Let us now turn to the linear perturbations. Because we have explicitly introduced the book-keeping parameter \(\epsilon\) in eq. (7.2), we take all amplitude terms as well as the phase to be slowly varying. Then, to order \(\mathcal{O}(\bar{\delta},\epsilon^{0})\) we have

\[\left(A+\bar{\delta}\bar{A}_{0}e^{i\theta/\epsilon}\right)\partial_{x}\left[B+\bar{\delta}\bar{B}_{0}e^{i\theta/\epsilon}+C+\bar{\delta}\bar{C}_{0}e^{i\theta/\epsilon}\right]=0. \tag{7.6}\]

Assuming again that the background quantity \(B\) is fast, while \(C\) is slow, the perturbation equation becomes

\[\bar{A}_{0}\left(\partial_{x}B\right)e^{i\theta/\epsilon}+A\left(\bar{B}_{0}\partial_{x}e^{i\theta/\epsilon}+\bar{C}_{0}\partial_{x}e^{i\theta/\epsilon}\right)=0. \tag{7.7}\]

Next, Taylor expanding the phase--which is slowly varying--we get

\[\frac{\theta(x)}{\epsilon}\approx\frac{\theta(0)}{\epsilon}+\frac{\partial\theta}{\partial X}\Big{|}_{X=0}x+\cdots=\theta(0)/\epsilon+k_{x}x+\mathcal{O}(\epsilon)\, \tag{7.8}\]

where we define the wave-vector \(k_{x}=\partial\theta/\partial X\) from the first order term in the expansion, while the overall constant can be neglected. Then, introducing an analogous expansion for the fast background gradients \(\partial_{x}B(x)=\partial_{x}B(0)+\mathcal{O}(\epsilon)\) we end up with

\[\bar{A}_{0}(\partial_{x}B)+A\left(ik_{x}\bar{B}_{0}+ik_{x}\bar{C}_{0}\right)=0\, \tag{7.9}\]

where both \(\partial_{x}B\) and \(A\) are evaluated at a point (conveniently chosen as the origin of the coordinate system). Therefore, if all background quantities are "slow", we get back the dispersion relation we would have obtained ignoring all background gradients. This is quite intuitive. However, the strategy also allows us to account for the impact that "fast" background gradients have on the dispersion relation. In short, as long as these terms are treated as constants, we may retain them and work out a dispersion relation in the usual way.

### 7.2 The slowly evolving background

The starting point for any hydrodynamic perturbation analysis is the choice/identification of a stationary background flow configuration, which is then perturbed in order to establish stability (or not). Here, we want to frame the analysis of the magneto-shear instability without considering a specific background configuration (with constraining symmetries) stated from the outset. Nonetheless, we need to clarify how we can refer to a suitable "background" in highly dynamical environments like binary neutron star mergers. Given real numerical simulation data, this discussion will inevitably involve some kind of filtering operation. Anticipating that this can be done in a meaningful way, we consider perturbations evolving rapidly with respect to the evolution time-scale of an unspecified "background" flow. To make this statement more precise, let us consider the inertial ideal MHD equations and introduce reference values for each quantity (indicated with an "\(r\)" subscript) such as \(\rho=\rho_{r}\tilde{\rho}\).
We introduce the (dimensionless) Strouhal, Mach, Froude and magnetic interaction numbers as

\[\varepsilon_{\text{St}}=\frac{l_{r}}{t_{r}v_{r}}\,\quad\varepsilon_{\text{Ma}}=\frac{v_{r}}{c_{r}}\,\quad\varepsilon_{\text{Fr}}=\frac{v_{r}}{\sqrt{\Phi_{r}}}\,\quad\varepsilon_{B}=\frac{B_{r}^{2}}{\mu_{0}\rho_{r}v_{r}^{2}}\, \tag{7.10}\]

where \(l_{r},\ t_{r},\ v_{r}\) are characteristic lengthscale, timescale and velocity (respectively) while \(B_{r},\ \Phi_{r},\ \rho_{r}\) are reference values for the magnetic field, gravitational potential and density, and \(c_{r}\) is the (adiabatic) speed of sound. This way, the non-dimensional inertial ideal MHD equations read (now dropping the "tilde"s for notational clarity)

\[\varepsilon_{\text{St}}\,\partial_{t}\rho=-\rho\nabla_{i}v^{i}-v^{i}\nabla_{i}\rho\, \tag{7.11a}\]
\[\varepsilon_{\text{St}}\,\partial_{t}B^{i}=-v^{j}\nabla_{j}B^{i}+B^{j}\nabla_{j}v^{i}-B^{i}\nabla_{j}v^{j}\, \tag{7.11b}\]
\[\varepsilon_{\text{St}}\,\partial_{t}v^{i}=-v^{j}\nabla_{j}v^{i}-\frac{1}{\varepsilon_{\text{Ma}}^{2}}\frac{1}{\rho}\nabla^{i}\rho-\frac{1}{\varepsilon_{\text{Fr}}^{2}}\nabla^{i}\Phi+\varepsilon_{B}\frac{1}{\rho}\left[B^{j}\nabla_{j}B^{i}-\nabla^{i}\left(\frac{B^{2}}{2}\right)\right]. \tag{7.11c}\]

From this we see that a generic flow configuration can be considered slowly evolving (in time) as long as the corresponding Strouhal number is small. In practice, given a characteristic lengthscale \(l_{r}\) and velocity \(v_{r}\) of a generic flow, we consider disturbances evolving on timescales \(t_{r}\) such that \(\varepsilon_{\text{St}}\ll 1\)--over which the background can be effectively taken as stationary. In turn, this determines the time-scales over which we expect the following results to be reliable.

#### Velocity gradient decomposition

In the following we will consider the impact that gradients in the background flow velocity have on the time evolution of perturbations. It is then convenient to introduce the standard decomposition of the velocity gradient into expansion, shear and vorticity. Even though this has been used many times before in this thesis, it makes sense to write it down explicitly here as we are working in a Newtonian setting. The three-vector velocity gradient decomposition in the Newtonian case reads

\[\nabla_{i}v_{j}=\frac{1}{3}\theta g_{ij}+\sigma_{ij}+\omega_{ij}\, \tag{7.12}\]

where

\[\theta=\nabla_{i}v^{i}\,\quad\sigma_{ij}=\nabla_{(i}v_{j)}-\frac{1}{3}\theta g_{ij}=\frac{1}{2}\left(\nabla_{i}v_{j}+\nabla_{j}v_{i}\right)-\frac{1}{3}\theta g_{ij}\, \tag{7.13a}\]

and

\[\omega_{ij}=\nabla_{[i}v_{j]}=\frac{1}{2}\left(\nabla_{i}v_{j}-\nabla_{j}v_{i}\right). \tag{7.13b}\]

In order to bring out the magneto-shear nature of the instability, we will consider the impact of having a background with non-negligible shear and vorticity separately. We will, however, not consider the impact of a background expansion rate as exact non-linear results are sufficient to predict this. In fact, due to the Alfvén theorem, we know that the magnetic intensity must grow in a (ideal magneto-)fluid undergoing compression as the field lines are squeezed together. Similarly, the field will get weaker in an expanding fluid. In essence, we expect--and have verified explicitly--this non-linear prediction to emerge in the analysis as a generic "instability". The background magnetic field cannot grow in time as it is assumed to be slowly evolving by construction, so the required growth must be represented by perturbations.
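As a concrete illustration of the decomposition in eqs. (7.12)-(7.13), the sketch below splits an example Cartesian velocity gradient into expansion, shear and vorticity. The chosen numbers are arbitrary, but satisfy \(\partial_{x}v^{x}=-\partial_{y}v^{y}\) with negligible \(z\)-derivatives, realizing the mainly two-dimensional, shear-only configuration discussed next:

```python
# Sketch: split a velocity gradient into expansion, shear and vorticity as in
# eqs. (7.12)-(7.13), and inspect the shear matrix properties used later.
import numpy as np

def decompose(grad_v):
    """grad_v[i, j] = nabla_i v_j in local Cartesian coordinates."""
    theta = np.trace(grad_v)                                       # expansion
    sigma = 0.5 * (grad_v + grad_v.T) - (theta / 3.0) * np.eye(3)  # shear
    omega = 0.5 * (grad_v - grad_v.T)                              # vorticity
    return theta, sigma, omega

# Example gradient with dvx/dx = -dvy/dy and negligible z-derivatives:
# a mainly two-dimensional, shear-only flow as in eqs. (7.14)-(7.17).
grad_v = np.array([[0.3,  0.5, 0.0],
                   [0.5, -0.3, 0.0],
                   [0.0,  0.0, 0.0]])

theta, sigma, omega = decompose(grad_v)
print("expansion  =", theta)                        # 0: negligible expansion
print("vorticity  =", np.abs(omega).max())          # 0: symmetric gradient
print("det(sigma) =", np.linalg.det(sigma))         # 0: one zero eigenvalue
print("eigenvalues:", np.linalg.eigvalsh(sigma))    # opposite pair plus zero
```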
In what follows, we will first analyse the problem analytically, and come back to discuss the link to/relevance for numerical simulations in the concluding remarks in section 7.6. Before we move on though, it is useful to take a brief detour and consider a realization of a flow with only non-negligible shear. Because we are interested in flows that are "slowly evolving" we can start by assuming that \(v^{i}=v^{i}(\epsilon t,x,y,z)\) and suppress the time dependence in the following. We then take the velocity vector as mainly two-dimensional, specifically in the \(x-y\) plane of a set of local Cartesian coordinates

\[\mathbf{v}=v^{x}(x,y,z)\hat{x}+v^{y}(x,y,z)\hat{y}+\mathcal{O}(\epsilon)\, \tag{7.14}\]

where, in order to make sure the expansion is small, we take \(\partial_{x}v^{x}=-\partial_{y}v^{y}+\mathcal{O}(\epsilon)\). The shear matrix is then given by

\[\sigma=\begin{pmatrix}\partial_{x}v^{x}&\frac{1}{2}\left(\partial_{x}v^{y}+\partial_{y}v^{x}\right)&\frac{1}{2}\partial_{z}v^{x}\\ \frac{1}{2}\left(\partial_{x}v^{y}+\partial_{y}v^{x}\right)&-\partial_{x}v^{x}&\frac{1}{2}\partial_{z}v^{y}\\ \frac{1}{2}\partial_{z}v^{x}&\frac{1}{2}\partial_{z}v^{y}&0\end{pmatrix}+\mathcal{O}(\epsilon)\, \tag{7.15}\]

while the curl of \(\mathbf{v}\) becomes

\[\nabla\times\mathbf{v}=-\left(\partial_{z}v^{y}\right)\hat{x}+\left(\partial_{z}v^{x}\right)\hat{y}+\left(\partial_{x}v^{y}-\partial_{y}v^{x}\right)\hat{z}. \tag{7.16}\]

This has to be of \(\mathcal{O}(\epsilon)\) for background flows with only non-negligible shear, in which case

\[\partial_{z}v^{x}=\partial_{z}v^{y}=\mathcal{O}(\epsilon)\,\qquad\partial_{y}v^{x}=\partial_{x}v^{y}+\mathcal{O}(\epsilon), \tag{7.17}\]

and as a result the determinant of the shear matrix vanishes (more precisely, is of order \(\mathcal{O}(\epsilon)\)). This is equivalent to saying that two eigenvalues of the shear matrix are opposite and the third is zero (to order \(\mathcal{O}(\epsilon)\)). In essence, a mainly two-dimensional flow with negligible expansion and vorticity is characterized by a shear matrix with one zero eigenvalue, and hence a vanishing determinant. This will turn out to be a useful observation later on. We also note that having negligible expansion (although relevant for the present analysis) is not strictly necessary to the argument.

### 7.3 Non-inertial equations and the local frame

The ideal magneto-hydrodynamic equations above hold in an inertial frame. As an observer locally comoving with the fluid cannot be expected to be inertial (in general) we need to consider the equations according to a non-inertial observer. The non-inertial MHD equations have been derived earlier, so we will here build on the results of section 5.2. However, as we are interested in a local analysis, we now take a step further and make contact with the concept of the local frame associated with an observer (see [101, 155]). In doing so we, for a moment, go back to using concepts and notation that are common in relativity. This is required as we need to make contact with the non-inertial induction equation derived in section 5.2 using a relativistic language. Given an observer worldline with tangent \(u^{a}\), the local frame is constructed by considering three spatial unit vectors \(e^{a}_{\hat{i}}\) that complete \(u^{a}\) to an orthonormal basis on the tangent space at a point (see appendix A for more details).
These three spatial vectors are then transported along the worldline according to eq. (7.18), where \(a^{b}\) is the four-acceleration of \(u^{a}\) (an intrinsic property of the worldline) and \(\Omega^{ab}\) is the _arbitrary_ four-rotation of the local frame. Let us then look at the non-inertial induction equation (eq. (7.19)), which we report here for convenience1.

Footnote 1: We drop the last two terms in eq. (5.24), cf. discussion at the end of section 5.2.

Focusing on the first term, and using eq. (7.18), we obtain eq. (7.20). The second term vanishes due to the orthogonal projection, while the remaining contribution reduces to eq. (7.21)2.

Footnote 2: Note that we are here identifying the vorticity of the fibration observer with the four-rotation of the local frame chosen.

In practice, the term involving the four-rotation of the frame drops out of the induction equation. We also note that, because we are now considering the non-inertial equations in the local frame of a single observer, there is no shear or expansion and the induction equation in the Newtonian limit simplifies to

\[\partial_{t}\mathbf{B}=\nabla\times\left(\mathbf{v}\times\mathbf{B}\right). \tag{7.22}\]

At the Newtonian level then, the induction equation in the local frame of a generic observer retains the same form as for an inertial one. This is similar to the case of the Lorentz force (entering the Euler equation) and the Ampere law. Let us nonetheless stress that additional terms involving the four-acceleration of the observer worldline do appear at the special relativistic level, even though working with the ideal MHD induction equation may be somewhat controversial in a fully-relativistic regime (cf. discussion in section 5.2). When it comes to the non-inertial terms in the Euler equations, these are obviously well known: we have to account for fictitious accelerations. We refer to [101] for a rigorous derivation of the fictitious forces in special relativity, showing also how additional terms involving the observer four-acceleration enter the relativistic expressions. We also stress that working with a rotating or non-rotating local frame is entirely a matter of choice (see [155]). At the local Newtonian level then, we can always get rid of the non-inertial terms associated with the frame rotation. As the linear acceleration of the observer drops out of the perturbation equations, this means we can effectively work with the inertial equations. We conclude this section by noting that, as previously anticipated, some kind of filtering operation is key to separate between background and fluctuations in a highly dynamical environment. Postponing a discussion of this to section 7.6, let us simply note at this point that the notion of local frame discussed here is clearly linked to the covariant filtering procedure discussed in chapter 4.

### 7.4 Going back to hydrodynamics

As briefly hinted at in section 7.1, the magneto-rotational instability relies on the _hydrodynamic_ stability of axisymmetric modes. The generic instability problem is more involved. If we relax the symmetry assumptions on the background, we need to consider the fact that hydrodynamic shear flows tend to be unstable. That is, we expect instabilities to appear already at the hydrodynamic level. Clearly, such instabilities would be affected by a magnetic field but not caused by it in the first place. This is an important distinction seeing as the magneto-rotational instability is _due to_ the presence of the magnetic field. With this observation in mind, let us first consider the fluid problem. This will be useful for two reasons: First, it will allow us to get a better grasp on the magnetic field impact on the instability.
Second, it will allow us to make contact with the Rayleigh criterion (and ultimately the magneto-rotational instability). As the fluid problem is much simpler than the magneto-fluid one, we will study the case where both shear and vorticity gradients are retained, and also discuss the impact of shear viscosity--either of microphysical origin or due to filtering as in the Smagorinsky model (or, more generally, in the so-called eddy-viscosity type models, see section 4.7.2). Shear viscosity is introduced in the usual way (see, for example, Landau and Lifshitz [134]), and the shear viscosity coefficient \(\eta\) will be considered constant, consistently with the local analysis. Before we move on to discuss the perturbation equations and the resulting dispersion relation(s) though, it is worth stressing that, in many situations of interest, the relevant dynamics is either sub- or supersonic. As such, for these problems it is worth considering models that filter out modes that are either faster or slower than the sound waves. This can be done starting from a fully compressible dispersion relation and taking either of two limits: either we assume the speed of sound to be very large, in which case the model becomes sound-proof (we point to [216] for more details), or very small. In the following, we typically work in the sound-proof limit, noting that the MRI is commonly discussed within the so-called Boussinesq approximation [34], thus removing fast magneto-sonic waves [31]. Starting from the continuity equation, perturbing it and introducing the plane-wave expansion we readily obtain

\[\partial_{t}\delta\rho+\delta\rho\nabla_{i}v^{i}+v^{i}\nabla_{i}\delta\rho+\rho\nabla_{i}\delta v^{i}=0\Longrightarrow-i\omega\delta\rho+i\rho k_{i}\delta v^{i}=0\, \tag{7.23}\]

where \(\omega\) and \(k_{i}\) are defined as in section 7.1. Note that we set \(v^{i}=0\) as we evaluate the relation at the centre of the local box, and assume that the background expansion rate \(\nabla_{i}v^{i}\) can be neglected. In a similar fashion, the perturbed Euler equation including a shear-viscous term gives

\[\partial_{t}\delta v_{i}+\delta v^{j}\nabla_{j}v_{i}+\frac{1}{\rho}\nabla_{i}\delta P-\delta\left(\eta\nabla^{j}\tau_{ji}\right)=0\\ \Longrightarrow-i\omega\delta v_{i}+i\frac{c_{s}^{2}}{\rho}k_{i}\delta\rho+\sigma_{ij}\delta v^{j}+\epsilon_{ijk}W^{j}\delta v^{k}-\delta\left(\eta\nabla^{j}\tau_{ji}\right)=0\, \tag{7.24}\]

where \(\tau_{ji}\) is the rate-of-strain/shear tensor and \(W^{i}=1/2\epsilon^{ijk}\omega_{jk}\). In working this out we retained gradients in the background flow only, used the velocity gradient decomposition (section 7.2.1), introduced the adiabatic speed of sound \(c_{s}^{2}=\partial P/\partial\rho\), and considered the gravitational potential to be externally sourced (hence neglecting its perturbations). In order to derive the dispersion relation and study the effects of a sheared background, it is convenient to choose a basis that is adapted to it. Because the shear is a trace-free symmetric matrix, we know there exists a basis (in the tangent space) whereby

\[\sigma^{ij}=\text{diag}\left(\sigma_{1},\sigma_{2},-(\sigma_{1}+\sigma_{2})\right). \tag{7.25}\]

We will make use of this basis to write down the coefficient matrix of the linearized system. Before doing so, however, it is reasonable to wonder whether this change of basis has any impact on the perturbation equations.
We are, of course, always free to choose a basis in the tangent space that is not associated with the coordinates chosen, but this (in general) introduces additional terms in the covariant derivative. Let us spell out why this is not the case here. Working with a non-coordinate basis, we need to account for spin coefficients when a derivative acts on vectors and tensors. The spin coefficients are given by the sum of two terms (see the formula in the Notation chapter, or [50] for more details). The first involves the Christoffel symbols associated with the coordinates chosen, and thus vanishes as we are working with a non-rotating Cartesian frame. The second term instead stems from the fact that the change-of-basis matrix (translating the coordinate basis into the shear-adapted one) may differ from point to point. In the context of this analysis, however, we are looking at scales smaller than those over which background quantities vary. In essence, this second term also vanishes, as the shear matrix is (by construction) constant over the local region of fluid we are zooming in on. Working in the shear-adapted basis, we write the coefficient matrix of the linearized system as \[\mathbf{M}=\begin{pmatrix}-\omega&\rho k_{1}&\rho k_{2}&\rho k_{3}\\ \frac{c_{s}^{2}}{\rho}k_{1}&-\omega-i\sigma_{1}-i\eta L_{1}&iW^{3}-\frac{i}{6}\eta k_{1}k_{2}&-iW^{2}-\frac{i}{6}\eta k_{1}k_{3}\\ \frac{c_{s}^{2}}{\rho}k_{2}&-iW^{3}-\frac{i}{6}\eta k_{2}k_{1}&-\omega-i\sigma_{2}-i\eta L_{2}&iW^{1}-\frac{i}{6}\eta k_{2}k_{3}\\ \frac{c_{s}^{2}}{\rho}k_{3}&iW^{2}-\frac{i}{6}\eta k_{3}k_{1}&-iW^{1}-\frac{i}{6}\eta k_{3}k_{2}&-\omega+i\sigma_{1}+i\sigma_{2}-i\eta L_{3}\end{pmatrix}\,, \tag{7.26}\] where \(L_{1}=\frac{2}{3}k_{1}^{2}+\frac{1}{2}k_{2}^{2}+\frac{1}{2}k_{3}^{2}\) and \(L_{2}\), \(L_{3}\) are similarly defined. The dispersion relation is computed by taking the determinant of this matrix and equating it to zero. In order to keep the discussion as general as possible (i.e. without having to refer to a specific background configuration) we will decompose the coefficients of the characteristic polynomial in terms of scalars built from background quantities. In the simplest cases this can be done "by eye", but the procedure can easily become quite messy. The logic is nonetheless simple: we group the different terms in each coefficient according to the power of the various background quantities; for example, we group all the terms quadratic in the shear and wave-vector components. We then build all the possible scalars that are quadratic in the shear and wave-vector, and look for the correct linear combination of them. This logic can easily be implemented in a computer algebra program such as Mathematica (see the sketch below for a minimal example). We now discuss the dispersion relations obtained by retaining only shear terms, both shear and viscous terms, and lastly shear and vorticity terms. Before doing so, we observe that the coefficients will involve scalars constructed from the shear matrix only. Like any \(3\times 3\) matrix, the shear matrix \(\sigma\) has three invariants \[I_{1}=\text{Tr}(\sigma)\,\quad I_{2}=\frac{1}{2}\left[(\text{Tr}(\sigma))^{2}-\text{Tr}(\sigma^{2})\right]\,\quad I_{3}=\det(\sigma)\, \tag{7.27}\] related via the Cayley-Hamilton theorem as \[\sigma^{3}-I_{1}\sigma^{2}+I_{2}\sigma-I_{3}\mathbb{I}=0\, \tag{7.28}\] where \(\mathbb{I}\) is the \(3\times 3\) identity matrix. Because the shear matrix is trace-free, we will write the coefficients in terms of \(\nicefrac{1}{2}\text{Tr}(\sigma^{2})\) and \(\det(\sigma)\).
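To make this step concrete, here is a minimal sketch of the computer-algebra procedure just described, using SymPy in place of Mathematica (all variable names are ours, introduced purely for illustration). It builds the matrix of eq. (7.26) in the inviscid, vorticity-free case (\(\eta=0\), \(W^{i}=0\)), takes the determinant, and checks the resulting coefficients against eqs. (7.29)-(7.30) below.

```python
# Minimal SymPy sketch of the coefficient-decomposition step, for the
# inviscid (eta = 0), vorticity-free (W = 0) version of eq. (7.26).
import sympy as sp

w = sp.symbols('omega')
cs, rho = sp.symbols('c_s rho', positive=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)
s1, s2 = sp.symbols('sigma1 sigma2', real=True)
s3 = -(s1 + s2)                     # the shear matrix is trace-free
I = sp.I

M = sp.Matrix([
    [-w,            rho*k1,     rho*k2,     rho*k3],
    [cs**2*k1/rho, -w - I*s1,   0,          0],
    [cs**2*k2/rho,  0,         -w - I*s2,   0],
    [cs**2*k3/rho,  0,          0,         -w - I*s3],
])
poly = sp.Poly(sp.expand(M.det()), w)   # dispersion relation: det(M) = 0

# Scalars built from background quantities (in the shear-adapted basis)
ksq  = k1**2 + k2**2 + k3**2                       # k^2
trs2 = s1**2 + s2**2 + s3**2                       # Tr(sigma^2)
dets = s1*s2*s3                                    # det(sigma)
skk  = s1*k1**2 + s2*k2**2 + s3*k3**2              # sigma_ij k^i k^j
s2kk = s1**2*k1**2 + s2**2*k2**2 + s3**2*k3**2     # (sigma^2)_ij k^i k^j

# Check against the coefficients of eqs. (7.29)-(7.30)
a2 = -cs**2*ksq + sp.Rational(1, 2)*trs2
a1 = I*(cs**2*skk - dets)
a0 = cs**2*(s2kk - sp.Rational(1, 2)*trs2*ksq)
for n, a in [(2, a2), (1, a1), (0, a0)]:
    assert sp.expand(poly.coeff_monomial(w**n) - a) == 0
```

The viscous and vorticity terms can be added in exactly the same way, at the cost of messier coefficient bookkeeping.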
We begin our analysis by considering a background with negligible vorticity, and set the shear viscosity to zero. The resulting dispersion relation is \[\omega^{4}+a_{2}\omega^{2}+a_{1}\omega+a_{0}=0\, \tag{7.29}\] with \[a_{2}=-c_{s}^{2}k^{2}+\frac{1}{2}\text{Tr}(\sigma^{2})\, \tag{7.30a}\] \[a_{1}=i\left[c_{s}^{2}\sigma_{ij}k_{i}k_{j}-\det(\sigma)\right]\, \tag{7.30b}\] \[a_{0}=c_{s}^{2}\left[\sigma_{ij}^{2}k_{i}k_{j}-\frac{1}{2}\text{Tr}(\sigma^{2})k^{2}\right]. \tag{7.30c}\] We then take the sound-proof limit (i.e. we retain only terms proportional to the speed of sound) and consider the case \(\det(\sigma)=0\), looking for modes such that \(\sigma^{ij}k_{j}=0\). Recalling that, as discussed in section 7.2.1, a mainly two dimensional flow with negligible vorticity is characterized by having a shear matrix with vanishing determinant--that is \(\det(\sigma)\sim\mathcal{O}(\epsilon)\)--and noting that we can choose the orientation of the local axes in such a way that the background flow is, say, along the \(\hat{x}\), \(\hat{y}\) directions, we can always consider the determinant to be zero. This means that there always exists a wave-vector living in the eigen-space corresponding to the zero eigenvalue. Then we end up with4 Footnote 4: We note that we obtain the same dispersion relation also in the opposite limit where the speed of sound is tiny. \[\omega^{2}=-\frac{1}{2}\text{Tr}(\sigma^{2})\Longrightarrow\omega=\pm i\sqrt{\frac{1}{2}\text{Tr}(\sigma^{2})}. \tag{7.31}\] That is, such modes are non-propagating, and half of them are unstable with a growth rate independent of the wave-vector. Next, we consider wave-numbers such that \(\sigma_{ij}k_{i}k_{j}=0\), noting that such modes will always exist. In the shear-adapted basis they are characterized by \(k^{1}=k^{2}=k^{3}\) if the determinant is not vanishing (that is \(\sigma_{1}\neq\sigma_{2}\)), and \(k_{1}=k_{2}\) when it does. It follows that for such modes \[-\frac{1}{2}\text{Tr}(\sigma^{2})+\sigma_{ij}^{2}\hat{k}^{i}\hat{k}^{j}=-\frac{1}{6}\text{Tr}(\sigma^{2})\, \tag{7.32}\] where \(\hat{k}={\bf k}/|{\bf k}|\). We then obtain \[\omega^{2}=-\frac{1}{6}{\rm Tr}(\sigma^{2})\Longrightarrow\omega=\pm i\sqrt{\frac{1}{6}{\rm Tr}(\sigma^{2})}. \tag{7.33}\] These modes are also non-propagating, and half of them are unstable with a (constant) growth rate smaller by a factor of \(\sqrt{3}\). As the dispersion relation is quadratic (in the sound-proof limit), we can explicitly solve it and confirm the expectation (and well-known fact) that shearing flows are generically unstable. Let us now build on this and discuss how vorticity and shear viscosity impact on the generic instability of sheared flows. First we consider the case where the background has negligible vorticity but non-vanishing shear viscosity. As a sanity check, we observe that if we also set the background shear to be negligible, and take the sound-proof limit5 we obtain Footnote 5: We have also verified, using the Routh-Hurwitz criterion (see [131] and appendix E), that the same result holds true in general, not only in the sound-proof limit. \[\omega^{2}+i\eta k^{2}\omega-\frac{1}{4}\eta^{2}(k^{2})^{2}=\left(\omega+\frac{i}{2}\eta k^{2}\right)^{2}=0\, \tag{7.34}\] with stable roots provided \(\eta>0\). We recall that when viscosity is of microphysical origin, \(\eta>0\) follows from the second law of thermodynamics (see [134, 21]).
If viscosity is instead due to filtering, a positive value of \(\eta\) corresponds to an eddy-type model where energy is cascading to smaller/unresolved scales (see [193, 136, 143]). With this observation in mind, let us go back to the case with both shear and viscosity, in which case the dispersion relation is \[\omega^{4}+a_{3}\omega^{3}+a_{2}\omega^{2}+a_{1}\omega+a_{0}=0\, \tag{7.35}\] with \[a_{3}=\frac{5}{3}i\eta k^{2}\, \tag{7.36a}\] \[a_{2}=-c_{s}^{2}k^{2}+\frac{1}{2}{\rm Tr}(\sigma^{2})-\frac{1}{12}\eta\left[11\eta(k^{2})^{2}-2\sigma_{ij}k_{i}k_{j}\right]\, \tag{7.36b}\] \[a_{0}=c_{s}^{2}\left[\sigma_{ij}^{2}k_{i}k_{j}-\frac{1}{2}{\rm Tr}(\sigma^{2})k^{2}-\frac{1}{2}\eta k^{2}\sigma_{ij}k_{i}k_{j}+\frac{1}{4}\eta^{2}(k^{2})^{3}\right]\, \tag{7.36c}\] and \[a_{1}=ic_{s}^{2}\left[\sigma_{ij}k_{i}k_{j}-\eta(k^{2})^{2}\right]+i\left\{-\frac{1}{6}\left[\sigma_{ij}^{2}k_{i}k_{j}-2{\rm Tr}(\sigma^{2})k^{2}\right]+\frac{1}{12}\eta^{2}k^{2}(\sigma_{ij}k_{i}k_{j})-\frac{1}{6}\eta^{3}(k^{2})^{3}-\det(\sigma)\right\}. \tag{7.36d}\] As before, we first consider modes such that \(\sigma_{ij}\hat{k}_{j}=0\), whose dispersion relation is \[\omega^{2}+i\eta k^{2}\omega-\frac{1}{4}\left(\eta^{2}(k^{2})^{2}-2\text{Tr}(\boldsymbol{\sigma}^{2})\right)=0. \tag{7.37}\] Assuming \(\eta>0\), stability corresponds to \[\eta^{2}(k^{2})^{2}-2\text{Tr}(\boldsymbol{\sigma}^{2})>0. \tag{7.38}\] In essence, comparing this to eq. (7.31) we see that viscosity tends to stabilize shear-unstable modes, with a larger impact at smaller scales. This makes intuitive sense. Next, consider modes such that \(\sigma_{ij}\hat{k}_{i}\hat{k}_{j}=0\), whose dispersion relation is \[\omega^{2}+i\eta k^{2}\omega-\frac{1}{12}\left(3\eta^{2}(k^{2})^{2}-2\text{Tr}(\boldsymbol{\sigma}^{2})\right)=0\, \tag{7.39}\] where we made use of eq. (7.32). As before, these modes--to be compared with their counterparts in eq. (7.33)--are also stable (assuming \(\eta>0\)) provided the last term in the previous equation is negative. That is, provided the wave-number is sufficiently large. We have verified that the same trend is true for generic wave-vectors. In essence, we learn (as one may have expected) that shear viscosity generically slows the growth rate of unstable shear modes, and stabilizes modes with small enough wavelengths. Turning to the case where the background has non-negligible vorticity and shear, the dispersion relation is \[\omega^{4}+a_{2}\omega^{2}+a_{1}\omega+a_{0}=0\, \tag{7.40}\] with \[a_{2}=-c_{s}^{2}k^{2}-\mathbf{W}^{2}+\frac{1}{2}\text{Tr}(\boldsymbol{\sigma}^{2})\, \tag{7.41a}\] \[a_{1}=i\left[c_{s}^{2}\sigma_{ij}k_{i}k_{j}-\det(\boldsymbol{\sigma})-\sigma_{ij}W_{i}W_{j}\right]\, \tag{7.41b}\] \[a_{0}=c_{s}^{2}\left[\sigma_{ij}^{2}k_{i}k_{j}-\frac{1}{2}\text{Tr}(\boldsymbol{\sigma}^{2})k^{2}+(\mathbf{k}\cdot\mathbf{W})^{2}\right]. \tag{7.41c}\] Taking the sound-proof limit, we first observe that the fastest growing modes encountered before, namely those characterized by \(\sigma_{ij}\hat{k}_{j}=0\), are not guaranteed to exist anymore, as the determinant of the shear matrix cannot be assumed to be negligible in general (see section 7.2.1). Should these modes exist, though, their dispersion relation would be \[\omega^{2}=-\frac{1}{2}\text{Tr}(\boldsymbol{\sigma}^{2})+(\hat{k}\cdot\mathbf{W})^{2}\Longrightarrow\omega=\pm i\sqrt{\frac{1}{2}\text{Tr}(\boldsymbol{\sigma}^{2})-(\hat{k}\cdot\mathbf{W})^{2}}\, \tag{7.42}\] and we see, comparing this to eq. (7.31), that vorticity tends to stabilize them.
We also observe that--in contrast to shear viscosity--vorticity affects all such modes by reducing their growth rate in a way that does not depend on their wave-number (although the direction of propagation is important). Next--and also because the modes we just looked at may not exist--we consider modes such that \(\sigma_{ij}\hat{k}_{i}\hat{k}_{j}=0\) obtaining \[\omega^{2}=-\frac{1}{6}\text{Tr}(\sigma^{2})+(\hat{k}\cdot\mathbf{W})^{2}. \tag{7.43}\] Comparing this to eq. (7.33), we observe again that vorticity tends to stabilize such modes in a way that does not depend on their wave-number. We have verified that the same trend is also true for generic wave-vectors. As a final point, it is easy to verify that the case with only background vorticity is generally stable (not only in the sound-proof limit). In summary, a sheared background flow is generically unstable already at the hydrodynamic level--a well-known fact. Building on this, we have considered the impact that shear viscosity and/or vorticity have on the instability of the possible hydrodynamic modes. The results show that shear viscosity tends to weaken the instability in general, with larger effects for larger wave-numbers. Meanwhile, vorticity has a stabilizing effect which does not depend on the wave-number. Finally, let us also point to appendix D.2, where the general dispersion relation derived here is shown to encompass the classic Rayleigh stability criterion.

### 7.5 Magneto-shear instability in the local frame

Having explored the hydrodynamic case, let us perturb the corresponding MHD equations and study the impact of the magnetic field on the generic shear instabilities we encountered. We consider a barotropic equation of state and retain gradients in the background velocity only, as we want to focus on the magneto-shear nature of the instability (cf. [108, 200]). The continuity equation is obviously unchanged, while the perturbed Euler equation becomes \[\partial_{t}\delta v_{i}+\delta v^{j}\nabla_{j}v_{i}+\frac{1}{\rho}\nabla_{i}\delta P+\frac{1}{\mu_{0}\rho}\left[B_{j}\nabla_{i}\delta B^{j}-B^{j}\nabla_{j}\delta B_{i}\right]=0\\ \Longrightarrow-i\omega\delta v_{i}+i\frac{c_{s}^{2}}{\rho}k_{i}\delta\rho+\frac{i}{\mu_{0}\rho}\left[(B_{j}\delta B^{j})k_{i}-(B^{j}k_{j})\delta B_{i}\right]+\sigma_{ij}\delta v^{j}+\epsilon_{ijk}W^{j}\delta v^{k}=0. \tag{7.44}\] Finally, the perturbed induction equation is \[\partial_{t}\delta B^{i}+B^{i}\nabla_{j}\delta v^{j}-B^{j}\nabla_{j}\delta v^{i}-\delta B^{j}\nabla_{j}v^{i}+\delta B^{i}\nabla_{j}v^{j}=0\\ \Longrightarrow-i\omega\delta B^{i}+iB^{i}(k_{j}\delta v^{j})-i(B^{j}k_{j})\delta v^{i}-\sigma^{ij}\delta B_{j}-\epsilon^{ijk}W_{k}\delta B_{j}+\frac{2}{3}\theta\delta B^{i}=0. \tag{7.45}\] We will now discuss the linearized system that follows from these equations. In order to keep the discussion tidy, we will first recap the mode analysis for the homogeneous case and then move on to consider a background with non-negligible shear and vorticity (separately).

#### Homogeneous background: a recap

In order to derive the fully compressible dispersion relation for the homogeneous case, we first re-scale the magnetic field as \[\mathbf{v}_{A}\doteq\frac{\mathbf{B}}{\sqrt{\mu_{0}\rho}}\,\quad\delta\mathbf{v}_{A}\doteq\frac{\delta\mathbf{B}}{\sqrt{\mu_{0}\rho}}\, \tag{7.46}\] and introduce a convenient basis \(\{\hat{v}_{A},\,\hat{q},\,\hat{s}\}\) where \(\hat{v}_{A}=\mathbf{v}_{A}/|\mathbf{v}_{A}|\) while \(\hat{q},\hat{s}\) complete it to an orthonormal basis.
For instance, assuming \(\mathbf{v}_{A}\) is not aligned with \(\mathbf{k}\) we can construct it as \[\mathbf{q}=\mathbf{k}-\left(\mathbf{k}\cdot\hat{v}_{A}\right)\hat{v}_{A}\,\qquad\hat{q}=\frac{\mathbf{q}}{|\mathbf{q}|}\,\qquad\hat{s}=\hat{v}_{A}\times\hat{q}\, \tag{7.47}\] so that6 Footnote 6: If the wave-vector is along the background magnetic field we just have to set \(k^{q}=0\) in the following. \[\mathbf{k}=k^{v_{A}}\hat{v}_{A}+k^{q}\hat{q}. \tag{7.48}\] The coefficient matrix of the linearized system can then be written as (cf. eqs. (7.23), (7.44) and (7.45) and ignore background vorticity and shear) \[\mathbf{M}=\begin{pmatrix}\mathbf{A}&\mathbf{C}\\ \mathbf{C}^{\top}&\mathbf{D}\end{pmatrix}\, \tag{7.49}\] with \[\mathbf{A}=\begin{pmatrix}-\omega&\rho k_{v_{A}}&\rho k_{q}&0\\ \frac{c_{s}^{2}}{\rho}k_{v_{A}}&-\omega&0&0\\ \frac{c_{s}^{2}}{\rho}k_{q}&0&-\omega&0\\ 0&0&0&-\omega\end{pmatrix}\,\quad\mathbf{C}=\begin{pmatrix}0&0&0\\ 0&0&0\\ bk_{q}&-bk_{v_{A}}&0\\ 0&0&-bk_{v_{A}}\end{pmatrix}\, \tag{7.50a}\] and \[\mathbf{D}=\begin{pmatrix}-\omega&0&0\\ 0&-\omega&0\\ 0&0&-\omega\end{pmatrix}\, \tag{7.50b}\] where \(b=|\mathbf{v}_{A}|\). As \(\mathbf{D}\) is clearly invertible, we can reduce \(\mathbf{M}\) into factors via the Schur complement \[\begin{pmatrix}\mathbf{A}&\mathbf{C}\\ \mathbf{C}^{\top}&\mathbf{D}\end{pmatrix}=\begin{pmatrix}\mathbf{I}_{4}&\mathbf{C}\mathbf{D}^{-1}\\ \mathbf{0}_{3\times 4}&\mathbf{I}_{3}\end{pmatrix}\begin{pmatrix}\mathbf{A}-\mathbf{C}\mathbf{D}^{-1}\mathbf{C}^{\top}&\mathbf{0}_{4\times 3}\\ \mathbf{0}_{3\times 4}&\mathbf{D}\end{pmatrix}\begin{pmatrix}\mathbf{I}_{4}&\mathbf{0}_{4\times 3}\\ \mathbf{D}^{-1}\mathbf{C}^{\top}&\mathbf{I}_{3}\end{pmatrix}\, \tag{7.51}\] and then compute the determinant as \[\det(\mathbf{M})=\det(\mathbf{D})\det(\mathbf{A}-\mathbf{C}\mathbf{D}^{-1}\mathbf{C}^{\top}). \tag{7.52}\] The resulting dispersion relation is \[-\omega\left(\omega^{2}-\left(\mathbf{v}_{A}\cdot\mathbf{k}\right)^{2}\right)\left[\omega^{4}-\left(v_{A}^{2}+c_{s}^{2}\right)k^{2}\omega^{2}+c_{s}^{2}k^{2}\left(\mathbf{v}_{A}\cdot\mathbf{k}\right)^{2}\right]=0\, \tag{7.53}\] where the roots of the quadratic polynomial correspond to Alfvén waves, while those of the quartic one in square brackets describe (fast and slow) magneto-sonic waves [86]. Before moving on to discuss the impact of shear and vorticity, let us briefly note what happens to the modes when we take the sound-proof limit--where the speed of sound is large. From eq. (7.53) we see that fast magneto-sonic waves are filtered out, while the slow ones reduce to Alfvén waves. In the opposite limit--when disturbances are much faster than the sound waves--the dispersion relation describes Alfvén waves and the low-\(c_{s}\) limit of fast magneto-sonic waves. This limit corresponds to ignoring fluid pressure perturbations while retaining variations in the magnetic pressure.

#### Sheared Background

Let us now consider the case where the background vorticity is negligible while shear terms are not. Re-scaling the magnetic field as in eq. (7.46) and decomposing eqs.
(7.23), (7.44) and (7.45) (ignoring vorticity terms) as well as \(\delta\mathbf{v}\) and \(\delta\mathbf{v}_{A}\) in the shear-adapted basis, the coefficient matrix of the linearized system of equations reads \[\begin{pmatrix}-\omega&\rho k_{1}&\rho k_{2}&\rho k_{3}&0&0&0\\ \frac{c_{s}^{2}}{\rho}k_{1}&-\omega-i\sigma_{1}&0&0&I_{1}&v_{A}^{2}k_{1}&v_{A}^{3}k_{1}\\ \frac{c_{s}^{2}}{\rho}k_{2}&0&-\omega-i\sigma_{2}&0&v_{A}^{1}k_{2}&I_{2}&v_{A}^{3}k_{2}\\ \frac{c_{s}^{2}}{\rho}k_{3}&0&0&-\omega-i\sigma_{3}&v_{A}^{1}k_{3}&v_{A}^{2}k_{3}&I_{3}\\ 0&I_{1}&v_{A}^{1}k_{2}&v_{A}^{1}k_{3}&-\omega+i\sigma_{1}&0&0\\ 0&v_{A}^{2}k_{1}&I_{2}&v_{A}^{2}k_{3}&0&-\omega+i\sigma_{2}&0\\ 0&v_{A}^{3}k_{1}&v_{A}^{3}k_{2}&I_{3}&0&0&-\omega+i\sigma_{3}\end{pmatrix}\, \tag{7.54}\] where \[I_{1}=v_{A}^{1}k_{1}-\left(\mathbf{v}_{A}\cdot\mathbf{k}\right)\,\qquad\sigma_{3}=-(\sigma_{1}+\sigma_{2})\, \tag{7.55}\] and \(I_{2},I_{3}\) are defined similarly. In a similar fashion to the hydrodynamic case considered above, we will decompose the coefficients of the characteristic polynomial in terms of scalars built from background quantities. As we might have expected, the resulting dispersion relation is a complicated seventh-degree polynomial (and we have sanity-checked that it reduces to the homogeneous case when the shear terms are set to vanish). In order to learn something useful out of it, we then consider the sound-proof limit and retain only terms proportional to the speed of sound. We end up with the following dispersion relation \[a_{5}\omega^{5}+a_{4}\omega^{4}+a_{3}\omega^{3}+a_{2}\omega^{2}+a_{1}\omega+a_{0}=0\, \tag{7.56}\] with \[a_{0}=-i\bigg{\{}\det(\boldsymbol{\sigma})\Big{[}\sigma_{ij}^{2}k^{i}k^{j}-\frac{1}{2}\text{Tr}(\boldsymbol{\sigma}^{2})k^{2}\Big{]}+(\mathbf{v}_{A}\cdot\mathbf{k})^{2}\Big{[}\det(\boldsymbol{\sigma})k^{2}-\frac{1}{2}(\sigma_{ij}k^{i}k^{j})\text{Tr}(\boldsymbol{\sigma}^{2})\Big{]}+(\mathbf{v}_{A}\cdot\mathbf{k})^{4}\sigma_{ij}k^{i}k^{j}\bigg{\}}\, \tag{7.57a}\] \[a_{1}=\bigg{\{}(\mathbf{v}_{A}\cdot\mathbf{k})^{4}k^{2}+(\mathbf{v}_{A}\cdot\mathbf{k})^{2}\left[\sigma_{ij}^{2}k^{i}k^{j}-\text{Tr}(\boldsymbol{\sigma}^{2})k^{2}\right]+\det(\boldsymbol{\sigma})\left(\sigma_{ij}k^{i}k^{j}\right)+\frac{1}{2}\text{Tr}(\boldsymbol{\sigma}^{2})\left[\frac{1}{2}\text{Tr}(\boldsymbol{\sigma}^{2})k^{2}-\sigma_{ij}^{2}k^{i}k^{j}\right]\bigg{\}}\, \tag{7.57b}\] \[a_{2}=i\bigg{\{}-\frac{1}{2}(\sigma_{ij}k^{i}k^{j})\text{Tr}(\boldsymbol{\sigma}^{2})+\det(\boldsymbol{\sigma})k^{2}+2(\mathbf{v}_{A}\cdot\mathbf{k})^{2}(\sigma_{ij}k^{i}k^{j})\bigg{\}}\, \tag{7.57c}\] \[a_{3}=\left[\text{Tr}(\boldsymbol{\sigma}^{2})k^{2}-2(\mathbf{v}_{A}\cdot\mathbf{k})^{2}k^{2}-\sigma_{ij}^{2}k^{i}k^{j}\right]\, \tag{7.57d}\] \[a_{4}=-i(\sigma_{ij}k^{i}k^{j})\, \tag{7.57e}\] \[a_{5}=k^{2}. \tag{7.57f}\] As in the hydrodynamic case considered earlier, we first consider the case \(\det(\boldsymbol{\sigma})=0\), and look for modes such that \(\sigma^{ij}k_{j}=0\). It is then easy to see that the general dispersion relation in eq. (7.56) simplifies to (ignoring a trivial root) \[\left[\omega^{2}+\frac{1}{2}\text{Tr}(\boldsymbol{\sigma}^{2})-(\mathbf{v}_{A}\cdot\mathbf{k})^{2}\right]^{2}=0. \tag{7.58}\] Comparing to the corresponding hydrodynamic modes in eq. (7.31), we immediately see that the magnetic field tends to have a stabilizing effect (provided it is not orthogonal to the wave-vector, in which case it has no effect whatsoever).
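As a quick numerical illustration of this trend (a sketch with purely illustrative numbers, not taken from the analysis above), we can scan \(|\mathbf{v}_{A}\cdot\mathbf{k}|\) in eq. (7.58) and watch the growth rate shut off once \(|\mathbf{v}_{A}\cdot\mathbf{k}|\) exceeds \(\sqrt{\text{Tr}(\boldsymbol{\sigma}^{2})/2}\):

```python
# Growth rate of the sigma^{ij} k_j = 0 modes from eq. (7.58):
# omega^2 = (v_A . k)^2 - Tr(sigma^2)/2. Numbers are illustrative only.
import numpy as np

tr_s2 = 1.0                                  # units in which Tr(sigma^2) = 1
vak = np.linspace(0.0, 1.0, 6)               # |v_A . k| in the same units
omega = np.emath.sqrt(vak**2 - 0.5 * tr_s2)  # complex sqrt; Im(omega) = growth rate

for x, om in zip(vak, omega):
    state = "unstable" if om.imag > 0 else "stable"
    print(f"|v_A . k| = {x:.1f} -> Im(omega) = {om.imag:.3f} ({state})")
```

At \(\mathbf{v}_{A}\cdot\mathbf{k}=0\) this recovers the hydrodynamic growth rate of eq. (7.31), while modes with sufficiently large \(|\mathbf{v}_{A}\cdot\mathbf{k}|\) are stable, consistent with the stabilizing trend shown in fig. 7.1.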
Next, we take (again, as before) \(\det(\boldsymbol{\sigma})=0\) and consider modes such that \(\sigma^{ij}k_{i}k_{j}=0\) (but \(\sigma^{ij}k_{j}\neq 0\)). The relevant dispersion relation can then be written (making use of eq. (7.32)) \[\omega^{4}+b_{2}\omega^{2}+b_{4}=0\, \tag{7.59}\] with \[b_{2}=\frac{2}{3}\text{Tr}(\boldsymbol{\sigma}^{2})-2(\mathbf{v}_{A}\cdot\mathbf{k})^{2}\, \tag{7.60a}\] and \[b_{4}=\frac{1}{12}\text{Tr}(\boldsymbol{\sigma}^{2})^{2}-\frac{2}{3}\text{Tr}(\boldsymbol{\sigma}^{2})(\mathbf{v}_{A}\cdot\mathbf{k})^{2}+(\mathbf{v}_{A}\cdot\mathbf{k})^{4}. \tag{7.60b}\] The same stabilizing effect of the magnetic field is evident from fig. 7.1, where both the frequency and \(|\mathbf{v}_{A}\cdot\mathbf{k}|\) are plotted in units of \(\sqrt{\text{Tr}(\boldsymbol{\sigma}^{2})}\). The key point here is that, while the background shear is required for the instability (the vanishing-shear modes are stable Alfvén waves in the sound-proof limit), the magnetic field is not the main driver. This is evident from the results, as the imaginary part of the unstable modes remains finite in the limit \(\mathbf{v}_{A}\to 0\), and the limiting value coincides with the hydrodynamic result (from the previous section). This observation, possibly unexpected at first sight, deserves a thorough discussion, and we will return to this issue in section 7.6.1. Before we expand on this, let us stress that the results make intuitive sense. The magnetic field impacts on the instability in that it breaks the hydrodynamic isotropy and dampens the growth of unstable modes propagating along magnetic field lines. This also suggests that shear-instability driven turbulence is isotropic in the hydrodynamic case but inherently anisotropic for magnetized flows, consistent with the overall picture discussed in section 5.1. Before moving on, it is also worth noting that the background velocity profile considered by Balbus & Hawley [31, 33] is characterized by having a shear matrix with vanishing determinant (and expansion rate), and also that for axisymmetric modes \(\sigma^{ij}k_{i}k_{j}=0\), while for the fastest growing MRI modes (propagating vertically) \(\sigma^{ij}k_{j}=0\).

Figure 7.1: Real and imaginary part of the solutions of eq. (7.59), with both the frequency and \(|\mathbf{v}_{A}\cdot\mathbf{k}|\) in units of \(\sqrt{\text{Tr}(\boldsymbol{\sigma}^{2})}\). The solutions plotted correspond to the fastest growing modes evolving on top of an MHD sheared background. We see that the magnetic field has a stabilizing effect, as the growth rates are reduced with respect to those of the corresponding hydrodynamic modes. The stabilizing effect is all the more pronounced the more the wave-vector is aligned with the magnetic field lines, and is switched off for modes propagating in the directions perpendicular to the magnetic field lines. In particular, modes corresponding to sufficiently large values of \(|\mathbf{v}_{A}\cdot\mathbf{k}|\) are rendered stable.

#### Background with vorticity

Before we make contact with the usual MRI and the Rayleigh criterion, let us also consider the case with non-negligible background vorticity only. We re-scale the magnetic field as in eq. (7.46) and introduce a convenient basis \(\{\hat{W},\,\hat{q},\,\hat{s}\}\), where \(\hat{W}=\mathbf{W}/|\mathbf{W}|\) while \(\hat{q},\hat{s}\) complete it to an orthonormal basis.
For instance, assuming \(\mathbf{v}_{A}\) is not aligned with \(\mathbf{W}\) we can construct it as \[\mathbf{q}=\mathbf{v}_{A}-\left(\mathbf{v}_{A}\cdot\hat{W}\right)\hat{W}\,\qquad\hat{q}=\frac{\mathbf{q}}{|\mathbf{q}|}\,\qquad\hat{s}=\hat{W}\times\hat{q}\, \tag{7.61}\] and decompose the magnetic field as7 Footnote 7: Note that the definition of \(\hat{q}\) changes when the background magnetic field is aligned with the vorticity, even though in what follows we would simply have to set \(v_{A}^{q}=0\). \[\mathbf{v}_{A}=v_{A}^{W}\hat{W}+v_{A}^{q}\hat{q}. \tag{7.62}\] The coefficient matrix of the linearized system then is (cf. eqs. (7.23), (7.44) and (7.45) and ignore shear terms) \[\begin{pmatrix}-\omega&\rho k^{W}&\rho k^{q}&\rho k^{s}&0&0&0\\ \frac{c_{s}^{2}}{\rho}k^{W}&-\omega&0&0&-v_{A}^{q}k^{q}&v_{A}^{q}k^{W}&0\\ \frac{c_{s}^{2}}{\rho}k^{q}&0&-\omega&+iW&v_{A}^{W}k^{q}&-v_{A}^{W}k^{W}&0\\ \frac{c_{s}^{2}}{\rho}k^{s}&0&-iW&-\omega&v_{A}^{W}k^{s}&v_{A}^{q}k^{s}&-\left(v_{A}^{W}k^{W}+v_{A}^{q}k^{q}\right)\\ 0&-v_{A}^{q}k^{q}&v_{A}^{W}k^{q}&v_{A}^{W}k^{s}&-\omega&0&0\\ 0&v_{A}^{q}k^{W}&-v_{A}^{W}k^{W}&v_{A}^{q}k^{s}&0&-\omega&-iW\\ 0&0&0&-\left(v_{A}^{W}k^{W}+v_{A}^{q}k^{q}\right)&0&+iW&-\omega\end{pmatrix}. \tag{7.63}\] As for the sheared case, after having sanity-checked the result by contrasting it against the homogeneous background dispersion relation, we take the sound-proof limit. The sound-proof dispersion relation can then be written as \[\omega^{4}+b_{2}\omega^{2}+b_{4}=0\, \tag{7.64}\] with \[b_{2}=-\left[W^{2}+(\hat{k}\cdot\mathbf{W})^{2}+2(\mathbf{v}_{A}\cdot\mathbf{k})^{2}\right]\, \tag{7.65a}\] \[b_{4}=\left[(\mathbf{v}_{A}\cdot\mathbf{k})^{2}+W^{2}\right]\left[(\hat{k}\cdot\mathbf{W})^{2}+(\mathbf{v}_{A}\cdot\mathbf{k})^{2}\right]. \tag{7.65b}\] As this is a particularly simple quartic polynomial, we can study the stability of its roots analytically. Considering eq. (7.64) as an equation for \(\omega^{2}\) and computing the discriminant we obtain \[\left[W^{2}-(\hat{k}\cdot\mathbf{W})^{2}\right]^{2}\geq 0\, \tag{7.66}\] so that the \(\omega^{2}\)-roots are real. As complex roots of a real algebraic polynomial occur in pairs of complex conjugates, complex \(\omega^{2}\)-roots would correspond to an instability. In order to have stable roots though, we also need other conditions to be met. We, in fact, need \(b_{2}<0\) and \(b_{4}>0\) to make sure that the \(\omega^{2}\)-roots are real and positive, so that the \(\omega\)-roots are real as well. As this is evidently the case, we conclude that magnetized flows are generically stable in this case. This feature is unchanged from the corresponding fluid case, so that it is reasonable to expect that the same trend we discussed for the purely hydrodynamical case will also apply to the magnetized case with both shear and vorticity: vorticity tends to stabilize shear-unstable modes in a manner independent of the wave-number.

### 7.6 Concluding remarks: The MRI in perspective

We set out with the intention of discussing the magneto-rotational instability in a general background, relaxing the symmetry constraints associated with the standard analysis and possibly deriving an instability "criterion" relevant for (highly) dynamical environments and nonlinear simulations. However, having set up the analysis (and the required tools) in an arguably sensible way, we arrived at results which were not in line with the "naive" expectations. Given this, it makes sense to comment on the implications.
Moreover, we need to highlight an important "missing ingredient" in the discussion: the need to involve some suitable filtering operation to make the discussion sensible in the first place. We will deal with each of these questions in turn, starting with the implications of our results for the MRI.

#### The MRI vs the Rayleigh criterion

A key aspect of the MRI is that adding a weak magnetic field on top of a hydrodynamically stable shearing flow changes the nature of the problem and makes it unstable. In discussing this problem, however, it is often "forgotten" that the relevant hydrodynamic stability criterion [180] guarantees stability _only_ for axisymmetric modes (cf. the discussion in appendix D.2). Adding a magnetic field renders such axisymmetric modes unstable--technically, the non-axisymmetric ones are not [32]. Thus it is clear that the MRI is relevant only in situations where we can think of axisymmetric modes as being "preferred" in some sense. An immediate example of this is an accretion disk, which involves a globally axisymmetric background for the perturbations. This immediately tells us that applying the results to the dynamical context of neutron star mergers is a much more subtle endeavour. In fact, this exercise is problematic from the outset. To back up this claim, we show in appendix D that we can reproduce the MRI perturbation equations and dispersion relation through the local frame construction. However, for the specific MRI calculation there exists a preferred local frame. This local frame is associated with an observer that is co-rotating with the fluid on some orbit, and the coordinate axes rotate in such a way that one of them always points in the radial direction of the global cylindrical coordinate system. Another coordinate axis always points in the azimuthal direction. This local frame is "preferred" as the axes are (by construction) tied to those of the most natural global coordinate system. In a sense, we could set up different local co-rotating observers and construct the global axes by stitching together the local ones. In the case of a general and truly local analysis, however, this additional piece of information is not available. Moreover, we show in appendix D.2 how one may set up (for the circular and axisymmetric background flow) a local frame that is "co-moving but not co-rotating" with the fluid. In doing so, we derive the corresponding dispersion relation, confirm that the result is consistent with the general formulae, and show how we can recover the usual Rayleigh criterion (and hence also the MRI criterion) as long as we perform the conversion to the relevant co-rotating frame frequency. These arguments clarify the sense in which the MRI (and similarly the Rayleigh stability criterion) is a "global instability analyzed with local tools". The local analysis needs to be "augmented" by pieces of information that cannot be truly local. The upshot of this is that, in a merger-like scenario (where assumptions regarding the global properties of the flow are debatable), we should probably not expect the standard instability criteria to provide a faithful indication/diagnostic of what is actually going on. The standard argument will apply, but only if there is a meaningful sense of (Rayleigh stable) flow on a scale larger than that at which the plane-wave analysis is carried out. This complicates the discussion for any given numerical simulation, but so be it.
#### The missing ingredient: Filtering

Throughout the discussion we have focused on the analytical development, sweeping issues associated with actual numerical data "under the carpet". The key issue here is that we ignored the question of how one would, in practice, construct the background suitable for the perturbation analysis given nonlinear simulation dynamics. In words, the answer is easy: We need to apply some suitable filtering operation to remove small scale fluctuations from a gradually varying "background". In a nonlinear setting this split is (obviously) not guaranteed to make sense. Suppose that the instability we are trying to uncover acts on some characteristic scale \(L\), say. Then we need a background that varies on a larger scale than this, otherwise the notion of a shear flow that becomes unstable due to smaller scale waves makes no sense. This argument relies on an explicit filtering step, separating the instability scale \(L\) from the variation of the background. The construction of such a filter should be possible, at least in principle, in many situations (see, for example, Celora et al. [58]). Of course, the scale separation may not apply in actual problems of interest. Further complicating the discussion is the unavoidable implicit filtering associated with the finite numerical resolution. We know from the large body of work on turbulence simulations that sub-grid dynamics may play an important role in a robust description of the dynamics. This typically involves a suitable large-eddy scheme to represent the subgrid dynamics. Hence, the analysis involves elements of choice (effectively, the closure relations). Crucially, the effective field theory that is/should be simulated is not that of the ideal theory. All current models--both the ones discussed in [49, 177] as well as the covariant scheme of chapter 4--modify the principal part of the equations of motion. Therefore the analysis of the model "that is actually solved" is fundamentally changed, even when the closure terms are small. In essence, an instability analysis of numerical simulation data needs to consider the impact of an effective viscosity/resistivity. Given the presently available tools, we do not have a particularly good handle on this issue. We are forced to conclude that we need to make progress on the development of robust large-eddy models before we can make a sensible attempt to demonstrate the presence of the MRI in a highly dynamical environment.

## Part IV: Conclusions

Hydrodynamics is an incredibly useful framework with myriad applications at all scales, so that even after centuries of research it continues to be extremely fascinating and valuable. In this thesis, we have studied different and interconnected problems in the modelling of relativistic fluids, from dissipation to turbulence. The motivation for this work lies in the (extremely thrilling) promises of gravitational wave astronomy, and the range of exciting physics we can explore with binary neutron star mergers. After a brief introduction/review in chapter 2 of the different modelling strategies for dissipative fluids currently on the market, we explored in chapter 3 the close to equilibrium regime of the action-based dissipative multi-fluid model of Andersson and Comer [20]. A first motivation for this lies in the fact that the equations of motion are derived from an action principle, and as such are valid (in principle) in the non-linear regime with no reference to some equilibrium state.
However, as the close to equilibrium regime is likely to be relevant for many of the applications we have in mind, it is worth exploring how the model behaves in such a limit. This also has the additional advantage of facilitating a direct comparison with alternative existing models, all of which are based on an expansion around such a notion of equilibrium. In developing complicated dissipative models, however, we need to be pragmatic and keep in mind the extreme computational cost of simulating them. Hence, we continued in chapter 4 focusing on the foundational aspects of performing a "spatial filtering" in relativity. We do so as this is a common strategy for dealing with computationally demanding turbulent flows, for which direct numerical simulations are often impractical. The discussion we provided focuses on the formal underpinnings, as the strategy is complicated by the covariance principle of General Relativity. We then argued that it is natural to set up the framework in the fibration associated with fluid elements, and showed how one can perform filtering ensuring consistency with the tenets of relativity. The framework we put forward has the additional advantage that it allows for a direct link with the underlying thermodynamics, which is ultimately what we aim to constrain with binary neutron star merger observations. In the process we also demonstrated how the filtered equations of motion are effectively equivalent to those describing a dissipative fluid, thus leading back to similar (although, as we discuss in chapter 4, not equivalent) issues faced when modelling dissipation in relativity. As we argued already in this work, for accurate neutron star modelling we are tasked with even more complicated settings involving multiple interpenetrating flows, be it in the form of a two-fluid plasma and/or superfluid/superconducting mixtures. Given this, we continued in chapter 5 by discussing the first steps towards extending the framework of chapter 4 to magneto-hydrodynamics. We do so as this is the first step towards a multi-fluid LES framework in that it adds the electromagnetic degrees of freedom to the picture, while remaining in the realm of single-fluid models. In the last part of this work we focused on applications to problems of relevance for binary neutron star mergers. In chapter 6 we focused on modelling (fast) reactions for neutron star simulations, and the associated bulk viscosity. The reason is that reactions are thought to source the dominant dissipative mechanism at play in mergers. Our discussion, in particular, focuses on the impact that inevitable numerical limitations have on the way we should frame the modelling. We then considered the magneto-rotational instability, which is thought to be a key mechanism for sustaining the development of turbulence in the outer layers of binary neutron star merger remnants. Our aim has been to provide an analysis of this mechanism that is well suited for highly-dynamical environments such as mergers, where usual criteria based on rather restrictive assumptions may not hold. Whilst we have presented our different contributions in a relatively independent fashion, it should have become rather clear by now that they are not independent. This is true from a physics perspective, as all the different aspects we touched upon, from dissipation to turbulence, play a role _"at the same time"_ in mergers.
It is even more true if we consider that similar modelling strategies developed in one "area" can find useful applications in another. This is demonstrated, for example, by the specific closure scheme we put forward in chapter 4, where we adapted some of the ideas developed to address the stability and causality issues of traditional dissipative models to fix the issues encountered with the natural relativistic generalization of the model originally put forward by Smagorinsky [205] in a Newtonian setting. In terms of future work, there are a number of possible avenues worth pursuing that originate from the analysis presented in this thesis. They involve, not surprisingly, both framework developments and more specific applications. In terms of framework developments, for example, it would be worthwhile working towards extensions of the framework of chapter 4 to the case of multiple interpenetrating flows/mixtures. Moreover, we also mentioned briefly at the end of chapter 5 how the framework discussed in chapter 4 suggests--due to the covariance of the resulting model--that we may promote the "large-eddy strategy" to a tool for linking models valid at different scales. Clearly, there are several applications we can envisage: from dynamo theory, where we may try to unify the somewhat arbitrarily separated small scale and large scale models, to superfluids, where we may link mesoscopic models (in which each single vortex line is resolved) to coarse-grained descriptions of the kind discussed in section 2.3, for which the two-fluid models were originally developed [21]. Furthermore, as we discussed in section 4.7, the scale-gap and fluctuations within a fluid box imply that the equation of state used for large-scale merger simulations may not be trivially linked to the underlying microphysics. Notably, as any numerical simulation is performed on a finite grid, there is always at least an implicit filtering associated with it. As the best grid resolution in large-scale merger simulations is of order tens of meters, the impact of this is potentially significant. It would be worth exploring this potential disconnect and trying to quantify the impact this may have on neutron star parameters extracted from observations. Further developments of the large-eddy strategy in both these directions would obviously be rather incomplete if not provided with suitable closure schemes. Future work will also have to involve developing, testing and calibrating novel/different closure schemes. As we may also envisage rather different schemes to be better suited for diverse applications, there is plenty of work to be done.

## Appendix A Transporting a tetrad and Fermi Coordinates

In this appendix we introduce the notion of Fermi coordinates, as these have been used but not derived/discussed in the main body of the thesis. We start by discussing the transport of a tetrad along a curve/worldline, since the notion of Fermi coordinates builds on this [101, 155]. In our discussion we will, in particular, make explicit contact with the notion of spin coefficients that have to be introduced whenever one wants to work with an orthonormal basis or tetrad (see the notation section at the beginning of this work).

### Transporting a tetrad along a curve

We start our analysis by considering a curve/worldline in spacetime, which we here take to be time-like with tangent vector \(U^{a}\), and assume that a set of four orthonormal vectors is given along the worldline.
Namely, we have a set of four vectors \(e_{\hat{a}}\), with \(\hat{a}=0,1,2,3\), such that \[g(e_{\hat{a}},e_{\hat{b}})=\eta_{\hat{a}\hat{b}}\, \tag{A.1}\] where we are considering the metric as a bi-linear form on the tangent space, and \(\eta_{\hat{a}\hat{b}}\) is the Minkowski metric. We further assume that we can take the time-like unit vector of the tetrad as the one tangent to the curve, that is \(e_{\hat{0}}=U\). We are interested in the rate-of-change of the tetrad basis vectors as we move along the curve. Since \(U\) is the tangent to the curve, we can start from \[\nabla_{U}e_{\hat{a}}=\Omega^{\hat{b}}_{\ \hat{a}}e_{\hat{b}}\, \tag{A.2}\] where we have introduced the bi-linear form \(\Omega^{\hat{b}}_{\ \hat{a}}\) to re-write the rate-of-change as a linear combination of the tetrad basis vectors themselves. Next, we observe that \(\Omega_{\hat{b}\hat{a}}\) must be an anti-symmetric bi-linear form1, as this follows from the orthonormal character of the basis, that is Footnote 1: For clarity, let us point out that the bi-linear form is obtained lowering the contravariant index via the metric, namely \(\Omega_{\hat{a}\hat{b}}=g_{\hat{a}\hat{c}}\Omega^{\hat{c}}_{\ \hat{b}}\). \[\nabla_{U}\left(g(e_{\hat{a}},e_{\hat{b}})\right)=0\Longrightarrow\Omega_{\hat{a}\hat{b}}=-\Omega_{\hat{b}\hat{a}}\. \tag{A.3}\] As such, we can decompose \(\Omega\) as any skew-symmetric bi-linear form [101]. In our case this gives \[\Omega_{\hat{a}\hat{b}}=U_{\hat{a}}a_{\hat{b}}-a_{\hat{a}}U_{\hat{b}}-\varepsilon_{\hat{a}\hat{b}\hat{c}\hat{d}}U^{\hat{c}}W^{\hat{d}}\, \tag{A.4}\] where \(a_{\hat{a}}\) is the four-acceleration of the worldline, while \(W^{\hat{a}}\) is a generic vector orthogonal to \(U^{\hat{a}}\). To better understand the physical meaning of this decomposition, we introduce the spatial index \(\hat{i}=1,2,3\) and use it also to denote the three unit vectors orthogonal to \(U\). We then get \[\left(\nabla_{U}U\right)^{\hat{a}}=a^{\hat{a}}\, \tag{A.5}\] \[\left(\nabla_{U}e_{\hat{i}}\right)_{\hat{a}}=a_{\hat{i}}U_{\hat{a}}+\varepsilon^{U}_{\hat{a}\hat{b}\hat{i}}W^{\hat{b}}\. \tag{A.6}\] Hence, when the worldline is a geodesic we have \(a^{\hat{a}}=0\) and the time-like unit vector is unchanged as we move along the curve, while the spatial ones \(e_{\hat{i}}\) change due to some (spatial) rotation in the subspace orthogonal to \(U\). In particular, we stress that whilst the four-acceleration is an intrinsic property of the worldline, the vector \(W\) represents the angular velocity associated with the rotation of the observer frame. In essence, given a time-like worldline/curve, there exists an infinite number of compatible tetrads such that \(e_{\hat{0}}=U\), and these are all related by some rotation in the plane orthogonal to the curve. We can then further split the bilinear form \(\Omega\) as the sum of two terms \[\Omega_{\hat{a}\hat{b}}=\Omega^{FW}_{\hat{a}\hat{b}}+\Omega^{rot}_{\hat{a}\hat{b}}\, \tag{A.7a}\] with \[\Omega^{FW}_{\hat{a}\hat{b}}=U_{\hat{a}}a_{\hat{b}}-a_{\hat{a}}U_{\hat{b}}\,\quad\text{and}\quad\Omega^{rot}_{\hat{a}\hat{b}}=\varepsilon^{U}_{\hat{a}\hat{b}\hat{c}}W^{\hat{c}}\. \tag{A.7b}\] This splitting is meaningful because it separates the terms in \(\Omega\) we have control over (the rotation part) from those that are given once the curve is specified (the Fermi-Walker part). In particular, a vector is said to be Fermi-transported (or Fermi-Walker transported) along a worldline if its components change with \(\Omega^{FW}\). Before moving on to discuss Fermi coordinates, we take the opportunity to make contact with the spin connection coefficients, which are normally introduced when working with a tetrad basis.
Using the spin coefficients [50] we would get \[\nabla_{U}e_{\hat{a}}=U^{\hat{c}}\nabla_{\hat{c}}e_{\hat{a}}=U^{\hat{c}}\omega_{\hat{c}}{}^{\hat{b}}{}_{\hat{a}}e_{\hat{b}}\, \tag{A.8}\] so that, contrasting this with eq. (A.2), we see \[\Omega^{\hat{b}}{}_{\hat{a}}=U^{\hat{c}}\omega_{\hat{c}}{}^{\hat{b}}{}_{\hat{a}}. \tag{A.9}\]

### Fermi coordinates

Given the central worldline and a tetrad transported along it, we can construct coordinates in a neighbourhood of the curve as follows (see fig. A.1): from the point on the curve at proper time \(\tau\) we send out the space-like geodesics orthogonal to it, and assign to the point reached after a proper length \(s\) along the connecting geodesic with initial (unit) direction \(\alpha^{\hat{i}}e_{\hat{i}}\) the coordinates \(x^{\hat{0}}=\tau\) and \(x^{\hat{i}}=\alpha^{\hat{i}}s\). Two facts about these coordinates are worth stressing. First, by construction the metric takes the Minkowski form on the central curve, and the transport law eq. (A.2) can be written in terms of the Christoffel symbols of the new coordinates as \[\nabla_{U}e_{\hat{a}}=\Gamma^{\hat{b}}_{\hat{a}\hat{0}}e_{\hat{b}}\, \tag{A.10}\] when evaluated on the central curve. Second, we can use this simple fact to show that such coordinates are well-defined in some neighbourhood of the central curve [148, 155]. This is intuitively clear as points close enough to the central curve are uniquely connected to the central curve by one (and only one) geodesic. On larger distances (in terms of the connecting geodesic proper length) the procedure fails because different geodesics can mix and touch. This can happen due to both the acceleration of the central curve and the curvature of the spacetime itself--and also due to the tetrad rotation if the basis is not Fermi transported along the curve.

Figure A.1: Left: Tetrad transported along the observer worldline. Right: Connecting geodesics starting from and perpendicular to the central curve. Figure adapted from Misner et al. [155].

We can now use such coordinates to write down an expansion for the metric in the neighbourhood of the central curve. As we know already that the metric along the central curve takes the Minkowski form, we only need the first derivatives of the metric evaluated on the central curve. These can be obtained from the Christoffel symbols, which are related to the spin coefficients and hence to the bilinear form \(\Omega\) introduced above. In particular, by inspecting eq. (A.10) one can easily see that \[\Gamma^{\hat{a}}_{\hat{b}\hat{0}}=\Omega^{\hat{a}}_{\ \hat{b}}\, \tag{A.11}\] so that, by means of eq.
(A.4), we readily obtain \[\Gamma_{\hat{0}\hat{0}\hat{0}}=\Gamma^{\hat{0}}_{\hat{0}\hat{0}}=0\, \tag{A.12}\] \[\Gamma_{\hat{0}\hat{j}\hat{0}}=-\Gamma^{\hat{0}}_{\hat{j}\hat{0}}=-a_{\hat{j}}\, \tag{A.13}\] \[\Gamma_{\hat{i}\hat{j}\hat{0}}=\Gamma^{\hat{i}}_{\hat{j}\hat{0}}=-\varepsilon_{\hat{0}\hat{i}\hat{j}\hat{k}}W^{\hat{k}}. \tag{A.14}\] The remaining Christoffel symbols can be found using the geodesic equation satisfied by the connecting geodesics \[\frac{\mathrm{d}^{2}x^{\hat{a}}}{\mathrm{d}s^{2}}+\Gamma^{\hat{a}}_{\hat{b}\hat{c}}\frac{\mathrm{d}x^{\hat{b}}}{\mathrm{d}s}\frac{\mathrm{d}x^{\hat{c}}}{\mathrm{d}s}=0. \tag{A.15}\] As the connecting geodesics are given by \(x^{\hat{0}}=\mathrm{const}\) and \(x^{\hat{i}}=\alpha^{\hat{i}}s\) (with constant \(\alpha^{\hat{i}}\)), we readily obtain \[\Gamma_{\hat{a}\hat{i}\hat{j}}=\Gamma^{\hat{a}}_{\hat{i}\hat{j}}=0. \tag{A.16}\] Next, note that by means of the metric compatibility condition (with the connection given by the Christoffel symbols) we have \[g_{\hat{a}\hat{b},\hat{c}}=2\Gamma_{(\hat{a}\hat{b})\hat{c}}\, \tag{A.17}\] and using eqs. (A.12) and (A.16) we obtain \[g_{\hat{0}\hat{0}}=g_{\hat{0}\hat{0}}\big{|}_{G}+g_{\hat{0}\hat{0},\hat{a}}x^{\hat{a}}=-\left(1+2a_{\hat{i}}x^{\hat{i}}\right)+\mathcal{O}(x^{\hat{i}})^{2}\, \tag{A.18}\] \[g_{\hat{0}\hat{i}}=g_{\hat{0}\hat{i}}\big{|}_{G}+g_{\hat{0}\hat{i},\hat{a}}x^{\hat{a}}=-\varepsilon_{\hat{0}\hat{i}\hat{j}\hat{k}}W^{\hat{k}}x^{\hat{j}}+\mathcal{O}(x^{\hat{i}})^{2}\, \tag{A.19}\] \[g_{\hat{i}\hat{j}}=g_{\hat{i}\hat{j}}\big{|}_{G}+g_{\hat{i}\hat{j},\hat{a}}x^{\hat{a}}=\eta_{\hat{i}\hat{j}}+\mathcal{O}(x^{\hat{i}})^{2}. \tag{A.20}\] In essence, we have obtained an expansion for the metric away from the central worldline. The first order expansion depends on the worldline acceleration, and also on the (arbitrary) vector describing the angular rotation of the basis vectors as we move along the worldline. Notably, no information about the space-time curvature enters at first order in the expansion. We conclude with some observations/comments. First of all, let us observe that we can always choose to work with a non-rotating tetrad, so that the associated expansion for the metric simplifies accordingly (noting that this is precisely the choice made in chapter 4, for example). In this case, the coordinates are called Fermi coordinates [78, 79, 208]. If the observer is also freely-falling, the central curve is a geodesic and the (non-rotating) coordinates are called Fermi normal coordinates [148]--in which case the metric expansion contains no first order terms. The second order corrections to the Minkowski metric have been computed explicitly by Manasse and Misner [148] (for Fermi normal coordinates), showing in particular that these are uniquely determined by the Riemann tensor (evaluated on the central curve). The work by Rakhmanov [179], where the Fermi coordinates expansion of the metric is computed to all orders for a spacetime describing a plane gravitational wave, is also relevant. Finally, we also note that when a similar scheme is used with light-like connecting geodesics, the resulting coordinates are known as optical coordinates [179].

## Appendix B Multi-scale arguments and the invariant manifold method

Multi-scale methods are useful whenever the (system of) equations to solve contains different scales, so that it is physically (and numerically) useful/convenient to solve an approximate system instead. In this appendix we briefly summarize key results from [170, 223] that are used in the main body of the thesis, specifically in chapter 6.
### B.1 Invariant manifold approach

Assume we have a system of ordinary differential equations written as \[\dot{x}=f(x,y), \tag{B.1a}\] \[\dot{y}=\epsilon^{-1}g(x,y). \tag{B.1b}\] The variables \(x\) are called _slow_, and the variables \(y\) are called _fast_, whilst \(\epsilon\ll 1\) is a small parameter. In the _invariant manifold_ approach we assume that there exists an _equilibrium (fast) state_ \(\varphi(x)\) such that \[g(x,\varphi(x))\equiv 0. \tag{B.2}\] We can then write the fast variables \(y\) as an expansion in the small parameter \(\epsilon\) about the equilibrium state as \[y=\varphi(x)+\epsilon y_{1}+\mathcal{O}(\epsilon^{2}). \tag{B.3}\] Using the equations of motion eq. (B.1) we find that the behaviour of the slow variables \(x\) is approximated, to second order in \(\epsilon\), by the solution to \[\dot{X}=F_{0}(X)+\epsilon F_{1}(X) \tag{B.4a}\] \[=f(X,\varphi(X))+\epsilon\nabla_{y}f(X,\varphi(X))\left(\nabla_{y}g(X,\varphi(X))\right)^{-1}\nabla_{x}\varphi(X)f(X,\varphi(X)). \tag{B.4b}\] The solution to the simplified system approximates the solution to the full system eq. (B.1) to \(\mathcal{O}(\epsilon^{2})\) up to times \(\mathcal{O}(1)\). The first order correction term needs to be applied consistently to all variables in the reduced system for this accuracy result to hold.

### B.2 Two timescale approach

Strictly, the invariant manifold approach is only valid for ordinary differential equations. A more general approach that applies to partial differential equations is the two-scale approach. The two timescale approach, for example, introduces the _fast time_ \(\tau=t/\epsilon\), which is then treated as an independent variable. Applied to eq. (B.1) this leads to \[\partial_{t}x+\epsilon^{-1}\partial_{\tau}x=f(x,y), \tag{B.5a}\] \[\partial_{t}y+\epsilon^{-1}\partial_{\tau}y=\epsilon^{-1}g(x,y). \tag{B.5b}\] By gathering terms in powers of \(\epsilon\), the fast behaviour can be integrated out by taking the integral average in \(\tau\). The result of the mathematical calculation is identical to the invariant manifold approach, when applied to ordinary differential equations. The calculation is general enough to include partial differential equations, and illustrates a different interpretation and potential problems. The interpretation is that the reduced system is valid for the _integral average_ of the slow variables: the fast scales have been integrated out. The potential problem is the requirement that the integral average of the fast behaviour is assumed to not contribute at leading order in \(\epsilon\). This "resonance" behaviour cannot be captured by these approaches.

### B.3 Linear fast dynamics

A particularly relevant example is where the fast behaviour is linear, or can be linearised. In this case we write the full system eq. (B.1) as \[\dot{x}=f(x,y), \tag{B.6a}\] \[\dot{y}=\epsilon^{-1}(-Ay+B), \tag{B.6b}\] where \(A=A(x)\) and \(B=B(x)\) are constant in the fast variables \(y\). The equilibrium solution is therefore \(\varphi(x)=B/A\), and the simplified system is \[\dot{X}=f(X,\varphi(X))\left[1-\epsilon A^{-1}\nabla_{y}f(X,\varphi(X))\nabla_{x}\varphi(X)\right]. \tag{B.7}\]

### B.4 Constructing the fast terms

The construction in this section relies on the ratio of scales \(\epsilon\) being explicit in the equations of motion. Usually a non-dimensionalisation of the system is needed to make the scales explicit. However, with complex nonlinear terms (such as tabulated net reaction rates which include many reaction channels) the precise form of the terms may not be obvious.
Here we need only the leading order terms and so can proceed as follows. We start from a system of equations which we expect to have fast behaviour \[\dot{z}=h(x,z), \tag{B.8}\] where \(x\) are any variables we expect to be slow. We assume that we know how \(h\) scales asymptotically with the ratio of scales. That assumption means we can explicitly compute \[h^{\text{fast}}=\lim_{\epsilon\to 0}\left(\epsilon h\right). \tag{B.9}\] This defines the source term for the fast behaviour as the piece that diverges linearly with the ratio of scales in the limit of infinitely fast speeds. We then split the source into fast and slow pieces using \[h^{\text{slow}}=h-\epsilon^{-1}h^{\text{fast}}, \tag{B.10}\] and perform the equivalent split on the variables \(z\) as \[\dot{z}^{\text{fast}}=\epsilon^{-1}h^{\text{fast}}, \tag{B.11a}\] \[\dot{z}^{\text{slow}}=h^{\text{slow}}. \tag{B.11b}\] We can then identify the fast variables \(y\) with \(z^{\text{fast}}\) and augment the slow variables \(x\) with \(z^{\text{slow}}\).

## Appendix C Working with the CompOSE database

Our analysis of the reactive problem in chapter 6 is--ultimately--aimed at numerical implementations. Given this target, it makes sense to consider how the discussion impacts on the matter model that needs to provide the input physics. In this appendix we will spell out the connection with the compOSE database, which provides a useful collection of state-of-the-art equation of state models. The arguments draw heavily on the compOSE manual [210], in particular section 4.1.2 ("Thermodynamic consistency") of version 2.0. The aim here is to explain how the various thermodynamical coefficients introduced in the main text can be worked out from an actual equation of state table. This is obviously a necessary step in the process. It also helps highlight to what extent existing tabulated data needs to be augmented in the future. All equations of state relevant for our work in the compOSE database are provided as tables in \((T,n,Y_{\rm q})\), where \(Y_{\rm q}\) is the fraction of charged strongly interacting particles, which for a system without muons corresponds to the electron fraction \(Y_{\rm q}=Y_{\rm e}\) (as local charge neutrality is assumed to hold). The central thermodynamical potential is the Helmholtz free energy density \(f=\varepsilon-Ts\), and some key quantities in the construction of the tables are \[\left\{\frac{p}{n},\;\frac{s}{n},\;\frac{\mu_{\rm b}}{m_{\rm n}}-1,\;\frac{\mu_{\rm q}}{m_{\rm n}},\;\frac{\mu_{\rm l}}{m_{\rm n}},\;\frac{f}{nm_{\rm n}}-1,\;\frac{\varepsilon}{nm_{\rm n}}-1\right\}\;, \tag{C.1}\] where \(m_{\rm n}\) is the neutron mass--also provided in the tables and specific to each model--while \(\mu_{\rm b},\;\mu_{\rm q},\;\mu_{\rm l}\) are the baryon, charge and lepton "chemical potentials" (respectively). The energy cost of adding a neutron, proton or electron to the system is then \[\mu_{\rm n}=\mu_{\rm b}\;,\quad\mu_{\rm p}=\mu_{\rm b}+\mu_{\rm q}\;,\quad\mu_{\rm e}=-\mu_{\rm q}+\mu_{\rm l}\, \tag{C.2}\] as follows straightaway from the respective baryon, charge1 and lepton numbers. Footnote 1: This is the total charge, not the charge of the strongly interacting particles corresponding to \(Y_{\rm q}\). The baryon, charge and lepton chemical potentials are (in general) used to build the free energy, even though in the charge-neutral case with leptons this reduces to
\[f=-p+n\left(\mu_{\rm b}+Y_{\rm e}\mu_{\rm l}\right)\,.\] (C.3) From this we see that there are only two independent chemical potentials at the thermodynamical level--consistent with a three-parameter equation of state--that can be written as derivatives of the Helmholtz free energy, namely \(\mu_{\rm b}\) and \(\mu_{\rm l}\). Now, we need to connect with the quantities used in the main text. We have (following from the Gibbs relation eq. (6.12)) \[\mathrm{d}\varepsilon=T\,\mathrm{d}s+\mu_{\rm n}\,\mathrm{d}n-n\beta\,\mathrm{d}Y_{\rm e}\,,\] (C.4) where the affinity \(\beta=\mu_{\rm n}-\mu_{\rm p}-\mu_{\rm e}\) vanishes in the cold equilibrium assumed throughout this paper. From this differential we see that, by definition, \[\mu_{\rm n} =\left(\frac{\partial\varepsilon}{\partial n}\right)_{s,Y_{\rm e}}\,,\] (C.5a) \[\beta =-\frac{1}{n}\left(\frac{\partial\varepsilon}{\partial Y_{\rm e}}\right)_{s,n}\,.\] (C.5b) Contrasting this with the results in section 4.1.2 of the CompOSE manual, it is easy to see that\({}^{2}\) \(\mu_{\rm n}=\mu_{\rm b}\) and \(\beta=-\mu_{\rm l}\). In practice, the affinity can be either read off from the tables directly, or computed as above. Because the equation of state table is three-dimensional, the quantity extracted is inevitably a function of \((n,T,Y_{\rm e})\). This is consistent with the results of section 6.2, where we accounted for the fact that \(\beta\) depends on either the temperature or the energy density. Footnote 2: Note that the proton chemical potential is not an independent thermodynamic quantity, as the system cannot create protons alone because of the charge neutrality assumption. Now, the coefficients \(\mathcal{A}\) and \(\mathcal{B}\) introduced in section 6.2 can be obtained as combinations of derivatives of \(\beta\) considered as a function of \((n,\varepsilon,Y_{\rm e})\). We need to link these expressions to derivatives that can be computed from the available tables (or, which may be more practical for the future, enhance the table with the required information). That is, we have to change variables to arrive at \[\left(\frac{\partial\beta}{\partial n}\right)_{\varepsilon,Y_{\rm e}} =\left(\frac{\partial\beta}{\partial n}\right)_{T,Y_{\rm e}}-\left(\frac{\partial\varepsilon}{\partial n}\right)_{T,Y_{\rm e}}\left(\frac{\partial\varepsilon}{\partial T}\right)^{-1}_{n,Y_{\rm e}}\left(\frac{\partial\beta}{\partial T}\right)_{n,Y_{\rm e}}\,,\] (C.6a) \[\left(\frac{\partial\beta}{\partial\varepsilon}\right)_{n,Y_{\rm e}} =\left(\frac{\partial\varepsilon}{\partial T}\right)^{-1}_{n,Y_{\rm e}}\left(\frac{\partial\beta}{\partial T}\right)_{n,Y_{\rm e}}\,,\] (C.6b) \[\left(\frac{\partial\beta}{\partial Y_{\rm e}}\right)_{\varepsilon,n} =\left(\frac{\partial\beta}{\partial Y_{\rm e}}\right)_{T,n}-\left(\frac{\partial\varepsilon}{\partial Y_{\rm e}}\right)_{T,n}\left(\frac{\partial\varepsilon}{\partial T}\right)^{-1}_{n,Y_{\rm e}}\left(\frac{\partial\beta}{\partial T}\right)_{n,Y_{\rm e}}\,.\] (C.6c) Recalling that these quantities should be evaluated at equilibrium, we see that we also need to construct the corresponding equilibrium table. Operationally, this can be done as follows. We fix \(n,T\) and vary \(Y_{\rm e}\) until we find a value for which \(\beta=0\). The corresponding value of \(Y_{\rm e}\) is then what we call \(Y_{\rm e}^{\rm eq}\) and the equilibrium composition will automatically only be a function of \((n,T)\). Evaluating the original three-parameter model at \(Y_{\rm e}=Y_{\rm e}^{\rm eq}\) gives the corresponding equilibrium energy density and pressure etc. Using expressions analogous to eq. (C.6), we can rewrite derivatives of \(Y_{\rm e}^{\rm eq}\) with respect to \((n,\varepsilon)\) in terms of derivatives with respect to \((n,T)\), that can be extracted from the tables. With the results in eq. (C.6) and evaluating the relevant quantities at equilibrium, we can work out (for a given equation of state) the value of \(\mathcal{B}\), as required for fig. 6.2.
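The equilibrium-table construction just described reduces, at each \((n,T)\) grid point, to a one-dimensional root-find for the affinity. The sketch below is ours and purely illustrative: it assumes a callable `beta(n, T, Ye)`, for instance an interpolant built from a CompOSE table (the database itself does not provide such an interface), together with bracketing values for the electron fraction.

```python
# Sketch of the equilibrium-table construction: at fixed (n, T), locate the
# electron fraction at which the affinity beta vanishes.  The callable
# beta(n, T, Ye) is assumed given (e.g. an interpolant over the table).
import numpy as np
from scipy.optimize import brentq

def equilibrium_Ye(beta, n, T, Ye_min=1e-3, Ye_max=0.6):
    """Solve beta(n, T, Ye) = 0 for Ye at fixed (n, T).

    brentq needs beta to change sign on [Ye_min, Ye_max]; the bracket here
    is an illustrative choice and should be adapted to the table at hand."""
    return brentq(lambda Ye: beta(n, T, Ye), Ye_min, Ye_max)

def build_equilibrium_table(beta, n_grid, T_grid):
    """Tabulate Ye_eq on an (n, T) grid: the equilibrium composition is
    then automatically a function of (n, T) only."""
    Ye_eq = np.empty((len(n_grid), len(T_grid)))
    for i, n in enumerate(n_grid):
        for j, T in enumerate(T_grid):
            Ye_eq[i, j] = equilibrium_Ye(beta, n, T)
    return Ye_eq
```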
In order to compute \(\mathcal{A}\) though, we also need to evaluate the restoring term \(\gamma\) (effectively, a measure of the reaction timescale). In fig. C.1 we show \(\mathcal{A}\) as obtained from the modified Urca rates for the APR equation of state [210, 195] used in [104]. For this figure we have calculated \(\gamma\) assuming the Fermi surface approximation, which allows us to use the analytic formulae from [9]. Let us stress that the result should be valid for low temperatures (\(T\lesssim 1\) MeV); however, according to \(\mathcal{A}\) as calculated above, the timescales relevant for our purposes occur in the range \(2\text{ MeV}\lesssim T\lesssim 20\text{ MeV}\) for the densities relevant to the neutron star core, raising the question of how accurate the approximation is. Instead of calculating out-of-equilibrium rates without the Fermi surface approximation, we can take a different approach and estimate the equilibration timescale using neutrino opacities [106]. Notably, we find some broad similarities between the two estimates--despite expected qualitative differences--particularly in the regions of interest at relevant temperatures and densities. Figure C.1: Plot of \(\mathcal{A}\) for the APR equation of state used in [104]. The restoring term \(\gamma\) is calculated assuming the Fermi surface approximation remains valid. Contours are at \(\mathcal{A}=\{10^{3},10^{4},10^{7},10^{9}\}\)s\({}^{-1}\) (solid, dash, dot-dash, dot).

## Appendix D Formulating the MRI in the local frame

In this appendix we show how to formulate the magneto-rotational instability using the local frame construction of chapter 7. In particular, we will show that using a co-rotating local frame we can derive the same equations as in [31]. Then, in appendix D.2, we focus on the Rayleigh criterion. The discussion we provide here has the advantage that it makes explicit an important underlying assumption that is key to the usual Rayleigh and MRI criteria [180, 33]. Let us start by considering the circular velocity profile assumed in [31], \(\mathbf{v}=v^{\phi}\hat{\phi}\) with \(v^{\phi}=\Omega(R)R\), where we use cylindrical coordinates and an orthonormal basis on the "tangent space" (as usual). Consistently with the notational conventions adopted in the rest of the thesis, we then distinguish between indices with a "hat" corresponding to the orthonormal basis, and those without that correspond to the coordinate basis. We then pick an orbit at some radial distance \(R_{0}\) and choose an observer that is co-rotating with angular frequency identical to that of the background flow at \(R_{0}\), that is \(\mathbf{v}_{obs}=\Omega_{0}R\hat{\phi}\) where \(\Omega_{0}=\Omega(R_{0})\). The observer is then accelerated with acceleration \(\mathbf{a}=-\Omega_{0}^{2}R\hat{R}\), and the velocity of the fluid with respect to such an observer is then \(\mathbf{v}^{\prime}=(\Omega-\Omega_{0})R\hat{\phi}\). We then set up the axes of the observer local frame so that one is pointing in the radial direction (\(\hat{e}_{1}\)), one is pointing in the azimuthal direction (\(\hat{e}_{2}\)) and the third one is aligned with the rotation axis (\(\hat{e}_{3}\)).
Introducing coordinates associated to this observer, we can then write the background fluid velocity as \[\mathbf{v}^{\prime}=\left.\frac{\mathrm{d}\Omega}{\mathrm{d}\ln R}\right|_{R_{0}}x^{\prime}\hat{e}_{2}+\mathcal{O}(x^{\prime 2})\.\] (D.1) We have neglected terms of order \(\mathcal{O}(x^{\prime 2})\) as we will only need the velocity and its gradients evaluated at the origin of the frame--so that such terms will not enter the perturbation equations anyway. Computing the gradients we then obtain \[\partial_{i}^{\prime}v_{j}^{\prime}=\begin{pmatrix}0&s_{0}&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\,\qquad s_{0}=\left.\frac{\mathrm{d}\Omega}{\mathrm{d}\ln R}\right|_{R_{0}}\.\] (D.2) As the local frame of the observer is rotating with angular velocity \(\Omega_{0}\hat{e}_{3}\), we need to include the Coriolis force into the perturbation equations. We then write the perturbed Euler and continuity equations (dropping the primes for clarity, and retaining only gradients of the background velocity) \[\partial_{t}\delta v_{i}+2\Omega_{0}\epsilon_{i3k}\delta v^{k}+\delta v^{j}\partial_{j}v_{i}+\frac{c_{s}^{2}}{\rho}\partial_{i}\delta\rho+\frac{1}{\mu_{0}\rho}\left[B_{j}\partial_{i}\delta B^{j}-B^{j}\partial_{j}\delta B_{i}\right]=0\,\] (D.3a) \[\partial_{t}\delta\rho+\rho\partial_{i}\delta v^{i}=0\,\] (D.3b) and, introducing a WKB plane-wave expansion (as detailed in section 7.1), \[-i\omega\delta v_{i}+2\Omega_{0}\epsilon_{i3k}\delta v^{k}+s_{0}\delta_{i2}\delta v^{1}+i\frac{c_{s}^{2}}{\rho}k_{i}\delta\rho+\frac{i}{\mu_{0}\rho}\left[B_{j}k_{i}\delta B^{j}-B^{j}k_{j}\delta B_{i}\right]=0\,\] (D.4a) \[-i\omega\delta\rho+i\rho k_{i}\delta v^{i}=0\.\] (D.4b) Next, we focus on the induction equation. As we have discussed in section 7.3, the induction equation in the co-rotating frame retains the inertial form. We then have \[\delta\partial_{j}\left(v^{j}B^{i}-v^{i}B^{j}\right)=\delta v^{j}\partial_{j}B^{i}+v^{j}\partial_{j}\delta B^{i}-B^{j}\partial_{j}\delta v^{i}-\delta B^{j}\partial_{j}v^{i}\,\] (D.5) where we made use of i) the no-monopoles constraint, ii) the vanishing expansion of the background flow, and iii) the Boussinesq approximation to get rid of the divergence of the perturbed velocity. Introducing the WKB plane-wave expansion and evaluating the background quantities at the origin of the local frame we then end up with \[-i\omega\delta B^{i}-iB^{j}k_{j}\delta v^{i}-\delta B^{1}s_{0}\delta^{i2}=0\.\] (D.6) In eqs. (D.4) and (D.6) we recognize the terms entering the perturbation equations in [31] (with the exception of background gradients in the pressure, which we are here neglecting). We also note that here we do not need to formally neglect terms of the form \(B/R\), as these terms do not appear in the explicit local frame construction. We conclude by noting that, at the special relativistic level, a uniformly rotating observer and the co-rotating one are not the same, as the latter is also accelerated (see Gourgoulhon [101], ch. 13). However, such a difference is irrelevant at the level of the Newtonian perturbation equations since i) pseudo-acceleration terms drop out of the perturbed Euler equation, and ii) non-inertial terms in the induction equation involving the four-acceleration are negligible in the Newtonian limit (cf. discussion in section 7.3).
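The dispersion relation discussed in the next section can also be cross-checked symbolically from the linearized system eq. (D.4). The following sketch is ours: the row normalization and the ordering \((\delta\rho,\delta v^{1},\delta v^{2},\delta v^{3})\) of the perturbed quantities are our assumptions, and we restrict to axisymmetric wavevectors \(\mathbf{k}=k_{3}\hat{e}_{3}\) so that the determinant factors cleanly.

```python
# Symbolic sketch: dispersion relation of the co-rotating, unmagnetized
# system obtained from eq. (D.4), for an axisymmetric wavevector k = k3 e3.
import sympy as sp

w, k3, cs, Om, s0 = sp.symbols("omega k_3 c_s Omega_0 s_0", real=True)

# Rows: continuity, then the three momentum equations (Coriolis + shear),
# with the perturbations ordered as (drho, dv1, dv2, dv3).
M = sp.Matrix([
    [-w,                      0,           0,  k3],
    [0,                      -w,  2*sp.I*Om,   0],
    [0,      -sp.I*(2*Om + s0),          -w,   0],
    [cs**2 * k3,              0,           0, -w],
])

# The determinant factors as
#   (omega**2 - c_s**2 * k_3**2) * (omega**2 - kappa**2),
# with the epicyclic frequency kappa**2 = 2*Omega_0*(2*Omega_0 + s_0):
# sound waves plus inertial oscillations, stable when kappa**2 > 0.
print(sp.factor(M.det()))
```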
### D.2 The Rayleigh criterion

We now revisit the Rayleigh criterion using the co-rotating local frame, bringing to the fore important aspects to be kept in mind when looking at the general results derived in the main text, specifically in sections 7.4 and 7.5. Starting from eq. (D.4), and ignoring terms associated with the magnetic field, we write the coefficients matrix (ordering the perturbed quantities as \((\delta\rho,\delta v^{1},\delta v^{2},\delta v^{3})\)) \[\begin{pmatrix}-\omega&k_{1}&k_{2}&k_{3}\\ c_{s}^{2}k_{1}&-\omega&2i\Omega_{0}&0\\ c_{s}^{2}k_{2}&-i(2\Omega_{0}+s_{0})&-\omega&0\\ c_{s}^{2}k_{3}&0&0&-\omega\end{pmatrix}\,\] (D.11) and the dispersion relation reads \[\omega^{4}-\left[c_{s}^{2}\mathbf{k}^{2}+2\Omega_{0}(2\Omega_{0}+s_{0})\right]\omega^{2}+ic_{s}^{2}k_{1}k_{2}s_{0}\,\omega+2c_{s}^{2}\Omega_{0}(2\Omega_{0}+s_{0})(k_{3})^{2}=0\.\] (D.12) If we now consider the sound-proof limit, namely \(c_{s}\to\infty\), we then end up with \[\mathbf{k}^{2}\omega^{2}-is_{0}k_{1}k_{2}\,\omega-2\Omega_{0}(2\Omega_{0}+s_{0})(k_{3})^{2}=0\.\] (D.13) From this we easily see that, if we assume axisymmetric perturbations, namely \(k_{2}=0\), we obtain the usual Rayleigh stability criterion, that is \(2\Omega_{0}(2\Omega_{0}+s_{0})\geq 0\). We stress that, as is well-known, the criterion does not guarantee that non-axisymmetric modes are stable. In fact, rewriting the dispersion relation in terms of \(\Delta=-i\omega\) and using the Routh-Hurwitz criterion (cf. appendix E) we find that, on top of the Rayleigh criterion, we would also need \[s_{0}k_{1}k_{2}\leq 0\.\] (D.14) We also note that the story changes if we take the opposite limit instead, namely \(c_{s}\to 0\), in which case the Rayleigh criterion is sufficient to guarantee stability also of non-axisymmetric perturbations. This would also be the case had we assumed incompressibility from the start. Having discussed the usual Rayleigh criterion using the co-rotating observer, we now re-work through it using an observer that is orbiting with the fluid at a given orbital distance but whose (local frame) axes are non-rotating. We do this for two reasons. First, it will allow for a direct comparison with the general results discussed in the main text, specifically in section 7.4. Second, we have argued that choosing to work with a rotating or non-rotating observer is, in general, just a matter of taste. We then pick an orbit as before and choose the observer to be co-orbiting with the background flow at the specific orbit, \[\mathbf{x}_{obs}(t)=\left(x_{0}(t),y_{0}(t),z_{0}(t)\right)\,\qquad\dot{x}_{0}=\Omega_{0}y_{0}\,\quad\dot{y}_{0}=-\Omega_{0}x_{0}\,\] (D.15) where we used global Cartesian coordinates and \((x_{0}(t),y_{0}(t),z_{0}(t))\) describes the worldline of the observer (the origin of the axes is suitably chosen so that \(z_{0}(t)=0\)).
The background fluid velocity is then \[\mathbf{v}=-\Omega x\hat{y}+\Omega y\hat{x}\,\qquad\Omega=\Omega(\sqrt{x^{2}+y^{2}})\] (D.16) so that, considering the relative velocity \(\mathbf{v}^{\prime}=\mathbf{v}-\mathbf{v}_{obs}\) and expanding around \((x_{0},y_{0})\) we obtain \[\mathbf{v}^{\prime}=-\left[s_{0}\frac{x_{0}y_{0}}{x_{0}^{2}+y_{0}^{2}}x^{\prime}+\left(\Omega_{0}+s_{0}\frac{y_{0}^{2}}{x_{0}^{2}+y_{0}^{2}}\right)y^{\prime}\right]\hat{x}\\ +\left[\left(\Omega_{0}+s_{0}\frac{x_{0}^{2}}{x_{0}^{2}+y_{0}^{2}}\right)x^{\prime}+s_{0}\frac{x_{0}y_{0}}{x_{0}^{2}+y_{0}^{2}}y^{\prime}\right]\hat{y}\,\] (D.17) where \(x^{\prime}=x-x_{0}\), \(y^{\prime}=y-y_{0}\). We can now choose a local region around a specific point \((x_{0},y_{0},z_{0})\) on the orbit and re-orient the axes by a constant rotation so that the observer velocity points only in the \(y\)-direction. We then set up the local frame in such a way that the local axes are non-rotating and oriented like the global Cartesian ones. We can therefore write the gradients as \[\partial_{i}^{\prime}v_{j}^{\prime}=\begin{pmatrix}0&\Omega_{0}+s_{0}&0\\ -\Omega_{0}&0&0\\ 0&0&0\end{pmatrix}\,\] (D.18) and the coefficients matrix of the linearized Euler plus continuity system is (cf. eq. (D.4) and ignore both magnetic field terms and the Coriolis force, as the axes are non-rotating) \[\begin{pmatrix}-\omega&k_{1}&k_{2}&k_{3}\\ c_{s}^{2}k_{1}&-\omega&i\Omega_{0}&0\\ c_{s}^{2}k_{2}&-i(\Omega_{0}+s_{0})&-\omega&0\\ c_{s}^{2}k_{3}&0&0&-\omega\end{pmatrix}\.\] (D.19) We can then compute the dispersion relation to find \[\omega^{4}-\left[c_{s}^{2}\mathbf{k}^{2}+\Omega_{0}(\Omega_{0}+s_{0})\right]\omega^{2}+ic_{s}^{2}k_{1}k_{2}s_{0}\,\omega+c_{s}^{2}\Omega_{0}(\Omega_{0}+s_{0})(k_{3})^{2}=0\,\] (D.20) and observe this is consistent with the general dispersion relation in eq. (7.40) when restricted to the shear and vorticity associated with eq. (D.18). However, this is not quite the dispersion relation we obtained above (cf. eq. (D.12)). The reason for this is that the two local observers we have considered measure different frequencies, as the axes of the co-rotating observer rotate with angular velocity \(\Omega_{0}\hat{e}_{3}\) with respect to the other. To show why this is the resolution to the apparent conflict, let us consider once again the Born coordinates (cf. eqs. (D.7) and (D.8)). Given any vector \(a^{\hat{i}}\) we have \[\nabla_{t}a^{\hat{i}}=\partial_{\hat{t}}a^{\hat{i}}+\Omega_{0}\epsilon^{\hat{i}\hat{3}\hat{k}}a_{\hat{k}}\.\] (D.21) This relation, when we introduce a plane-wave WKB expansion, translates to \[-i\omega_{rot}\delta a^{\hat{i}}=-i\omega_{nr}\delta a^{\hat{i}}+\Omega_{0}\epsilon^{\hat{i}\hat{3}\hat{k}}\delta a_{\hat{k}}\,\] (D.22) where \(\omega_{rot}\) is the frequency measured by the co-rotating observer, while \(\omega_{nr}\) is the frequency measured by an observer that has the same worldline but uses non-rotating axes. Specifying eq. (D.22) to the perturbed velocity (noting that it would not apply to the continuity equation as the density is a scalar), and noting that the frequency in eq. (D.19) corresponds to \(\omega_{nr}\), we can reconcile the results obtained from eq. (D.19) with those from eq. (D.11). We also note here that the same logic applies when we consider magnetized flows. That is, if we work with the inertial induction equation and compute the background velocity gradients as in eq. (D.18), we also need to take into account the relation in eq. (D.22) for magnetic field disturbances to get back to eq.
(D.6) and the MRI dispersion relation.

## Appendix E The Routh-Hurwitz criterion

In this appendix we review the Routh-Hurwitz criterion (see [131]), which gives valuable information about the roots of a polynomial with real coefficients. The criterion is often useful for studies of the linear stability of a system of equations, and can also be used prior to a numerical investigation (to inform the numerical study). Given a real algebraic equation \[x^{n}+\tilde{a}_{1}x^{n-1}+\cdots+\tilde{a}_{n-1}x+\tilde{a}_{n}=0\,\] (E.1) the Routh-Hurwitz criterion states that the number of roots with positive real part corresponds to the number of sign changes--disregarding vanishing terms--in the following sequence \[T_{0}\,,\;T_{1}\,,\;T_{1}T_{2}\,,\;T_{2}T_{3}\,,\;\ldots\,,\;T_{n-2}T_{n-1}\,,\;\tilde{a}_{n}\,\] (E.2) where \[T_{0}=1\,\quad T_{1}=\tilde{a}_{1}\,\quad T_{2}=\det\begin{pmatrix}\tilde{a}_{1}&1\\ \tilde{a}_{3}&\tilde{a}_{2}\end{pmatrix}\,\quad T_{3}=\det\begin{pmatrix}\tilde{a}_{1}&1&0\\ \tilde{a}_{3}&\tilde{a}_{2}&\tilde{a}_{1}\\ \tilde{a}_{5}&\tilde{a}_{4}&\tilde{a}_{3}\end{pmatrix}\,\] (E.3) and so on. Throughout this work we have derived a number of dispersion relations in terms of the frequency \(\omega\) (as a function of the wave-vector \(\mathbf{k}\)), whereby linear stability corresponds to its roots having negative imaginary part. Rewriting the dispersion relation in terms of \(\Delta=-i\omega\), stability corresponds to \(\Delta\)-roots having negative real part. As such, using the Routh-Hurwitz criterion we can directly obtain information about the stability of a system without having to explicitly find the solutions to the dispersion relation--something that is always possible numerically (though it often requires an expensive parameter study) but is viable analytically only in special/simple cases. We conclude this appendix by noting a caveat that is not explicitly mentioned in [131], namely that the Routh-Hurwitz criterion can be used _only_ when all the coefficients of the polynomial are non-vanishing. We show this with a simple example: \[(x^{2}-2)(x^{2}-3)=x^{4}-5x^{2}+6=0\Longrightarrow x=\pm\sqrt{2}\,,\ \pm\sqrt{3}\.\] (E.4) Using the Routh-Hurwitz criterion, the sequence we obtain (neglecting vanishing terms) is \(1,6\) and we would deduce there are no roots with positive real part. This is obviously wrong.
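The sign-change count, together with the caveat above, is easy to implement. The sketch below is ours: it builds the determinants of eq. (E.3), evaluates the sequence of eq. (E.2) skipping vanishing terms, and then reproduces the failure on the example of eq. (E.4).

```python
# Sketch implementation of the Routh-Hurwitz criterion of appendix E.
import sympy as sp

def hurwitz_T(coeffs, k):
    """k-th determinant T_k of eq. (E.3); coeffs = [1, a1, ..., an] are
    the coefficients of the real monic polynomial in eq. (E.1)."""
    a = lambda j: coeffs[j] if 0 <= j < len(coeffs) else 0
    return sp.Matrix(k, k, lambda i, j: a(2 * i - j + 1)).det()

def rh_positive_roots(coeffs):
    """Sign changes in T0, T1, T1*T2, ..., T_{n-2}*T_{n-1}, a_n (eq. (E.2)),
    disregarding vanishing terms."""
    n = len(coeffs) - 1
    T = [sp.Integer(1)] + [hurwitz_T(coeffs, k) for k in range(1, n)]
    seq = T[:2] + [T[k - 1] * T[k] for k in range(2, n)] + [coeffs[-1]]
    signs = [sp.sign(v) for v in seq if v != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u != v)

# The caveat of eq. (E.4): coefficients of x**4 - 5*x**2 + 6 vanish, and the
# count misses the two roots sqrt(2), sqrt(3) with positive real part.
print(rh_positive_roots([1, 0, -5, 0, 6]))   # prints 0
```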
2310.14089
Quantitative Sobolev regularity of quasiregular maps
We quantify the Sobolev space norm of the Beltrami resolvent $(I- \mu \mathcal S)^{-1}$, where $\mathcal S$ is the Beurling-Ahlfors transform, in terms of the corresponding Sobolev space norm of the dilatation $\mu$ in the critical and supercritical ranges. Our estimate entails as a consequence quantitative self-improvement inequalities of Caccioppoli type for quasiregular distributions with dilatations in $W^{1,p}$, $p \ge 2$. Our proof strategy is then adapted to yield quantitative estimates for the resolvent $(I-\mu \mathcal S_\Omega)^{-1}$ of the Beltrami equation on a sufficiently regular domain $\Omega$, with $\mu\in W^{1,p}(\Omega)$. Here, $\mathcal S_\Omega$ is the compression of $\mathcal S$ to a domain $\Omega$. Our proofs do not rely on the compactness or commutator arguments previously employed in related literature. Instead, they leverage the weighted Sobolev estimates for compressions of Calderón-Zygmund operators to domains, recently obtained by the authors, to extend the Astala-Iwaniec-Saksman technique to higher regularities.
Francesco Di Plinio, A. Walton Green, Brett D. Wick
2023-10-21T19:07:49Z
http://arxiv.org/abs/2310.14089v1
# Quantitative Sobolev regularity of quasiregular maps ###### Abstract. We quantify the Sobolev space norm of the Beltrami resolvent \((I-\mu\mathcal{S})^{-1}\), where \(\mathcal{S}\) is the Beurling-Ahlfors transform, in terms of the corresponding Sobolev space norm of the dilatation \(\mu\) in the critical and supercritical ranges. Our estimate entails as a consequence quantitative self-improvement inequalities of Caccioppoli type for quasiregular distributions with dilatations in \(W^{1,p}\), \(p\geq 2\). Our proof strategy is then adapted to yield quantitative estimates for the resolvent \((I-\mu\mathcal{S}_{\Omega})^{-1}\) of the Beltrami equation on a sufficiently regular domain \(\Omega\), with \(\mu\in W^{1,p}(\Omega)\). Here, \(\mathcal{S}_{\Omega}\) is the compression of \(\mathcal{S}\) to a domain \(\Omega\). Our proofs do not rely on the compactness or commutator arguments previously employed in related literature. Instead, they leverage the weighted Sobolev estimates for compressions of Calderón-Zygmund operators to domains, recently obtained by the authors, to extend the Astala-Iwaniec-Saksman technique to higher regularities. Key words and phrases: Beltrami equation, quasiregular, quasiconformal, Sobolev regularity, compression of singular integrals, \(T\mathbf{1}\)-theorems, weighted bounds, Beurling-Ahlfors transform 2010 Mathematics Subject Classification: Primary: 30C62. Secondary: 42B20, 42B37. F. Di Plinio was partially supported by the National Science Foundation under the grant NSF-DMS-200510. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1929284 while this author was in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the _Harmonic Analysis and Convexity_ program of Fall 2022. F. Di Plinio is also partially supported by the FRA 2022 Program of University of Napoli Federico II, project ReSinAPAS - Regularity and Singularity in Analysis, PDEs, and Applied Sciences. A. W. Green's research partially supported by NSF grant NSF-DMS-2202813. B. D. Wick's research partially supported in part by NSF grant NSF-DMS-1800057, NSF-DMS-200510, NSF-DMS-2054863 as well as ARC DP 220100285.

## 1. Introduction

The _Beltrami equation_ \[\overline{\partial}f=\mu\,\partial f\tag{B}\] is governed by its coefficient, the _dilatation_ \(\mu\), a measurable function subject to the ellipticity condition \[\left\|\mu\right\|_{L^{\infty}(\mathbb{C})}\leq\frac{K-1}{K+1}\tag{1.1}\] for some \(K\geq 1\). Solutions \(f\in W^{1,2}_{\rm loc}(\mathbb{C})\) of (B) are the \(K\)-quasiregular maps, and the homeomorphic solutions are \(K\)-quasiconformal. The sharp exponents for their \(L^{p}\)-theory are encoded by the critical interval \[I_{K}=(q_{K},p_{K}),\qquad q_{K}\coloneqq\frac{2K}{K+1},\qquad p_{K}\coloneqq\frac{2K}{K-1}.\tag{1.2}\] The analysis of (B) hinges on the Beurling-Ahlfors transform \[\mathcal{S}f(z)=-\frac{1}{\pi}\lim_{\varepsilon\to 0}\int_{|z-w|>\varepsilon}\frac{f(w)}{(z-w)^{2}}\,\mathrm{d}w,\tag{1.3}\] and in particular on the Beltrami resolvent \((I-\mu\mathcal{S})^{-1}\): when \(\mu\) is compactly supported, the principal solution \(f\) to (B) satisfies \(\overline{\partial}f=(I-\mu\mathcal{S})^{-1}\mu\), see [2].
Our first result is a quantitative version of [7] in the critical and supercritical range, which we obtain as a consequence of weighted \(W^{1,p}\) bounds for the Beurling-Ahlfors operator, in consonance with the strategy of [3] for the zero-th order problem. **Theorem A**.: _Assume the dilatation \(\mu\in L^{\infty}(\mathbb{C})\cap W^{1,2}(\mathbb{C})\) satisfies (1.1) for some \(K\geq 1\). Then, for each \(1<r<2\),_ \[\big{\|}(I-\mu\mathcal{S})^{-1}\big{\|}_{\mathcal{L}(W^{1,r}(\mathbb{C}))}\lesssim 1 \tag{1.4}\] _with implicit constant depending exponentially on \(K\), \(\|\mu\|_{W^{1,2}(\mathbb{C})}\) and \(\frac{1}{\min\{2-r,r-1\}}\)._ _If in addition \(\mu\in W^{1,p}(\mathbb{C})\) for some \(2<p<\infty\),_ \[\big{\|}(I-\mu\mathcal{S})^{-1}\big{\|}_{\mathcal{L}(W^{1,p}(\mathbb{C}))}\lesssim 1+\|\mu\|_{W^{1,p}(\mathbb{C})}^{2} \tag{1.5}\] _with implicit constant depending exponentially on \(K\), \(\|\mu\|_{W^{1,2}(\mathbb{C})}\) and \(\max\left\{\frac{1}{p-2},p\right\}\)._ In the critical case (1.4), one cannot hope for \(I-\mu\mathcal{S}\) to be invertible on \(W^{1,2}(\mathbb{C})\) because in the corollary below, (1.10) fails for \(p=2\). Indeed, from [7, pp. 205-206], one can consider the quasiregular distribution \(\phi(z)=z(1-\log|z|)\), which does not belong to \(W^{2,2}_{\rm loc}(\mathbb{C})\), though its Beltrami coefficient, \(\mu(z)=\frac{z}{\bar{z}}\frac{1}{2\log|z|-1}\), does in fact belong to \(W^{1,2}_{\rm loc}(\mathbb{C})\). Quantitative self-improvement of quasiregular maps is often expressed through Caccioppoli inequalities (see the survey [20] or [2, §5.4.1]). In our case, Theorem A implies the following Caccioppoli inequalities for quasiregular distributions (see (1.7) - (1.10) below). Given an open set \(\Omega\), we say \(f\in L^{q}_{\rm loc}(\Omega)\) is a \(K\)-quasiregular distribution if it satisfies the distributional Beltrami equation on \(\Omega\), namely for some \(\mu\) satisfying (1.1), \[\big{\langle}f,\overline{\partial}\psi-\partial(\mu\psi)\big{\rangle}=0\quad\forall\psi\in C_{0}^{\infty}(\Omega). \tag{1.6}\] **Corollary A.1**.: _Let \(\mu\in L^{\infty}(\Omega)\cap W^{1,2}_{\rm loc}(\Omega)\) satisfy (1.1) for some \(K\geq 1\).
Then, for \(2<q<\infty\), \(f\in L^{q}_{\rm loc}(\Omega)\) satisfying (1.6), and any \(\eta\in C_{0}^{\infty}(\Omega)\),_ \[\|\eta(Df)\|_{L^{q}} \lesssim\|(D\eta)f\|_{L^{q}}; \tag{1.7}\] \[\text{for all }1<r<2,\quad\|\eta(D^{2}f)\|_{L^{r}} \lesssim\|(D\eta)f\|_{L^{r}}+\|(D\eta)(Df)\|_{L^{r}}+\|(D^{2}\eta)f\|_{L^{r}}. \tag{1.8}\] _In particular, \(f\in W^{2,r}_{\rm loc}(\Omega)\) for every \(r<2\). If furthermore \(\mu\in W^{1,p}_{\rm loc}(\Omega)\) for some \(p>2\), then, for \(\frac{p}{p-1}\leq q<\infty\), \(f\in L^{q}_{\rm loc}(\Omega)\) satisfying (1.6), and any \(\eta\in C_{0}^{\infty}(\Omega)\),_ \[\|\eta(Df)\|_{L^{q}} \lesssim\|(D\eta)f\|_{L^{q}}; \tag{1.9}\] \[\text{for all }1<r\leq p,\quad\|\eta(D^{2}f)\|_{L^{r}} \lesssim\|(D\eta)f\|_{L^{r}}+\|(D\eta)(Df)\|_{L^{r}}+\|(D^{2}\eta)f\|_{L^{r}}. \tag{1.10}\] _In particular, \(f\in W^{2,p}_{\rm loc}(\Omega)\)._ Experts in the area will readily observe that, without the precise dependence on \(\|\mu\|_{W^{1,2}(\mathbb{C})}\) we provide, the inequalities (1.7) and (1.9) follow from the fact that \(W^{1,2}(\mathbb{C})\) embeds into the space of functions with vanishing mean oscillation, or \(\operatorname{VMO}(\mathbb{C})\), together with the invertibility of \(I-\mu\mathcal{S}\) on \(L^{p}(\mathbb{C})\) for every \(1<p<\infty\) whenever \(\mu\in\operatorname{VMO}(\mathbb{C})\), see [3, Theorem 5]. However, by sharpening the assumption to \(\mu\in W^{1,2}(\mathbb{C})\), we establish, in Lemma 2.3 below, a quantitative version of this result which is a crucial step in Theorem A and Corollary A.1. The conclusion that quasiregular distributions in \(L^{q}_{\rm loc}(\Omega)\) self-improve to membership in \(W^{2,r}_{\rm loc}(\Omega)\) for \(q\) and \(r\) in the above specified ranges is one of the main results of Clop et al. in [7]. However, the Caccioppoli inequalities (1.8) and (1.10), with the precise implicit dependence on the local regularity of \(\mu\) inherited from (1.4) and (1.5) in Theorem A, are new to the best of our knowledge. Theorem A and the Caccioppoli inequalities are proved in §3, and rely on a Moser-Trudinger estimate for the Jacobian of the principal solution to (B) with \(\mu\in W^{1,2}(\mathbb{C})\). ### Global Sobolev regularity of Beltrami equations on domains \(\Omega\) Theorem A was actually uncovered in our attempts to address a more delicate problem, the invertibility of \(I-\mu\mathcal{S}_{\Omega}\), where \(\mathcal{S}_{\Omega}\) is the compression of the Beurling-Ahlfors transform to a bounded Lipschitz domain \(\Omega\subset\mathbb{C}\) defined by \[\left\langle\mathcal{S}_{\Omega}f,g\right\rangle=\left\langle\mathcal{S}\left(f1_{\overline{\Omega}}\right),1_{\overline{\Omega}}g\right\rangle,\qquad f,g\in\mathcal{C}_{0}^{\infty}(\mathbb{C}).\] In [12], we developed new \(T(1)\)-type theorems and weighted Sobolev space estimates for Calderón-Zygmund operators on domains, a broad class which includes compressions of global CZ operators. In particular, together with past work of e.g. Tolsa [29], the estimates of [12] uncover the precise connection between boundary regularity of \(\Omega\) and weighted Sobolev estimates for \(\mathcal{S}_{\Omega}\). In this article, this connection is exploited to extend the resolvent strategy of [3] to the Sobolev case and obtain the first quantitative Sobolev estimate for \((I-\mu\mathcal{S}_{\Omega})^{-1}\).
The compressed Beltrami resolvent \((I-\mu\mathcal{S}_{\Omega})^{-1}\) is connected to the Beltrami equation (B) for dilatations \(\mu\) whose support is contained in \(\overline{\Omega}\) and which belong to \(W^{1,p}(\Omega)\) for some \(p>2\). The Caccioppoli inequalities of Corollary A.1 imply that any solution \(f\) to (B) with \(\mu\) of this form belongs to \(W^{2,p}_{\mathrm{loc}}(\Omega)\). Thus, the interest is in global regularity, i.e. whether \(f\) belongs to \(W^{2,p}(\Omega)\). This problem is of interest even when \(f\) is the principal solution, in which case one has the representation from [2, p. 165], \[\overline{\partial}f=(I-\mu\mathcal{S}_{\Omega})^{-1}\mu.\] Furthermore, since \(\partial f=\mathcal{S}_{\Omega}(\overline{\partial}f)\) by (2.1) below, \[\|f\|_{W^{2,p}(\Omega)}\lesssim\left(1+\|\mathcal{S}_{\Omega}\|_{\mathcal{L}(W^{1,p}(\Omega))}\right)\|(I-\mu\mathcal{S}_{\Omega})^{-1}\|_{\mathcal{L}(W^{1,p}(\Omega))}\,\|\mu\|_{W^{1,p}(\Omega)}\,. \tag{1.11}\] The first factor, the norm of \(\mathcal{S}_{\Omega}\) on the Sobolev space \(W^{1,p}(\Omega)\), is by now well understood in the supercritical range in terms of the boundary regularity of \(\Omega\), see [29, 11, 25]. In fact, by these results, it is quantitatively equivalent to the Besov space \(B_{p,p}^{1-\frac{1}{p}}(\partial\Omega)\) norm of the boundary normal of \(\Omega\); see Definition 4.1 below. Accordingly, we say \(\Omega\) is a \(\mathcal{B}_{p}\) domain if this boundary regularity condition is satisfied. The second factor in (1.11) is our chief object of interest. While quantitative estimates of this norm appear to be unavailable in past literature, several results of qualitative nature have been obtained through methods in antithesis with those developed herein. Initially, invertibility of \(I-\mu\mathcal{S}_{\Omega}\) was studied in the Hölder scale in [22] and subsequently extended to the Sobolev and Triebel-Lizorkin scales, [4, 27, 25, 10]. These works all share the Neumann series blueprint initially introduced by Iwaniec in [19]. The main ingredients are unweighted bounds for \(\mathcal{S}_{\Omega}\) on smoothness spaces, established by means of _unweighted_ \(T(1)\)-type theorems. The apex of this line of attack was a pair of papers by M. Prats [25, 27], sharpening the result of [10], and establishing among other results the remarkable qualitative fact that \(I-\mu\mathcal{S}_{\Omega}\) is invertible on \(W^{1,p}(\Omega)\) assuming only that \(\Omega\) is a \(\mathcal{B}_{p}\) domain. **Theorem B**.: _Let \(p\geq r>2\), and \(\Omega\subset\mathbb{C}\) be a bounded simply connected \(\mathcal{B}_{p}\) domain. Let \(\mu\) be supported in \(\overline{\Omega}\), satisfying (1.1) for some \(K\geq 1\), and in addition \(\mu\in W^{1,p}(\Omega)\). Let \(f\) be the principal solution to (B).
Then, for \(O=f(\Omega)\) and \(\omega=\left|Jf^{-1}\right|^{1-p}\),_ \[\left\|(I-\mu\mathcal{S}_{\Omega})^{-1}\right\|_{\mathcal{L}(W^{1,p}(\Omega))}\lesssim\mathcal{O}\left[\left\|\mathcal{S}_{O}\right\|_{\mathcal{L}(W^{1,p}(O,\omega))}+\mathcal{O}^{3}\left(1+\left\|\mu\right\|_{W^{1,p}(\Omega)}^{6}\right)\right],\] \[\mathcal{O}=1+\left\|O\right\|_{\mathcal{B}_{p}}+\left\|\Omega\right\|_{\mathcal{B}_{p}}.\] _The implicit constant depends exponentially on \(\left\|\mu\right\|_{W^{1,p}(\Omega)}\) and double exponentially on \(\max\left\{\frac{1}{r-2},p\right\}\), \(K\), \(\left\|\mu\right\|_{W^{1,2}(\Omega)}\), and the Dini character of \(O\) and \(\Omega\)._ Using the novel weighted \(T(1)\) theorems on domains established by the authors in [12], and the relationship between these testing conditions and boundary smoothness developed in [11, 29], \(\left\|\mathcal{S}_{O}\right\|_{\mathcal{L}(W^{1,p}(O,\omega))}\) can be quantitatively controlled by \(\left\|O\right\|_{\mathcal{B}_{p+\varepsilon}}\) (see Lemma 4.3.iii below) for any \(\varepsilon>0\), yielding the following corollary. **Corollary B.1**.: _Let \(p\), \(r\), \(\Omega\), \(\mu\), \(f\), \(O\), and \(\mathcal{O}\) be as in Theorem B. Then, for any \(\varepsilon>0\),_ \[\left\|(I-\mu\mathcal{S}_{\Omega})^{-1}\right\|_{\mathcal{L}(W^{1,p}(\Omega))}\lesssim\mathcal{O}\left[\left\|O\right\|_{\mathcal{B}_{p+\varepsilon}}+\mathcal{O}^{3}\left(1+\left\|\mu\right\|_{W^{1,p}(\Omega)}^{6}\right)\right]\] _The implicit constant depends on the same parameters as in Theorem B, as well as \(\varepsilon^{-1}\)._ Let us provide a more specific description of the relation between Theorem B, as well as Corollary B.1, and the results of [27]. In particular, [27, Theorem 1.1] tells us that if \(\Omega\in\mathcal{B}_{p}\) and \(\mu\in W^{1,p}(\Omega)\), then the principal solution \(f\) lies in \(W^{2,p}(\Omega)\). Standard trace results [15, 30] then entail that \(O=f(\Omega)\) is a \(\mathcal{B}_{p}\) domain as well, which in turn is qualitatively equivalent, see Lemma 4.2 below, to the boundedness of \(\mathcal{S}_{O}:W^{1,p}(O,\omega)\to W^{1,p}(O,\omega)\), where \(\omega\) is as in Theorem B. Thus, Theorem B holds under the same assumptions as [27, Theorem 1.1, \(n=1\)], and may be viewed as a strict quantification of that result. Furthermore, Corollary B.1 replaces the analytic condition on \(\mathcal{S}_{O}\) with a fully geometric testing condition on \(O\), namely its membership to \(\mathcal{B}_{q}\) for some \(q>p\), thus providing an explicit dependence on the data \(\mu,\Omega\) and \(O\). We close this introduction with a circle of questions motivated by Corollary B.1. First of all, a crude version of Corollary B.1 with \(\varepsilon=0\) can be obtained without weighted estimates at the price of exponential dependence on the data \(\left\|\Omega\right\|_{\mathcal{B}_{p}}\) and \(\left\|O\right\|_{\mathcal{B}_{p}}\); cf. Remark 4.4 below. It is thus natural to ask whether a version of Corollary B.1 holds with \(\varepsilon=0\) and uniform polynomial estimates in the sharp Besov norms on \(\Omega\) and \(O\). This would hold if the Jacobian power \(\omega\) were an \(\mathrm{A}_{1}(O)\) weight with characteristic controlled polynomially by \(\mathcal{O}\) and \(\left\|\mu\right\|_{W^{1,p}(\Omega)}\), and \(\left\|\mathcal{S}_{O}1\right\|_{W^{1,p}(O,v)}\) were controlled by a constant depending only on \([v]_{\mathrm{A}_{1}(O)}\) and on the \(\mathcal{B}_{p}\) character of \(O\).
The latter statement for \(v=1\) is the content of [11, Theorem 1.1], whence it is legitimate to ask whether Lebesgue measure can be replaced with a generic \(\mathrm{A}_{1}\) weight therein, and whether a full analogue of Corollary B.1 holds for \(\varepsilon=0\). Furthermore, let us propose a strategy for removing the exponential dependence on the auxiliary \(W^{1,r}\)-norm of \(\mu\) in both Theorem B and Corollary B.1. The space \(W^{1,r}(\Omega)\) can actually be replaced by any Sobolev-type space \(X(\Omega)\) enjoying both properties 1. \(X(\Omega)\) continuously embeds into both \(W^{1,2}(\Omega)\) and \(L^{\infty}(\Omega)\); 2. For some \(Y\in\{X(\mathbb{C}),W^{1,2}(\mathbb{C})\}\), there exists \(M>0\) such that \[\left\|(I-\mu\mathcal{S})^{-1}\right\|_{\mathcal{L}(Y)}\lesssim 1+\left\|\mu\right\|_{X(\mathbb{C})}^{M}.\] By Theorem A, \(X=W^{1,r}\) for \(r>2\) satisfies these conditions. A candidate for a space \(X\) larger than \(W^{1,r}\) is the Lorentz-Sobolev space consisting of \(L^{2}\) functions with derivatives in the Lorentz space \(L^{2,1}\). While it is known that \(I-\mu\mathcal{S}\) is invertible on this space [10], no norm estimates are known. This leads us to ask whether there is a version of Theorem A for \(D\mu\in L^{2,1}(\mathbb{C})\). **Structure of the article.** In Section 2, after a few preliminaries, we deduce quantitative estimates for the Beltrami resolvent associated to dilatations \(\mu\in W^{1,2}(\mathbb{C})\) from a precise \(\mathrm{A}_{p}\)-class embedding for powers of Jacobians of the corresponding principal solutions, see Lemmas 2.3 and 2.2 respectively. Section 3 contains the proofs of Theorem A and Corollary A.1. In Section 4, we provide proofs of Theorem B and Corollary B.1. As an intermediate step, in Proposition 4.8, we establish a quantitative version of a recent result of Astala, Prats and Saksman [4, Theorem 1.1] on the regularity of quasiconformal solutions to Beltrami equations on \(\mathcal{B}_{p}\) domains. Furthermore, we explain why methods that do not treat Jacobians of principal solutions as Muckenhoupt weights lead to exponential-type estimates in the data \(\mu\), \(\Omega\), and \(O\), so that Theorem B is not within their reach, cf. Remark 4.4 below. Finally, Section 5 deals with the technical proof of Lemma 2.2. ## 2. The Beltrami resolvent when \(\mu\in W^{1,2}(\mathbb{C})\) To prove Theorem A, we will need a few preliminaries. The facts that we will need from the classical theory of quasiconformal maps will be recalled throughout from the monograph [2]. Recall the definition of the Beurling-Ahlfors transform from (1.3). It is of particular use because it intertwines the derivatives \(\partial:=\frac{\partial}{\partial z}\) and \(\overline{\partial}:=\frac{\partial}{\partial\overline{z}}\), which means \[\mathcal{S}(\overline{\partial}f)=\partial f,\qquad f\in W^{1,2}(\mathbb{C}). \tag{2.1}\] This property can be established by appealing to the Fourier transform, or through the Cauchy transform \(\mathcal{K}\) defined by \[\mathcal{K}f(z)=\frac{1}{\pi}\lim_{\varepsilon\to 0}\int_{|z-w|>\varepsilon}\frac{f(w)}{z-w}\,\mathrm{d}w. \tag{2.2}\] The Cauchy transform is the inverse of the \(\overline{\partial}\) operator and \(\partial\mathcal{K}=\mathcal{S}\), so that \(\partial f=\partial\mathcal{K}(\overline{\partial}f)=\mathcal{S}(\overline{\partial}f)\).
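Identity (2.1) can also be verified numerically through the Fourier multiplier of \(\mathcal{S}\), namely \(\widehat{\mathcal{S}f}(\xi)=\frac{\overline{\xi}}{\xi}\widehat{f}(\xi)\). The discretization below is a sketch of ours, not part of the proof; the grid parameters and the Gaussian test function are illustrative choices.

```python
# Numerical sketch of the intertwining S(dbar f) = d f of eq. (2.1),
# using the Fourier multiplier conj(xi)/xi of the Beurling-Ahlfors transform.
import numpy as np

N, L = 256, 16.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-(X**2 + Y**2))            # smooth, rapidly decaying test function

xi = np.fft.fftfreq(N, d=L / N)
XI1, XI2 = np.meshgrid(xi, xi, indexing="ij")
zeta = XI1 + 1j * XI2                 # frequency identified with a complex number

fhat = np.fft.fft2(f)
dbar_f = np.fft.ifft2(1j * np.pi * zeta * fhat)           # dbar = (dx + i dy)/2
d_f = np.fft.ifft2(1j * np.pi * np.conj(zeta) * fhat)     # d    = (dx - i dy)/2

mult = np.divide(np.conj(zeta), zeta,
                 out=np.zeros_like(zeta), where=(zeta != 0))
S_dbar_f = np.fft.ifft2(mult * np.fft.fft2(dbar_f))

print(np.max(np.abs(S_dbar_f - d_f)))   # agreement at machine precision
```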
We use \(Df\) to denote the gradient \((\partial f,\overline{\partial}f)\) and, for each integer \(n\geq 2\), let \(D^{n}f\) denote the vector function consisting of all combinations of \(n\)-th order partial derivatives in \(z\) and \(\overline{z}\) of \(f\); by convention, \(D^{1}f=Df\) and \(D^{0}f=f\). We will use \(|D^{n}f|\) to denote the \(\ell^{1}\) norm of this vector. Given an open set \(E\subset\mathbb{C}\), an a.e. positive element of \(L^{1}_{\mathrm{loc}}(E)\) is called a weight on \(E\). For \(\omega\) a weight on \(E\), \(n\) a nonnegative integer, and \(0<p<\infty\), define the homogeneous and inhomogeneous weighted Sobolev norms by \[\|f\|_{\dot{W}^{n,p}(E,\omega)}=\sum_{|\alpha|=n}\left\|\left(\partial^{\alpha_{1}}\overline{\partial}^{\alpha_{2}}f\right)\omega^{\frac{1}{p}}\right\|_{L^{p}(E)},\qquad\|f\|_{W^{n,p}(E,\omega)}=\sum_{j=0}^{n}\|f\|_{\dot{W}^{j,p}(E,\omega)}\,,\] where \(\alpha=(\alpha_{1},\alpha_{2})\in\mathbb{N}^{2}\) and \(|\alpha|=\alpha_{1}+\alpha_{2}\). We also use the local average notation for a cube \(Q\subset\mathbb{C}\), \[\langle f\rangle_{p,Q}=\left(|Q|^{-1}\int_{Q}|f(z)|^{p}\,\mathrm{d}z\right)^{\frac{1}{p}},\] with the simplification \(\langle f\rangle_{Q}=\langle f\rangle_{1,Q}\) when \(p=1\). We say a weight \(\omega\) on \(\mathbb{C}\) belongs to the Muckenhoupt class \(\mathrm{A}_{p}(\mathbb{C})\) if the associated characteristic, \[[\omega]_{\mathrm{A}_{p}(\mathbb{C})}=\sup_{Q\text{ cube in }\mathbb{C}}\,\langle\omega\rangle_{Q}\,\big{\langle}\omega^{-1}\big{\rangle}_{\frac{1}{p-1},Q},\] is finite. To apply the strategy of [3], we will now need weighted Sobolev estimates for \(\mathcal{S}\), which were recently obtained for smooth Calderon-Zygmund operators in sharp quantitative form in [13]. The estimates we require are summarized in the following proposition. **Proposition 2.1**.: _Let \(n\in\mathbb{N}\), \(1<p<\infty\). There exists \(C_{p,n}>0\) such that for any \(\omega\in\mathrm{A}_{p}(\mathbb{C})\),_ \[\|\mathcal{S}f\|_{\dot{W}^{n,p}(\mathbb{C},\omega)}\leq C_{p,n}[\omega]^{\max\{1,\frac{1}{p-1}\}}_{\mathrm{A}_{p}(\mathbb{C})}\,\|f\|_{\dot{W}^{n,p}(\mathbb{C},\omega)}\,. \tag{2.3}\] Proof.: Since \(\mathcal{S}\) is of convolution type, (2.3) more or less follows from the case \(n=0\), which is well-known [23], though some care must be taken with the principal value integral; one can consult [13, Corollary A.1] for a complete proof of (2.3). Introduce the notation \[Jf=|\partial f|^{2}-\left|\overline{\partial}f\right|^{2},\] which is equal to the determinant of the Jacobian of \(f\) as a mapping from \(\mathbb{R}^{2}\) to itself. A characterization of \(K\)-quasiconformal mappings equivalent to (B) is the distortion inequality \[|Df|^{2}\leq K\left|Jf\right|. \tag{2.4}\] The main lemma concerning \(|Jf|\) for \(\mu\in W^{1,2}(\mathbb{C})\) is a consequence of the critical Moser-Trudinger Sobolev embedding, and is proved in §§5.1 and 5.3. **Lemma 2.2**.: _Suppose \(\mu\) satisfies (1.1) for some \(K\geq 1\), and in addition that \(\mu\in W^{1,2}(\mathbb{C})\) with \(\|\mu\|_{W^{1,2}(\mathbb{C})}\leq L\). Let \(f\) be the principal solution to (B), \(a\in\mathbb{R}\), and \(1<p<\infty\). Then, the Jacobians \(|Jf|^{a}\) and \(\left|Jf^{-1}\right|^{a}\) are both \(\mathrm{A}_{p}(\mathbb{C})\) weights.
In particular, there exists a constant \(C=C(K)>0\) such that for any \(1<p<\infty\),_ \[\left[\left|Jf^{-1}\right|^{1-\frac{p}{2}}\right]^{\max\left\{1,\frac{1}{p-1}\right\}}_{\mathrm{A}_{p}(\mathbb{C})}\leq C\exp\left(C\max\left\{p,\frac{1}{p-1}\right\}^{2}L^{2}\right); \tag{2.5}\] \[\left[\left|Jf^{-1}\right|^{1-p}\right]^{\max\left\{1,\frac{1}{p-1}\right\}}_{\mathrm{A}_{p}(\mathbb{C})}\leq C\exp\left(C\max\left\{p^{2},\frac{1}{p-1}\right\}L^{2}\right). \tag{2.6}\] The second lemma we will use follows from Lemma 2.2 and the strategy of [3]. The estimate (2.7) below is known qualitatively, since \(W^{1,2}(\mathbb{C})\) embeds into \(\mathrm{VMO}(\mathbb{C})\), the space of functions with vanishing mean oscillation, and it is well-known that \(I-\mu\mathcal{S}\) is invertible on all \(L^{p}(\mathbb{C})\) for \(1<p<\infty\) and \(\mu\in\mathrm{VMO}(\mathbb{C})\), a result that can be found in [3, Theorem 5]. **Lemma 2.3**.: _Suppose \(\mu\) satisfies (1.1) for some \(K\geq 1\), and in addition that \(\mu\in W^{1,2}(\mathbb{C})\) with \(\|\mu\|_{W^{1,2}(\mathbb{C})}\leq L\). Then, there exists \(C=C(K)>0\) such that for all \(1<p<\infty\),_ \[\left\|(I-\mu\mathcal{S})^{-1}\right\|_{\mathcal{L}(L^{p}(\mathbb{C}))}\leq C\exp\left(C\max\left\{p,\frac{1}{p-1}\right\}^{2}L^{2}\right). \tag{2.7}\] Proof of Lemma 2.3.: The Astala-Iwaniec-Saksman strategy from [3] shows that \[\|(I-\mu\mathcal{S})^{-1}\|_{\mathcal{L}(L^{p}(\mathbb{C}))}\lesssim_{K}\|\mathcal{S}\|_{\mathcal{L}(L^{p}(\mathbb{C},\omega))}, \tag{2.8}\] where \(\omega=\left|Jf^{-1}\right|^{1-\frac{p}{2}}\) and \(f\) is the principal solution to (B). See [2, §14.2, (14.25)] for this exact statement, or refer to the proof of Proposition 4.6 below. Estimating the right hand side of (2.8) by (2.3) in Proposition 2.1 with \(n=0\), and by (2.5) in Lemma 2.2, concludes the proof. ## 3. Proof of Theorem A and Corollary A.1 We will prove the critical (1.4) and supercritical (1.5) estimates in Theorem A at the same time. To this end, let \(2\leq p<\infty\) and introduce \[r\in\left\{\begin{array}{ll}\left\{\frac{pp^{\prime}}{2},p\right\}&p>2,\\ (1,2)&p=2,\end{array}\right.\quad q\coloneqq\left\{\begin{array}{ll}p,&p>2,\\ r,&p=2.\end{array}\right.\] The key relationship among these exponents is that \[\|D(g_{1}g_{2})\|_{L^{r}(\mathbb{C})}\lesssim\left(\|g_{2}\|_{L^{\infty}(\mathbb{C})}+\|g_{2}\|_{W^{1,p}(\mathbb{C})}\right)\|g_{1}\|_{W^{1,q}(\mathbb{C})}, \tag{3.1}\] whenever the right hand side is finite. When \(p=2\) or \(r=p\), (3.1) is a consequence of the product rule, Holder's inequality, and Sobolev embedding. When \(p>2\) and \(r=\frac{pp^{\prime}}{2}\), Holder's inequality shows that \[\|(Dg_{1})g_{2}\|_{L^{\frac{pp^{\prime}}{2}}(\mathbb{C})}\leq\|Dg_{1}\|_{L^{p}(\mathbb{C})}\|g_{2}\|_{L^{\frac{pp^{\prime}}{2-p^{\prime}}}(\mathbb{C})}.\] The second term is then handled by interpolation between \(L^{\infty}\) and \(L^{p}\), and subsequently bounded by \(\|g_{2}\|_{W^{1,p}(\mathbb{C})}\) due to Sobolev embedding. This same argument with \(g_{1}\) and \(g_{2}\) exchanged establishes (3.1) in all cases. With this notation, to prove Theorem A it suffices to estimate \(\left\|(I-\mu\mathcal{S})^{-1}\right\|_{\mathcal{L}(W^{1,p}(\mathbb{C}))}\) with the prescribed dependence on \(\mu\), \(p\), \(q\), and \(r\).
Since the final estimate will have exponential blow-up at the endpoints of our ranges (\(r\) approaching \(1\) or \(2\) in the case \(p=2\), or, in the other case, as \(p\) approaches \(2\) or \(\infty\)), we use \(A\lesssim B\) to denote \(A\leq CB\) for some \(C\) depending polynomially on \(k\), \(p\), \(r\), \(q\), and \(L\). Most importantly, the Sobolev embedding theorems have polynomial blow-up at these endpoints, hence (3.1) holds with this prescribed convention for \(\lesssim\). We make the further reduction, following [3, pp. 39-40], to consider \(w,h\in\mathcal{C}^{\infty}(\mathbb{C})\) satisfying the inhomogeneous Beltrami equation \[\overline{\partial}w=\mu\partial w+h.\] Normalize so that \(\|h\|_{W^{1,q}(\mathbb{C})}\leq 1\) and let \(L\coloneqq\|\mu\|_{W^{1,2}(\mathbb{C})}\). Then, Theorem A amounts to the _a priori_ estimate \[\|D\overline{\partial}w\|_{L^{r}(\mathbb{C})}\lesssim\left\{\begin{array}{ll}\exp\left(C\max\left\{\frac{1}{r-1},\frac{1}{(2-r)^{2}}\right\}L^{2}\right)&p=2;\\ \exp\left(C\max\left\{p,\frac{1}{p-2}\right\}^{2}L^{2}\right)\left(1+\|\mu\|_{W^{1,p}(\mathbb{C})}\right)^{2}&p>2.\end{array}\right. \tag{3.2}\] Indeed, the estimates in Theorem A are now special cases of (3.2). The critical case (1.4) is recovered when \(1<r<2\) and \(p=2\); (1.5) is recovered by \(r=p>2\). ### Main line of proof of Theorem A Let \(f\) be the principal solution to (B). By [2, Theorem 5.2.3], for \(\mu\in W^{1,2}(\mathbb{C})\), \(\partial f=\mathrm{e}^{\sigma}\) where \(\sigma\) satisfies \[\overline{\partial}\sigma=\mu\partial\sigma+\partial\mu.\] Applying Lemma 2.3 shows that \[\|\sigma\|_{W^{1,p}(\mathbb{C})}\lesssim\exp(Cp^{2}L^{2})\|\mu\|_{W^{1,p}(\mathbb{C})}. \tag{3.3}\] Next, introduce \(u=w\circ f^{-1}\), so that by the chain rule, \[\begin{split}\overline{\partial}w&=\left(\partial u\circ f\right)\overline{\partial}f+\left(\overline{\partial}u\circ f\right)\overline{\partial f}\\ \mu\partial w+h&=\mu\left[\left(\partial u\circ f\right)\partial f+\left(\overline{\partial}u\circ f\right)\overline{\overline{\partial}f}\right]+h.\end{split} \tag{3.4}\] #### 3.1.1. A priori bounds on the antiholomorphic part Using the equations for \(f\) and \(w\), we obtain from (3.4) that \[\left(\overline{\partial}u\circ f\right)\overline{\partial f}=\frac{h}{1-|\mu|^{2}}\eqqcolon H. \tag{3.5}\] It follows from (3.1) that \[\left\|DH\right\|_{L^{r}(\mathbb{C})}\lesssim 1+\|\mu\|_{W^{1,p}(\mathbb{C})}. \tag{3.6}\] Referring to (3.5), observe the crucial equality \[\begin{split}DH&=D\left[\left(\overline{\partial}u\circ f\right)\mathrm{e}^{\overline{\sigma}}\right]=\left(\overline{\partial}u\circ f\right)\overline{\partial f}\,\overline{D\sigma}+D\left(\overline{\partial}u\circ f\right)\overline{\partial f}\\ &=H\overline{D\sigma}+D\left(\overline{\partial}u\circ f\right)\overline{\partial f}\eqqcolon H\overline{D\sigma}+G.\end{split}\] Using the regularity of \(\sigma\) from (3.3) and of \(H\) from (3.6), together with Sobolev embedding, gives \[\|G\|_{L^{r}(\mathbb{C})}\lesssim\|DH\|_{L^{r}(\mathbb{C})}+\|hD\sigma\|_{L^{r}(\mathbb{C})}\lesssim\exp\left(Cp^{2}L^{2}\right)\left(1+\|\mu\|_{W^{1,p}(\mathbb{C})}\right).
\tag{3.7}\] Another calculation relying on the chain rule yields the equality \[\left(\overline{\partial f}\right)^{2}\left(D\overline{\partial}u\circ f\right)=\mathbb{A}\,\overline{\partial f}\,D\left(\overline{\partial}u\circ f\right),\qquad\mathbb{A}\coloneqq\frac{1}{|Jf|}\left[\begin{array}{cc}\overline{\partial f}^{2}&-\overline{\partial f}\,\overline{\overline{\partial}f}\\ -\overline{\partial f}\,\overline{\partial}f&|\partial f|^{2}\end{array}\right],\] and (3.7) together with \(K\)-quasiconformality of \(f\) from (2.4) entail the estimate \[\left\|\left(\overline{\partial f}\right)^{2}\left(D\overline{\partial}u\circ f\right)\right\|_{L^{r}(\mathbb{C})}\lesssim\|\mathbb{A}\|_{\infty}\|G\|_{L^{r}(\mathbb{C})}\lesssim\exp\left(Cp^{2}L^{2}\right)\left(1+\|\mu\|_{W^{1,p}(\mathbb{C})}\right). \tag{3.8}\] #### 3.1.2. Estimating the norm \(\|\overline{\partial}w\|_{W^{1,r}}\) Differentiating the first line of (3.4), we see that \[D\overline{\partial}w=D\left(\partial u\circ f\right)\overline{\partial}f+\left(\partial u\circ f\right)D\overline{\partial}f+DH. \tag{3.9}\] By virtue of (3.6), we are left with estimating the first two terms on the right hand side. By (2.1), the chain rule, (2.4), and a change of variables, the first term in (3.9) is estimated as \[\begin{split}\left\|D\left(\partial u\circ f\right)\overline{\partial}f\right\|_{L^{r}(\mathbb{C})}^{r}&=\int_{\mathbb{C}}\left|D[\mathcal{S}(\overline{\partial}u)\circ f]\right|^{r}\left|\overline{\partial}f\right|^{r}\\ &\lesssim\int_{\mathbb{C}}\left|D[\mathcal{S}(\overline{\partial}u)]\circ f\right|^{r}\left|Jf\right|^{r}\\ &\lesssim\int_{\mathbb{C}}|D\mathcal{S}(\overline{\partial}u)|^{r}|Jf^{-1}|^{1-r}.\end{split} \tag{3.10}\] By Lemma 2.2, \(|Jf^{-1}|^{1-r}\in\mathrm{A}_{r}(\mathbb{C})\) with the dependence (2.6), so the weighted Sobolev estimate for \(\mathcal{S}\) (Proposition 2.1 with \(n=1\) and \(p=r\)) applied to (3.10) implies \[\left\|D(\partial u\circ f)\overline{\partial}f\right\|_{L^{r}(\mathbb{C})}\lesssim\exp\left(C\max\left\{r^{2},\tfrac{1}{r-1}\right\}L^{2}\right)\left(\int_{\mathbb{C}}|D\overline{\partial}u|^{r}|Jf^{-1}|^{1-r}\right)^{\frac{1}{r}}\lesssim\exp\left(C\max\left\{p^{2},\tfrac{1}{r-1}\right\}L^{2}\right)\left(1+\|\mu\|_{W^{1,p}(\mathbb{C})}\right), \tag{3.11}\] where the last estimate follows from changing variables back, applying (3.8), and the fact that \(r\leq p\). Combining (3.9), (3.11), and (3.6), we have \[\left\|D\overline{\partial}w\right\|_{L^{r}(\mathbb{C})}\lesssim\exp\left(C\max\left\{p^{2},\tfrac{1}{r-1}\right\}L^{2}\right)\left(1+\|\mu\|_{W^{1,p}(\mathbb{C})}\right)+\left\|(\partial u\circ f)D\overline{\partial}f\right\|_{L^{r}(\mathbb{C})}. \tag{3.12}\] It remains to estimate the final term in (3.12). Notice that \(D\overline{\partial}f=(D\mu+\mu D\sigma)\partial f\eqqcolon\lambda\partial f\) and \(\lambda\in L^{p}(\mathbb{C})\) by (3.3). With (3.12) in hand, we now complete the proof of (3.2) in the case \(r<p\). Let \(p<s<\infty\) be determined by \(\tfrac{1}{r}=\tfrac{1}{p}+\tfrac{1}{s}\). Then, by Holder's inequality, \[\|(\partial u\circ f)D\overline{\partial}f\|_{L^{r}(\mathbb{C})}\leq\|(\partial u\circ f)\partial f\|_{L^{s}(\mathbb{C})}\|\lambda\|_{L^{p}(\mathbb{C})}.
\tag{3.13}\] As in (3.10) above, by (2.1), a change of variable, (2.4), and the weighted estimate for \(\mathcal{S}\) on \(L^{s}(\mathbb{C})\), \[\int_{\mathbb{C}}|\partial u\circ f|^{s}|\partial f|^{s}\lesssim\int_{\mathbb{C}}|\mathcal{S}(\overline{\partial}u)|^{s}|Jf^{-1}|^{1-\frac{s}{2}}\lesssim\exp\left(Cs^{3}L^{2}\right)\int_{\mathbb{C}}|\overline{\partial}u|^{s}|Jf^{-1}|^{1-\frac{s}{2}}. \tag{3.14}\] The same logic leading to (3.1), after changing variables, facilitates the final estimate \[\left(\int_{\mathbb{C}}|\overline{\partial}u|^{s}|Jf^{-1}|^{1-\frac{s}{2}}\right)^{\frac{1}{s}}\lesssim\|H\|_{L^{s}(\mathbb{C})}\lesssim\|h\|_{L^{s}(\mathbb{C})}\lesssim 1. \tag{3.15}\] Stringing together (3.13), (3.14), and (3.15), using (3.3) to estimate \(\lambda\), and recalling \(s>p\), the final term in (3.12) is estimated for \(r<p\) by \[\|(\partial u\circ f)D\overline{\partial}f\|_{L^{r}(\mathbb{C})}\lesssim\exp\left(Cs^{2}L^{2}\right)\left(1+\|\mu\|_{W^{1,p}(\mathbb{C})}\right). \tag{3.16}\] In the case \(p=2\), \(s=\tfrac{2r}{2-r}\sim\tfrac{1}{2-r}\), so by (3.12) and (3.16), (3.2) is established for \(p=2\). For \(p>2\) and \(r=\tfrac{pp^{\prime}}{2}\), one has \(s=\tfrac{p^{2}}{p-2}\sim\max\left\{p,\tfrac{1}{p-2}\right\}\). In this case (3.12) and (3.16) imply \[\left\|\overline{\partial}w\right\|_{W^{1,\tfrac{pp^{\prime}}{2}}(\mathbb{C})}\lesssim\exp\left(C\max\left\{p,\tfrac{1}{p-2}\right\}^{2}L^{2}\right)\left(1+\|\mu\|_{W^{1,p}(\mathbb{C})}\right). \tag{3.17}\] Finally, to achieve (3.2) when \(r=p\), we notice that since \(\tfrac{pp^{\prime}}{2}>2\), by (3.4), Sobolev embedding, and (3.17), \[\|(\partial u\circ f)\partial f\|_{L^{\infty}(\mathbb{C})}\lesssim\|\partial w\|_{L^{\infty}(\mathbb{C})}+\|\mu H\|_{L^{\infty}(\mathbb{C})}\lesssim\exp\left(C\max\left\{p,\tfrac{1}{p-2}\right\}^{2}L^{2}\right)\left(1+\|\mu\|_{W^{1,p}(\mathbb{C})}\right). \tag{3.18}\] Apply (3.18) to estimate the final term in (3.12): \[\|(\partial u\circ f)D\overline{\partial}f\|_{L^{p}(\mathbb{C})}\lesssim\|(\partial u\circ f)\partial f\|_{L^{\infty}(\mathbb{C})}\|\lambda\|_{L^{p}(\mathbb{C})}\lesssim\exp\left(C\max\left\{p,\tfrac{1}{p-2}\right\}^{2}L^{2}\right)\left(1+\|\mu\|_{W^{1,p}(\mathbb{C})}\right)^{2},\] and (3.2) immediately follows for \(r=p\). ### Proof of Corollary A.1 We will again give a unified proof of the critical and supercritical cases. To this end, let \(p\geq 2\) and consider \[q\in\left\{\begin{array}{ll}(2,\infty),&p=2,\\ \left[\frac{p}{p-1},\infty\right)&p>2,\end{array}\right.\qquad r=\frac{q}{q-1}.\] Then, (1.7) and (1.9) will follow from \[\|\eta(Df)\|_{L^{q}(\mathbb{C})}\lesssim\|(D\eta)f\|_{L^{q}(\mathbb{C})}, \tag{3.19}\] while (1.8) and (1.10) will follow from \[\|\eta(D^{2}f)\|_{L^{r}(\mathbb{C})}\lesssim\|(D\eta)f\|_{L^{r}(\mathbb{C})}+\|(D\eta)(Df)\|_{L^{r}(\mathbb{C})}+\|(D^{2}\eta)f\|_{L^{r}(\mathbb{C})}. \tag{3.20}\] In this proof, we will not track the constants implicit in (3.19) and (3.20), but they are an absolute constant multiple of the corresponding estimates for \((I-\mu\mathcal{S})^{-1}\) in Lemma 2.3 and Theorem A. First we will prove (3.19). Since \(f\in L^{q}_{\rm loc}\), we claim that we can extend (1.6) to all \(\psi\) of the form \(\eta\phi\) for \(\eta\in C_{0}^{\infty}(E)\) and \(\phi\in W^{1,r}(\mathbb{C})\).
Indeed, for such \(\eta\) and \(\phi\), by Holder's inequality and Sobolev embedding, \[\|(\overline{\partial}-\partial\mu)(\eta\phi)\|_{L^{r}(\mathbb{C})}\lesssim\|\eta\phi\|_{W^{1,r}(\mathbb{C})}+\left(\|\eta\mu\|_{L^{\infty}(\mathbb{C})}+\|\eta\mu\|_{W^{1,p}(\mathbb{C})}\right)\|\phi\|_{W^{1,r}(\mathbb{C})}.\] Therefore, (1.6) holds for all such \(\psi=\eta\phi\) by density. Now, fix \(\eta\in C_{0}^{\infty}(E)\), let \(\chi\in C_{0}^{\infty}(E)\) with \(\chi\equiv 1\) on \(\operatorname{supp}\eta\), and set \(\nu=\mu\chi\). Let \(g\in C_{0}^{\infty}(E)\) and let \(v\) satisfy \[\overline{\partial}v-\nu\partial v=g. \tag{3.21}\] Then, Lemma 2.3 and Theorem A respectively imply \[\|v\|_{W^{1,r}(\mathbb{C})}\lesssim\|g\|_{L^{r}(\mathbb{C})},\qquad\|v\|_{W^{2,r}(\mathbb{C})}\lesssim\|g\|_{W^{1,r}(\mathbb{C})}. \tag{3.22}\] Furthermore, applying \(\partial\) to (3.21) we obtain \[\partial g=(\overline{\partial}-\partial\nu)(\partial v).\] Pair the above display with \(F=\eta f\) to obtain \[\left\langle F,\partial g\right\rangle=\left\langle\eta f,\left(\overline{\partial}-\partial\nu\right)\partial v\right\rangle=\left\langle f,\left(\overline{\partial}-\partial\nu\right)(\eta\partial v)\right\rangle-\left\langle f,\left(\overline{\partial}\eta-\nu\partial\eta\right)\partial v\right\rangle.\] The precise form of \(\chi\) shows that \(\nu\eta=\mu\chi\eta=\mu\eta\) so that, since \(\partial v\in W^{1,r}(\mathbb{C})\) by (3.22), the first term on the right hand side vanishes by (1.6). On the other hand, the first estimate in (3.22) shows that \[\left|\left\langle f,\left(\overline{\partial}\eta-\nu\partial\eta\right)\partial v\right\rangle\right|\lesssim\|(D\eta)f\|_{L^{q}(\mathbb{C})}\|g\|_{L^{r}(\mathbb{C})}.\] Therefore, combining the previous two displays and using compact support, \[|\left\langle\partial F,g\right\rangle|\lesssim\|(D\eta)f\|_{L^{q}(\mathbb{C})}\|g\|_{L^{r}(\mathbb{C})},\] for all \(g\in C_{0}^{\infty}\). This establishes that \(\|\eta\partial f\|_{L^{q}}\) is bounded by the right hand side of (3.19). To prove the same for \(\eta\overline{\partial}f\), simply notice that by (1.6), there holds for any \(h\in C_{0}^{\infty}(\mathbb{C})\), \[\left|\left\langle\eta\overline{\partial}f,h\right\rangle\right|\leq|\left\langle\mu\eta\partial f,h\right\rangle|.\] We proceed to prove (3.20). Notice that (3.19), which we just proved, together with the assumption \(f\in L^{q}_{\rm loc}\), establishes that \(F=\eta f\) indeed belongs to \(W^{1,s}(\mathbb{C})\) for every \(s\leq q\). Therefore, \[\overline{\partial}F-\nu\partial F=(\overline{\partial}\eta-\nu\partial\eta)f.\] Using Sobolev embedding, the right hand side of the above display belongs to \(W^{1,r}\) with norm \[\|(\overline{\partial}\eta-\nu\partial\eta)f\|_{W^{1,r}(\mathbb{C})}\lesssim\|(D\eta)f\|_{L^{r}(\mathbb{C})}+\|(D\eta)(Df)\|_{L^{r}(\mathbb{C})}+\|(D^{2}\eta)f\|_{L^{r}(\mathbb{C})}. \tag{3.23}\] On the other hand, by (2.1) and Proposition 2.1, \[\|D^{2}F\|_{L^{r}(\mathbb{C})}\leq\left\|\overline{\partial}F\right\|_{W^{1,r}(\mathbb{C})}+\|\partial F\|_{W^{1,r}(\mathbb{C})}\lesssim\left\|\overline{\partial}F\right\|_{W^{1,r}(\mathbb{C})}.\] Furthermore, Theorem A applies to give \[\left\|\overline{\partial}F\right\|_{W^{1,r}(\mathbb{C})}\lesssim\|(\overline{\partial}\eta-\nu\partial\eta)f\|_{W^{1,r}(\mathbb{C})}\] and the proof of (3.20) is concluded by (3.23). Finally, we establish the concluding statement that \(f\in W^{2,r}_{\rm loc}(E)\).
To this end, we iterate the first Caccioppoli inequality (3.19) to show that \(f\in W^{1,s}_{\rm loc}(E)\) for every \(1<s<\infty\). Indeed, let \(f\in L^{q}_{\rm loc}(E)\) and suppose \(s>q>1\). By (3.19), \(f\in W^{1,q}_{\rm loc}(E)\), so by Sobolev embedding, \(f\in L^{q_{1}}_{\rm loc}(E)\) for every \(q_{1}\) satisfying \[q_{1}<\left\{\begin{array}{ll}\frac{2q}{2-q},&q\leq 2;\\ \infty,&q>2.\end{array}\right.\] In particular, we can choose \(q_{1}>2\) so that a second application of (3.19) implies \(f\in W^{1,q_{1}}_{\rm loc}(E)\), and by Sobolev embedding \(f\in L^{q_{2}}_{\rm loc}(E)\) for every \(q_{2}<\infty\). One final application of (3.19) proves the claim. In particular, \(f\in W^{1,r}_{\rm loc}(E)\), so the right hand side of (3.20) is finite, hence \(f\in W^{2,r}_{\rm loc}(E)\). ## 4. Proof of Theorem B and Corollary B.1 Throughout, let \(2<p<\infty\) and \(\Omega\) be a simply connected bounded domain in \(\mathbb{C}\). The restriction to simply connected domains is for convenience, but can be lifted to finitely connected domains. In this section we still consider global solutions to the Beltrami equation, but we assume \(\mu\) to be of a special form, linked to \(\Omega\) in the following way. Let \(\mu\) satisfy \[\operatorname{supp}\mu\subset\overline{\Omega},\qquad\|\mu\|_{\infty}=k=\frac{K-1}{K+1}<1,\qquad\mu\in W^{1,p}(\Omega). \tag{4.1}\] In proving Theorem B, the Beurling-Ahlfors operator \(\mathcal{S}\) is replaced by its compression to a domain \(O\), defined by \[\mathcal{S}_{O}=1_{\overline{O}}\mathcal{S}(\cdot 1_{\overline{O}}).\] Lebesgue space estimates for \(\mathcal{S}_{O}\) follow from those established for \(\mathcal{S}\) in Proposition 2.1, by simply considering functions supported in \(\overline{O}\). However, the Sobolev estimates must be approached differently. The systematic study of unweighted Sobolev estimates for compressions of Calderon-Zygmund operators was taken up by Prats and Tolsa in [28]. A completely different approach by the authors has led to general \(T(1)\) theorems and weighted estimates in Sobolev spaces in [12]. The necessary ingredients from these papers are extracted in Lemma 4.3 below. **Definition 4.1**.: We first define two norms for functions \(f:\Gamma\to\mathbb{C}\), where \(\Gamma\) is a piecewise continuous curve. Say \(f\) is Dini continuous if the norm \[\left\|f\right\|_{\rm Dini}=\int_{0}^{1}\sup_{|x-y|\leq t}|f(x)-f(y)|\ \frac{\mathrm{d}t}{t}\] is finite. Define the homogeneous Besov norm on \(\Gamma\) by \[\|f\|_{\dot{B}^{1-\frac{1}{q}}_{q,q}(\Gamma)}=\left(\int_{\Gamma}\int_{\Gamma}\frac{|f(x)-f(y)|^{q}}{|x-y|^{q}}\,\mathrm{d}s(x)\,\mathrm{d}s(y)\right)^{\frac{1}{q}},\] where \(\mathrm{d}s\) is the surface measure on \(\Gamma\). Say a bounded domain \(O\) is a Dini-smooth domain if there exist \(\delta,R>0\) such that for each \(z\in\partial O\) there exists a function \(A:\mathbb{R}\to\mathbb{R}\) with \(\|A^{\prime}\|_{\mathrm{Dini}}<\delta\) and an angle \(\theta\) such that \[O\cap B(z,R)=\left\{\mathrm{e}^{\mathrm{i}\theta}(x+iy)\in B(z,R):y>A(x)\right\}\,.\] Dini-smooth domains are the natural setting for higher order conformal estimates, as the conformal map onto a Dini-smooth domain is bi-Lipschitz. We denote \([O]_{\mathrm{Dini}}=(\delta,R)\).
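As an example, every \(C^{1,\alpha}\) domain with \(0<\alpha\leq 1\) is Dini-smooth (after rescaling so that the Dini norm falls below \(\delta\)): if \(A^{\prime}\) is \(\alpha\)-Hölder with seminorm \([A^{\prime}]_{C^{\alpha}}\), then \[\left\|A^{\prime}\right\|_{\mathrm{Dini}}\leq[A^{\prime}]_{C^{\alpha}}\int_{0}^{1}t^{\alpha-1}\,\mathrm{d}t=\frac{[A^{\prime}]_{C^{\alpha}}}{\alpha}.\]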
Given \(q>2\), we say a Dini-smooth domain \(O\) is a \(\mathcal{B}_{q}\) domain if there exists \(C>0\) such that each parameterization \(A\) described above also satisfies \[\|A^{\prime}\|_{\dot{B}^{1-\frac{1}{q}}_{q,q}(\mathbb{R})}\leq C.\] Covering the boundary \(\partial O\) by balls \(B_{j}\) of radius \(R\), and letting \(A_{j}\) denote the associated parameterizations, one has \[\sum_{j}\left\|A^{\prime}_{j}\right\|_{\dot{B}^{1-\frac{1}{q}}_{q,q}(\mathbb{R})}\sim\|N_{O}\|_{\dot{B}^{1-\frac{1}{q}}_{q,q}(\partial O)}\,, \tag{4.2}\] where \(N_{O}\) is the normal vector to the boundary \(\partial O\) and the implicit constants in (4.2) depend only on \([O]_{\mathrm{Dini}}\), see [11, Lemmata 3.1 and 3.3]. In light of (4.2), we define \[\|O\|_{\mathcal{B}_{q}}=\|N_{O}\|_{\dot{B}^{1-\frac{1}{q}}_{q,q}(\partial O)}\,.\] ### Standing assumptions and implicit constants Let us fix some parameters and establish some notational conventions for the remainder of this section. Henceforth, let \(2<r\leq p\), \(\Omega\) be a bounded simply connected \(\mathcal{B}_{p}\) domain, and \(\mu\) satisfy (4.1). Furthermore, let \(f\) be the principal solution of (B), and set \(O=f(\Omega)\). The shorthand \(\lesssim\) and \(\sim\) will denote one- or two-sided inequalities with implicit dependence on \([\Omega]_{\mathrm{Dini}}\), \([O]_{\mathrm{Dini}}\), \(\mathrm{diam}\,\Omega\), \(\mathrm{diam}\,O\), \(p\), \(K\), and \(L=\|\mu\|_{W^{1,2}(\Omega)}\). In Theorem B, the dependence will be double exponential on these quantities, so we do not track them precisely. Furthermore, \(\mathcal{G}\) and \(\mathcal{E}\) will be generic functions such that \(\mathcal{G}\) and \(\log\mathcal{E}\) have polynomial growth, which is implicitly determined by these parameters. ### Weighted estimates for compressions To apply the same strategy as in the proof of Theorem A, we will need estimates for \(\mathcal{S}_{O}\) on a certain weighted Sobolev space \(W^{1,p}(O,\omega)\). It is known that the Besov norm of the boundary normal introduced above is precisely connected to _unweighted_ Sobolev estimates for \(\mathcal{S}_{O}\); consult the references [11, 25, 29]. In particular, \[\|\mathcal{S}_{O}\|_{\mathcal{L}(W^{1,p}(O))}\sim 1+\|O\|_{\mathcal{B}_{p}}\,,\qquad p>2. \tag{4.3}\] An analog of the quantitative geometric characterization (4.3) for \(\|\mathcal{S}_{O}\|_{\mathcal{L}(W^{1,p}(O,\omega))}\) is not currently available, though qualitatively the two are equivalent. In fact, the following lemma demonstrates the sharpness of our assumption on \(O=f(\Omega)\) in Theorem B. **Lemma 4.2**.: _If \(I-\mu\mathcal{S}_{\Omega}\) is invertible on \(W^{1,p}(\Omega)\), then_ \[\mathcal{S}_{O}:W^{1,p}(O,\omega)\to W^{1,p}(O,\omega),\qquad\omega=\left|Jf^{-1}\right|^{1-p}.\] The proof is postponed until §4.4 below. Concerning quantitative geometric conditions on \(O\), using the \(T(1)\) theorems developed by the authors in [12], we can give some alternatives of varying degrees of sharpness. **Lemma 4.3**.: _Set \(\omega=\left|Jf^{-1}\right|^{1-p}\). Then,_ i. \(\left\|\mathcal{S}_{O}\right\|_{\mathcal{L}(W^{1,p}(O,\omega))}\lesssim\left(\left\|Jf\right\|_{\infty}\|Jf^{-1}\|_{\infty}\right)^{1-\frac{1}{p}}\left(1+\|O\|_{\mathcal{B}_{p}}\right)\)_._ ii. \(\left\|\mathcal{S}_{O}\right\|_{\mathcal{L}(W^{1,p}(O,\omega))}\lesssim\left(1+\frac{\left\|\mathcal{S}_{O}(1)\right\|_{W^{1,p}(O,\omega)}}{\left\|1\right\|_{L^{p}(O,\omega)}}\right)\mathcal{E}\left(\|\mu\|_{W^{1,r}(\Omega)}\right)\)_._ iii.
_For any_ \(\varepsilon>0\)_,_ \(\left\|\mathcal{S}_{O}\right\|_{\mathcal{L}(W^{1,p}(O,\omega))}\lesssim\left(1+\|O\|_{\mathcal{B}_{p+\varepsilon}}\right)\mathcal{E}\left(\frac{1}{\varepsilon},\|\mu\|_{W^{1,r}(\Omega)}\right)\)_._ **Remark 4.4**.: Lemma 4.3.i is achieved by appealing to the unweighted estimate (4.3), and crude bounds on \(\left\|\left|Jf\right|^{-1}\right\|_{\infty}\) and \(\|Jf\|_{\infty}\) can be derived from (4.7) of Proposition 4.8. In this way, if the reader is interested in a quick bound for \((I-\mu\mathcal{S}_{\Omega})^{-1}\) which bypasses the more difficult weighted Sobolev theory of Calderon-Zygmund operators that ii. and iii. rely on, one can appeal to the logarithmic Sobolev inequality \[\left\|g\right\|_{\infty}\lesssim\frac{1}{q-2}\left(1+\left\|g\right\|_{W^{1,2}(\Omega)}\right)\log\left(\mathrm{e}+\left\|g\right\|_{W^{1,q}(\Omega)}\right),\qquad q>2,\] to obtain \[\left\|\mathcal{S}_{O}\right\|_{\mathcal{L}(W^{1,p}(O,\omega))}\lesssim\left(1+\left\|O\right\|_{\mathcal{B}_{q}}+\left\|\Omega\right\|_{\mathcal{B}_{q}}+\left\|\mu\right\|_{W^{1,q}(\Omega)}\right)^{\frac{C}{q-2}}\left\|O\right\|_{\mathcal{B}_{p}}.\] Notice that the middle factor blows up exponentially as \(q\to 2\), while weighted estimates as in, e.g., Lemma 4.3.iii and Corollary B.1 produce an absolute polynomial bound in terms of the boundary data \(\left\|\Omega\right\|_{\mathcal{B}_{p}}\) and \(\left\|O\right\|_{\mathcal{B}_{p}}\). To prove Lemma 4.3, we need to verify that the weights \(\omega=\left|Jf^{-1}\right|^{1-p}\) belong to the appropriate Muckenhoupt \(\mathrm{A}_{p}\) class for the results from [12] to apply. The following adaptation of Lemma 2.2 is proved in §5.2. **Lemma 4.5**.: _Let \(F\in W^{1,2}_{\mathrm{loc}}(\Omega)\) be a homeomorphism satisfying_ \[\overline{\partial}F(z)=\mu(z)\partial F(z),\quad z\in\Omega,\] _and further assume that \(F(\Omega)\) is a bounded \(\mathcal{B}_{r}\) domain. Then, for each \(a\in\mathbb{R}\) and \(1<q<\infty\), there exists \(v:\mathbb{C}\to[0,\infty]\) such that_ \[v=\left|JF^{-1}\right|^{a}\text{ on }F(\Omega),\] _and_ \[\left[v\right]_{\mathrm{A}_{q}(\mathbb{C})}\lesssim\mathcal{E}\left(a,q,\tfrac{1}{q-1},\left\|\mu\right\|_{W^{1,r}(\Omega)}\right). \tag{4.4}\] Proof of Lemma 4.3.: i. follows by pulling the weight \(\omega\) outside, applying the unweighted estimate (4.3), and then reinserting the weight. To prove ii. and iii., we rely on the weighted Sobolev estimates for Calderon-Zygmund operators on domains established in [12]. According to Lemma 4.5, \(\omega\) does belong to the relevant weight classes. The main result for compressions of Calderon-Zygmund operators from [12, Corollary B.1 and Lemma 4.22] states that if the weight \(\omega\) possesses an extension which belongs to \(\mathrm{A}_{s}(\mathbb{C})\) for some \(1<s<\frac{p}{2}\), then it suffices to test \(\mathcal{S}_{O}\) on the constant function \(1\). Letting \(v\) be the extension of \(\omega\) provided by Lemma 4.5, \[\left[v\right]_{\mathrm{A}_{\frac{p+2}{4}}(\mathbb{C})}\lesssim\mathcal{E}\left(\left\|\mu\right\|_{W^{1,r}(\Omega)}\right),\] from which ii. follows. To prove iii., let \(\varepsilon>0\), set \(s=\frac{p+\varepsilon}{\varepsilon}\), and let \(\tilde{v}\) be the extension of \(\omega^{s}\) from Lemma 4.5. Let \(Q\) be a large cube so that \(O\subset Q\) and \(|O|\sim|Q|\).
Standard considerations show that \[\left(\int_{O}\omega^{s}\right)^{\frac{1}{ps}}\lesssim\mathcal{G}\left([\tilde{v}]_{\mathrm{A}_{2}(\mathbb{C})}\right)\left(\int_{O}\omega\right)^{\frac{1}{p}}.\] Therefore, by Holder's inequality and (4.3), \[\|\mathcal{S}_{O}(1)\|_{W^{1,p}(O,\omega)}\leq\|\mathcal{S}_{O}(1)\|_{W^{1,p+\varepsilon}(O)}\left\|\omega^{\frac{1}{p}}\right\|_{L^{ps}(O)}\lesssim\left(1+\|O\|_{\mathcal{B}_{p+\varepsilon}}\right)\mathcal{G}\left([\tilde{v}]_{\mathrm{A}_{2}(\mathbb{C})}\right)\|1\|_{L^{p}(O,\omega)},\] and iii. is now a consequence of ii. and (4.4). The second consequence of Lemma 4.5 is the following analogue of Lemma 2.3. **Proposition 4.6**.: _For each \(1<s<\infty\),_ \[\left\|(I-\mu\mathcal{S}_{\Omega})^{-1}\right\|_{\mathcal{L}(L^{s}(\Omega))}\lesssim\mathcal{E}\left(s,\tfrac{1}{s-1},\|\Omega\|_{\mathcal{B}_{2}},\|\mu\|_{W^{1,r}(\Omega)},\|f(\Omega)\|_{\mathcal{B}_{2}}\right).\] Proof.: The proof consists of one small modification to the argument leading to (2.8), which we outline. Let \(g\in\mathcal{C}^{\infty}(\mathbb{C})\) and set \(w=\mathcal{K}(\mathbf{1}_{\overline{\Omega}}g)\) where \(\mathcal{K}\) is the Cauchy transform defined in (2.2). Setting \(h=(I-\mu\mathcal{S}_{\Omega})(\mathbf{1}_{\overline{\Omega}}g)\), one can verify \[\overline{\partial}w=\mu\partial w+h\] and the desired estimate will follow from \(\|\overline{\partial}w\|_{L^{s}(\Omega)}\lesssim\|h\|_{L^{s}(\Omega)}\). Let \(f\) be \(\mu\)-quasiconformal and set \(u=w\circ f^{-1}\). The chain rule shows that \(\overline{\partial}w=(\partial u\circ f)\overline{\partial}f+(\overline{\partial}u\circ f)\overline{\partial f}\) and the equations for \(w\) and \(f\) imply \((1-|\mu|^{2})^{-1}h=(\overline{\partial}u\circ f)\overline{\partial f}\). Therefore it remains to estimate \((\partial u\circ f)\overline{\partial}f\) by \(h\) in \(L^{s}(\Omega)\)-norm. Changing variables, using (2.4) and (2.1), \[\int_{\Omega}\left|(\partial u\circ f)\overline{\partial}f\right|^{s}\lesssim\int_{f(\Omega)}\left|\mathcal{S}(\overline{\partial}u)\right|^{s}\left|Jf^{-1}\right|^{1-s/2}.\] Letting \(v\) be the extension of \(\left|Jf^{-1}\right|^{1-\frac{s}{2}}\) from Lemma 4.5, and applying Proposition 2.1, \[\int_{f(\Omega)}\left|\mathcal{S}(\overline{\partial}u)\right|^{s}\left|Jf^{-1}\right|^{1-s/2}\leq\left\|\mathcal{S}(\overline{\partial}u)\right\|_{L^{s}(\mathbb{C},v)}^{s}\leq\mathcal{G}\left([v]_{\mathrm{A}_{s}(\mathbb{C})}\right)\left\|\overline{\partial}u\right\|_{L^{s}(\mathbb{C},v)}^{s}.\] The proof is concluded by (4.4) and by recalling that \(\overline{\partial}u\) is supported on \(f(\Omega)\), so that by a change of variables, \(\left\|\overline{\partial}u\right\|_{L^{s}(\mathbb{C},v)}\sim\|h\|_{L^{s}(\Omega)}\). ### Conformal estimates The final ingredients for the proof of Theorem B are the following conformal estimates, which are qualitatively contained in [4]. The first one is a quantification of [4, Theorem 1.2]. **Lemma 4.7**.: _Let \(O_{1}\) and \(O_{2}\) be simply connected \(\mathcal{B}_{p}\) domains, and let \(g\) conformally map \(O_{1}\) onto \(O_{2}\). Then \(\log g^{\prime}\in W^{1,p}(O_{1})\) with_ \[\left\|[\log g^{\prime}]^{\prime}\right\|_{L^{q}(O_{1})}\lesssim 1+\|O_{1}\|_{\mathcal{B}_{q}}+\|O_{2}\|_{\mathcal{B}_{q}},\qquad 1<q\leq p. \tag{4.5}\] Proof.: It is shown in [4, Theorem 1.2] that if \(h_{j}\) is a conformal mapping from \(\mathbb{D}\) onto \(O_{j}\), then \(h_{j}^{\prime},\log h_{j}^{\prime}\in W^{1,p}(\mathbb{D})\) for a suitable branch of the logarithm.
To track the dependence on \(\left\|O_{j}\right\|_{\mathcal{B}_{p}}\), let us outline their argument. For \(\mathrm{e}^{it}\in\mathbb{T}\), it is not hard to compute that \[\arg N_{O_{j}}(h_{j}(\mathrm{e}^{it}))=-\arg h_{j}^{\prime}(\mathrm{e}^{it})+t,\] so that \(1+\left\|O_{j}\right\|_{\mathcal{B}_{q}}\sim\left\|\arg h^{\prime}_{j}\right\|_{B^{1-\frac{1}{q}}_{q,q}(\mathbb{T})}\) for any \(q>1\). Furthermore, the Herglotz extension maps \(B_{q,q}^{1-\frac{1}{q}}(\mathbb{T})\to W^{1,q}(\mathbb{D})\) [30, Theorem 4.3.3], which together with the well-known Herglotz representation [24, Theorem 3.3.2], yields \[\log h^{\prime}_{j}=\log\left|h^{\prime}_{j}(0)\right|+H_{j},\qquad\left\|H_{j}\right\|_{W^{1,q}(\mathbb{D})}\lesssim 1+\left\|O_{j}\right\|_{\mathcal{B}_{q}}. \tag{4.6}\] Now \(g\) can be factored as \(h_{2}\circ h_{1}^{-1}\). By the chain rule and the inverse function theorem, \[[\log g^{\prime}]^{\prime}=\frac{\tilde{H}}{h^{\prime}_{1}}\circ h_{1}^{-1},\qquad\tilde{H}=\left[\log h^{\prime}_{2}\right]^{\prime}-\left[\log h^{\prime}_{1}\right]^{\prime}.\] So, changing variables, \[\int_{O_{1}}\left|[\log g^{\prime}]^{\prime}\right|^{q}=\int_{\mathbb{D}}\left|\tilde{H}\right|^{q}\left|h^{\prime}_{1}\right|^{2-q}.\] Because \(O_{1}\) is Dini-smooth, \(h^{\prime}_{1}\) is bounded above and below [24, Theorem 3.3.5], so the proof is concluded by appealing to (4.6). Immediately from Lemma 4.7 and Theorem A, we obtain a quantitative version of [4, Theorem 1.1] in the Sobolev case. **Proposition 4.8**.: _Let \(F\in W^{1,2}_{\mathrm{loc}}(\Omega)\) be a homeomorphism satisfying_ \[\overline{\partial}F(z)=\mu(z)\partial F(z),\qquad z\in\Omega,\] _such that \(F(\Omega)\) is also a bounded \(\mathcal{B}_{p}\) domain. Then,_ \[\left\|D\log\partial F\right\|_{L^{p}(\Omega)}\lesssim\mathcal{E}\left(\|\mu\|_{W^{1,r}(\Omega)}\right)\left[\left\|F(\Omega)\right\|_{\mathcal{B}_{p}}+\left(1+\left\|\Omega\right\|_{\mathcal{B}_{p}}\right)\left(1+\left\|\mu\right\|_{W^{1,p}(\Omega)}^{3}\right)\right], \tag{4.7}\] \[\left\|D\log\partial F\right\|_{L^{2}(\Omega)}\lesssim\left\|F(\Omega)\right\|_{\mathcal{B}_{2}}+\left(1+\left\|\Omega\right\|_{\mathcal{B}_{2}}\right)\left(1+\left\|\mu\right\|_{W^{1,r}(\Omega)}^{3}\right). \tag{4.8}\] Proof.: Let \(E\mu\) be an extension of \(\mu\) from \(\Omega\) to \(\mathbb{C}\) satisfying \[\left\|E\mu\right\|_{\infty}\leq\varkappa(k)<1,\qquad\left\|E\mu\right\|_{W^{1,q}(\mathbb{C})}\lesssim\left\|\mu\right\|_{W^{1,q}(\Omega)},\quad 2\leq q\leq p.\] The difficulty in constructing \(E\mu\) is to guarantee the first property. To do so, one needs only to modify the parameters in the usual first order extension [14, Theorem 5.4.1]. To demonstrate, extending over the line \(x_{n}=0\), one can use \[Eu(x_{1},\ldots,x_{n})=-\varepsilon u\left(x_{1},\ldots,x_{n-1},-\tfrac{3}{2\varepsilon}x_{n}\right)+(1+\varepsilon)u\left(x_{1},\ldots,x_{n-1},-\tfrac{1}{2(1+\varepsilon)}x_{n}\right),\] where \(\varepsilon>0\) is chosen small enough that \(k(1+2\varepsilon)=\varkappa(k)<1\). Composing with \(\mathcal{C}^{1}\) boundary parameterizations does not affect the \(L^{\infty}\) norm of the extension and will only add a constant to the relevant Sobolev norms. Let \(G\) be the principal solution to \(\overline{\partial}G=(E\mu)\partial G\), which has the well-known solution formula \(G(z)=z+\mathcal{K}(\rho)\) where \((I-(E\mu)\mathcal{S})\rho=E\mu\). By Theorem A, \[\left\|G\right\|_{W^{2,s}(\mathbb{C})}\lesssim 1+\left\|E\mu\right\|_{W^{1,s}(\mathbb{C})}^{3}\lesssim 1+\left\|\mu\right\|_{W^{1,s}(\Omega)}^{3},\qquad s\in\{r,p\}.
\tag{4.9}\] Since \(\Omega\) is a \(\mathcal{B}_{p}\) domain and \(G\in W^{2,p}(\mathbb{C})\), standard trace results [15, 30] together with (4.9) show that \(G(\Omega)\) is also a \(\mathcal{B}_{p}\) domain, and moreover the estimates \[\left\|G(\Omega)\right\|_{\mathcal{B}_{2}}\lesssim\left\|\Omega\right\|_{\mathcal{B}_{2}}\left(1+\left\|\mu\right\|_{W^{1,r}(\Omega)}^{3}\right),\qquad\left\|G(\Omega)\right\|_{\mathcal{B}_{p}}\lesssim\left\|\Omega\right\|_{\mathcal{B}_{p}}\left(1+\left\|\mu\right\|_{W^{1,p}(\Omega)}^{3}\right) \tag{4.10}\] hold. Introducing \(g=F\circ G^{-1}\), \(g\) is conformal on \(G(\Omega)\) with \(g(G(\Omega))=F(\Omega)\). Furthermore, \[\partial\log\partial F=\frac{\partial^{2}F}{\partial F}=\frac{(g^{\prime}\circ G)\partial^{2}G}{(g^{\prime}\circ G)\partial G}+\frac{(g^{\prime\prime}\circ G)\left(\partial G\right)^{2}}{(g^{\prime}\circ G)\partial G}\eqqcolon F_{1}+F_{2}.\] Recall that \(\partial G=\mathrm{e}^{\sigma}\), for \(\sigma\) satisfying \(\overline{\partial}\sigma=(E\mu)\partial\sigma+\partial(E\mu)\). Noticing that \(F_{1}=\partial\sigma\) and applying (3.3), we have \[\|F_{1}\|_{L^{q}(\mathbb{C})}\leq\|\sigma\|_{W^{1,q}(\mathbb{C})}\lesssim\|E\mu\|_{W^{1,q}(\mathbb{C})}\lesssim\|\mu\|_{W^{1,q}(\Omega)},\qquad 2\leq q\leq p,\] which obeys the estimates (4.7) and (4.8). To estimate \(F_{2}\), change variables and use the quasiconformality of \(G\) in (2.4) to obtain, for any \(2\leq q\leq p\), \[\int_{\Omega}|F_{2}|^{q}\lesssim\int_{G(\Omega)}\left|\frac{g^{\prime\prime}}{g^{\prime}}\right|^{q}\left(|JG|\circ G^{-1}\right)^{\frac{q}{2}-1}\leq\left\|\left[\log g^{\prime}\right]^{\prime}\right\|_{L^{q}(G(\Omega))}^{q}\|JG\|_{\infty}^{\frac{q}{2}-1}. \tag{4.11}\] For \(q\in\{2,p\}\), estimate the first factor by (4.5) from Lemma 4.7 and (4.10). When \(q=2\), the \(JG\) term vanishes in (4.11), so (4.8) is established. When \(q=p\), estimate \(JG\) by the Sobolev embedding and (4.9) with \(s=r\). ### Proof of Lemma 4.2 By (1.11) and (4.3), the assumptions of this lemma imply \(f\in W^{2,p}(\Omega)\). This implies first that \(|Jf|\in L^{\infty}(\Omega)\) and second that \(f(\Omega)\) is a \(\mathcal{B}_{p}\) domain. Therefore, by Lemma 4.3.i, the conclusion will follow if we can show that \(|Jf|^{-1}\sim|\partial f|^{-2}\in L^{\infty}\). To this end, by (4.7) in Proposition 4.8, \(\log\partial f\in W^{1,p}(\Omega)\) and hence \[|\partial f|^{-1}=|\exp(-\log\partial f)|\lesssim\exp\left(|\log\partial f|\right)\] belongs to \(L^{\infty}\) by Sobolev embedding. ### Proof of Theorem B Let \(g\in C_{0}^{\infty}(\mathbb{C})\) and set \(w=\mathcal{K}(1_{\overline{\Omega}}g)\) so that \(w\) satisfies the inhomogeneous Beltrami equation \(\overline{\partial}w=\mu\partial w+h\) on \(\mathbb{C}\) with \(h=(I-\mu\mathcal{S}_{\Omega})(1_{\overline{\Omega}}g)\). Notice that \(\overline{\partial}w=1_{\overline{\Omega}}g\in\mathcal{C}^{\infty}(\overline{\Omega})\) and, since \(\Omega\) is a \(\mathcal{B}_{p}\) domain, \(\partial w\) and \(h\) both belong to \(W^{1,p}(\Omega)\).
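To verify the equation for \(w\), recall from Section 2 that \(\overline{\partial}\mathcal{K}=I\) and \(\partial\mathcal{K}=\mathcal{S}\), so that \(\overline{\partial}w=1_{\overline{\Omega}}g\) and \(\partial w=\mathcal{S}(1_{\overline{\Omega}}g)\); since \(\operatorname{supp}\mu\subset\overline{\Omega}\) by (4.1), \[\overline{\partial}w-\mu\partial w=1_{\overline{\Omega}}g-\mu 1_{\overline{\Omega}}\mathcal{S}(1_{\overline{\Omega}}g)=(I-\mu\mathcal{S}_{\Omega})(1_{\overline{\Omega}}g)=h.\]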
The precise estimate we will prove is \[\left\|\overline{\partial}w\right\|_{W^{1,p}(\Omega)}\lesssim\operatorname{R}\operatorname{P}\left[\|\mathcal{S}_{O}\|_{\mathcal{L}(W^{1,p}(O,\omega))}+\operatorname{P}\left(1+\|\Omega\|_{\mathcal{B}_{p}}\right)\|O\|_{\mathcal{B}_{p}}\right]\|h\|_{W^{1,p}(\Omega)},\] \[\operatorname{R}=\mathcal{E}\left(\|\mu\|_{W^{1,p}(\Omega)}\right),\qquad\operatorname{P}=\|O\|_{\mathcal{B}_{p}}+\left(1+\|\Omega\|_{\mathcal{B}_{p}}\right)\left(1+\|\mu\|_{W^{1,p}}^{3}\right). \tag{4.12}\] We will repeatedly use the facts that \(\operatorname{R}^{2}\sim\operatorname{R}\) and that \(\|\Omega\|_{\mathcal{B}_{2}},\|O\|_{\mathcal{B}_{2}}\lesssim 1\) by virtue of the standing assumption that the domains are Dini-smooth. To establish (4.12), let us normalize so that \(\|h\|_{W^{1,p}(\Omega)}=1\). By Sobolev embedding and interpolation, since \(p>2\), \[\|h\|_{L^{s}(\Omega)}\lesssim_{p,s}1,\qquad p\leq s\leq\infty. \tag{4.13}\] Defining \(u=w\circ f^{-1}\), one can check \[\overline{\partial}w=(\partial u\circ f)\overline{\partial}f+H,\qquad\partial w=(\partial u\circ f)\partial f+\overline{\mu}H,\qquad H\coloneqq\frac{h}{1-|\mu|^{2}}=\left(\overline{\partial}u\circ f\right)\overline{\partial f}. \tag{4.14}\] Let \(q\in\left\{\frac{pp^{\prime}}{2},p\right\}\). Setting \(\sigma=\log\partial f\), by (4.7) we have the analogous estimates to (3.6) and (3.8): \[\|H\|_{W^{1,q}(\Omega)}\lesssim 1+\|\mu\|_{W^{1,p}(\Omega)};\qquad\left\|\left(\overline{\partial f}\right)^{2}\left[D\overline{\partial}u\circ f\right]\right\|_{L^{q}(\Omega)}\lesssim 1+\|\mu\|_{W^{1,p}(\Omega)}+\|\partial\sigma\|_{L^{p}(\Omega)}\lesssim\operatorname{R}\operatorname{P}. \tag{4.15}\] In light of the identity \[D\overline{\partial}w=D(\partial u\circ f)\overline{\partial}f+(\partial u\circ f)D\overline{\partial}f+DH, \tag{4.16}\] we want to obtain \(L^{q}(\Omega)\) bounds for the first two terms in (4.16). To handle the first term, notice that since \(h\) is supported in \(\overline{\Omega}\), (4.14) implies that \(\overline{\partial}u\) is supported in \(\overline{O}\). Therefore, \(1_{\overline{O}}\partial u=\mathcal{S}_{O}\overline{\partial}u\) whence, following the same steps as (3.10), \[\int_{\Omega}\left|D(\partial u\circ f)\overline{\partial}f\right|^{q}\lesssim\int_{O}\left|D\mathcal{S}_{O}(\overline{\partial}u)\right|^{q}\left|Jf^{-1}\right|^{1-q}.\] Setting \(\omega_{q}=\left|Jf^{-1}\right|^{1-q}\), \[\left(\int_{O}\left|D\mathcal{S}_{O}(\overline{\partial}u)\right|^{q}\left|Jf^{-1}\right|^{1-q}\right)^{1/q}\lesssim\|\mathcal{S}_{O}\|_{\mathcal{L}(W^{1,q}(O,\omega_{q}))}\left[\left\|D\overline{\partial}u\right\|_{L^{q}(O,\omega_{q})}+\left\|\overline{\partial}u\right\|_{L^{q}(O,\omega_{q})}\right].\] The first summand within the bracket is bounded by (4.15). Referring to (4.14), the second summand is bounded by \(\|h\partial f\|_{L^{p}(\Omega)}\). However, \(\partial f=1+\mathcal{S}(I-\mu\mathcal{S}_{\Omega})^{-1}\mu\in L^{p}(\Omega)\) by Proposition 4.6, and the \(L^{\infty}\)-norm of \(h\) is controlled by (4.13). We will now have to consider the two cases \(q=\frac{pp^{\prime}}{2}\) and \(q=p\) separately, so we summarize the proof thus far by \[\|\overline{\partial}w\|_{W^{1,q}(\Omega)}\lesssim\operatorname{R}\operatorname{P}\|\mathcal{S}_{O}\|_{\mathcal{L}(W^{1,q}(O,\omega_{q}))}+\|(\partial u\circ f)D\overline{\partial}f\|_{L^{q}(\Omega)}. \tag{4.17}\] Let us first consider \(q=\frac{pp^{\prime}}{2}<p\).
By Lemma 4.3.iii, \[\|\mathcal{S}_{O}\|_{\mathcal{L}(W^{1,q}(O,\omega_{q}))}\lesssim\operatorname{R}\left(1+\|O\|_{\mathcal{B}_{p}}\right),\qquad q=\frac{pp^{\prime}}{2}. \tag{4.18}\] To estimate the final term in (4.17) in this case, define \(\lambda\coloneqq D\mu+\mu D\sigma\), which belongs to \(L^{p}(\Omega)\) by (4.15) and satisfies \(D\overline{\partial}f=\lambda\partial f\). Taking \(s=\frac{pp^{\prime}}{2-p^{\prime}}<\infty\), applying Holder's inequality, arguing as in the proof of Proposition 4.6 with (4.4), and recalling (4.7), \[\|(\partial u\circ f)\partial f\cdot\lambda\|_{L^{q}(\Omega)}\lesssim\|(\partial u\circ f)\partial f\|_{L^{s}(\Omega)}\|\lambda\|_{L^{p}(\Omega)}\lesssim\operatorname{R}\|h\|_{L^{s}(\Omega)}\|\lambda\|_{L^{p}(\Omega)}\lesssim\operatorname{R}\operatorname{P}.\] Inserting this estimate into (4.17), using (4.18) and (4.3), we obtain \[\|\partial w\|_{W^{1,q}(\Omega)}\lesssim\left(1+\|\Omega\|_{\mathcal{B}_{q}}\right)\|\overline{\partial}w\|_{W^{1,q}(\Omega)}\lesssim\operatorname{R}\operatorname{P}\left(1+\|\Omega\|_{\mathcal{B}_{p}}\right)\left(1+\|O\|_{\mathcal{B}_{p}}\right),\qquad q=\frac{pp^{\prime}}{2}. \tag{4.19}\] Since \(\frac{pp^{\prime}}{2}>2\), by Sobolev embedding, (4.19) establishes an upper bound for the \(L^{\infty}(\Omega)\) norm of \(\partial w\). Referring back to (4.14) and (4.13), (4.19) further implies \[\|(\partial u\circ f)\partial f\|_{L^{\infty}(\Omega)}\lesssim\operatorname{R}\operatorname{P}\left(1+\|\Omega\|_{\mathcal{B}_{p}}\right)\left(1+\|O\|_{\mathcal{B}_{p}}\right).\] Now we can pick up at (4.17) with \(q=p\) and immediately estimate \[\|(\partial u\circ f)D\overline{\partial}f\|_{L^{p}(\Omega)}\lesssim\|(\partial u\circ f)\partial f\|_{L^{\infty}(\Omega)}\|\lambda\|_{L^{p}(\Omega)}\lesssim\operatorname{R}\operatorname{P}^{2}\left(1+\|\Omega\|_{\mathcal{B}_{p}}\right)\left(1+\|O\|_{\mathcal{B}_{p}}\right),\] which clearly implies (4.12), concluding the proof. ## 5. Muckenhoupt weights and Jacobians of Quasiconformal maps In this section we prove Lemmata 2.2 and 4.5 concerning the Muckenhoupt weight properties of \(|Jf|\) and \(\left|Jf^{-1}\right|\). The main tool is the following critical Sobolev embedding lemma of Moser-Trudinger type.
**Lemma 5.1**.: _There exists an absolute constant \(C\) such that for every \(\sigma:\mathbb{C}\to\mathbb{C}\) with \(D\sigma\in L^{2}(\mathbb{C})\), \(a\in\mathbb{R}\), and \(1<p<\infty\),_ \[\sup_{Q\subset\mathbb{C}}\left\langle|\mathrm{e}^{a\sigma}|\right\rangle_{Q}\left\langle\left|\mathrm{e}^{-a\sigma}\right|\right\rangle_{\frac{1}{p-1},Q}\leq C\exp\left(C\frac{p}{p-1}|a|^{2}\left\|D\sigma\right\|_{L^{2}(\mathbb{C})}^{2}\right).\] Proof.: Using the Taylor formula, we can write, for any cube \(Q\subset\mathbb{C}\) and any \(z\in Q\), \[|\sigma(z)-\sigma_{Q}|\lesssim\int_{Q}\frac{|D\sigma(w)|}{|w-z|}\,\mathrm{d}w,\quad\sigma_{Q}:=\frac{1}{|Q|}\int_{Q}\sigma(w)\,\mathrm{d}w.\] For any \(p>2\), by Young's inequality, \[\int_{Q}\left|\int_{Q}\frac{|D\sigma(w)|}{|w-z|}\,\mathrm{d}w\right|^{p}\,\mathrm{d}z\leq\left\|D\sigma\right\|_{L^{2}(Q)}^{p}\left(\int_{[-\ell(Q),\ell(Q)]^{2}}|z|^{-r}\,\mathrm{d}z\right)^{p/r},\quad\frac{1}{2}+\frac{1}{r}=\frac{1}{p}+1.\] Notice that by definition \(r<2\), so \[\int_{[-\ell(Q),\ell(Q)]^{2}}|z|^{-r}\,\mathrm{d}z\sim\frac{\ell(Q)^{2-r}}{2-r}.\] Moreover, direct calculations show that \(2-r=\frac{4}{p+2}\) and \(p/r=p/2+1\), so that \[\frac{1}{|Q|}\int_{Q}|\sigma(z)-\sigma_{Q}|^{p}\,\mathrm{d}z\lesssim\left\|D\sigma\right\|_{L^{2}(Q)}^{p}\left(\frac{p+2}{4}\right)^{p/2+1}.\] For each \(a>0\), we can now compute \[\frac{1}{|Q|}\int_{Q}\mathrm{e}^{a|\sigma(z)-\sigma_{Q}|}\,\mathrm{d}z=\sum_{k=0}^{\infty}\frac{a^{k}}{k!}\frac{1}{|Q|}\int_{Q}|\sigma(z)-\sigma_{Q}|^{k}\,\mathrm{d}z\lesssim\sum_{k=0}^{\infty}(a\|D\sigma\|_{L^{2}})^{k}\frac{k^{k/2}}{k!}. \tag{5.1}\] A crude estimate for the power series is given by \[\sum_{k=0}^{\infty}A^{k}\frac{k^{k/2}}{k!}\lesssim\exp(CA^{2}),\] for some absolute constant \(C\), by splitting into even and odd integers, using Stirling's approximation, and the crude estimate \((2k)!\geq(k!)^{2}\). So, for any \(a\in\mathbb{R}\) and \(p>1\), writing \(L=\|D\sigma\|_{L^{2}(\mathbb{C})}\), by (5.1), \[\frac{1}{|Q|}\int_{Q}|\mathrm{e}^{a\sigma}|\left(\frac{1}{|Q|}\int_{Q}\left|\mathrm{e}^{-\frac{a}{p-1}\sigma}\right|\right)^{p-1}=\frac{1}{|Q|}\int_{Q}\left|\mathrm{e}^{a(\sigma-\sigma_{Q})}\right|\left(\frac{1}{|Q|}\int_{Q}\left|\mathrm{e}^{-\frac{a}{p-1}(\sigma-\sigma_{Q})}\right|\right)^{p-1}\leq\frac{1}{|Q|}\int_{Q}\mathrm{e}^{|a|\cdot|\sigma-\sigma_{Q}|}\left(\frac{1}{|Q|}\int_{Q}\mathrm{e}^{\frac{|a|}{p-1}|\sigma-\sigma_{Q}|}\right)^{p-1}\leq C\exp\left(C|a|^{2}L^{2}\right)\exp\left(\frac{C|a|^{2}L^{2}}{p-1}\right).\] ### Proof of Lemma 2.2 As in the proof of Theorem A, \(\partial f=\mathrm{e}^{\sigma}\) where \(\sigma\) satisfies \[\overline{\partial}\sigma=\mu\partial\sigma+\partial\mu\qquad\text{on }\mathbb{C},\] so that \[(I-\mu\mathcal{S})\overline{\partial}\sigma=\partial\mu\qquad\text{on }\mathbb{C}.\] Since \(\|\mathcal{S}\|_{\mathcal{L}(L^{2}(\mathbb{C}))}=1\) and \(\|\mu\|_{\infty}\leq k\), \[\|\partial\mu\|_{L^{2}(\mathbb{C})}=\left\|(I-\mu\mathcal{S})\overline{\partial}\sigma\right\|_{L^{2}(\mathbb{C})}\geq(1-k)\left\|\overline{\partial}\sigma\right\|_{L^{2}(\mathbb{C})}.\] Therefore, \(\|D\sigma\|_{L^{2}(\mathbb{C})}\lesssim L\). Now, the Jacobian takes the form \[|Jf|=(1-|\mu|^{2})\left|\mathrm{e}^{2\sigma}\right|\sim\left|\mathrm{e}^{2\sigma}\right|,\] so Lemma 5.1 establishes that \(|Jf|^{a}\) belongs to \(\mathrm{A}_{p}(\mathbb{C})\) with the estimate \[\left[|Jf|^{a}\right]_{\mathrm{A}_{p}(\mathbb{C})}\leq C\exp\left(\frac{Cp|a|^{2}L^{2}}{p-1}\right). \tag{5.2}\] We next demonstrate that (5.2) implies that \(|Jf|^{a}\) belongs to every \(\mathrm{RH}_{s}(\mathbb{C})\).
Indeed, for each \(1<s<\infty\), by Holder's inequality, for each cube \(Q\subset\mathbb{C}\), setting \(p=\frac{s+1}{s}\), \[|Q|^{p}=\left(\int_{Q}|Jf|^{\frac{a}{p}}\,|Jf|^{-\frac{a}{p}}\right)^{p}\leq\left(\int|Jf|^{a}\right)\left(\int|Jf|^{-as}\right)^{p-1}\leq\left(\int|Jf|^{a}\right)\left(\left[|Jf|^{as}\right]_{\mathrm{A}_{2}(\mathbb{C})}\,|Q|^{2}\left(\int|Jf|^{as}\right)^{-1}\right)^{p-1}.\] Therefore, applying (5.2) and rearranging the above display, \[\left[|Jf|^{a}\right]_{\mathrm{RH}_{s}(\mathbb{C})}\coloneqq\sup_{Q\text{ cube}}\left\langle|Jf|^{a}\right\rangle_{s,Q}\left\langle|Jf|^{a}\right\rangle_{Q}^{-1}\leq C\exp\left(C|a|^{2}sL^{2}\right). \tag{5.3}\] Now, to handle \(\left|Jf^{-1}\right|^{a}\) we use the identity, valid for any \(t\in\mathbb{R}\), \[\int_{Q}\left|Jf^{-1}\right|^{t}=\int_{f^{-1}(Q)}|Jf^{-1}\circ f|^{t}|Jf|=\int_{f^{-1}(Q)}|Jf|^{1-t}. \tag{5.4}\] Since \(f\) is quasiconformal, \(f\) and \(f^{-1}\) are quasisymmetric (see e.g. [2, Corollary 3.10.4]). Therefore, there exist cubes \(R,P\) such that \(R\subset f^{-1}(Q)\subset P\) with \(|R|\sim|P|\). So, we claim that for every \(t\in\mathbb{R}\), \[\frac{1}{|P|^{t}}\left(\int_{P}|Jf|\right)^{t-1}\int_{P}|Jf|^{1-t}\lesssim\left\{\begin{array}{cc}\exp\left(C(1-t)^{2}L^{2}\right),&1-t>1;\\ 1,&0\leq 1-t\leq 1;\\ \exp\left(Ct(t-1)L^{2}\right),&1-t<0.\end{array}\right. \tag{5.5}\] When \(1-t>1\), (5.5) is simply the \(\mathrm{RH}_{1-t}\) property of \(|Jf|\) from (5.3). If \(0\leq 1-t\leq 1\), then (5.5) follows by Holder's inequality. Finally, if \(t>1\), apply the \(\mathrm{A}_{p}(\mathbb{C})\) condition for \(|Jf|\) with \(1-t=\frac{-1}{p-1}\) from (5.2). Thus (5.5) is established. Finally, for any \(a\in\mathbb{R}\) and \(1<p<\infty\), applying (5.4) followed by (5.5) with \(t=a\) and \(t=-\frac{a}{p-1}\), \[\left(\frac{1}{|Q|}\int_{Q}\left|Jf^{-1}\right|^{a}\right)\left(\frac{1}{|Q|}\int_{Q}\left|Jf^{-1}\right|^{-\frac{a}{p-1}}\right)^{p-1}\lesssim\frac{|P|^{a}}{|Q|^{a}}\left(\frac{|P|^{-\frac{a}{p-1}}}{|Q|^{-\frac{a}{p-1}}}\right)^{p-1}=1,\] with the appropriate implicit constant according to (5.5). ### Proof of Lemma 4.5 By Proposition 4.8, \(\log\partial F\in W^{1,2}(\Omega)\). Since \(\Omega\) is \(\mathcal{B}_{p}\), it is also \(\mathcal{C}^{1}\), so \(\log\partial F\) admits an extension \(\rho\in W^{1,2}(\mathbb{C})\). By Lemma 5.1, \(\left|\mathrm{e}^{a\rho}\right|\in\mathrm{A}_{q}(\mathbb{C})\). The same argument used to pass to \(JF^{-1}\) in the proof of Lemma 2.2 in §5.1 shows that \(\left|JF^{-1}\right|^{a}\) belongs to the following restricted \(\mathrm{A}_{p}(O)\) class for \(O=F(\Omega)\), defined by finiteness of the characteristic \[\left[\left|JF^{-1}\right|^{a}\right]_{\mathrm{A}_{p}(O)}=\sup_{Q\subset\mathbb{C}}\left\langle 1_{O}\left|JF^{-1}\right|^{a}\right\rangle_{Q}\left\langle 1_{O}\left|JF^{-1}\right|^{-a}\right\rangle_{\frac{1}{p-1},Q}. \tag{5.6}\] An unpublished result of Wolff [31] states that for any measurable set \(O\), if \(\omega^{1+\varepsilon}\in\mathrm{A}_{p}(O)\) for some \(\varepsilon>0\), then there exists \(v\in\mathrm{A}_{p}(\mathbb{C})\) such that \(v=\omega\) on \(O\). See [16, Theorem IV.5.5] for a proof, [12, Lemma 3.8] for a quantitative version, and [17, 21] for two related results. Because (5.6) holds for arbitrary \(a\), we can apply Wolff's result to obtain the promised \(v\). ### Explicit dependence in special cases Let us compute the precise dependence in the relevant special cases to establish (2.5) and (2.6). Let \(1<p<\infty\) and denote by \(p^{\prime}=\frac{p}{p-1}\) the Holder conjugate.
The computation \(\left(1-\frac{p}{2}\right)\frac{-1}{p-1}=1-\frac{p^{\prime}}{2}\) reveals the symmetry \[\left[\left|Jf^{-1}\right|^{1-\frac{p}{2}}\right]_{\mathrm{A}_{p}(\mathbb{C})}^{\max\{1,\frac{1}{p-1}\}}=\left[\left|Jf^{-1}\right|^{1-\frac{p^{\prime}}{2}}\right]_{\mathrm{A}_{p^{\prime}}(\mathbb{C})}^{\max\{1,\frac{1}{p^{\prime}-1}\}}. \tag{5.7}\] Therefore, we can assume \(p>2\), so that \(\max\{1,\frac{1}{p-1}\}=1\), \(1-(1-\frac{p}{2})>1\), and \(\frac{1}{2}\leq 1-(1-\frac{p^{\prime}}{2})\leq 1\), in order to compute \[\left[\left|Jf^{-1}\right|^{1-\frac{p}{2}}\right]_{\mathrm{A}_{p}(\mathbb{C})}^{\max\{1,\frac{1}{p-1}\}}\leq C\exp\left(Cp^{2}L^{2}\right).\] Therefore, (2.5) follows by the symmetry (5.7). To prove (2.6), let \(1<r<\infty\), so that \(1-(1-r)>1\), and hence \[\left[\left|Jf^{-1}\right|^{1-r}\right]_{\mathrm{A}_{r}(\mathbb{C})}^{\max\left\{1,\frac{1}{r-1}\right\}}\leq C\exp\left(Cr^{2}\max\left\{1,\frac{1}{r-1}\right\}L^{2}\right)\leq C\exp\left(C_{1}\max\left\{r^{2},\frac{1}{r-1}\right\}L^{2}\right).\]
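The last inequality above is elementary: when \(r\geq 2\), \(\max\{1,\frac{1}{r-1}\}=1\) and the exponent is just \(Cr^{2}\), while when \(1<r<2\) one has \(r^{2}\leq 4\), so that \[r^{2}\max\left\{1,\frac{1}{r-1}\right\}\leq 4\max\left\{r^{2},\frac{1}{r-1}\right\},\] and the factor \(4\) is absorbed into the constant \(C_{1}\).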
2302.12831
CDPMSR: Conditional Diffusion Probabilistic Models for Single Image Super-Resolution
Diffusion probabilistic models (DPM) have been widely adopted in image-to-image translation to generate high-quality images. Prior attempts at applying the DPM to image super-resolution (SR) have shown that iteratively refining pure Gaussian noise with a conditional image, using a U-Net trained on denoising at various noise levels, can help obtain a satisfactory high-resolution image for the low-resolution one. To further improve the performance and simplify current DPM-based super-resolution methods, we propose a simple but non-trivial DPM-based super-resolution post-process framework, i.e., cDPMSR. After applying a pre-trained SR model on the to-be-tested LR image to provide the conditional input, we adapt the standard DPM to conduct conditional image generation and perform super-resolution through a deterministic iterative denoising process. Our method surpasses prior attempts on both qualitative and quantitative results and can generate more photo-realistic counterparts for the low-resolution images on various benchmark datasets including Set5, Set14, Urban100, BSD100, and Manga109. Code will be published after acceptance.
Axi Niu, Kang Zhang, Trung X. Pham, Jinqiu Sun, Yu Zhu, In So Kweon, Yanning Zhang
2023-02-14T15:13:33Z
http://arxiv.org/abs/2302.12831v1
# CDPMSR: Conditional Diffusion Probabilistic Models for Single Image Super-Resolution ###### Abstract Diffusion probabilistic models (DPM) have been widely adopted in image-to-image translation to generate high-quality images. Prior attempts at applying the DPM to image super-resolution (SR) have shown that iteratively refining pure Gaussian noise with a conditional image, using a U-Net trained on denoising at various noise levels, can help obtain a satisfactory high-resolution image for the low-resolution one. To further improve the performance and simplify current DPM-based super-resolution methods, we propose a simple but non-trivial DPM-based super-resolution post-process framework, _i.e._, cDPMSR. After applying a pre-trained SR model on the to-be-tested LR image to provide the conditional input, we adapt the standard DPM to conduct conditional image generation and perform super-resolution through a deterministic iterative denoising process. Our method surpasses prior attempts on both qualitative and quantitative results and can generate more photo-realistic counterparts for the low-resolution images on various benchmark datasets including Set5, Set14, Urban100, BSD100, and Manga109. _Code will be published after acceptance_. Axi Niu\({}^{1}\), Kang Zhang\({}^{2}\), Trung X. Pham\({}^{2}\), Jinqiu Sun\({}^{1}\)*, Yu Zhu\({}^{1}\), In So Kweon\({}^{2}\), Yanning Zhang\({}^{1}\)+\({}^{1}\)Northwestern Polytechnical University \({}^{2}\) Korea Advanced Institute of Science and Technology (KAIST) Diffusion Probabilistic Models, Image-to-Image Translation, Conditional Image Generation, Image Super-resolution. Footnote †: This work was funded in part by the Project of the National Natural Science Foundation of China under Grant 61871328, Natural Science Basic Research Program of Shaanxi under Grant 2021JCW-03, as well as the Joint Funds of the National Natural Science Foundation of China under Grant U19B2037. (*Corresponding author: Jinqiu Sun.) ## 1 Introduction Over the years, single image super-resolution (SISR) has drawn active attention due to its wide applications in computer vision, such as object recognition, remote sensing, and so on. SISR aims to obtain a high-resolution (HR) image containing great details and textures from a low-resolution (LR) image by an SR method, which is a classic ill-posed inverse problem [1]. To establish the mapping between HR and LR images, various CNN-based methods have been proposed. Among them, methods based on deep generative models have become one of the mainstream approaches, mainly including GAN-based [2, 3, 4] and flow-based methods [5, 6, 7], which have shown convincing image generation ability. GAN-based SISR methods [2, 3, 4] use a generator and a discriminator in an adversarial way to encourage the generator to generate realistic images. Specifically, the generator generates an SR result for the input, and the discriminator is used to judge whether the generated SR image is real. Training combines content losses (_e.g._, \(L_{1}\) or \(L_{2}\)) and adversarial losses to optimize the whole pipeline, as sketched below. Due to their strong learning abilities, GAN-based methods have become popular for image SR tasks [4, 8, 9]. However, these methods are prone to mode collapse, their training is hard to converge owing to the complex optimization [10, 11], and the adversarial losses often introduce artifacts in SR results, leading to large distortion [12, 13].
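To make the combined objective concrete, here is a minimal PyTorch-style sketch of the content-plus-adversarial training losses described above; it is an illustration under our own assumptions rather than the implementation of any cited method, and `generator`, `discriminator`, and the weight `lambda_adv` are placeholders.

```python
import torch
import torch.nn.functional as F

def gan_sr_losses(generator, discriminator, lr_img, hr_img, lambda_adv=1e-3):
    """Illustrative content + adversarial losses for GAN-based SISR."""
    sr_img = generator(lr_img)                     # generator produces the SR result
    # Content loss: pixel-wise L1 distance between SR output and ground-truth HR.
    content_loss = F.l1_loss(sr_img, hr_img)
    # Adversarial loss: the generator tries to make the discriminator label SR as real.
    pred_fake = discriminator(sr_img)
    adv_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    g_loss = content_loss + lambda_adv * adv_loss
    # Discriminator loss: real HR images vs. generated (detached) SR images.
    pred_real = discriminator(hr_img)
    pred_fake = discriminator(sr_img.detach())
    d_loss = (
        F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
        + F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake))
    )
    return g_loss, d_loss
```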
Another line of methods based on deep generative models is flow-based methods, which directly account for the ill-posed problem with an invertible encoder [14, 15, 16, 17]. A flow model transforms a Gaussian distribution into the HR image space instead of modeling one single output, and thereby inherently resolves the pathology of the original "one-to-many" SR problem. Optimized by a negative log-likelihood loss, these methods avoid training instability but suffer from extremely large footprints and high training costs due to the strong architectural constraints needed to keep the bijection between latents and data [16]. Lately, the adoption of diffusion probabilistic models (DPM) has shown promising results in image generation. Figure 1: Illustration of our method. The model contains a stochastic forward diffusion process which gradually adds noise to an \(\mathbf{I}^{HR}\) image, and a deterministic denoising process is applied to recover high-resolution and realistic images \(\mathbf{I}^{SR}\) corresponding to \(\mathbf{I}^{LR}\) images.
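As a rough illustration of the deterministic iterative denoising the figure describes, the following snippet implements one DDIM-style update (no fresh noise injected) conditioned on a pre-upscaled image; the network interface `eps_model(x, cond, t)` and schedule handling are assumptions, not the authors' exact formulation.

```python
import torch

@torch.no_grad()
def ddim_step(eps_model, x_t, cond, t, t_prev, alpha_bar):
    """One deterministic (eta = 0) DDIM update from step t to t_prev.

    eps_model : network predicting noise from (noisy image, condition, step)
    x_t       : current noisy estimate of the SR image
    cond      : conditional input, e.g. the output of a pre-trained SR model
    alpha_bar : 1-D tensor of cumulative noise-schedule products
    """
    eps = eps_model(x_t, cond, t)
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    # Predict the clean image implied by the current noise estimate.
    x0 = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    # Deterministic update: no fresh noise is injected (eta = 0).
    return a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
```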
2307.15758
Search for ultralight dark matter with a frequency adjustable diamagnetic levitated sensor
Among several dark matter candidates, bosonic ultralight (sub-meV) dark matter is well motivated because it could couple to the Standard Model (SM) and induce new forces. Previous MICROSCOPE and Eöt-Wash torsion experiments have achieved high accuracy in the sub-1 Hz region, but at higher frequencies there is still a lack of relevant experimental research. We propose an experimental scheme based on the diamagnetic levitated micromechanical oscillator, one of the most sensitive sensors for acceleration below the kilohertz scale. In order to improve the measurement range, we use a sensor whose resonance frequency can be adjusted from 0.1Hz to 100Hz. The limits of the coupling constant are improved by more than 10 times compared to previous reports, and it may be possible to achieve higher accuracy by using an array of sensors in the future.
Rui Li, Shaochun Lin, Liang Zhang, Changkui Duan, Pu Huang, Jiangfeng Du
2023-07-10T13:22:41Z
http://arxiv.org/abs/2307.15758v2
# Search for ultralight dark matter with a frequency adjustable diamagnetic levitated sensor ###### Abstract Among several dark matter candidates, bosonic ultra-light (sub-meV) dark matter is well motivated because it could couple to the Standard Model (SM) and induce new forces. Previous MICROSCOPE and Eöt-Wash torsion experiments have achieved high accuracy in the sub-1 Hz region, but at higher frequencies there is still a lack of relevant experimental research. We propose an experimental scheme based on the diamagnetic levitated micromechanical oscillator, one of the most sensitive sensors for acceleration below the kilohertz scale. In order to improve the measurement range, we use a sensor whose resonance frequency \(\omega_{0}\) can be adjusted from 0.1Hz to 100Hz. The limits of the coupling constant \(g_{{B-L}}\) are improved by more than 10 times compared to previous reports, and it may be possible to achieve higher accuracy by using an array of sensors in the future. ## I Introduction Many astronomical [1; 2] and cosmological observations [3] prove the existence of dark matter particles [4; 5], but the specific parameters of dark matter, especially its mass, are still highly uncertain [6]. Many direct detection studies have assumed that dark matter is composed of supersymmetric fermions, but so far there has not been enough evidence. The focus of research is now gradually shifting to ultralight bosons, with a mass range of approximately \(10^{-22}\)eV\(<\)\(m_{\phi}\)\(<\)0.1eV [7; 8]. Ultralight bosons with a mass less than 1eV behave like a classical field due to their high particle number density. Due to the virial theorem, if the DM has virialized to the Galaxy, it will be moving with a typical speed \(v_{\text{\tiny DM}}\approx 10^{5}\)m/s [9; 10; 11]. This corresponds to a Compton frequency \(\omega_{s}=m_{\phi}/\hbar\) and a De Broglie wavelength \(\lambda_{\text{\tiny DM}}=hc^{2}/(m_{\phi}v_{\text{\tiny DM}})\). According to previous reports, experiments such as ADMX [12] can search for the Peccei-Quinn axion in the mass range \(10^{-6}\)eV\(<\)\(m_{\phi}\)\(<\)\(10^{-3}\)eV [13; 14]. Searches for pseudoscalar axion-like ultralight bosons with masses between \(10^{-23}\)eV and \(10^{-18}\)eV [15; 16; 17] and for scalar dilaton ultralight bosons with masses between \(10^{-21}\)eV and \(10^{-5}\)eV using ultrastable clocks [18; 19] and gravitational wave detectors [20] have recently been reported. When the DM is a vector field that couples to a conserved current, the current corresponds to the baryon number minus lepton number (B\(-\)L charge) in the SM. The Lagrangian in this case can be written as [21]: \[\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}m_{\phi}^{2}A^{2}+ig_{{B-L}}A_{\mu}\bar{n}\gamma^{\mu}n \tag{1}\] where \(n\) is the neutron field and the DM field couples directly to the number of neutrons, with \(g_{{B-L}}\) the coupling strength. Using the Lorentz gauge and the plane wave approximation, the dark electric field can be written as \(E\approx\sqrt{\rho_{\text{\tiny DM}}}\text{sin}(\omega_{s}t-\vec{k}\cdot\vec{x})\), where \(\rho_{\text{\tiny DM}}\approx 0.3\text{GeV}/\text{cm}^{3}\)[22] is the local DM density.
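To make these scales concrete, here is a small Python sketch (ours, not the paper's code) evaluating the Compton frequency, De Broglie wavelength and coherence time quoted above for a given boson mass in eV.

```python
import numpy as np

hbar = 1.054571817e-34      # J*s
h = 2 * np.pi * hbar
c = 2.998e8                 # m/s
eV = 1.602176634e-19        # J
v_dm = 1e5                  # m/s, typical virial velocity quoted above

def dm_scales(m_phi_eV):
    E = m_phi_eV * eV                  # rest energy in joules
    omega_s = E / hbar                 # Compton (angular) frequency, rad/s
    lam = h * c**2 / (E * v_dm)        # De Broglie wavelength, m
    t_coh = 1e6 / omega_s              # coherence time used later, s
    return omega_s, lam, t_coh

# Example: a boson of mass 1e-14 eV oscillates at omega_s of order 10 rad/s,
# comfortably inside the 0.1Hz-100Hz tuning band discussed below.
print(dm_scales(1e-14))
```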
In ground experiments, assuming a magneto-gravity mechanical oscillator is used to measure the ultralight DM field along the Earth's axis, we can parameterize the force exerted on the sensor as: \[F_{\text{sig}}(t)=\alpha g_{{B-L}}N_{g}F_{0}\text{sin}(\omega_{s}t) \tag{2}\] where we drop the \(\vec{x}\) dependence because the De Broglie wavelength of the DM is much larger than the size of the sensor. In this equation, \(\alpha=\text{sin}\theta_{N}\) denotes the component along the direction of gravity and \(\theta_{N}\) is the latitude of the location of the ground experiment. In order to avoid the effects of the Earth's rotation over long measurement times and to increase the force, the experiment is best carried out at high latitudes, such as in the Arctic, where \(\alpha=1\). \(F_{0}=\sqrt{\rho_{\text{\tiny DM}}}\approx 10^{-15}\)N and \(N_{g}\) is the total number of neutrons in the sensor; for a sensor of mass \(m\) we can approximately write \(N_{g}\approx\frac{1}{2}m/m_{\text{neu}}\), where \(m_{\text{neu}}\) is the neutron mass. The force \(F_{\text{sig}}(t)\) is proportional to the mass of the sensor, so the main criterion for the sensor is its acceleration sensitivity. Here we propose an experimental scheme to detect DM using a frequency adjustable diamagnetic levitated sensor. The resonance frequency can be changed by adjusting the magnetic field gradient at a paramagnetic part of the oscillator, over a range from 0.1Hz to 100Hz. This means that we have high detection accuracy for DM with mass in the range from \(10^{-16}\)eV to \(10^{-13}\)eV. Compared to previously reported experiments, our scheme can achieve more than one order of magnitude improvement in the measurement of the coupling strength \(g_{{B-L}}\), based on the results of theoretical calculation. ## II Theoretical calculation Under the effect of the ultralight DM field, considering thermal noise and measurement noise, the equation of motion of a mechanical oscillator at resonant frequency \(\omega_{0}\) can be written as: \[m\ddot{x}+m\gamma\dot{x}+m\omega_{0}^{2}x=F_{\rm sig}(t)+F_{\rm th}+F_{\rm mea} \tag{3}\] where \(\gamma\) is the damping coefficient; \(F_{\rm sig}(t)\) is the DM field drive from equation (2); \(F_{\rm th}\) is the environmental thermal noise; and \(F_{\rm mea}\) represents the measurement noise, which is mainly composed of the detector imprecision noise and the backaction of radiation pressure fluctuations. The total acceleration noise of the system is given by: \[S_{\rm aa}^{\rm tot}=S_{\rm aa}^{\rm th}+\left(\frac{S_{\rm xx}^{\rm imp}}{|\chi_{\rm m}(\omega,\omega_{0})|^{2}}+\frac{S_{\rm ff}^{\rm ba}}{m^{2}}\right) \tag{4}\] where \(\chi_{\rm m}(\omega,\omega_{0})\) is the mechanical susceptibility given by \(|\chi_{\rm m}(\omega,\omega_{0})|^{2}=1/[(\omega^{2}-\omega_{0}^{2})^{2}+\gamma^{2}\omega^{2}]\), and \(S_{\rm aa}^{\rm th}=4\gamma k_{B}{\rm T}/m\) is the thermal noise, where \(k_{B}\) is the Boltzmann constant and T indicates the environment temperature. The detector imprecision noise \(S_{\rm xx}^{\rm imp}\) and the backaction noise \(S_{\rm ff}^{\rm ba}\) make up the total measurement noise \(S_{\rm aa}^{\rm mea}=S_{\rm xx}^{\rm imp}/|\chi_{\rm m}(\omega,\omega_{0})|^{2}+S_{\rm ff}^{\rm ba}/m^{2}\), with \(S_{\rm xx}^{\rm imp}\cdot S_{\rm ff}^{\rm ba}=(1/\eta)\hbar^{2}\). Here \(\eta\leqslant 1\) is the measurement efficiency, and \(\eta=1\) corresponds to the standard quantum limit (SQL).
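A minimal numerical sketch of the total acceleration noise of Eq. (4), assuming the minimum-uncertainty relation \(S_{\rm xx}^{\rm imp}S_{\rm ff}^{\rm ba}=\hbar^{2}/\eta\); all parameter values here are placeholders for illustration, not the paper's exact numbers.

```python
import numpy as np

kB = 1.380649e-23
hbar = 1.054571817e-34

def S_aa_total(omega, omega0, m, gamma, T, S_xx_imp, eta=1.0):
    # Mechanical susceptibility squared, |chi_m(omega, omega0)|^2 of Eq. (4).
    chi2 = 1.0 / ((omega**2 - omega0**2)**2 + gamma**2 * omega**2)
    S_th = 4 * gamma * kB * T / m            # thermal noise
    S_ba = (hbar**2 / eta) / S_xx_imp        # backaction from the uncertainty product
    return S_th + S_xx_imp / chi2 + S_ba / m**2
```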
The total measurement noise \(S_{\rm aa}^{\rm mea}\) for the sensor operating at the SQL condition at resonance frequency \(\omega_{0}\) is given by the simple formula [23]: \[S_{\rm aa}^{\rm mea,SQL}=\frac{2\sqrt{(\omega_{0}^{2}-\omega^{2})^{2}+\gamma^{2}\omega^{2}}}{m} \tag{5}\] Achieving the SQL over a frequency range requires optimizing the measurement parameters frequency by frequency as the range is scanned. We use the total acceleration noise \(S_{\rm aa}^{\rm tot}\) as the acceleration measurement sensitivity of the system. From equations (2)-(4), considering the optimal case \(\alpha=1\), we obtain the relationship between the coupling strength \(g_{{B-L}}\) and the acceleration measurement sensitivity \(S_{\rm aa}^{\rm tot}\): \[g_{{B-L}}=\frac{2m_{neu}}{F_{0}}\sqrt{\frac{S_{\rm aa}^{\rm tot}}{T_{\rm tot}}} \tag{6}\] where \(T_{\rm tot}\) denotes the effective total integration time. The DM signal is essentially a coherent force with coherence timescale \(T_{\rm coh}\approx 10^{6}/\omega_{s}\). When the DM frequency \(\omega_{s}\) is low enough that \(T_{\rm coh}\)\(>\)\(T_{\rm mea}\), all of the measurement time \(T_{\rm mea}\) contributes to the coherent DM signal. As the DM frequency \(\omega_{s}\) increases, when \(T_{\rm coh}\)\(<\)\(T_{\rm mea}\), only the proportion \(T_{\rm coh}/T_{\rm mea}\) of the measurement time contributes to the coherent signal. So we define the effective integration time: \[T_{\rm tot}=\left\{\begin{array}{ll}T_{\rm mea}&\mbox{if $T_{\rm coh}>T_{\rm mea}$}\\ \sqrt{T_{\rm mea}\cdot T_{\rm coh}}&\mbox{if $T_{\rm coh}<T_{\rm mea}$}\end{array}\right.\] ## III Experimental scheme Levitated micromechanical and nanomechanical oscillators have been demonstrated to be among the most sensitive acceleration sensors due to their ultralow dissipation [24; 25]. We propose a scheme based on our calculations, as shown in Fig.1(a). A diamagnetic sphere made of PMMA with radius \(r_{1}\)=0.5mm (corresponding volume \(V_{1}\)), density \(\rho_{1}\) and magnetic susceptibility \(\chi_{1}\) is levitated in the center region of the upper magnet (named \(Magnet\)-\(A\)), and the oscillator signal is detected through the fibres on both sides. A paramagnetic microsphere made of Tb\({}_{2}\)O\({}_{3}\) with radius \(r_{2}=11\mu\)m (corresponding volume \(V_{2}\)), density \(\rho_{2}\) and magnetic susceptibility \(\chi_{2}\) is connected to the upper diamagnetic sphere through a thin glass rod. Another combined magnet (named \(Magnet\)-\(B\)) is placed under the paramagnetic microsphere. The whole magnet assembly is placed in a multi-stage suspension system, and active vibration isolation devices are used to further improve the isolation [26; 27]. Figure 1: (a) Schematic diagram of the experimental setup. A diamagnetic sphere of 0.5 mm radius is levitated in the magnetic gravity trap, and a paramagnetic microsphere of 11 \(\mu m\) radius is connected to the upper diamagnetic sphere by a thin glass rod. A 1550 nm laser is transmitted through the left fibre to the right fibre, passing the transparent diamagnetic sphere. (b) The magnetic field gradient \(\partial B_{B}/\partial z\) and the resonance frequency \(\omega_{0}^{\prime}\) as functions of the relative distance \(d\), expressed by the blue and red lines respectively.
\(Magnet\)-\(A\) is constructed in a similar way to our previous work [28], and requires high-remanence magnetic materials with two different magnetisation directions to generate enough magnetic force. The red colour indicates magnetisation pointing toward the centre, and the blue indicates magnetisation pointing away from the centre. In addition, a lower-remanence magnetic material is used to build the upper layer of \(Magnet\)-\(B\) and a high-remanence material the lower layer. The combination of two materials with different remanence allows \(Magnet\)-\(B\) to have a higher magnetic field gradient while reducing the magnetic field strength; the directions of magnetisation are likewise indicated by red and blue colours. The magnetic field energy of the upper diamagnetic sphere can be written as: \[U_{1}=-\int_{V_{1}}\frac{\chi_{1}}{2\mu_{0}}B_{\text{\tiny A}}^{2}dV \tag{7}\] where \(B_{\text{\tiny A}}\) represents the magnetic field created by \(Magnet\)-\(A\). Assuming that \(Magnet\)-\(B\) is far away at the beginning, the \(z\)-direction equilibrium position \(z_{0}\) of the oscillator in the magnetic-gravity trap satisfies \(\partial U_{1}/\partial z|_{z=z_{0}}=(\rho_{1}V_{1}+\rho_{2}V_{2})g\). The resonance frequency in the \(z\) direction is: \[\omega_{0}=\sqrt{\frac{1}{\rho_{1}V_{1}+\rho_{2}V_{2}}\cdot\frac{\partial^{2}U_{1}}{\partial z^{2}}}\bigg{|}_{z=z_{0}} \tag{8}\] When \(Magnet\)-\(B\) is raised, the magnetic field \(B_{\text{\tiny B}}\) from \(Magnet\)-\(B\) at the lower paramagnetic microsphere becomes larger. Because \(V_{2}\ll V_{1}\), we can simplify the magnetic field energy of the paramagnetic microsphere as \(U_{2}=-\chi_{2}B_{\text{\tiny B}}^{2}V_{2}/2\mu_{0}\). The resonance frequency of the oscillator along the \(z\) direction then changes to: \[\omega_{0}^{\prime}=\sqrt{\omega_{0}^{2}-\frac{\chi_{2}V_{2}}{\mu_{0}(\rho_{1}V_{1}+\rho_{2}V_{2})}\left(\frac{\partial B_{\text{\tiny B}}}{\partial z}\right)^{2}}\bigg{|}_{z=z_{0}} \tag{9}\] where \(\chi_{2}\)\(>\)0 and \(\omega_{0}^{\prime}\)\(<\)\(\omega_{0}\). We ignore the second-order gradient term because \((\partial B_{\text{\tiny B}}/\partial z)^{2}\gg B_{\text{\tiny B}}(\partial^{2}B_{\text{\tiny B}}/\partial z^{2})\). Since \(B_{\text{\tiny B}}\) and \(V_{2}\) are very small, the magnetic force from \(Magnet\)-\(B\) on the paramagnetic microsphere is much lower than the total gravity of the oscillator, and the equilibrium position \(z_{0}\) is therefore not changed. We use the finite element method to simulate how the magnetic field gradient \(\partial B_{\text{\tiny B}}/\partial z\) changes with the distance \(d\) between the paramagnetic microsphere and \(Magnet\)-\(B\) over the range from 50\(\mu\)m to 100\(\mu\)m, then use equation (9) to calculate the corresponding resonance frequency \(\omega_{0}^{\prime}\), as shown in Fig.1(b). It is theoretically possible to bring the resonance frequency \(\omega_{0}^{\prime}\) close to zero by reducing the distance \(d\), but in order to improve the stability of the oscillator and reduce the requirements on the isolation system, we select a resonance frequency \(\omega_{0}^{\prime}\) variation range from 0.1Hz to 100Hz. ## IV Experimental result estimate We now calculate the acceleration measurement sensitivity of this system. In order to improve the acceleration sensitivity, the whole system is placed in a low temperature environment where T=30mK, and we estimate the damping coefficient \(\gamma=10^{-4}\)Hz [29; 24].
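A short sketch of the frequency tuning of Eq. (9), useful for reproducing curves like Fig.1(b) from a simulated field gradient; the material parameters are placeholders, not values from the paper.

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7  # vacuum permeability, T*m/A

def tuned_frequency(omega0, dBdz, chi2, V2, rho1, V1, rho2):
    """Eq. (9): resonance frequency lowered by the gradient of Magnet-B
    acting on the paramagnetic microsphere (chi2 > 0)."""
    m_tot = rho1 * V1 + rho2 * V2
    shift = chi2 * V2 / (mu0 * m_tot) * dBdz**2
    # Valid while shift < omega0**2; omega0' approaches zero as d shrinks.
    return np.sqrt(omega0**2 - shift)
```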
In the Supplementary Material, we calculate the dependence of the total measurement noise \(S_{\text{aa}}^{\text{mea}}\) on the laser input power \(P_{\text{in}}\) and obtain the optimized laser input power \(P_{\text{opt}}(\omega,\omega_{0})\) that minimises the total measurement noise. Figure 2: Acceleration power spectral density \(S_{\text{aa}}\). (a) Resonance frequency \(\omega_{0}\)=10Hz; the grey dashed line indicates the thermal noise \(S_{\text{aa}}^{\text{th}}\); the red line indicates the acceleration detection noise \(S_{\text{aa}}^{\text{mea,SQL}}\); the blue dashed line indicates \(S_{\text{aa}}^{\text{mea}}\) with the optimal light intensity \(P_{\text{opt}}(\omega,\omega_{0})\) at each frequency between 8Hz and 12Hz and measurement efficiency \(\eta\)=1; the green dashed line indicates the same \(P_{\text{opt}}(\omega,\omega_{0})\) as the blue dashed line but with \(\eta\)=0.1; the purple line indicates the light intensity \(P_{\text{opt}}(\omega_{0},\omega_{0})\) and \(\eta\)=1; the yellow line indicates the same \(P_{\text{opt}}(\omega_{0},\omega_{0})\) with \(\eta\)=0.1. (b) Resonance frequency \(\omega_{0}\)=100Hz; the others are the same as (a). (c) Adjusting the resonance frequency \(\omega_{0}\) from 0.1Hz to 100Hz; the grey dashed line indicates the thermal noise \(S_{\text{aa}}^{\text{th}}\); the yellow line indicates the acceleration measurement noise \(S_{\text{aa}}^{\text{mea}}\) with \(\eta\)=0.1, where the scan step \(\Delta\omega_{s}\)=10Hz is only used to show the measurement scheme; the green line indicates the envelope of the yellow line, written as \(S_{\text{aa}}^{\text{mea}^{\prime}}\); the red line is the acceleration measurement sensitivity \(S_{\text{aa}}^{\text{tot}}=S_{\text{aa}}^{\text{th}}+S_{\text{aa}}^{\text{mea}^{\prime}}\). For oscillator resonance frequencies \(\omega_{0}\) of 10Hz and 100Hz, we calculate the corresponding acceleration noise; the results are shown in Fig.2(a) and Fig.2(b). When the resonance frequency \(\omega_{0}=10\)Hz, assuming measurement efficiency \(\eta=1\) and setting the laser input power to the optimal power \(P_{\rm opt}(\omega,\omega_{0})\) at each point, the measurement noise \(S_{\rm aa}^{\rm mea}\) can almost reach the SQL. When the measurement efficiency \(\eta\) is reduced to 0.1, the measurement noise increases slightly. In practice, however, to simplify the experiment the laser input power must be chosen near the resonance frequency \(\omega_{0}\) as \(P_{\rm opt}(\omega_{0},\omega_{0})\), which makes the measurement noise \(S_{\rm aa}^{\rm mea}\) increase rapidly away from resonance. In Fig.2(a), in the frequency range from 9Hz to 11Hz, the measurement noise \(S_{\rm aa}^{\rm mea}\) is always below the thermal noise \(S_{\rm aa}^{\rm th}\) with \(\eta=0.1\). When the resonance frequency \(\omega_{0}\) is adjusted to 100Hz, the range over which the measurement noise \(S_{\rm aa}^{\rm mea}\) stays below the thermal noise \(S_{\rm aa}^{\rm th}\) shrinks to 99.6Hz-100.4Hz, as shown in Fig.2(b). We choose the oscillator resonance frequency scan step \(\Delta\omega_{0}\) accordingly. Based on the calculation results in Fig.2(a) and Fig.2(b), we choose the scan step \(\Delta\omega_{0}=1\)Hz in the resonance frequency region from 0.1Hz to 100Hz; each scan covers the frequency range from \(\omega_{0}-\Delta\omega_{0}/2\) to \(\omega_{0}+\Delta\omega_{0}/2\), and the laser input power is fixed at \(P_{\rm in}=P_{\rm opt}(\omega_{0},\omega_{0})\) in each scan.
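The scan bookkeeping above combines with Eqs. (5)-(6) as in the following sketch of the effective integration time and the resulting coupling limit; only \(F_{0}\) and \(m_{\rm neu}\) follow values quoted in the text, the remaining inputs are up to the user.

```python
import numpy as np

m_neu = 1.675e-27   # neutron mass, kg
F0 = 1e-15          # sqrt(rho_DM) force scale quoted above, N

def T_eff(T_mea, omega_s):
    """Effective integration time: full T_mea while the DM stays coherent,
    sqrt(T_mea * T_coh) once T_coh < T_mea."""
    T_coh = 1e6 / omega_s
    return T_mea if T_coh > T_mea else np.sqrt(T_mea * T_coh)

def g_BL_limit(S_aa_tot, T_mea, omega_s):
    # Eq. (6) with alpha = 1 and N_g ~ m / (2 m_neu) absorbed into F0 scaling.
    return 2 * m_neu / F0 * np.sqrt(S_aa_tot / T_eff(T_mea, omega_s))
```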
We calculate the acceleration measurement noise \(S_{\rm aa}^{\rm mea}\) with \(\eta=0.1\) in each scan, and calculate the envelope of this series of \(S_{\rm aa}^{\rm mea}\), written as \(S_{\rm aa}^{\rm mea^{\prime}}\). The acceleration measurement sensitivity is \(S_{\rm aa}^{\rm tot}=S_{\rm aa}^{\rm th}+S_{\rm aa}^{\rm mea^{\prime}}\), and these results are presented in Fig.2(c). Following the previous discussion of the effective integration time \(T_{\rm tot}\), we fix the measurement time of each scan as \(T_{\rm mea}=10^{5}\)s. When the DM frequency \(\omega_{s}\)\(<\)10Hz, \(T_{\rm tot}=T_{\rm mea}\); and when \(\omega_{s}\)\(>\)10Hz, \(T_{\rm tot}=\sqrt{T_{\rm mea}\cdot 10^{6}/\omega_{s}}\). Combining this with the previous discussion of the scan step, we estimate that about one hundred adjustments and measurements will be required in total, corresponding to a total time of \(1\times 10^{7}\) seconds. The final result for the coupling strength \(g_{{B-L}}\) from equation (6) is shown in Fig.3. In the region \(\omega_{s}\)\(<\)100Hz, this system always has high acceleration sensitivity thanks to the adjustable resonance frequency of the mechanical oscillator, and we achieve more than an order of magnitude improvement in the measurement of \(g_{{B-L}}\) compared to the MICROSCOPE and Eöt-Wash torsion experiments. In the region \(\omega_{s}\)\(>\)100Hz, the measurement accuracy of \(g_{{B-L}}\) decreases rapidly due to the increase in the measurement noise \(S_{\rm aa}^{\rm mea}\). Finally, we estimate the minimum \(g_{{B-L}}\) that this system can detect. Assume that the DM frequency \(\omega_{s}\) is 1Hz, 10Hz or 100Hz. From equation (6) and measurement times \(T_{\rm mea}\) ranging from \(10^{3}\)s to \(10^{7}\)s, the results are shown in Fig.4. When \(T_{\rm mea}\) is less than the coherence time \(T_{\rm coh}\), \(g_{{B-L}}\) decreases rapidly as \(T_{\rm mea}\) increases; when \(T_{\rm mea}\) is greater than \(T_{\rm coh}\), \(g_{{B-L}}\) decreases more slowly. If the final measurement time is about \(10^{7}\)s, the minimum \(g_{{B-L}}\) that can be measured is on a scale of about \(10^{-26}\). ## V Conclusion We propose an experimental scheme to detect ultralight dark matter using a frequency adjustable diamagnetic levitated microsphere sensor which can theoretically approach the standard quantum limit. We change the resonance frequency by adjusting the distance between the paramagnetic microsphere and the lower combined magnet, and thereby obtain a larger range that maintains high acceleration measurement sensitivity. Compared to existing systems, our method can achieve at least one order of magnitude improvement in the coupling constant \(g_{{B-L}}\), especially at frequencies from 0.1Hz to 100Hz, and it may be possible to achieve higher accuracy by using an array of sensors in the future. In this article, we consider only the effects of thermal noise and quantum measurement noise on the acceleration measurement sensitivity of the system. In fact, there are many low frequency noise sources, such as seismic waves and Earth tidal forces, which also have a great impact on the accuracy of the experiment and cannot be shielded by the suspension system. This poses a great challenge to the actual measurement. Reducing the frequency scan step according to the accuracy of the active vibration isolation device may make the effect of other noise lower than the thermal noise, and this needs to be verified by further experiments. Figure 3: Ultralight Dark Matter search range. The top axis represents the DM mass \(m_{\phi}\) corresponding to the frequency \(\omega_{s}\). The upper grey and yellow regions are excluded by the Eöt-Wash torsion balance [30; 31; 32] and MICROSCOPE experiments [33; 34], and the red region is the range that this system can cover. In the torsion balance system, a pair of accelerometers (Beryllium and Titanium) is used with a differential neutron/nucleon ratio \(\Delta=\rm{N_{1}/A_{1}-N_{2}/A_{2}}=0.037\), where N and A are the neutron and nucleon numbers of Beryllium and Titanium respectively. From equation (2), \(N_{g}\) can then be approximated as \(N_{g}=\Delta\cdot m/m_{\rm neu}\). In general, the current ground-based precision measurement systems may have broader prospects for dark matter measurement compared to previous astronomical observation methods. In the future, with the development of the measurement sensitivity and measurement range of mechanical sensors, and especially with the improvement of quantum sensing technology, the measurement sensitivity may break through the standard quantum limit. This will open up more possibilities for dark matter measurement. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China (Grants No.12205291, No. 12075115, No. 12075116, No. 11890702 and No. 12150011), the Fundamental Research Funds for the Central Universities, and the Anhui Provincial Natural Science Foundation (Grant No. 2208085QA16).
2305.03743
Learning Sentinel-2 reflectance dynamics for data-driven assimilation and forecasting
Over the last few years, massive amounts of satellite multispectral and hyperspectral images covering the Earth's surface have been made publicly available for scientific purposes, for example through the European Copernicus project. Simultaneously, the development of self-supervised learning (SSL) methods has sparked great interest in the remote sensing community, enabling models to learn latent representations from unlabeled data that help treat downstream tasks for which there are few annotated examples, such as interpolation, forecasting or unmixing. Following this line, we train a deep learning model inspired by the Koopman operator theory to model long-term reflectance dynamics in an unsupervised way. We show that this trained model, being differentiable, can be used as a prior for data assimilation in a straightforward way. Our datasets, which are composed of Sentinel-2 multispectral image time series, are publicly released with several levels of treatment.
Anthony Frion, Lucas Drumetz, Guillaume Tochon, Mauro Dalla Mura, Abdeldjalil Aïssa El Bey
2023-05-05T10:04:03Z
http://arxiv.org/abs/2305.03743v1
# Learning Sentinel-2 reflectance dynamics for data-driven assimilation and forecasting ###### Abstract Over the last few years, massive amounts of satellite multispectral and hyperspectral images covering the Earth's surface have been made publicly available for scientific purposes, for example through the European Copernicus project. Simultaneously, the development of self-supervised learning (SSL) methods has sparked great interest in the remote sensing community, enabling models to learn latent representations from unlabeled data that help treat downstream tasks for which there are few annotated examples, such as interpolation, forecasting or unmixing. Following this line, we train a deep learning model inspired by the Koopman operator theory to model long-term reflectance dynamics in an unsupervised way. We show that this trained model, being differentiable, can be used as a prior for data assimilation in a straightforward way. Our datasets, which are composed of Sentinel-2 multispectral image time series, are publicly released with several levels of treatment. Self-supervised learning, Sentinel-2, satellite image time series, Koopman operator, Data assimilation ## I Introduction Longstanding problems in satellite image time series processing include change detection [1], content classification [2], semantic segmentation [3] and spectral unmixing [4]. In this paper, we approach these issues in a holistic way, in a self-supervised learning (SSL) context. Indeed, we design a machine learning model first trained on a pretext task without using any annotations, and _in fine_ use its learnt latent representation to handle downstream tasks, possibly with some labels. Our pretext task is to predict the long-term reflectance of a pixel using a given initial condition. We aim at learning discrete dynamical systems written in a generic way as \[x_{t+1}=f(x_{t};\theta) \tag{1}\] where \(x\) is an observed time series and \(\theta\) represents underlying parameters. While SSL has been extensively studied for remote sensing [5], to our knowledge, our work is the first to use temporal prediction as a pretext task. Our resulting model is well aware of the reflectance dynamics and can serve multiple time-related purposes, such as interpolation, denoising or forecasting. Its differentiability and small number of parameters make it more versatile than many model-driven priors for downstream tasks that can be formulated as optimization problems. In spirit, our learning approach is related to recent advances in natural language processing, e.g. [6], where a large language model is simply trained to predict the data and can then be asked to perform a variety of tasks. Our contributions include: (1) we adapt a neural architecture that we previously introduced in [7], which learns the behavior of dynamical systems from observation data, to real-world satellite image time series and study tools to leverage the spatial structure of these data; (2) we show how to use such a trained model for data assimilation in settings with sparse and irregularly available data, showing promising potential for designing efficient gap-filling algorithms for such remote sensing datasets; (3) we collect, clean and interpolate two long Sentinel-2 time series, which we publicly share ([https://github.com/anthony-frion/Sentinel2TS](https://github.com/anthony-frion/Sentinel2TS)) to make it easier for the interested community to work on similar tasks and compare their results to ours.
## II Our methods Our approach to learning time series dynamics is based on the Koopman operator theory [8]. In short, this theory states that any given dynamical system can be described by a linear operator applied to observation functions of the system. However, this operator, called the Koopman operator, is generally infinite-dimensional. We refer the reader to [9] for a recent review of this theory. Our method follows a line opened by [10] which aims at finding a Koopman Invariant Subspace, i.e. a set of observation functions on which the restriction of the Koopman operator is finite-dimensional, and which gives a good view of the general dynamical system. We use the neural Koopman architecture from [7], which we represent graphically in Figure 1. In short, this architecture has 2 components: a deep autoencoder \((\phi,\psi)\) and a Koopman matrix \(\mathbf{K}\). The matrix \(\mathbf{K}\), whose entries are trainable parameters, multiplies vectors from the latent space obtained by training the encoder \(\phi\) and the decoder \(\psi\). It has the effect of advancing time. In terms of equations, this can be written as \[\psi(\mathbf{K}^{\tau}\phi(\mathbf{x}(t)))=\mathbf{x}(t+\tau) \tag{2}\] for a given time-dependent variable \(\mathbf{x}\) evaluated at a specific time \(t\) and advanced by a time \(\tau\). Note that a time of 1 classically corresponds to a time step of the considered time series (assuming it is regularly sampled). In the case of satellite image time series, as a first approach, we treat pixels independently from one another. Thus, given a time series of \(T\) images each containing \(N=H\times W\) pixels, we denote our state variable as \(\mathbf{x}_{i,t}\), where \(1\leq i\leq N\) is the spatial index and \(1\leq t\leq T\) is the temporal index. Note that, in our case, \(\mathbf{x}_{i,t}\) is not a scalar value but a multispectral pixel, i.e. an \(L\)-dimensional vector, where each of the \(L=10\) dimensions corresponds to the reflectance measured for one of the Sentinel-2 spectral bands. We augment the observation space with the local discrete temporal derivatives of \(\mathbf{x}\), which means that we work on data \(\mathbf{y}\) defined by \[\mathbf{y}_{i,t}=\left(\mathbf{x}_{i,t+1}\quad\mathbf{x}_{i,t+1}-\mathbf{x}_{i,t}\right)^{T}. \tag{3}\] This is equivalent to the knowledge of the last 2 states of \(\mathbf{x}\), and it can therefore be motivated by Takens' embedding theorem [11], which roughly states that the state space gets more predictable when augmented with lagged states. Intuitively, it seems much easier to estimate the next step of \(\mathbf{x}\) when one knows both the current state and its derivative. \(\mathbf{y}\) is now of dimension \(2L=20\): 10 dimensions for the covered spectral bands and 10 for their derivatives, as shown in Figure 1. As in [7], we train a prediction model in two stages: first on short-term prediction of the dynamics, i.e. up to 5 time steps ahead, and then on long-term prediction, i.e. up to 100 steps ahead. It is crucial to obtain a model that is able to make good predictions over several years, yet the long-term optimisation problem is highly nonconvex, usually leading to a poor local minimum. Therefore, the easier short-term prediction task provides a warm-start initialization, avoiding bad local minima. Such a procedure is related to curriculum learning [12], which we believe to be crucial when learning difficult physics-related tasks (see [13] for a recent survey).
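A minimal PyTorch sketch of this Koopman autoencoder and its loss terms may help fix ideas; the MLP widths below are illustrative placeholders (the actual layer sizes are those of Figure 1), and the training loop is omitted.

```python
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    """Autoencoder (phi, psi) around a trainable Koopman matrix K, as in Eq. (2)."""
    def __init__(self, obs_dim=20, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))
        self.K = nn.Parameter(torch.eye(latent_dim))  # trainable Koopman matrix

    def predict(self, y0, tau):
        """Predict y(t + tau) from y(t) by tau linear steps in latent space."""
        z = self.encoder(y0)
        for _ in range(tau):
            z = z @ self.K.T
        return self.decoder(z)

def losses(model, y_t, y_tau, tau):
    # Prediction, linearity and orthogonality terms as in Eqs. (4)-(6).
    z = model.encoder(y_t)
    for _ in range(tau):
        z = z @ model.K.T
    L_pred = ((model.decoder(z) - y_tau) ** 2).sum()
    L_lin = ((model.encoder(y_tau) - z) ** 2).sum()
    eye = torch.eye(model.K.shape[0])
    L_orth = ((model.K @ model.K.T - eye) ** 2).sum()
    return L_pred, L_lin, L_orth
```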
We use 3 different types of loss terms during training. The main one is the prediction loss \(L_{pred}\), which directly represents the \(L_{2}\) distance between the model predictions and the groundtruth. The linearity loss \(L_{lin}\) is the \(L_{2}\) distance between the predicted latent vector and the encoding of the actual future state: it ensures that the dynamics is linear in the latent space. The orthogonality loss \(L_{orth}\) is a regularization term which encourages \(\mathbf{K}\) to be close to an orthogonal matrix, which favors long-term stability as explained in [7]. Denoting \(\Theta\) the set of parameters of our model, i.e. the concatenation of 1) the coefficients of \(\mathbf{K}\), 2) the parameters of \(\phi\) and 3) the parameters of \(\psi\), these loss terms can be written as: \[L_{pred,\tau}(\Theta)=\sum_{\begin{subarray}{c}1\leq i\leq N\\ 1\leq t\leq T-\tau-1\end{subarray}}||\mathbf{y}_{i,t+\tau}-\psi(\mathbf{K}^{\tau}\phi(\mathbf{y}_{i,t}))||^{2} \tag{4}\] \[L_{lin,\tau}(\Theta)=\sum_{\begin{subarray}{c}1\leq i\leq N\\ 1\leq t\leq T-\tau-1\end{subarray}}||\phi(\mathbf{y}_{i,t+\tau})-\mathbf{K}^{\tau}\phi(\mathbf{y}_{i,t})||^{2}\] (5) \[L_{orth}(\mathbf{K})=||\mathbf{K}\mathbf{K}^{T}-\mathbf{I}||_{F}^{2} \tag{6}\] where \(||.||_{F}\) is the Frobenius norm. Note that \(L_{pred,0}\) is a classical auto-encoding or reconstruction loss. Using these basic bricks and setting \(\tau_{1}=5\), \(\tau_{2}=100\), we build our short-term and long-term loss functions as: \[L_{short}(\Theta)=\beta_{1}L_{orth}(\mathbf{K})+L_{pred,0}(\Theta)+L_{pred,1}(\Theta)+L_{pred,\tau_{1}}(\Theta)+L_{lin,1}(\Theta)+L_{lin,\tau_{1}}(\Theta) \tag{7}\] \[L_{long}(\Theta)=\beta_{2}L_{orth}(\mathbf{K})+\sum_{\tau=0}^{\tau_{2}}(L_{pred,\tau}(\Theta)+L_{lin,\tau}(\Theta)) \tag{8}\] One might want to just learn to predict from time 0, which is what is done by the \(L_{2}\) loss in [7]. This approach results in a non-robust model which makes good predictions from time 0 but struggles to make predictions from a different initial time. Figure 1: Schematic view of our architecture. Though we precisely represent the number and size of the linear layers of the network on which we experiment, those characteristics could change as long as \(\phi\), \(\mathbf{K}\) and \(\psi\) keep their respective roles. The observation state is of dimension 20 since it contains the reflectances of 10 spectral bands along with their respective derivatives. So far, we have only treated the pixels independently from each other. We now present a simple method that enables us to exploit the spatial information of the data. We use a trained model with frozen parameters to make long-term predictions from \(\mathbf{y}_{\cdot,1}\) using (2), and assemble the pixel predictions into image predictions \(\hat{X}_{t}\in\mathbb{R}^{H\times W\times L}\) for time \(t\). Using the groundtruth images \(X_{t}\), one can train a convolutional neural network (CNN) to learn the residual function \(r:\mathbb{R}^{H\times W\times L}\rightarrow\mathbb{R}^{H\times W\times L}\) such that \(r(\hat{X}_{t})=X_{t}-\hat{X}_{t}\). Then, one can add the output of this CNN to a test predicted image to get it closer to the groundtruth. The convolutional layers are expected to partially correct the spatial imperfections made by the pixelwise model; a sketch is given below.
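As announced, here is a sketch of such a residual CNN. With a 20-channel input (the augmented state of (3)) and a 10-band output, the widths below give exactly the 79114 parameters quoted in Section IV-A, though whether the authors feed the augmented state or only the 10 bands, and the use of ReLU activations, are our assumptions.

```python
import torch.nn as nn

class ResidualCNN(nn.Module):
    """Five 3x3 conv layers (64, 64, 32, 32, 10 filters, no pooling) that map
    a predicted image to the residual X_t - X_hat_t."""
    def __init__(self, in_channels=20, bands=10):
        super().__init__()
        widths = [in_channels, 64, 64, 32, 32, bands]
        layers = []
        for i in range(5):
            layers.append(nn.Conv2d(widths[i], widths[i + 1], 3, padding=1))
            if i < 4:
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, y_hat):
        # y_hat: (B, 20, H, W) augmented pixelwise prediction; the first 10
        # channels are the reflectances, which the residual corrects.
        return y_hat[:, :10] + self.net(y_hat)
```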
## III Presentation of the datasets We selected two areas of interest in France: the forest of Fontainebleau and the forest of Orleans, which are large forested areas in a moderately cloudy region. The forest of Fontainebleau in particular has already been studied in remote sensing [14][15]. Also, since the two sites are separated by about 60 kilometers, one can test a model's transferability by predicting the dynamics of one area after training only on the other one. The pre-processing steps are largely inspired by the previous work of [16], although we gathered much more data, both in the spatial and temporal dimensions. We retrieve the 10m and 20m resolution bands from the Sentinel-2 images with L2A (Bottom Of Atmosphere) correction and perform an imagewise bicubic interpolation on each of the 20m resolution bands to bring all the data to a 10m resolution. Although the revisit time is only 5 days, we identify the images that feature too many clouds and remove elements from the time series accordingly. This results in an incomplete time series, where about three quarters of the images have been rejected. To obtain complete time series, we performed temporal Cressman interpolation [17] with Gaussian weights of radius (i.e. standard deviation) \(R\) = 15 days. In the end, we obtain 2 image time series, each of length \(T=343\) and image size \(500\times 500\). Given the temporal and spatial resolution of the Sentinel-2 satellites, this corresponds to a time span of nearly 5 years and an area of 25 km\({}^{2}\) each. We also extracted irregular versions of these datasets for which no temporal Cressman interpolation has been performed. We show sample images in Figure 2. Fig. 2: Left: a temporally interpolated Fontainebleau image. Right: a non-interpolated Orleans image. The date for both images is 20/06/2018. These are RGB compositions with saturated colors. The red squares indicate the \(150\times 150\) pixel subcrops on which we experiment in Section IV, and the red dots mark the pixels involved in figures 3 and 5. ## IV Experiments We use a subcrop of \(150\times 150\) pixels from the Fontainebleau image time series. The first \(T_{train}=242\) images are used for training and the last \(T_{val}=100\) ones are kept for validation. We extract another \(150\times 150\) subcrop from the Orleans time series and use it as a test set. We train a Koopman autoencoder using successively (7) and (8). As shown in Figure 1, the latent dimension of our network is \(k=32\). ### _Temporal extrapolation on the training area_ We first check the ability of our model to extrapolate in time on the Fontainebleau area. We use the first element of the augmented time series \(\mathbf{y}\) from (3) to make a \((T_{train}+T_{val})\)-time-step prediction, of which the first \(T_{train}\) elements correspond to training data while the last \(T_{val}\) ones correspond to frames unseen during training. We measure the mean squared error (MSE) between the last \(T_{val}\) predicted states and the actual validation data, averaged over all frames, pixels and spectral bands. We show an example of such a prediction for a random pixel in Figure 3. We now train a CNN on top of our Koopman model as described in Section II. We use predictions up to time span \(T_{train}\) to train the CNN and then test it on the last \(T_{val}\) time steps. The CNN architecture is very basic, with just 5 convolutional layers and no pooling. The filter sizes are all \(3\times 3\) and the numbers of filters of the successive layers are 64, 64, 32, 32 and 10, totaling 79114 parameters. As reported in Table I, the CNN correction results in a significant improvement. This can be best visualised when plotting images of the entire predictions, as in Fig. 4. One can see that the pixelwise predictions have spatial artifacts in the form of a weaker spatial structure, which is not the case after the CNN correction.
Notably, the small area which always appears green in the top row of Figure 4, corresponding to a clearing in the forest, is not well reconstructed by the pixelwise prediction, but this problem is partially addressed by the CNN. ### _Data assimilation on training data_ The experiment presented in the last subsection shows that our model is indeed able to reconstitute an entire pixel's dynamics from only an initial condition. However, this intuitively seems like a difficult task, while using multiple data points to understand a pixel's dynamics seems easier. We confirm this intuition with a new experiment: using a learned model, we look for the latent initial condition from which the propagation by the model best corresponds to the training data. Formally, for a given spatial index \(i\), we seek \[\mathbf{z}_{1}^{*}=\underset{z_{1}\in\mathbb{R}^{k}}{\arg\min}\sum_{t=1}^{T_{train}}\ ||\mathbf{y}_{i,t}-\psi(\mathbf{K}^{t-1}\mathbf{z}_{1})||^{2}. \tag{9}\] We emphasize that, here, only the latent initial condition varies while the model parameters remain fixed. This is a kind of variational data assimilation [18] where everything is based on the data, since the model itself has been trained fully from the data. Finding the best initial condition is done by a gradient descent which backpropagates through the whole pretrained model. This optimisation problem is not convex, yet starting from a null initial latent state gives satisfactory results, and starting from the encoding of the actual initial state gives even better ones. When making predictions using the result of the gradient descent as the initial latent state, not only do we fit the assimilated data very well, but we also obtain excellent extrapolations. As can be seen in Table I, the MSE is far lower than when predicting from only one data point. ### _Data assimilation on test data_ We now move on to the Orleans site, from which no data has been seen during training, and we aim at transferring the knowledge of the Fontainebleau area without training a new model. The change of area results in a data shift, to which the task of prediction from a single reflectance vector (like in subsection IV-A) is very sensitive, leading to relatively poor results with our model trained on Fontainebleau. However, when performing variational data assimilation as in Section IV-B, one can obtain a good prediction without even needing a complete time series to do so. Indeed, our model can easily handle irregular data, and in our tests it has even been more effective to do so than to assimilate on an interpolated time series. The only difference is that one should only compute the prediction error on the time indexes from the set \(S\subset\{1,2,...,342\}\) of available data, i.e. rewrite (9) as \[\mathbf{z}_{1}^{*}=\underset{\mathbf{z}_{1}\in\mathbb{R}^{k}}{\arg\min}\sum_{t\in S}||\mathbf{y}_{i,t}-\psi(\mathbf{K}^{t-1}\mathbf{z}_{1})||^{2}. \tag{10}\] We consider a set of 94 irregularly sampled images from the forest of Orleans, each with its associated timestamp, over the same time interval as the training and validation data. We intentionally kept some partially cloudy data in this set.
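A sketch of this assimilation procedure, reusing the `KoopmanAE` interface of the earlier sketch: the model is frozen and only the latent initial state is optimised over the available time indices \(S\), as in Eq. (10). The optimiser settings are illustrative choices.

```python
import torch

def assimilate(model, y_obs, times, steps=500, lr=1e-2):
    """Find the latent initial state z1 whose propagation fits the data.

    y_obs : (len(S), 20) tensor of observations
    times : matching iterable of their 1-based time indices in S
    """
    for p in model.parameters():
        p.requires_grad_(False)            # the trained model stays frozen
    z1 = torch.zeros(model.K.shape[0], requires_grad=True)  # null init state
    opt = torch.optim.Adam([z1], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for y_t, t in zip(y_obs, times):
            z = z1
            for _ in range(t - 1):         # K^(t-1) z1, as in Eq. (10)
                z = z @ model.K.T
            loss = loss + ((model.decoder(z) - y_t) ** 2).sum()
        loss.backward()                    # backpropagates through the model
        opt.step()
    return z1.detach()
```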
First, we test our model in a classical data assimilation setting, where we check that it is able to interpolate from some of the data to recover the part of the data that was kept aside. We check that our method does better than a well-parameterized Cressman interpolation. The setup is the following: each image is retained with a probability of 0.5. We then interpolate on the retained images and use the MSE on the removed images as the performance measure. We perform a Gaussian Cressman interpolation with radius \(0.5,1,...,6.5,7\) time steps (i.e. 2.5 to 35 days) and compare the best result to the data assimilation method with our model. We repeat this experiment with 6 different sets of retained images, looking for the best-performing Cressman parameter at each iteration, and average the results. Our method always outperformed the best Cressman interpolation by a margin of at least 25%. The average MSE obtained by the Cressman interpolation was \(5.72\times 10^{-3}\), while that of our model was \(3.36\times 10^{-3}\). One can visually assess the quality of our interpolation in Figure 5, and see that the model was able to combine the information from different years to recover the correct periodic pattern, ignoring the noisiest data points.
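For reference, a sketch of the Gaussian-weighted temporal Cressman baseline used in this comparison; the radius `R` is expressed in time steps and is the free parameter swept in the experiment.

```python
import numpy as np

def cressman_interpolate(t_query, t_obs, y_obs, R=3.0):
    """Gaussian-weighted temporal interpolation: each queried date is a
    weighted average of the retained observations, weight exp(-dt^2/(2 R^2))."""
    t_obs = np.asarray(t_obs, dtype=float)
    y_obs = np.asarray(y_obs, dtype=float)          # (n_obs, channels)
    out = np.empty((len(t_query), y_obs.shape[1]))
    for i, t in enumerate(t_query):
        w = np.exp(-((t_obs - t) ** 2) / (2 * R**2))
        out[i] = (w[:, None] * y_obs).sum(0) / w.sum()
    return out
```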
We now perform forecasting using the same method as in Section IV-B. We keep the last 31 images to test the prediction performance, and perform data assimilation on the remaining images. Some results can be observed in Figure 6. Fig. 3: Long-term prediction of reflectances from time 0 for a single pixel from the forest of Fontainebleau, along with the groundtruth. Blue, orange and green respectively denote the B6, B7 and B8A bands. The vertical line marks the separation between the training and validation data. Fig. 4: Top: groundtruth images of Fontainebleau, corresponding to test times. Middle: predictions made by our model from the state at day 5. Bottom: correction of the middle images by a CNN trained on the pixelwise predictions up to day 1200. The colors result from a 3-dimensional principal component analysis (PCA) of the 10 spectral bands performed globally on all the Fontainebleau data. This is much more informative than an RGB composition. ### _Discussion of the results_ Our prediction performances are synthesized in Table I. Note that the Fontainebleau data is an interpolated regular time series while the Orleans data corresponds to irregularly-spaced data points with no temporal interpolation. One can observe that performing data assimilation with several data points is generally far more effective than performing a prediction from a single data point at time 0. Although all of our methods perform far worse on the data from the forest of Orleans than on the training area in the forest of Fontainebleau, the usage of data assimilation partially mitigates the shift in the data. One can conjecture that, although the pseudo-periodic pattern of the reflectance dynamics does not depend on the initial condition in the same way in the Orleans data as in the Fontainebleau data, the model can still identify a known pattern when fed with more data from an Orleans time series. Overall, backpropagating through a long time series prediction is easy because of the simplicity of our model: predicting one step ahead only costs one matrix-vector multiplication, and the most computationally intensive part of the prediction is actually the encoding and decoding of data. ## V Conclusion We presented an adaptation of the previously introduced method from [7] to real satellite image time series, in order to learn an unsupervised model which is able to perform several downstream tasks even using irregular data. Note that our assimilation experiment was a very simple proof of concept, since only the initial latent state was optimized using a frozen model, yet one could also imagine a variational data assimilation procedure in which the model parameters are allowed to vary. More generally, there are many downstream tasks in which our model might be of use, e.g. classification tasks in few-shot settings. A natural extension to this work would be to show the model's ability to learn from more difficult data, for example with a higher diversity of images, e.g. different crop types and urban environments, with diverse underlying dynamic patterns. One could also test the ability of our model to handle complex spatio-temporal missing data patterns. In particular, although we demonstrated the ability of our trained model to handle irregular test data, the training was still performed on regular data. A weakness of our method is that most of the computation is done pixelwise, and the spatial structure of the data is only used a posteriori through a CNN model. It might be of interest to encode some spatial information directly in the Koopman autoencoder. Other possible extensions include the ability to exploit a control variable or to provide uncertainties along with the predictions.
2301.00965
OccluMix: Towards De-Occlusion Virtual Try-on by Semantically-Guided Mixup
Image virtual try-on aims at replacing the cloth on a personal image with a garment image (in-shop clothes), and has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the characteristics of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth is warped to an unreasonable body part. Based on the in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts a sharpened semantic parsing on the try-on person. Aided by semantics guidance and pose priors, textures of various complexities are selectively blended with human parts in a copy-and-paste manner. Then, the Generative Module (GM) is utilized to synthesize the final try-on image and learn de-occlusion jointly. In comparison to state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
Zhijing Yang, Junyang Chen, Yukai Shi, Hao Li, Tianshui Chen, Liang Lin
2023-01-03T06:29:11Z
http://arxiv.org/abs/2301.00965v1
# OccluMix: Towards De-Occlusion Virtual Try-on by Semantically-Guided Mixup ###### Abstract Image virtual try-on aims at replacing the cloth on a personal image with a garment image (in-shop clothes), and has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the characteristics of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth is warped to an unreasonable body part. Based on the in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts a sharpened semantic parsing on the try-on person. Aided by semantics guidance and pose priors, textures of various complexities are selectively blended with human parts in a copy-and-paste manner. Then, the Generative Module (GM) is utilized to synthesize the final try-on image and learn de-occlusion jointly. In comparison to state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects. Deep Learning, Virtual Try-on, Occlusion Handling, Data Augmentation. ## I Introduction Virtual try-on is a popular application that transfers a desired in-shop clothing item onto a reference person. With the demands of e-business, virtual try-on has attracted rising attention. Although recent developments in try-on networks have enabled realistic cloth warping, generating well-fitting shapes and visually realistic try-on images [1, 2, 3, 4, 5, 6, 7], it remains a big challenge to locate and resolve the occlusion effect in the distorted try-on image. To investigate the occlusion effect in existing virtual try-on methods, we follow the pipeline of CP-VTON+ [8] to explore the results of two representative methods [2, 7]. In our experiments, ACGPN [2] stands for the parser-based methods and PF-AFN [7] stands for the parser-free methods. As shown in Fig. 1, in the first and second rows, the occlusion in the try-on image appears as the ghost of the previous garment. We denote this occlusion as Inherent-Occlusion. Typically, Inherent-Occlusion is caused by poor generalization and wrong human parsing. As shown in the third and fourth rows of Fig. 1, the occlusion effect is caused by the wrong shape warping of new clothes. We denote it as Acquired-Occlusion. Hence, we mainly categorize the occlusion problems into two forms (_i.e.,_ Inherent- and Acquired-Occlusion). The cloth warping module in the try-on workflow is easily misled by spatial transformation, and the occlusion effect becomes an obvious degradation when the human pose exhibits large variance. To this end, some pioneering synthesis-based models [1, 3, 9, 10] suffer from the above drawbacks and demonstrate limited image quality. In Fig. 2, we conducted a human annotation study to investigate the occlusion effect in virtual try-on methods. Specifically, 4064 images [8] were manually labeled with the existence as well as the location of occlusion. Fig. 1: On the Viton [1] dataset, the try-on results of the state-of-the-art models [2, 7] exhibit undesired occlusion from former and new clothes. To address the occlusion phenomenon, we categorize the majority of occlusions into two types (_i.e.,_ Inherent-Occlusion and Acquired-Occlusion). In the supplementary file, we provide massive occlusion samples to verify the Inherent- and Acquired- types.
The statistical results show that occlusion is still the main challenge in the try-on task, with a 20% occurrence rate. We further calculate the mean Fréchet Inception Distance (FID) value of each body part in the results. The arm and clothing parts account for large FID scores and spatial variance, which motivates us to improve the image quality by focusing on these challenging human parts. Fig. 2: (a) To investigate the proportion of occlusion samples, we count occlusion examples of PF-AFN and ACGPN results on the CP-VTON+ [8] dataset aided by human annotation. (b) The Fréchet Inception Distance (FID) scores of different body parts generated by PF-AFN [7] and ACGPN [2]. To tackle the aforementioned issues, we present a robust De-OCclusion framework for Virtual Try-on (DOC-VTON), which fully exploits the semantic layout of the try-on image. DOC-VTON adaptively performs a crop-and-paste operation [11, 12] for the generative module (GM) to implement de-occlusion. Specifically, DOC-VTON consists of three modules: i) the Cloth Warping Module (CWM), which warps in-shop clothing and its corresponding mask into the fitting shape by using appearance flow; ii) the Occlusion Mixup Module (OccluMix), which simulates different occlusion cases as OccluMix samples based on the semantic layout of the Sharpened Parsing Network (SPN); iii) the Generative Module (GM), which transfers the clothes of the OccluMix sample to the real person image, enabling final try-on image generation and de-occlusion jointly. With the above modules, DOC-VTON learns de-occlusion by adopting a semantically-guided mixup strategy in virtual try-on. In summary, the main contributions of our paper are as follows: * We present a comprehensive analysis of the occlusion effect of current virtual try-on algorithms. This is the first attempt to analyze this point, and it can facilitate further research on de-occlusion virtual try-on. * We propose a simple yet non-trivial occlusion mixup strategy for virtual try-on (OccluMix), which obtains challenging occluded try-on persons by blending textures of various complexities under semantic and posture guidance. * Compared with the general parsing pipeline, we investigate a Sharpened Parsing Network (SPN) to parse try-on images iteratively. SPN not only handles the irrational warping parts surgically, but also provides regions for OccluMix. * Extensive experiments and evaluations demonstrate that our method achieves state-of-the-art results on the VITON task, both qualitatively and quantitatively. In the remaining parts of this paper, Section II briefly surveys the occlusion phenomenon in existing virtual try-on approaches and the derivatives of data augmentation. Section III presents a comprehensive occlusion analysis of state-of-the-art methods. Section IV presents the DOC-VTON pipeline and detailed explanations of OccluMix. In Section V, we discuss the comparison between OccluMix and other data augmentation methods. In Section VI, we perform experiments to verify the effectiveness and efficiency of DOC-VTON by comparing it with existing state-of-the-art methods. Finally, we conclude our work with future research directions in Section VII. ## II Related Work **Virtual Try-on**. Existing deep learning-based methods for virtual try-on can be mainly categorized as 3D model-based approaches [13, 14, 15, 16] and 2D image-based ones [1, 2, 7, 9, 17, 18, 19].
Since the former methods require 3D measurements, they bring extra computation cost; 2D image-based approaches are thus more feasible for real-world scenarios. For example, VITON [1], CP-VTON [9], ACGPN [2], ClothFlow [3], DCTON [20] and RT-VTON [21] use human representations as input to generate a clothed person. Besides, WUTON [10] and PF-AFN [7] employ a parser-free approach. Among garment deformation methods, VITON [1], CP-VTON [9], ACGPN [2], DCTON [20] and VITON-HD [22] use thin-plate-spline (TPS) [23] transformation to warp the target cloth into a fitting shape. However, TPS transformation exhibits limited deformation ability. To this end, RT-VTON [21] proposed a semi-rigid deformation to align the warped cloth with the predicted semantics. ClothFlow [3] and PF-AFN [7] use appearance flow [24], which warps the target clothes smoothly onto the target person. As typical try-on pipelines, VITON and CP-VTON use rough shapes and pose maps to ensure generalization to arbitrary clothes. However, parser-based methods [1, 2, 9, 25] generate poor quality try-on images when parsing results become inaccurate. Recently, PF-AFN [7] proposed a pioneering parser-free knowledge distillation approach that gets rid of the interference from inaccurate segmentation. Nevertheless, the generated images of PF-AFN still encounter Inherent-Occlusion and Acquired-Occlusion. To this end, we propose a novel de-occlusion method for virtual try-on (DOC-VTON). DOC-VTON handles the misalignment between target clothes and the reference person, and reduces the ghost effect caused by previous clothes. **De-Occlusion**. Removing partial occlusion from the target object is a crucial computer vision task [26, 27, 28, 29, 30, 31]. Sail [32] proposes a novel self-supervised framework that tackles scene de-occlusion on real-world data without manual annotations. GAN-based methods [29] are used to inpaint the occluded region of the face. In the virtual try-on task, the occlusion problem also exists, with clothes covering up the human body. To avoid the occlusion phenomenon, parser-based methods [1, 2, 9, 25] use a human parser to understand the spatial layout of the body parts. Nevertheless, parser-based methods tend to generate poor quality try-on images with noticeable occlusion. Recently, PF-AFN [7] proposed a pioneering parser-free approach; however, the image quality still suffers from unsuitably warped clothes and limited generative capability. To address the above problems, we introduce the OccluMix strategy into the virtual try-on task. **Data augmentation**. Recently, several studies employ data augmentation (DA) [33, 34, 35, 36, 37, 38, 39] to enhance generalization ability. Mixup [12, 40] uses convex linear interpolation at the image level for data augmentation. CutMix [11] proposes to cut and paste a cropped area from an input image onto other images for data augmentation. Although Mixup and CutMix demonstrate practical improvements, they do not utilize image prior knowledge, such as saliency, semantics and optical flow, for guidance.
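To illustrate the copy-and-paste idea that such semantically-guided mixup builds on, here is a toy sketch in which a mask of a chosen body part gates which pixels receive an auxiliary clothing texture; this is our simplification for illustration, not the paper's module.

```python
import numpy as np

def semantic_paste(try_on, aux_cloth, part_mask):
    """Blend an auxiliary texture onto a try-on image inside a semantic mask.

    try_on, aux_cloth : (H, W, 3) float arrays of the same size
    part_mask         : (H, W) binary mask of the targeted body part
    """
    m = part_mask[..., None].astype(try_on.dtype)
    # Pixels inside the mask take the auxiliary texture (simulated occlusion);
    # pixels outside keep the original try-on image.
    return try_on * (1.0 - m) + aux_cloth * m
```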
Compared with Mixup and CutMix, it flexibly preserves the prior knowledge and solves the distortion problem caused by try-on images of different scales [38]. However, it still remains a big challenge to solve the occlusion phenomenon (_i.e.,_ Inherent- and Acquired-Occlusion) on the simple-scale image. Inspired by SuperMix [36], we investigate a goal-oriented data augmentation method using human parsing priors [41] for the try-on generation model. ## III Occlusion Analysis Existing state-of-the-art methods on 2D virtual try-on can be classified into parser-based approaches and parser-free approaches. However, both of them still exhibit the occlusion phenomenon in the try-on images. To investigate the occlusion effects in both approaches, we follow the pipeline of CP-VTON+ to explore the occlusion phenomenon on ACGPN and PF-AFN, where ACGPN represents the parser-based approach and PF-AFN represents the parser-free approach. We first explore the occlusion effect caused by twisted warped cloth. As shown in Fig. 4, if the warped results retain a twisted pattern, the generated results will inherit this pattern and express it in the form of occlusion. We denote this occlusion as Acquired-Occlusion, since it is caused by the twisted warping results. Fig. 4: Try-on images inherit the twisted pattern of the twisted warped cloth. Fig. 3: Analysis of Inherent-Occlusion. For parser-free try-on results, the ghost of the previous garment will remain no matter which garments are tried on. Compared to the parser-free method, the parser-based method will generate twisted images with failed parsing results. It may seem that the generator would produce clean try-on images given proper warping results; however, the occlusion effect also exists in the generative stage. As shown in Fig. 3, try-on results retain the ghost of the previous garment. For a parser-free generator, we find that the ghost remains no matter which garments are tried on. Since the parser-free generator needs to classify the try-on region of the reference images, we assume that the ghost is caused by poor generalization ability. In comparison with the parser-free generator, the parser-based generator will generate twisted images with failed parsing results. We denote this occlusion effect caused by previous garments as Inherent-Occlusion. Hence, we mainly demonstrate the occlusion problems in two forms (_i.e._, Inherent- and Acquired-Occlusion). ## IV Methodology The proposed DOC-VTON is composed of three modules, as shown in Fig. 5. First, the Cloth Warping Module is designed to warp the target clothing image onto a real image. Second, the OccluMix module progressively generates the mask of the body parts of the try-on image via semantic information, yielding the occlusion effect by utilizing a crop-and-paste strategy on the try-on image to generate OccluMix samples. Finally, the generative module synthesizes the try-on image. ### _Cloth Warping Module (CWM)_ Following the training pipeline of PF-AFN [7], we use the second-order constraint to better preserve the cloth characteristics, and the constraint is defined as follows: \[L_{sec}=\sum_{i=1}^{N}\sum_{p}\sum_{\pi\in N_{p}}Char(f_{i}^{p-\pi}+f_{i}^{p+ \pi}-2f_{i}^{p}), \tag{1}\] where \(f_{i}^{p}\) denotes the p-th point on the flow map of the i-th scale. \(Char\) is the generalized Charbonnier loss function [43]. \(N_{p}\) consists of the set of vertical, horizontal, and both diagonal neighborhoods around the p-th point. 
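To make Eq. (1) concrete, the following is a minimal NumPy sketch of the second-order smoothness constraint evaluated on a single scale of a dense appearance-flow map; the array shape, the Charbonnier exponent, and the epsilon value are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def charbonnier(x, eps=1e-6, alpha=0.45):
    # Generalized Charbonnier penalty: (x^2 + eps^2)^alpha.
    return (x ** 2 + eps ** 2) ** alpha

def second_order_constraint(flow):
    """Second-order smoothness of one flow scale, following Eq. (1).

    flow: array of shape (H, W, 2). For each interior point p and each
    direction pi (horizontal, vertical, both diagonals), penalize
    f[p - pi] + f[p + pi] - 2 f[p]. The paper additionally sums this
    over the N pyramid scales i.
    """
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    h, w, _ = flow.shape
    total = 0.0
    for dy, dx in directions:
        # Central region where both p - pi and p + pi stay inside the map.
        c = flow[1:h - 1, 1:w - 1]
        fwd = flow[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        bwd = flow[1 - dy:h - 1 - dy, 1 - dx:w - 1 - dx]
        total += charbonnier(fwd + bwd - 2.0 * c).sum()
    return total

flow = np.random.randn(64, 48, 2).astype(np.float32)
print(second_order_constraint(flow))
```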
CWM warps the target cloth into a fitting shape while also maintaining the details of the cloth. It performs well when the reference person stands in a simple posture. When the reference person stands in a complex posture, such as with a twisted torso and both hands blocking the front of the body, the warped cloth may cover part of the human body. ### _OccluMix_ #### Iv-B1 Sharpened Parsing Network (SPN) The sharpened parsing network (SPN) is proposed to refine the warped clothes as well as to generate the body parts (e.g., arms) of the person. Many previous works neglect the fact that accurate parsing results can correct an unreasonable warping process. To address this issue, a sharpened parsing mechanism is adopted to refine the detail distortion of human parsing during the try-on transformation. Specifically, suppose a person tries on new clothes; we define the masks of clothing and body parts as \(M_{c}\) and \(M_{w}\) (including head, arms and pants) in the original image, \(M_{c}^{s}\) and \(M_{w}^{s}\) as the masks of clothing and body parts in the try-on image, and \(M_{p}\) as the skeleton information of the reference person. Since \(M_{w}^{s}\) is absent, we use the cloth and torso priors together with the warped cloth to obtain it. Fig. 5: An illustration of our proposed method. OccluMix generates an augmented image by crop-and-pasting different clothing textures onto the challenging region of the try-on image. To simulate a challenging occlusion emergence on the try-on image, we need to identify the human components of the try-on image. We first obtain the parser-based input image \(X_{A}\), then use the Cloth Warping Module (CWM) to predict the warped cloth \(T_{C}^{W}\) and the warped clothing mask \(M_{C}^{W}\); In Step I, the Sharpened Parsing Generator (\(G\)) first multiplies the Clothing Mask \(M_{C}\) and \(\tilde{M}_{C}^{W}\) to get the strange area \(M_{e}\), then combines it with the body parts \(M_{w}\) to produce the potential location of body parts \(M_{w}^{p}\) (including head, arms, and pants). Then, \(G\) generates the rough mask of body parts \(M_{w}^{g}\) by using the Pose Map \(M_{p}\) and the potential location of body parts \(M_{w}^{p}\); In Step II, the Sharpened Parsing Restorer (\(R\)) refines the rough mask of the torso \(M_{t}^{o}\) to get the complete torso mask \(M_{t}^{c}\). In Step III, we use DensePose [42] to select a challenging segment \(Y_{A}^{r}\) to multiply with an auxiliary cloth \(X_{B}\) to obtain the texture occlusion \(C_{B}\). And we mix \(C_{B}\) with the try-on image \(X_{A}^{s}\) to get the OccluMix sample \(\tilde{X}\). Finally, we exploit a generative module to generate the try-on images \(\tilde{X}_{A}\) by utilizing the warped cloth information \(\hat{I}_{c}\), \(\hat{M}_{c}\) and \(\tilde{X}\). The formulation of \(M_{w}^{s}\) is defined as follows: \[M_{w}^{s}=M_{w}\odot(1-M_{c}^{s})\,, \tag{2}\] where \(\odot\) indicates element-wise multiplication. And \(M_{e}\) is the mask of strange fabric between \(M_{c}^{s}\) and \(M_{c}\), which indicates the potential location for generating the torso region. The formulation of \(M_{e}\) is: \[M_{e}=M_{c}^{s}\odot(1-M_{c})\,. \tag{3}\] Then, we get the potential location of body parts \(M_{w}^{p}\) in the try-on image by combining \(M_{e}\) and \(M_{w}^{s}\) as follows: \[M_{w}^{p}=M_{w}^{s}+M_{e}\,. \tag{4}\] 
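As a sanity check on the mask algebra of Eqs. (2)-(4), here is a minimal NumPy sketch with binary masks; the toy mask shapes and contents are assumptions for illustration only, and the final clip (to keep the sum binary) is an added assumption.

```python
import numpy as np

# Toy binary masks of shape (H, W): 1 inside the region, 0 outside.
H, W = 4, 4
M_w  = np.zeros((H, W), dtype=np.uint8); M_w[:, :2] = 1   # body parts (original image)
M_c  = np.zeros((H, W), dtype=np.uint8); M_c[:, 1:3] = 1  # clothing (original image)
M_cs = np.zeros((H, W), dtype=np.uint8); M_cs[:, 2:] = 1  # clothing (try-on image)

M_ws = M_w * (1 - M_cs)             # Eq. (2): body parts visible in the try-on image
M_e  = M_cs * (1 - M_c)             # Eq. (3): "strange" fabric region
M_wp = np.clip(M_ws + M_e, 0, 1)    # Eq. (4): potential location of body parts

print(M_wp)
```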
As shown in Fig. 6(a), since \(M_{w}^{p}\) and \(M_{w}\) are obtained, we can use the process \((M_{p},M_{w}^{p})\xrightarrow{G}\!M_{w}\) to let \(G\) generate the remaining masks of the torso from the potential location \(M_{w}^{p}\) under the supervision of the label \(M_{w}\). However, when the reference person stands in a twisted posture, the pernicious distortion of the target cloth may destroy the body details in \(M_{w}\). To address this problem, we introduce \(M_{d}\) from the Irregular Mask Dataset [44] and merge it with \(M_{c}\) to simulate the failure case \(M_{t}^{o}\), where some details of the body are lost. The formulation is defined as follows: \[M_{t}^{o}=M_{w}\odot(1-M_{d})+M_{c}\cup M_{d}\,. \tag{5}\] We then feed it into the restorer \(R\), where \((M_{t}^{o},M_{p})\xrightarrow{R}\!M_{w}\) is the process that refines the details of the body parts. Note that the training details of the restorer are presented in Fig. 6(b). As shown in Fig. 7, the detail distortion of the body parts in \(M_{o}\) is repaired by the Restorer. After refining the parsing mask of the try-on image, we can use it to mine the regions in which to crop-and-paste the texture occlusion, and to model the OccluMix data. Fig. 6: (a) The Generator is fed with the Pose Map \(M_{p}\) and the potential location \(M_{w}^{p}\) to generate the coarse mask of the body. (b) The Restorer is trained to refine the body mask. At this stage, we divide the training into two cases. Case 1 is encouraged to partially complete the mask of body parts. Case 2 prevents the Restorer from over-completing. Fig. 7: Effect of the semantic restoration component. When the reference person stands in a twisted posture, the Sharpened Parsing Generator provides wrong semantic information on try-on images; the restoration step can fix this issue. #### Iii-B2 Occlusion Mixup In this section, we describe OccluMix, a data augmentation (DA) strategy that is designed for the try-on task. A practical DA method for try-on needs to simulate the challenging occlusion and serve as a good regularizer for the try-on model. Literally, OccluMix overlays different clothes on the try-on images, enforcing the network to restore realistic details during the try-on transformation. In the top right of Fig. 1, the personal image retains the ghost of the complex texture when a person wearing complex clothes wants to put on a new cloth. To this end, we ensure that a certain percentage of complex textures exists in OccluMix. In our experiment, we use the Gray-Level Co-occurrence Matrix [45] to estimate the entropy of the clothing complexity. Besides, we divide the clothes into two categories (_i.e.,_ simple and complex) based on their texture complexities: \[ENT=-\sum_{i=1}^{k}\sum_{j=1}^{k}G(i,j)\log G(i,j), \tag{6}\] where \(G(i,j)\) represents the normalized occurrence of different gray-scale values. As shown in Fig. 8, the clothes on the left are categorized as complex textures, and those on the right as simple textures. Then we use the texture categorization to divide the training pairs of the reference person. \[T=\left\{\begin{array}{ll}1,&ENT\geq 2.5,\\ 0,&ENT<2.5,\end{array}\right. \tag{7}\] where \(T=1\) indicates that the clothes belong to the complex texture category and the others to the simple texture category. Let \(x\in\mathbb{R}^{W\times H\times C}\) and \(y\) denote a randomly sampled image and the corresponding label, respectively. Example \(A\) (_i.e.,_\((x_{A},y_{A})\)) is the training sample. And example \(B\) (_i.e.,_\((x_{B},y_{B})\)) is chosen from the complex texture set or the simple texture set according to the complex coefficient \(\lambda\). 
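To illustrate the texture-complexity measure of Eqs. (6)-(7), the following is a minimal NumPy sketch that builds a gray-level co-occurrence matrix for a single offset and thresholds its entropy at 2.5; the number of gray levels and the (0, 1) offset are assumptions, not specified by the excerpt.

```python
import numpy as np

def glcm_entropy(gray, levels=16, offset=(0, 1)):
    """Entropy of a gray-level co-occurrence matrix, following Eq. (6)."""
    # Quantize the image (values in [0, 1]) to `levels` gray levels.
    q = np.clip((gray * levels).astype(int), 0, levels - 1)
    dy, dx = offset
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]
    b = q[dy:, dx:]
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)
    G = glcm / glcm.sum()          # normalized occurrence G(i, j)
    nz = G[G > 0]                  # avoid log(0)
    return -(nz * np.log(nz)).sum()

def texture_category(gray):
    """Eq. (7): 1 for complex texture, 0 for simple texture."""
    return int(glcm_entropy(gray) >= 2.5)

gray = np.random.rand(64, 48)      # stand-in for a clothing image in [0, 1]
print(glcm_entropy(gray), texture_category(gray))
```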
As shown in Fig. 9, the goal of try-on mixup is to generate a new training sample \(\tilde{x}\) by combining two training samples \((x_{A},y_{A})\) and \((x_{B},y_{B})\). The formulations are as follows: \[\begin{split} C=y_{A}^{r}\odot y_{B}^{c},\\ \tilde{x}=C\odot x_{B}+(1-C)\odot x_{A},\end{split} \tag{8}\] where \(y_{A}^{r}\) is the selected region for occlusion mixing. We first count the _occlusion distribution_ of each human part in try-on images. DensePose [42] is then used to forward the results of the Sharpened Parsing Network, and \(y_{A}^{r}\) is selected w.r.t. the _occlusion distribution_. Besides, \(y_{B}^{c}\) is the clothing mask of the person \(x_{B}\), and \(C\) is the mask of the texture occlusion. As shown in Fig. 5, we mix the cropped clothes into the input data to generate the augmented images. Finally, we feed the augmented images into the generator to obtain the try-on images with the clothes on the real images. ### _Generative Module_ In this module, we adopt Res-UNet [3] as the backbone architecture of the generative module (GM). It can not only retain the characteristics of the warped clothes, but also keep the details of the human body parts. In the training phase, the parameters of GM are optimized by minimizing \(L_{g}\), as follows: \[L_{g}=\alpha_{l}L_{euc}+\alpha_{p}L_{per}, \tag{9}\] where \(L_{euc}\) is the pixel-wise L1 loss and \(L_{per}\) is the perceptual loss [46] that encourages the improvement of the try-on image visual quality. The formulations are as below: \[L_{euc}=\|I^{G}-I\|_{1}, \tag{10}\] \[L_{per}=\sum_{m}\|\phi_{m}(I^{G})-\phi_{m}(I)\|_{1}, \tag{11}\] where \(I^{G}\) and \(I\) are the generated and real image, respectively. And \(\phi_{m}\) indicates the \(m\)-th feature map in a VGG-19 [47] network pre-trained on ImageNet [48]. ## V Discussion ### _Differences from Mixup and its derivatives._ Mixup and its derivatives mix image contents within a random image patch, which does not carefully consider either the spatial or the semantic information of the image patch. From Table I, we find that these random patches can degrade the performance on FID. In comparison with the traditional Mixup pipelines, OccluMix performs cut-and-paste between fake images and other reference images under semantic guidance, which overcomes the defect of random patch mixing. Fig. 8: The clothes on the left are complex, involving dots, stripes, and various other textures, while the clothes on the right are simpler with less texture. Fig. 9: Process of Occlusion Mixup. We attempt to model occlusion in try-on images; however, it is hard to force the texture occlusion to be gathered in a typical region to model real occlusion. To tackle this challenge, we utilize the Sharpened Parsing Network (SPN) in OccluMix. ### _What does the model learn with Occlusion Mixup?_ Similar to other DA methods that prevent models from making over-confident predictions, OccluMix prevents the model from failing to distinguish between simple and distorted images, and helps it to alleviate the occlusion effect. This can be demonstrated in Fig. 10. Note that the superiority of OccluMix is reflected not only in the visual coherence of the try-on images, but also in the decrease of the residual intensity map. We hypothesize that this enhancement is due to the balanced distribution of fake images. Now the model has learned to distinguish between simple and distorted data, and this leads the model to learn "where" and "how" it should address the occlusion effects. 
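The crop-and-paste composition of Eq. (8) amounts to a masked blend of two images; below is a minimal NumPy sketch, where the toy image shapes and random masks are assumptions for illustration.

```python
import numpy as np

def occlu_mix(x_a, x_b, y_a_region, y_b_cloth):
    """Eq. (8): paste the clothing texture of x_b onto a selected region of x_a.

    x_a, x_b:    images of shape (H, W, 3).
    y_a_region:  binary mask of the selected (occlusion-prone) region of A.
    y_b_cloth:   binary clothing mask of person B.
    """
    C = (y_a_region * y_b_cloth)[..., None]   # texture-occlusion mask
    return C * x_b + (1.0 - C) * x_a          # OccluMix sample x~

H, W = 64, 48
x_a = np.random.rand(H, W, 3)                 # try-on image of person A
x_b = np.random.rand(H, W, 3)                 # auxiliary image of person B
y_a_region = (np.random.rand(H, W) > 0.5).astype(np.float64)
y_b_cloth  = (np.random.rand(H, W) > 0.5).astype(np.float64)
print(occlu_mix(x_a, x_b, y_a_region, y_b_cloth).shape)
```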
## VI Experiments In this section, we first compare DOC-VTON with state-of-the-art methods. Then, a detailed ablation study is made to analyze each component. Finally, we provide massive occlusion samples to verify Inherent- and Acquired-Occlusion and further demonstrate the negative effects of occlusion on the human visual system. Besides, to further validate the performance of the proposed DOC-VTON, we perform a visual comparison of the occlusion images (generated by PF-AFN) and the clean images (generated by our DOC-VTON). ### _Dataset_ Experiments are conducted on the VITON dataset [1] that is used in CP-VTON [9], CP-VTON+ [8], ACGPN [2] and PF-AFN [7]. VITON contains a training set of 14221 image pairs and a testing set of 2032 image pairs, each of which has a target clothing image and a woman's photo with a resolution of 256 \(\times\) 192. VITON-HD [22] consists of 11,647 groups for the training set and 2032 groups for the testing set, each image with a resolution of 1024 \(\times\) 768. All of our evaluations and visualizations are performed on the testing set. ### _Implementation Details_ #### Vi-B1 Architecture DOC-VTON contains CWM, OccluMix and GM. The structure of CWM consists of a dual pyramid feature extraction network (PFEN) [50] and a progressive appearance flow estimation network (AFEN) [7]. The generators of the Sharpened Parsing Network in OccluMix have the same structure as U-Net [51]. And the structure of GM is Res-UNet [3]. In our experiments, the resolution of images is 256 \(\times\) 192 for VITON, and 1024 \(\times\) 768 for the VITON-HD dataset. #### Vi-B2 Training We train the Sharpened Parsing Generator (\(G\)) and Restorer (\(R\)) for 200 epochs. Due to the large spatial transformation between arms and clothes, we use a hard-example mining strategy. In the first 100 training epochs, we selectively train our model with challenging body parts (e.g., arms). In the last 100 epochs, we train our model on all the body parts (including head, arms and pants) together. Then, we use OccluMix to generate samples for GM. All of them use an initial learning rate of 0.0005, and the network is optimized by the Adam optimizer with the hyper-parameters \(\beta_{1}\) = 0.5 and \(\beta_{2}\) = 0.999. #### Vi-B3 Testing The testing process follows the same procedure as training. The reference person images, target clothes, human parsing results, and human pose estimations are given as input to DOC-VTON to generate the try-on image. ### _Qualitative Results_ To further validate the performance of the proposed DOC-VTON framework, we perform a visual comparison of our proposed method with CP-VTON [9], ClothFlow [3], ACGPN [2] and PF-AFN [7]. \begin{table} \begin{tabular}{c|c} \hline Method & FID \(\downarrow\) (\(\delta\)) \\ \hline PF-AFN & 10.09 (+0.00) \\ \hline Cutout [49] & 10.36 (-0.27) \\ CutMix [11] & 10.31 (-0.22) \\ Mixup [12] & 10.17 (-0.08) \\ CutBlur [35] & 9.94 (+0.15) \\ \hline OccluMix & 9.66 (+0.43) \\ \hline \end{tabular} \end{table} TABLE I: FID comparison with Mixup and its derivatives. We report the performance of the baseline model (PF-AFN [7]) trained on the VITON [1] dataset. \(\delta\) denotes the performance gap between with and without augmentation. Fig. 10: Qualitative effects of OccluMix during virtual try-on inference. (a) OccluMix successfully generates clean try-on images, while the baseline generates try-on images with occlusion from previous clothes. (b) \(\bigtriangleup\) is the absolute residual intensity map between the try-on image and the ground truth. From the comparison on \(\bigtriangleup\), it can be seen that OccluMix resolves the ghost of the clothing texture. In both (a) and (b), OccluMix generates try-on images with better visual coherence. Zoom in for a better view. 
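A minimal PyTorch sketch of the training schedule described above (200 epochs, hard-example mining on challenging parts in the first half, Adam with learning rate 0.0005 and betas (0.5, 0.999)); the tiny parsing head, the synthetic data, and the L1 parsing loss are hypothetical placeholders, not the paper's networks.

```python
import torch

# Stand-in parsing head; only the optimizer settings and the two-phase
# schedule follow the paper, everything else is a placeholder.
parser_net = torch.nn.Conv2d(3, 8, 3, padding=1)
optimizer = torch.optim.Adam(parser_net.parameters(),
                             lr=0.0005, betas=(0.5, 0.999))

def synthetic_batch():
    images = torch.rand(2, 3, 64, 48)
    parts = {"arms": torch.rand(2, 8, 64, 48),   # challenging parts only
             "all":  torch.rand(2, 8, 64, 48)}   # head, arms, pants together
    return images, parts

for epoch in range(200):
    hard_mining = epoch < 100        # first 100 epochs: hard-example mining
    images, parts = synthetic_batch()
    pred = parser_net(images)
    target = parts["arms"] if hard_mining else parts["all"]
    loss = torch.nn.functional.l1_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```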
#### Iv-C1 Results on VITON As shown in the first and second rows of Fig. 11, when a person strikes a complex posture, such as standing with one arm raised around the face, the occlusion of the target cloth occurs on the arms. In such cases, the baseline models all fail to handle the warping process, leading to distorted arm images or broken sleeves. The warping methods of the baseline models fail to restore the severe non-rigid deformation, due to the limited degrees of freedom in TPS [23] or the misalignment in the Appearance Flow [3]. In the last row, images generated by the baseline methods contain obvious artifacts, as CP-VTON [9], ClothFlow [3] and ACGPN [2] suffer from cluttered texture, boundary-blurring and color-mixture problems. Moreover, they are vulnerable to segmentation errors as they heavily rely on parsing results to drive image generation. Although PF-AFN [7] drives image generation without using parsing results, it does not perform well in generating the missing body parts. Furthermore, when a huge displacement exists between the target clothes and the original clothes (e.g., the person wears long-sleeve clothes while the target clothes are short sleeves), PF-AFN [7] fails to generate arms at the cuffs of long sleeves, since the model does not understand the shape of the clothes thoroughly. Consequently, state-of-the-art models are less robust in handling some special cases (e.g., cuffs of long sleeves, complex clothing texture). In comparison, the proposed DOC-VTON performs a realistic virtual try-on, which simultaneously handles the warping process to avoid the pernicious occlusion and preserves the details of both the target clothes and the human body parts. Benefiting from OccluMix, our model is robust in generating non-labeled body parts (e.g., arms, hands, and fingers) from the original cloth. All qualitative results of DOC-VTON clearly verify its superiority against CP-VTON, ClothFlow, ACGPN and PF-AFN. #### Iv-C2 Results on VITON-HD To verify the generalization of our method, we visualize the results on the VITON-HD [22] dataset. Since artifacts at higher resolution are more obvious, it is more challenging to generate highly realistic try-on results. As shown in Fig. 12, the results demonstrate that our DOC-VTON is effective in generating high-quality images on the VITON-HD dataset. ### _Quantitative Evaluation_ For virtual try-on, the try-on image is generated from a target cloth and a reference person image. Since the ground truth of the try-on image is absent, we cannot use point-to-point indicators (e.g., SSIM, PSNR and LPIPS) that require computation with labels. In this paper, we adopt the Frechet Inception Distance (FID) [52] to measure the diversity of the try-on images. A lower FID score indicates a higher quality of the results. Besides, the Inception Score (IS) [53] is not used, since Rosca et al. [54] have pointed out that applying the IS to models trained on datasets other than ImageNet gives misleading results. Table II lists the FID scores of the try-on results for CP-VTON [9], CP-VTON+ [8], ClothFlow [3], ACGPN [2], DCTON [20], RT-VTON [21], PF-AFN [7] and our DOC-VTON on the VITON dataset. 
Fig. 11: Visual comparison on the VITON dataset. Compared with four state-of-the-art try-on methods [1, 2, 7, 9], our model generates more realistic try-on images. **With the proposed de-occlusion strategy, our approach not only processes the irrational parts of the warped clothes, but also clearly avoids the ghost of former clothes.** Fig. 12: Extensive visual comparisons on the VITON-HD dataset. From diverse perspectives, our approach generates high-quality images. Our proposed DOC-VTON outperforms the other methods, which indicates that DOC-VTON can improve the perceptual quality of try-on images, handle large misalignment between clothes and person, and synthesize realistic try-on results. ### _Human Perception Study_ Subtle changes (e.g., removing partial occlusion or ghosts) in try-on images play an important role in human visual coherence. Since FID is insensitive to subtle changes in synthetic images, it cannot fully demonstrate the effectiveness of our method. We further conduct two user studies by recruiting 25 volunteers. For the first task, our goal is to verify the superiority of the individual methods. Specifically, CP-VTON [9], ClothFlow [3], ACGPN [2], PF-AFN [7], and DOC-VTON generate the try-on images from 300 specified reference images respectively. Each volunteer is asked to rank the methods in each group of images. The left image of Fig. 13 demonstrates a clear advantage: our method ranks first in 61.83\(\%\) of the cases. The second task is to compare our method with the other methods one by one. We divide the previous synthetic images into five a/b groups (_i.e.,_ a is our method and b is another method). Each volunteer is asked to choose the one with better visual quality. As shown in the right image of Fig. 13, our DOC-VTON is always rated better than the other methods. As shown in Fig. 13, our DOC-VTON achieves the highest voter turnout in both tasks. This verifies the great superiority of DOC-VTON over the other methods. The human perception study demonstrates the effectiveness of the proposed method in removing pernicious occlusions and improving the visual quality of the try-on task. ### _Ablations_ In this section, we analyze each component of OccluMix that contributes to the robustness of the model. _Analysis of Complex Coefficient \(\lambda\)_. As discussed above, we introduce the OccluMix scheme in the training stage. To validate the effectiveness of OccluMix, we design a baseline method (_i.e.,_ 'w/o OccluMix') that is finetuned on the original model. As shown in Fig. 14, 'w/o OccluMix' suffers a 0.08 FID drop when the OccluMix scheme is removed. To analyze the OccluMix scheme continuously, we conduct experiments by setting \(\lambda\) from 0 to 1, with intervals of 0.1. The continuous sampling of \(\lambda\) spans the mixup paradigm from simple to complex. As shown in Fig. 14, we find that the model shows a trivial result when the simplest mixup pattern is used (\(\lambda=0.0\) indicates that DOC-VTON only uses simple textures for OccluMix). We also adopt a balanced distribution with \(\lambda=0.5\), which indicates that the two texture complexities are used for OccluMix in equal proportion. As shown in Fig. 14, the balanced distribution shows the best performance. Since an unbalanced mixup distribution leads to an obvious performance drop, we adopt a balanced complex coefficient in OccluMix. 
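A minimal sketch of how the complex coefficient \(\lambda\) can control the sampling of the auxiliary example \(B\) between the complex and simple texture sets of Eq. (7); the texture pools below are hypothetical stand-ins, and the probability-based interpretation of \(\lambda\) is an assumption consistent with the ablation.

```python
import random

def sample_partner(complex_set, simple_set, lam=0.5, rng=random):
    """Pick example B from the complex set with probability `lam`,
    otherwise from the simple set. lam = 0.5 corresponds to the
    balanced distribution adopted in the ablation above."""
    pool = complex_set if rng.random() < lam else simple_set
    return rng.choice(pool)

# Hypothetical texture pools, labeled by their Eq. (7) category.
complex_set = ["striped_shirt.jpg", "dotted_blouse.jpg"]
simple_set = ["plain_tee.jpg", "solid_top.jpg"]
print([sample_partner(complex_set, simple_set, lam=0.5) for _ in range(4)])
```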
_Ablations of OccluMix._ We show ablation studies on the effects of using the OccluMix scheme in the training stage. Meanwhile, 'w/o OccluMix' has the same network architecture as 'w/ OccluMix', except that the training process abandons the OccluMix strategy and SPN. \begin{table} \begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{Method} & Region of Arm & Warped Clothes & Try-on Results \\ \cline{2-4} & \multicolumn{3}{c}{FID \(\downarrow\)} \\ \hline CP-VTON & 24.42 & 33.17 & 24.43 \\ CP-VTON+ & 22.36 & 30.21 & 21.08 \\ ACGPN & 16.62 & 24.95 & 15.67 \\ ClothFlow & 18.42 & 22.50 & 14.43 \\ DCTON & 13.27 & 30.18 & 14.82 \\ RT-VTON & 12.70 & 20.59 & 11.66 \\ PF-AFN & 12.86 & 18.67 & 10.09 \\ \hline Ours & **11.14** & **18.18** & **9.54** \\ \hline \end{tabular} \end{table} TABLE II: The FID scores of different methods on the VITON dataset. Fig. 14: Ablation study of the complex coefficient \(\lambda\). Fig. 13: We make an in-depth quality assessment via a user study by recruiting 25 volunteers to complete two tasks. The left image is the quality ranking of each method. The right image is the comparison between DOC-VTON and the other methods by an A/B test. In the right image, the label A denotes the percentage where our DOC-VTON is considered better than the compared method, and the label B denotes the percentage where the compared method is considered better than our DOC-VTON. As shown in Fig. 10, we notice that the try-on images generated by the model trained with the OccluMix scheme outperform those of the plain model. With OccluMix, the model learns where to generate realistic body parts. As depicted in Table III, 'w/ OccluMix' achieves a 0.14 FID gain compared with 'w/o OccluMix'. _Sharpened Parsing Network._ We show ablation studies on the effects of the Sharpened Parsing Network (SPN). Since the result of SPN includes semantic information about the body parts of a try-on image, we can apply it to guide the warping process when the target clothes produce unreasonable distortion on the human body (e.g., arms, hands). As shown in Fig. 15, with the guidance of SPN, the warped clothes overcome the complex spatial interactions and the distortion between the rough warped clothes and the body parts. Otherwise, the rough shape of the warped clothes will cause body distortion. As depicted in Table III, 'w/ SPN' achieves a 0.12 FID gain compared with 'w/o SPN'. ## VII Conclusion and limitation In this paper, we have introduced Occlusion Mixup (OccluMix), a new DA method and strategy for training a stronger try-on model. We proposed a novel de-occlusion approach in a data augmentation manner, which enables our model to remove partial occlusion and produce realistic try-on images. Extensive evaluations clearly verify the obvious superiority of DOC-VTON over the state-of-the-art methods, with less occlusion effect. Though DOC-VTON addresses the occlusion problem on specific try-on datasets, it still shows limited performance on out-of-distribution (OOD) images. Thus, DOC-VTON may suffer restrictions of light, posture, and background conditions. In future work, we will continue to apply unsupervised 2D-to-3D transformation in DOC-VTON to develop an OOD virtual try-on framework. In addition, since our method can be used not only to remove occlusions but also to expose body parts, we have declared a BSD license in the open-source code to mitigate the potential social implications.
2306.15471
Dynamical realization of the small field inflation of Coleman-Weinberg type in the post supercooled universe
The small field inflation (SFI) of Coleman-Weinberg (CW) type suffers from precise tuning of the initial inflaton field value to be away from the true vacuum one. We propose a dynamical trapping mechanism to solve this problem: an ultra-supercooling caused by an almost scale-invariant CW potential traps the inflaton at the false vacuum, far away from the true vacuum dominantly created by the quantum scale anomaly, and allows the inflaton to dynamically start the slow-roll down due to a classical explicit-scale breaking effect. To be concrete, we employ a successful CW-SFI model and show that the proposed mechanism works consistently with the observed bounds on the inflation parameters. The proposed new mechanism thus provides new insights for developing small field inflation models.
He-Xu Zhang, Hiroyuki Ishida, Shinya Matsuzaki
2023-06-27T13:45:44Z
http://arxiv.org/abs/2306.15471v2
# Dynamical realization of the small field inflation in the post supercooled universe ###### Abstract The small field inflation (SFI) of Coleman-Weinberg (CW) type suffers from precise tuning of the initial inflaton field value to be away from the true vacuum one. We propose a dynamical trapping mechanism to solve this problem: an ultra-supercooling caused by an almost scale-invariant CW potential traps the inflaton at the false vacuum, far away from the true vacuum dominantly created by the quantum scale anomaly, and allows the inflaton to dynamically start the slow-roll down due to a classical explicit-scale breaking effect. To be concrete, we employ a successful CW-SFI model and show that the proposed mechanism works consistently with the observed bounds on the inflation parameters. The proposed new mechanism thus provides new insights for developing small field inflation models. ## I Introduction Inflationary cosmology provides an elegant solution to the horizon and flatness problems, while also offering a mechanism for the generation of the primordial density perturbations that seed the formation of structures. Among various inflation models, a class of small field inflation (SFI) based on the potential of Coleman-Weinberg (CW) type [1], called the CW-SFI [2], is an attractive scenario because the related quantum scale anomaly could also be linked to the scale generation mechanism for the Standard Model, possibly together with beyond-the-Standard-Model sectors. However, the CW-SFI possesses an intrinsic problem: in order to yield a sufficiently large e-folding number consistently with the observed cosmic microwave background fluctuations, the inflaton field is required to start the slow-roll away from the true vacuum, close to the top of the potential at or around the false vacuum. This is a sort of fine-tuning problem, which calls for a convincing mechanism to trap the inflaton at or around the false vacuum and to dynamically trigger the start of the slow-roll down to the true vacuum. The problem is simply linked to the scale invariance around the origin (the false vacuum) of the CW type potential, which is necessarily far away from the true vacuum created by the quantum scale anomaly. In the literature [3], this intrinsic fine-tuning problem has been recapped, and a mechanism to trap the inflaton around the false vacuum has been proposed, in which the trapping dynamically works due to the particle number density (like a plasma or a medium) created by preheating [4; 5; 6; 7; 8] (for reviews, see, e.g., [8; 9; 10]). In this paper, we propose an alternative dynamical trapping mechanism. It is triggered by the ultra-supercooling intrinsic to the classical scale invariance and a possible explicit scale-breaking effect on the CW-SFI, where the latter also plays a crucial role in being fully consistent with the observational bounds on the cosmological inflation parameters, as discussed in [11; 12; 13]. An ultra-supercooling takes place due to the delayed decay of the false vacuum, equivalently, the late tunneling interfered with by the Hubble friction. Thus, even far below the critical temperature of the first-order CW phase transition, the inflaton field keeps being trapped around the false vacuum until the thermally created potential barrier becomes ineffective. 
As the universe cools, the additional explicit scale-breaking term, linear in the inflaton field (with a negative slope at the origin), shifts the trapping place closer to the true vacuum while holding the inflection point. Immediately after the inflection point goes away, the inflaton is allowed to start the slow-roll down to the true vacuum, which is driven by the linear term of the explicit scale breaking. See Fig. 2. To demonstrate how the proposed trapping mechanism works in practice, we employ a referenced CW-SFI model [14; 13] which can be thought of as a low-energy description of many-flavor QCD with a composite dilaton as a scalon [15], where the thermal phase transition and the bounce solution relevant to the supercooling are explicitly evaluated. We then show that the trapping mechanism is indeed operative, consistently with the observational bounds on the cosmological inflation parameters. ## II Scale invariant linear sigma model As discussed in [14], the CW-type potential can be realized in the view of a linear sigma model with the classical scale symmetry along the flat direction [15]. This is thought of as an effective theory of an underlying large \(N_{f}\) QCD (the so-called large \(N_{f}\) walking gauge theory), and is compatible with the CW-SFI as shown in [13]. We start with a review of the literature [14] and momentarily employ the linear sigma model based on the chiral \(U(N_{f})_{L}\times U(N_{f})_{R}\) symmetry to derive the CW-type potential for the scalon [15] arising as the \(U(N_{f})\) singlet scalar meson. The linear sigma model Lagrangian with the classical scale invariance takes the form \[{\cal L}={\rm Tr}\left[\partial_{\mu}M^{\dagger}\partial^{\mu}M\right]-\lambda _{1}({\rm Tr}\left[M^{\dagger}M\right])^{2}-\lambda_{2}{\rm Tr}\left[(M^{ \dagger}M)^{2}\right]\,. \tag{1}\] The linear sigma field \(M\) is decomposed into \(N_{f}^{2}\) scalar mesons and \(N_{f}^{2}\) pseudoscalar mesons (denoted as \(s^{a}\) and \(p^{a}\), respectively): \[M=\sum_{a=0}^{N_{f}^{2}-1}\,(s^{a}+ip^{a})\,T^{a}\,, \tag{2}\] with \(T^{0}=\frac{1}{\sqrt{2N_{f}}}\mathbb{I}_{N_{f}\times N_{f}}\) and \(T^{i}\) being generators of \(SU(N_{f})\,(i=1,\cdots,N_{f}^{2}-1)\) normalized as \({\rm Tr}[T^{a}T^{b}]=\delta^{ab}/2\). The Lagrangian is invariant under the \(U(N_{f})_{L}\times U(N_{f})_{R}\) chiral transformation of \(M\), \[M\to g_{L}\cdot M\cdot g_{R}^{\dagger},\quad g_{L},g_{R}\in U(N_{f})\,. \tag{3}\] The \(M\) is assumed to develop its vacuum expectation value (VEV) along the \(U(N_{f})\) singlet direction, i.e., \(s^{0}\), which reflects the underlying large \(N_{f}\) QCD nature as a vectorlike gauge theory. Through the analysis of the renormalization group (RG) equations, the Gildener-Weinberg (GW) mechanism [15] tells us that if one takes the condition \(\lambda_{1}=-\lambda_{2}/N_{f}\) at some RG scale \(\mu_{\rm GW}\)[14; 16], there exists a flat direction in the tree-level potential along which \(V_{0}\) identically vanishes and a massless scalar emerges (dubbed the scalon), and along which perturbation theory can be used. Thus, the radiative corrections along the flat direction develop a nontrivial vacuum away from the origin (the false vacuum), as a consequence of the scale anomaly associated with the introduced RG scale. 
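As a quick consistency check (not spelled out in the text), one can verify the flat-direction condition by evaluating the tree-level potential along the singlet direction \(M=T^{0}s^{0}\); the short derivation below uses only Eqs. (1)-(2) and the stated normalization of \(T^{0}\).

```latex
% Along M = T^0 s^0, with Tr[(T^0)^2] = 1/2 and (T^0)^2 = I/(2 N_f):
%   Tr[M^\dagger M]     = (s^0)^2 / 2,
%   Tr[(M^\dagger M)^2] = (s^0)^4 / (4 N_f),
% so the tree-level potential reduces to
\[
  V_0(s^0)
  = \lambda_1\,\frac{(s^0)^4}{4} + \lambda_2\,\frac{(s^0)^4}{4N_f}
  = \frac{(s^0)^4}{4}\left(\lambda_1 + \frac{\lambda_2}{N_f}\right),
\]
% which vanishes identically once the Gildener--Weinberg condition
% \lambda_1 = -\lambda_2/N_f is imposed, making s^0 the flat direction.
```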
With a suitable renormalization condition, the one-loop potential \(V_{1}\) in the present linear sigma model can thus be calculated as [14; 16] \[V_{1}(M)=\frac{1}{64\pi^{2}}\sum_{a=0}^{N_{f}^{2}-1}\left(m_{s^{a}}^{4}(M) \left(\ln\frac{m_{s^{a}}^{2}(M)}{\mu_{{}_{\rm GW}}^{2}}-\frac{3}{2}\right)+m _{p^{a}}^{4}(M)\left(\ln\frac{m_{p^{a}}^{2}(M)}{\mu_{{}_{\rm GW}}^{2}}-\frac{ 3}{2}\right)\right)+\epsilon_{0}\,, \tag{4}\] where \(\epsilon_{0}\) is a constant vacuum energy. \(m_{s^{a}}^{2}\) and \(m_{p^{a}}^{2}\) are the mass functions for scalars and pseudoscalars: \[m_{s^{a}}^{2}=\frac{\partial^{2}V_{0}(M)}{\partial(s^{a})^{2}}\,,\qquad m_{p^ {a}}^{2}=\frac{\partial^{2}V_{0}(M)}{\partial(p^{a})^{2}}\,. \tag{5}\] By means of the chiral rotation, it is possible to choose \(s^{0}\) to be the flat direction as \[\langle M\rangle=T^{0}\langle s^{0}\rangle=\frac{1}{\sqrt{2N_{f}}}\mathbb{I} \cdot\langle s^{0}\rangle\,. \tag{6}\] Then \(m_{s^{a}}^{2}\) and \(m_{p^{a}}^{2}\) can be expressed as \[m_{s^{0}}^{2}(s^{0})=0\,,\qquad m_{s^{i}}^{2}(s^{0})=\left(\lambda_{1}+\frac{3\lambda_{2}}{N_{f}}\right)(s^{0})^{2}=\frac{2\lambda_{2}}{N_{f}}(s^{0})^{2}\,,\] \[m_{p^{0}}^{2}(s^{0})=0\,,\qquad m_{p^{i}}^{2}(s^{0})=0\,, \tag{7}\] where the flat direction condition \(\lambda_{1}+\lambda_{2}/N_{f}=0\) has been used. At this point, two types of Nambu-Goldstone (NG) bosons are manifest: one is the scalon, \(s^{0}\), associated with the spontaneous breaking of the scale symmetry along the flat direction, while the others are the NG bosons, \(p^{a}\), of the spontaneous chiral breaking. Accordingly, the effective potential for the scalon \(s^{0}\) is given by \[V_{\rm eff}(s^{0})=\frac{N_{f}^{2}-1}{64\pi^{2}}m_{s^{i}}^{4}(s^{0})\left(\ln \frac{m_{s^{i}}^{2}(s^{0})}{\mu_{{}_{\rm GW}}^{2}}-\frac{3}{2}\right)+\epsilon _{0}\,. \tag{8}\] We introduce an explicit chiral and scale-breaking term to the potential, \[-cs^{0}\,, \tag{9}\] which, in the sense of the underlying large \(N_{f}\) QCD, corresponds to the current mass term for the hidden/dark quarks, and hence renders the chiral NG bosons (\(p^{a}\)) pseudo-NG bosons. Then the potential of \(s^{0}\) in Eq.(8) gets shifted as \[V_{\rm eff}(s^{0})=-c\,s^{0}+\frac{N_{f}^{2}-1}{64\pi^{2}}m_{s^{i}}^{4}(s^{0}) \left(\ln\frac{m_{s^{i}}^{2}(s^{0})}{\mu_{{}_{\rm GW}}^{2}}-\frac{3}{2} \right)+\epsilon_{0}\,, \tag{10}\] where we have kept the leading order terms in the perturbation series of small \(c\), so that the \(p^{a}\) and \(s^{0}\) loop contributions have been dropped. The stationary condition for this modified effective potential is as follows: \[0=\left.\frac{\partial V_{\rm eff}(s^{0})}{\partial s^{0}}\right|_{s^{0} \rightarrow\langle s^{0}\rangle}\] \[=-c+\frac{\lambda_{2}^{2}}{4\pi^{2}}\frac{N_{f}^{2}-1}{N_{f}^{2}}\langle s^{0}\rangle^{3} \left(\ln\frac{m_{s^{i}}^{2}(\langle s^{0}\rangle)}{\mu_{{}_{\rm GW}}^{2}}-1 \right)\,. \tag{11}\] Solving this condition for \(\ln(m_{s^{i}}^{2}(\langle s^{0}\rangle)/\mu_{{}_{\rm GW}}^{2})\), the VEV of \(s^{0}\) is related to the RG scale \(\mu_{\rm GW}\) as a consequence of the dimensional transmutation: \[\mu_{{}_{\rm GW}}=\sqrt{\frac{2\lambda_{2}}{N_{f}}}\,\langle s^{0}\rangle\, \exp\left[-\frac{2\pi^{2}c\,N_{f}^{2}}{\lambda_{2}^{2}(N_{f}^{2}-1)\langle s^{0}\rangle^{3}}-\frac{1}{2}\right]\,. \tag{12}\] ## III Matching with the Walking-Dilaton Inflaton Potential As argued in the literature [14], the scalon potential in Eq.(10) can be regarded as the composite dilaton potential arising as the nonperturbative scale anomaly in the underlying walking (almost scale-invariant) gauge theory, i.e., large \(N_{f}\) QCD. 
In that case, the mesonic loop corrections (of \({\cal O}(1/N_{c})\) in the large \(N_{c}\) expansion) along the flat direction are matched with the nonperturbative scale anomaly term (\(\sim g^{2}G_{\mu\nu}^{2}={\cal O}(1/N_{c})\)) which, in terms of the walking dilaton effective theory [17], takes the CW-type potential form as well. Including the explicit chiral-scale breaking term, the potential of the walking dilaton inflaton (\(\chi\)) takes the form [13] \[V(\chi)=-\frac{C}{2N_{f}}\chi\,{\rm Tr}\left[U+U^{\dagger}\right]+\frac{ \lambda_{\chi}}{4}\chi^{4}\left(\ln\frac{\chi}{v_{\chi}}+A\right)+V_{0}\,, \tag{13}\] with [13] \[U=e^{2{\rm i}\pi^{i}T^{i}/f_{\pi}}\,,\quad C=\frac{N_{c}N_{f}m_{ \pi}^{2}m_{F}^{2}}{8\pi^{2}v_{\chi}}\,, \tag{14}\] \[\lambda_{\chi}\simeq\frac{16N_{c}N_{f}}{\pi^{4}}\left(\frac{m_{F} }{v_{\chi}}\right)^{4}\,,\quad A=-\frac{1}{4}+\frac{C}{\lambda_{\chi}v_{\chi} ^{3}}\,, \tag{15}\] where \(V_{0}\) denotes a constant vacuum energy; \(m_{\pi}\) and \(f_{\pi}\) are the pion mass and the pion decay constant in the large \(N_{f}\) walking gauge theory; \(m_{F}\) is the fermion dynamical mass; \(v_{\chi}\) stands for the walking dilaton inflaton VEV. The quartic coupling \(\lambda_{\chi}\) for the inflaton \(\chi\) is required to be extremely tiny so as to realize the observed amplitude of the scalar perturbation. As was stressed in [13], this tiny quartic coupling can naturally be realized due to the intrinsic walking nature, which yields a large enough scale hierarchy between \(m_{F}\) and \(v_{\chi}\). Matching Eq.(10) with the above \(V(\chi)\), we find the correspondence \[s^{0}=\chi\,,\qquad\langle s^{0}\rangle=\langle\chi\rangle=v_{ \chi}\,,\qquad c=C\,,\] \[\lambda_{2}^{2}=\frac{2\pi^{2}N_{f}^{2}}{N_{f}^{2}-1}\lambda_{ \chi}\,,\qquad\epsilon_{0}=V_{0}\,. \tag{16}\] Thus the free parameters in the linear sigma model can be evaluated in terms of the underlying large \(N_{f}\) walking gauge theory, which makes it possible to incorporate the thermal corrections into the walking dilaton inflation potential from the linear sigma model description, as noted in [14]. The slow roll parameters (\(\eta\) and \(\epsilon\)), the e-folding number (\(N\)) and the magnitude of the scalar perturbation (\(\Delta_{R}^{2}\)) are respectively defined as \[\eta=M_{\rm pl}^{2}\left(\frac{V^{\prime\prime}(\chi)}{V(\chi)} \right)\,,\] \[\epsilon=\frac{M_{\rm pl}^{2}}{2}\left(\frac{V^{\prime}(\chi)}{V( \chi)}\right)^{2}\,,\] \[N=\frac{1}{M_{\rm pl}^{2}}\int_{\chi_{\rm end}}^{\chi_{\rm ini}} d\chi\left(\frac{V(\chi)}{V^{\prime}(\chi)}\right)\,,\] \[\Delta_{R}^{2}=\frac{V(\chi)}{24\pi^{2}M_{\rm pl}^{4}\epsilon}\,, \tag{17}\] with \(M_{\rm pl}\) being the reduced Planck mass \(\simeq 2.4\times 10^{18}\) GeV. In the SFI with the extremely small chiral-scale breaking by \(m_{\pi}\), the ratio \(\epsilon/\eta\) carries an overall suppression by the small expansion factors, \(\frac{\epsilon}{\eta}\sim\left(\frac{m_{\pi}}{m_{F}}\right)^{4}\left(\frac{v_ {\chi}}{\chi}\right)^{2}\). Hence the inflation would end upon reaching \(|\eta|=1\), as long as \(\chi/v_{\chi}>(m_{\pi}/m_{F})^{2}\), as in the CW-SFI case. 
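As an illustration of the definitions in Eq. (17), here is a minimal NumPy sketch that evaluates \(\eta\), \(\epsilon\), and \(\Delta_{R}^{2}\) for a CW-type potential with a small tadpole term; the placeholder potential, its analytic derivatives, and all parameter values (in units of \(M_{\rm pl}=1\)) are illustrative assumptions, not the benchmark of this paper.

```python
import numpy as np

M_pl = 1.0  # reduced Planck mass, set to 1 (all quantities in Planck units)

def slow_roll(V, dV, d2V, chi):
    """eta, epsilon, and Delta_R^2 from Eq. (17)."""
    eta = M_pl**2 * d2V(chi) / V(chi)
    eps = 0.5 * M_pl**2 * (dV(chi) / V(chi))**2
    delta_R2 = V(chi) / (24 * np.pi**2 * M_pl**4 * eps)
    return eta, eps, delta_R2

# Placeholder CW-type potential with a small linear (tadpole) term.
v, lam, C, V0 = 1e-3, 1e-13, 1e-22, 1e-16

def V(chi):   return -C * chi + 0.25 * lam * chi**4 * (np.log(chi / v) - 0.25) + V0
def dV(chi):  return -C + lam * chi**3 * np.log(chi / v)
def d2V(chi): return lam * chi**2 * (3.0 * np.log(chi / v) + 1.0)

print(slow_roll(V, dV, d2V, chi=1e-6))
```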
In the case with \(m_{\pi}\ll\chi\ll m_{F}\ll v_{\chi}\), which is naturally realized in the present model, \(\eta\) and \(\epsilon\) as well as \(\Delta_{R}^{2}\) and \(N\) can further be approximated as [13] \[\eta\simeq 24\frac{M_{\rm pl}^{2}}{v_{\chi}^{2}}\frac{\chi^{2}}{v_{ \chi}^{2}}\ln\frac{\chi^{2}}{v_{\chi}^{2}}\,,\] \[\epsilon\simeq\frac{\pi^{4}}{2}\left(\frac{M_{\rm pl}}{v_{\chi}} \right)^{2}\left(\frac{m_{\pi}}{m_{F}}\right)^{4}\,,\] \[\Delta_{R}^{2}\simeq\frac{2}{\pi^{10}}\left(\frac{m_{F}}{v_{\chi}} \right)^{4}\cdot\left(\frac{v_{\chi}}{M_{\rm pl}}\right)^{6}\left(\frac{m_{F}} {m_{\pi}}\right)^{4}\,,\] \[N\simeq\frac{(\chi_{\rm end}-\chi_{\rm ini})}{\sqrt{2\epsilon}M_{ \rm pl}}\simeq\frac{(\chi_{\rm end}-\chi_{\rm ini})v_{\chi}}{6\pi^{2}M_{\rm pl }^{2}}\left(\frac{m_{F}}{m_{\pi}}\right)^{2}\,. \tag{18}\] The conventional CW-SFI scenario sets the e-folding number \(N\) by \(\eta\), which leads to the incompatibility between \(N\) and the spectral index \(n_{s}=1+2\eta\) in comparison with the observational values [2; 18]. As discussed in the literature [11; 12; 13], this problem can be resolved by a small enough tadpole term corresponding to the \(C\)-term in Eq.(16), where \(N\) is instead determined by \(\epsilon\). ## IV Walking dilaton Inflaton Potential at Finite Temperature As noted above, along the flat direction, the thermal corrections to the walking dilaton potential can be evaluated by computing the thermal loops in which only the heavy scalar mesons \(s^{i}\) flow. Taking into account also the higher loop corrections via the so-called daisy resummation, we thus get \[V_{\rm eff}(s^{0},T)=V_{\rm eff}(s^{0})+\frac{T^{4}}{2\pi^{2}}\,J_{B}\!\left( \frac{{\cal M}_{s^{i}}^{2}(s^{0},T)}{T^{2}}\right)\,, \tag{19}\] 
This ultra-supercooling traps the inflaton VEV at the false vacuum until the inflection point of the potential goes away, that is the same timing as the false vacuum decay is allowed to happen. Accordingly, the slow-roll starts to realize the inflation. In this section, we explicitly evaluate the bubble nucleation rate and observe this scenario in details. The bubble nucleation rate per unit volume at high temperature is given by \[\Gamma(T)\simeq T^{4}\left(\frac{S_{3}/T}{2\pi}\right)^{3/2}\,\exp\left(-\frac {S_{3}(T)}{T}\right)\,, \tag{23}\] where \(S_{3}(T)\) is the \(O(3)\) symmetric bounce action determined by the following equation of motion: \[\frac{d^{2}s_{b}^{0}(r,T)}{dr^{2}}+\frac{2}{r}\frac{ds_{b}^{0}(r,T)}{dr}-\frac {dV_{\rm eff}(s_{b}^{0},T))}{ds_{b}^{0}}=0\,, \tag{24}\] with the boundary conditions \[\frac{2}{r}\frac{ds_{b}^{0}(r)}{dr}\bigg{|}_{r=0}=0\,,\quad s_{b}^{0}(r)|_{r= \infty}=s_{{}_{\rm PV}}^{0}\,. \tag{25}\] Here \(r=0\) corresponds to the center of the bubble and \(s_{{}_{\rm PV}}^{0}\) is the location of the false vacuum. The bubble nucleation temperature \(T_{n}\) is defined at the moment when the bubble nucleation rate first catches up with the Hubble expansion rate \[\frac{\Gamma(T_{n})}{H(T_{n})}\sim 1\,. \tag{26}\] This turns out to be amount to \(S_{3}(T_{n})/T_{n}\simeq 100\) via Eq.(23) with the value of \(T_{n}\) to be fixed later. As noted by Witten in [19], near the origin and the barrier, the effective potential in Eq.(19) can be well approximated to be \[V_{\rm eff}(s^{0},T)\simeq\frac{\lambda_{2}(N_{f}^{2}-1)T^{2}}{12N_{f}}(s^{0} )^{2}-\frac{N_{f}^{2}-1}{8\pi^{2}}\frac{\lambda_{2}^{2}}{N_{f}^{2}}\ln\frac{ \mu_{{}_{\rm CW}}}{T}(s^{0})^{4}\,, \tag{27}\] Figure 1: The inflaton VEV evolution with respect to temperature \(T\) along the flat direction in the walking gauge theory with \(N_{c}=3\) and \(N_{f}=8\). where we have ignored the tadpole term just for convenience without loss of generality. The tunneling rate can then analytically be evaluated as [19] \[\frac{S_{3}(T_{n})}{T_{n}}\simeq\frac{37.794\pi^{2}}{\sqrt{6}}\frac{N_{f}^{3/2} }{\lambda_{2}^{3/2}(N_{f}^{2}-1)^{1/2}}\frac{1}{\ln(\mu_{{}_{\rm GW}}/T_{n})}\,. \tag{28}\] This implies that for small enough \(\lambda_{2}\) as in the present almost scale-invariant model (\(\lambda_{2}\sim 10^{-6}\)), the bubble nucleation temperature \(T_{n}\) will become \(\ll T_{c}\) and the universe experiences an ultra-supercooling #1 Footnote #1: Since the tunneling definitely happens before the barrier vanishes, it is also reasonable to identify the nucleation temperature \(T_{n}\) as the temperature \(T_{\rm van}\) at which the barrier becomes vanishing, namely, like \(T_{n}\simeq T_{\rm van}\). The inflaton is trapped in the false vacuum created by the explicit-scale breaking \(C\) term linear in \(s^{0}\), in Eq.(19), and the term quadratic in \(s^{0}\), in Eq.(27), which is hence shifted close to the true vacuum as \(T\) gets lower. The barrier, hence the inflection (stationary) point does not go away until the temperature reaches \(T_{n}(\ll T_{c})\). When the barrier and inflection point are gone, the inflaton starts to slowly roll down from the inflection point and the inflation of the universe begins. It is a slow enough roll, which is guaranteed by the approximate scale invariance. See also Fig. 2. 
Thus the starting point of the slow-roll inflation is dynamically determined by the disappearance of the inflection point, i.e., when the false vacuum is no longer an inflection or stationary point: \[V^{\prime}(s^{0}_{\rm FV}(T_{n}))\neq 0\,,\qquad V^{\prime\prime}(s^{0}_{\rm FV} (T_{n}))\neq 0\,. \tag{29}\] Actually, the SFI epoch thus driven is more involved at the instant when the inflaton starts the slow roll: since the trapped \(s^{0}_{\rm FV}\) starts to roll with a tiny velocity and curvature, \(V^{\prime}\sim 0\) and \(V^{\prime\prime}\sim 0\), the conventional slow-roll argument, in which the Hubble rate is perfectly governed by the inflaton potential energy, does not simply work (see also Eq.(17)). Thus the inflaton actually undergoes an extremely slow-roll phase, called the ultra slow roll or constant roll [20; 21; 22; 23; 24; 25; 26], and then turns to a normal slow-roll phase with a finite (but still small) velocity. A more precise estimate of the inflation parameters would therefore be delicate; however, the normal slow-roll phase will dominate, since the ultra and/or constant slow-roll phase will presumably be very short. The detailed analysis is worth pursuing in another publication. Given the parameter setting in Eq.(21), we get \[T_{n}\sim 10^{7}\,{\rm GeV}\,,\qquad s^{0}_{\rm FV}(T_{n})\sim 7\times 10^{9} \,{\rm GeV}\,. \tag{30}\] Remarkably, the starting point of the slow-roll, \(s^{0}_{\rm FV}\), coincides with the initial place of the successful walking-dilaton SFI [13] with \(\chi_{\rm ini}\sim 7\times 10^{9}\) GeV. We have also checked that the potential at \(T=T_{n}\) does not substantially differ from the one at \(T=0\) (both are still within the same order of magnitude all the way, in terms of \(\overline{V}_{\rm eff}\)). This implies that all the successful results on the SFI scenario in [13], obtained at \(T=0\), can simply be applied based on the conventional SFI formulae in Eq.(17). The e-folding number is also accumulated during the ultra-supercooling when the universe cools from \(T_{c}\) (in Eq.(22)) to \(T_{n}\) (in Eq.(30)), in addition to \(N\sim 46\) during the slow-roll inflation epoch, which is obtained from the formula in Eq.(17). If simply summed up, the total amount \(N\sim 10+46=56\) is still in good agreement with the e-folding desired to explain the universe today. Thus the currently proposed mechanism for trapping and driving the inflaton to the SFI works, dynamically solving the fine-tuning problem on the starting place of the inflaton slow-roll (\(\chi_{\rm ini}/v_{\chi}\sim 10^{-5}\) in the present reference model), and is shown also to be consistent with the observation of the cosmological inflation parameters. ## VI Conclusion We have proposed a dynamical trapping mechanism to solve the fine-tuning problem of the starting place for the slow-roll inflation that the CW-SFI intrinsically possesses. The mechanism is essentially constructed from two ingredients: one is an ultra-supercooling caused by an almost scale-invariant potential of CW type, which traps the inflaton around the false vacuum, far away from the true vacuum dominantly created by the quantum scale anomaly, while the other is a classical explicit-scale breaking effect, which allows the inflaton to dynamically start the slow-roll. Figure 2: The plot of the inflaton potential versus the inflaton field \(s^{0}\) around the temperature \(T_{n}=10^{7}\) GeV of the false vacuum decay, up to which the inflaton keeps being trapped at the \(T\)-dependent false vacuum \(s^{0}_{\rm FV}\). 
The blob on the curve at \(T=T_{n}\) corresponds to the starting place for the inflaton to slow-roll, where the barrier and the saddle (inflection) point created by the barrier and the tadpole term at \(s^{0}=0\) vanish. The inflaton potential has been normalized as \(\overline{V}_{\rm eff}\equiv V_{\rm eff}(s^{0},T)-V_{\rm eff}(s^{0}=0,T)\) with the vacuum energy \(V_{0}\) and a scale factor of \(10^{-20}\). The horizontal axis has also been normalized by the inflaton VEV \(v_{\chi}\) at the true vacuum, with a scale factor of \(10^{-6}\). We have demonstrated how the mechanism works by employing a successful CW-SFI model and have also shown the consistency with the observed bounds on the cosmological inflation parameters. The proposed new mechanism is straightforwardly applicable to other models of the CW type, and thus provides new insights for developing small field inflation models, which would also involve rich cosmological issues as noted in the text. ###### Acknowledgements. This work was supported in part by the National Science Foundation of China (NSFC) under Grants No. 11747308, 11975108 and 12047569, the Seeds Funding of Jilin University (S.M.), and Toyama First Bank, Ltd (H.I.).
2308.03967
The Widths of Strict Outerconfluent Graphs
Strict outerconfluent drawing is a style of graph drawing in which vertices are drawn on the boundary of a disk, adjacencies are indicated by the existence of smooth curves through a system of tracks within the disk, and no two adjacent vertices are connected by more than one of these smooth tracks. We investigate graph width parameters on the graphs that have drawings in this style. We prove that the clique-width of these graphs is unbounded, but their twin-width is bounded.
David Eppstein
2023-08-08T00:34:22Z
http://arxiv.org/abs/2308.03967v1
# The Widths of Strict Outerconfluent Graphs ###### Abstract Strict outerconfluent drawing is a style of graph drawing in which vertices are drawn on the boundary of a disk, adjacencies are indicated by the existence of smooth curves through a system of tracks within the disk, and no two adjacent vertices are connected by more than one of these smooth tracks. We investigate graph width parameters on the graphs that have drawings in this style. We prove that the clique-width of these graphs is unbounded, but their twin-width is bounded. ## 1 Introduction _Confluent drawing_ is a powerful style of graph drawing that permits many non-planar and dense graphs to be drawn without crossings [8, 9, 10, 11, 16, 17, 19]. A confluent drawing consists of a system of non-crossing smooth curves in the plane, called _tracks_, whose endpoints are either vertices of the graph or _junctions_ where several tracks meet, all having the same slope at that point. Two vertices are adjacent whenever the union of some of the tracks forms a smooth curve connecting them. In this way, each confluent drawing represents unambiguously a unique graph, unlike the _bundled drawings_ which they otherwise resemble. Applications of confluent drawing include the automated layout of syntax diagrams [1], and the simplification of the Hasse diagrams of partially ordered sets [13]. A constrained version of confluent drawing, called _strict confluent drawing_, requires that each adjacency be represented by only one smooth curve [12, 14]. In _outerconfluent drawings_, the tracks are interior to a disk whose boundary contains the vertices. In this work we study _strict outerconfluent graphs_, the graphs that have strict outerconfluent drawings.1 If the vertex ordering along the drawing boundary is given, these graphs may be recognized in polynomial time [12], but their recognition without this information, and other algorithmic problems concerning them, remain mysterious. Footnote 1: For the full definition of strict outerconfluent graphs, see Definition 1. In this work, following Forster et al. [14], we study the width of strict outerconfluent graphs. There are many graph width parameters, of which treewidth is perhaps the most famous. Treewidth is bounded for some types of graph drawing with vertices on the boundary of a disk (outerplanar and outer-\(k\)-planar drawings [21]), suggesting that, analogously, strict outerconfluent graphs might have bounded width of some sort. However, graphs of bounded treewidth are sparse, and strict outerconfluent graphs can be dense: for instance they include the complete graphs and complete bipartite graphs. Therefore, a different concept of width is needed, one that can be bounded for dense graphs. Among these widths, we focus on two, _clique-width_ and _twin-width_.2 Footnote 2: For definitions of these two width parameters, see Definition 2 and Definition 3. For sparse graphs, clique-width is equivalent to treewidth, in the sense that if one of these two width parameters is bounded, the other one is also bounded [15], but graphs of bounded clique-width can also be dense. The strict outerconfluent graphs include the distance-hereditary graphs, which are known to have bounded clique-width [10]. Forster et al. [14] defined a sub-class of strict outerconfluent drawings, the _tree-like_ outerconfluent drawings, in which the tracks that are incident to junctions must form a single topological tree within the drawing, and proved that their graphs also have bounded clique-width [14]. 
We prove that, in contrast, there exist strict outerconfluent graphs with unbounded clique-width. To complement this result, we prove that another width parameter of these graphs, their _twin-width_, is bounded. Twin-width is bounded for many classes of graphs of interest in graph drawing, including the planar and \(k\)-planar graphs, and the graphs of bounded genus. It is also bounded for graphs of bounded clique-width [3]. The algorithmic consequences of bounded twin-width include the existence of a fixed-parameter tractable algorithm for testing whether a given graph models a given formula of first-order logic, parameterized by the size of the formula (including as a special case subgraph isomorphism) [7], and better approximation algorithms for dominating set, independent set, and graph coloring than the best approximations known for more general families [2, 4]. We prove the following results: * The strict outerconfluent graphs do not have bounded clique-width (Theorem 1). * The strict outerconfluent graphs have bounded twin-width. A twin-width decomposition of bounded width can be constructed for these graphs in polynomial time, given their vertex ordering around the boundary of a strict outerconfluent drawing. The main idea of the first result is to find a recursive construction of a family of strict outerconfluent drawings for which we can prove unbounded _rank-width_, a graph width parameter closely related to clique-width. The main idea of the second result is to harness known results relating the growth rate of a family of _ordered graphs_ (pairs of a graph and a linear ordering on its vertices) to the twin-width of the family, and to use the fact that strict confluent drawings have only linearly many junctions [12] to show that they have a small growth rate. ## 2 Definitions For completeness we repeat the following definitions, from previous work, of the main concepts considered in our results. We assume familiarity with the basic concepts of graph theory and of two-dimensional topology. By a _graph_ we always mean a finite undirected graph, without multiple adjacencies or loops. **Definition 1**.: A _strict outerconfluent drawing_ of a graph \(G\) consists of a system of finitely many smooth curves in a topological disk, which we call _tracks_,3 disjoint except for shared endpoints. These endpoints have two types: some are identified one-for-one with vertices of \(G\), while others are called _junctions_. Each vertex must lie on the boundary of the disk. At a junction, three or more tracks must meet, all having the same slope. A smooth curve within the union of tracks, starting and ending at vertices and otherwise passing only through tracks and junctions, is called an _edge curve_. Each two adjacent vertices of \(G\) must be the endpoints of a unique edge curve. No edge curve may connect a vertex to itself, or connect non-adjacent vertices. Each track must be part of at least one edge curve. A _strict outerconfluent graph_ is a graph that has a strict outerconfluent drawing. Footnote 3: In some past work on confluent drawings these curves have been called _arcs_, but that terminology conflicts with standard graph-theoretic terminology for directed edges. **Definition 2**.: The _clique-width_ of an undirected graph is the minimum number of colors needed to construct the graph by a sequence of the following four operations on (improperly) colored graphs: * Create a single-vertex graph, with its vertex given any of the available colors. 
* Take the disjoint union of two colored graphs. * Recolor all vertices of one color to another color (possibly one that is already used by other vertices). * Perform a _color join_ operation that adds edges between all pairs of vertices of two specified colors. **Definition 3**.: _Twin-width_ is defined through a type of graph decomposition in which clusters of vertices are merged in pairs, starting with one cluster per vertex, until only one cluster is left. At each step of the decomposition, two clusters are connected by a _red edge_ if some but not all adjacencies exist between vertices of one cluster and vertices of the other. The goal is to find a decomposition sequence that minimizes the maximum degree of the resulting sequence of red graphs. The twin-width of a graph \(G\) is the minimum value of \(d\) such that there exists a decomposition of \(G\) for which, after each pairwise merge, the red graph has maximum degree at most \(d\)[7]. ## 3 Unbounded clique-width In this section we prove that strict outerconfluent graphs can have unbounded clique-width. Our proof is based on a family of drawings depicted in Fig. 1, which we construct as follows: **Definition 4**.: Let \(G_{k}\) be the graph represented by a confluent drawing constructed as follows. Figure 1: Recursively constructed non-tree-like strict outerconfluent graph \(G_{4}\) * It is convenient to shape the disk on which the graph is drawn as a half-plane above a horizontal bounding line, to match the depiction in the figure. (This is merely a convention for describing the drawing and does not affect its combinatorial structure.) * On the boundary line of the half-plane, place \(3^{k}\) vertices (the alternating blue and yellow vertices of the figure), connected by tracks that directly connect consecutive pairs of vertices (drawn along the boundary line). These are the only tracks incident to the \(\lfloor 3^{k}/2\rfloor\) yellow vertices. Additional tracks will extend vertically from the \(\lceil 3^{k}/2\rceil\) blue vertices. * The remaining tracks of the drawing are arranged into \(k\) levels, each of which is drawn within a slab of the half-plane bounded between two horizontal lines. Number these levels from \(0\) to \(k-1\), bottom to top. The bottom line of the \(i\)th level contains \(\lceil 3^{k-i}/2\rceil\) points (vertices on level \(0\), junctions at higher levels) at which tracks extend with a vertical tangent into that level; number these points as \(p_{i,j}\) with \(0\leq j<3^{k-i}/2\). * Within level \(i\), connect each two consecutive points \(p_{i,j}\) and \(p_{i,j+1}\) by a semicircle. If \(j\) is a multiple of three, subdivide this semicircle by two junctions into three circular arc tracks; for other values of \(j\), this semicircle is itself a track. As a special case, for the top level (level \(k-1\)) the single semicircle connecting points \(p_{i,0}\) and \(p_{i,1}\) is not subdivided, and forms a track. In the figure, the arcs into which the semicircles are subdivided span angles of \(\pi/3\). * For each level \(i\) except the top level, and each subdivided semicircle connecting points \(p_{i,j}\) and \(p_{i,j+1}\) where \(j\) is a multiple of three, add tracks connecting the two subdivision points to the point \(p_{i+1,j/3}\) on the upper boundary line of the level. At the two junctions on the semicircle, these tracks should be oriented so that each one connects \(p_{i+1,j/3}\) downward by a smooth curve through the semicircle to the two points \(p_{i,j}\) and \(p_{i,j+1}\). 
In the figure, these upward tracks are also arcs of circles, congruent to the arcs of the subdivided semicircle. For instance, the figure depicts \(G_{4}\). By construction, \(G_{k}\) has exactly \(n=3^{k}\) vertices. **Observation 5**.: _Any semicircular track at level \(i\) of \(G_{k}\) has smooth paths connecting it to \(2^{i}\) vertices, \(2^{i-1}\) on its left and \(2^{i-1}\) on its right. The track is used by edges of \(G_{k}\) that connect each of the vertices in the left subset to each of the vertices in the right subset. These two subsets are separated by a gap of \(3^{i-1}\) vertices, wide enough that it cannot be spanned by any semicircular track at a lower level of \(G_{k}\)._ It follows that \(G_{k}\) has \(\Theta(4^{k})\) edges, enough to make it not sparse. More precisely, the number of edges can be calculated as \[\frac{8\cdot 4^{k}-3\cdot 3^{k}-5}{6}.\] We omit the details as this calculation is not important for our results. **Lemma 6**.: \(G_{k}\) _is strict outerconfluent._ Proof.: Each smooth curve from vertex to vertex must go upward through the levels of the drawing, follow a single semicircular track at some level, and then go back downwards through the levels, because there are no tracks that smoothly connect downward-going curves to upward-going curves. A smooth curve from vertex to vertex that uses a semicircular track at level \(i\) must connect two vertices that are at least \(3^{i-1}+1\) steps apart and at most \(3^{i}-1\) steps apart. Because these numbers of steps form disjoint ranges for disjoint levels, no two curves using semicircles from different levels can connect the same two vertices. Two semicircular tracks at the same level that do not share a confluent junction have disjoint subsets of vertices that they can reach. Two semicircular tracks at the same level that do share a confluent junction cannot provide two paths between any pair of vertices, because one of the tracks connects vertices that can reach the shared junction to other vertices to the left of the junction, while the other track connects only to the right. Rather than working directly with clique-width, it is convenient to use _rank-width_, a closely related quantity derived from hierarchical clusterings of the vertices of a given graph. **Definition 7**.: Define a _hierarchical clustering_ of a graph to be a ternary tree having the graph's vertices as its leaves. For each edge \(e\) of such a tree, removing \(e\) from the tree partitions it into two subtrees, and thus defines a partition of the vertices into two subsets; call this partition the _cut_ associated with \(e\), and call the two subsets the _sides_ of the cut. For any of these cuts, we can form a binary _biadjacency matrix_ whose rows correspond to the vertices on one side of the cut, and whose columns correspond to the vertices on the other side (choosing arbitrarily which side to use for which role). The coefficient of this matrix in a given row and column is one if the corresponding two vertices are adjacent, and zero otherwise. (For the purposes of defining rank-width, these coefficients are defined within the finite field \(\mathbb{Z}_{2}\), rather than as real numbers, but that makes little difference for our purposes.) The rank-width of the graph is the maximum rank of any of the biadjacency matrices of these cuts, for a hierarchical clustering chosen to minimize this maximum rank. **Lemma 8** (Oum and Seymour [18]).: _Let \(G\) be any graph, let \(r\) be its rank-width and let \(c\) be its clique-width. 
Then_ \[r\leq c\leq 2^{r+1}-1.\] Thus, the rank-width of a family of graphs is bounded if and only if the clique-width is bounded. **Definition 9**.: A _balanced cut_ of an \(n\)-vertex graph is a partition of its vertices into two subsets that each have at least \(n/3\) vertices. **Lemma 10**.: _Any graph of rank-width \(r\) has a balanced cut whose biadjacency matrix has rank \(\leq r\)._ Proof.: Let \(G\) be the given graph, and let \(n\) be its number of vertices. Let \(T\) be a ternary tree with the vertices of \(G\) as its leaves, and with cuts whose biadjacency matrices have rank \(\leq r\), which exists by the definition of rank-width. As in Definition 7, define the two sides of an edge \(e\) of \(T\) to be the two subsets of vertices of \(G\) separated in \(T\) by \(e\). Define a side to be _small_ if it consists of fewer than \(n/3\) vertices of \(G\), and _large_ otherwise. If we can find an edge with no small side, it will define a balanced cut, which by construction will have rank \(\leq r\). To find an edge of \(T\) with no small side, start at any edge of \(T\), and then construct a walk as follows. As long as the walk has reached an edge \(e\) with a small side, consider the two edges of \(T\) that are incident to \(e\) on its large side. The large side of \(e\) includes \(>2n/3\) vertices of \(G\) (because the other side is small), so at least one of these two edges, \(e^{\prime}\), separates \(e\) from \(>n/3\) vertices of \(G\). Select \(e^{\prime}\) as the next edge in the walk. The walk terminates when it reaches an edge of \(T\) that defines a balanced cut, but it remains to prove that this always happens. After each step of the walk from an edge \(e\) to an edge \(e^{\prime}\), one of the two sides of \(e^{\prime}\) (the side that \(e^{\prime}\) separates from \(e\)) includes \(>n/3\) vertices of \(G\), by construction. The other side of \(e^{\prime}\) can be small, but if it is, it is a strict superset of the small side of \(e\). Because the numbers of vertices on the small sides of the edges in this walk form a strictly increasing sequence of integers, they must eventually reach a number that is at least \(n/3\), at which point the walk terminates. We will prove our result by showing that, for every fixed \(r\), \(G_{k}\) has no such low-rank balanced cut, contradicting Lemma 10. **Definition 11**.: Given any partition of the vertices of \(G_{k}\) into two sides, define a _block_ of the partition to be a contiguous subsequence of the vertices (as ordered along the boundary line of the drawing of \(G_{k}\)) that belongs to one of the two sides, and is not part of any larger such contiguous subsequence. **Lemma 12**.: _If a partition of the vertices of \(G_{k}\) into two sides has a biadjacency matrix of rank \(\leq r\), it has \(\leq 4r+1\) blocks._ Proof.: Each two consecutive blocks contain two consecutive vertices in the ordering of \(G_{k}\), one yellow and one blue in the alternating coloring of \(G_{k}\) from the figure. Assume for a contradiction that there are \(\geq 4r+2\) blocks. These would lead to \(\geq 4r+1\) yellow-blue edges between consecutive vertices, crossing from block to block and from one side of the partition to the other. Among these, some subsequence of \(\geq 2r+1\) edges all have their yellow vertices on the same side of the partition as each other and their blue vertices on the other side. Select the edges in odd positions of this subsequence. 
This gives a subsequence of \(\geq r+1\) edges of \(G_{k}\), connecting vertices that are all consistently colored yellow on one side of the partition and blue on the other. Moreover, because we selected only the edges in odd positions from a longer sequence, no two of these edges have endpoints that are consecutive in the vertex ordering of \(G_{k}\). Because the yellow vertices of \(G_{k}\) are adjacent only to the two consecutive blue vertices, the only yellow-blue edges connecting the selected vertices are the ones in the selected subsequence. That is, this subsequence forms an induced matching within the subgraph of edges of \(G_{k}\) that cross the given partition. The submatrix of the biadjacency matrix, corresponding to this induced matching, is a permutation matrix of rank \(\geq r+1\). This contradicts the assumption of the lemma that the whole biadjacency matrix has rank \(\leq r\). This contradiction proves that our other assumption, that there are \(4r+2\) or more blocks, cannot be true. **Definition 13**.: As is standard in asymptotic analysis, the "big Omega notation" \(f(x)=\Omega\big{(}g(x)\big{)}\), where \(f(x)\) and \(g(x)\) are expressions with a single free variable \(x\), indicates that there exist constants \(x_{0}\) and \(\alpha>0\) such that, for all \(x>x_{0}\), \(f(x)\geq\alpha\,g(x)\). More intuitively, \(f\) grows at least proportionally to \(g\). We also use \(\Omega\big{(}g(x)\big{)}\), separated from the "\(f(x)=\)" part of this notation, to denote a quantity \(q(x)\) such that \(q(x)=\Omega\big{(}g(x)\big{)}\). However, the meaning of this notation becomes unclear when there is more than one free variable: even if all variables are assumed to grow unboundedly, there may be different relations between \(f\) and \(g\) depending on the relative growth rates of the variables. To sidestep these issues we only use \(\Omega\)-notation for a single free variable. We use the modified notation \(\Omega_{c}\), as in \(f(x,c)=\Omega_{c}\big{(}g(x,c)\big{)}\), to indicate that \(c\) is _not_ the free variable in the relation between \(f\) and \(g\). Instead, this notation is a shorthand for \(\forall c\,f(x,c)=\Omega\big{(}g(x,c)\big{)}\): for each \(c\), there must exist \(x_{0}\) and \(\alpha\) (both possibly depending on \(c\)), such that, for all \(x>x_{0}\), \(f(x,c)\geq\alpha\,g(x,c)\). **Lemma 14**.: _For any constant \(c\), and any balanced partition of \(G_{k}\) forming at most \(c\) blocks on each side of the partition, there exist two blocks on opposite sides of the partition such that the smaller of their two lengths is \(\Omega_{c}(3^{k/c^{2}})\) times larger than \(1+\ell\), where \(\ell\) is the number of vertices of \(G_{k}\) that lie between the two blocks in the vertex ordering of \(G_{k}\)._ Proof.: Label the two sides of the partition as "red" and "blue". Let \(R\) be the largest red block, consisting of \(\Omega_{c}(3^{k})\) vertices. Form a sequence of blue blocks \(B_{1},B_{2},\dots\) where each successive \(B_{i}\) is the closest blue block to \(R\) that is larger than \(B_{i-1}\). The sequence \(1,|B_{1}|,|B_{2}|,\dots\) of these blue block sizes is short (it has length at most \(c+1\)) and has a large final value \(\Omega_{c}(3^{k})\), so it must sometimes increase rapidly: there must be some \(i\) for which \(|B_{i}|\) is larger than the previous value in the sequence by a factor of \(\Omega_{c}(3^{k/c})\). The blue blocks between \(R\) and \(B_{i}\) are all short: they are smaller than \(B_{i}\) by a factor of \(\Omega_{c}(3^{k/c})\). 
If the red blocks between \(R\) and \(B_{i}\) are also all short then \(R\) and \(B_{i}\) are close together and they together supply the desired pair of blocks. Otherwise, form another sequence of red blocks \(R_{i}\) between \(B_{i}\) and \(R\), where each \(R_{i}\) is the closest red block to \(B_{i}\) that is larger than \(R_{i-1}\). The sequence \(|B_{i}|/3^{k/c},|R_{1}|,|R_{2}|,\dots\) is short, and starts off a factor of at least \(3^{k/c}\) smaller than its final value \(|R|\), so it must again sometimes increase rapidly: there must be some \(j\) for which \(R_{j}\) is larger than the previous value in the sequence by a factor of \(\Omega_{c}(3^{k/c^{2}})\). All blocks between \(R_{j}\) and \(B_{i}\) are shorter than both \(R_{j}\) and \(B_{i}\) by this factor, so \(R_{j}\) and \(B_{i}\) are close together and they together supply the desired pair of blocks. **Lemma 15**.: _Let \(M\) be a square matrix over any field with the following structure: its diagonal entries are all nonzero, and for each diagonal entry either all entries above it in the same column or all entries to the left of it in the same row are zero. The remaining entries can be arbitrary. (See Fig. 2.) Then \(M\) has full rank._ Proof.: This follows by induction on the size of \(M\). The submatrix formed by removing the last row and last column has full rank by induction. If the last column is zero except on the diagonal, the last row does not belong to the row space of the earlier rows; symmetrically, if the last row is zero except on the diagonal, the last column does not belong to the column space of the earlier columns. In either case, including this row or column adds one to the rank. Figure 2: Schematic view of a matrix described by Lemma 15. The nonzero main diagonal entries are red. Each diagonal entry has zeroes either above it or to the left of it, shown in pale yellow. The remaining blue squares mark entries whose value can be arbitrary. **Observation 16**.: _Directly above any yellow-blue edge of \(G_{k}\) there exists a sequence of nested semicircular tracks. If the given yellow-blue edge has at least \(x\) vertices of \(G\) on each of its two sides, then this sequence extends upwards for \(\Omega(\log x)\) levels of the drawing, and includes at least one track in each two consecutive levels, comprising \(\Omega(\log x)\) tracks in the nested sequence. Because each of these tracks connects left and right subsets of vertices with a gap that cannot be spanned by lower-level tracks (Observation 5), only one of the left or right sides of the track at level \(i\) can reach vertices that are also reached by lower-level tracks from the same sequence._ **Theorem 1**.: _The graphs \(G_{k}\) have unbounded clique-width._ Proof.: We prove that, for any constant \(r\), these graphs do not have rank-width \(\leq r\). Assuming for a contradiction that they do all have rank-width \(\leq r\), they would have a balanced partition whose biadjacency matrix has low rank, by Lemma 10. By Lemma 12 we may assume that this partition forms \(O(r)\) blocks in the vertex sequence of \(G_{k}\). By Lemma 14 we may assume that some two of these blocks \(R\) and \(B\), on different sides of the partition, are larger than the gap \(\ell\) between them by a factor of \(3^{k/O(r^{2})}\). Now choose any yellow-blue edge between the two blocks and apply Observation 16 to find a nested sequence of semicircular tracks above this edge. 
Each track on level \(i\) lies above a sub-drawing of a graph isomorphic to \(G_{i}\), spanning a subsequence of \(3^{i}\) vertices of the overall graph, of which it can reach \(2^{i}\). Only \(O(\log\ell)\) of these nested semicircular tracks can be at such a low level that they fail to reach both \(R\) and \(B\). Another \(\Omega(\log 3^{k/O(r^{2})})=\Omega_{r}(k)\) of them span subsequences of vertices that lie entirely within \(R\), \(B\), and the gap between them, reaching both \(R\) and \(B\). For each one of these tracks, at level \(i\) of the drawing, find vertices \(r_{i}\in R\) and \(b_{i}\in B\) that are connected to this track but not to any of the lower-level nested semicircular tracks. The chosen vertices \(r_{i}\) and \(b_{i}\), for all of these nested semicircular tracks, induce a biadjacency matrix in which each main diagonal coefficient, corresponding to the pair \((r_{i},b_{i})\), is a one. By Observation 16, for each such coefficient, either the coefficients above it in the same column of the matrix are all zero, or the coefficients to the left of it in the same row of the matrix are all zero, forming a matrix of the form described by Lemma 15. By Lemma 15, its rank equals the length of the sequence of semicircular tracks. But this is \(\Omega_{r}(k)\), unbounded, contradicting the assumption that the rank is \(\leq r\). ## 4 Bounded twin-width ### Small hereditary classes of ordered graphs As Bonnet et al. [6] showed, for hereditary families of _ordered_ graphs, twin-width is intimately related to the growth rate of the class. Their _small conjecture_, that the same relation held more strongly for unordered graphs, was later falsified [5]. As it may be somewhat unfamiliar to treat ordered graphs as standalone objects, we provide definitions here. **Definition 17**.: An _ordered graph_ is a triple \(G=(V,E,<)\) where \((V,E)\) are the sets of vertices and edges of an undirected graph and \((V,<)\) is a total ordering. Its _number of vertices_ is \(|V|\) (this is commonly called the _order_ of a graph, to distinguish it from the _size_\(|E|\), but that would obviously cause confusion with respect to \(<\), so we avoid this terminology.) Two ordered graphs are _isomorphic_ if there is a bijection on their vertex sets that is simultaneously a graph isomorphism on their undirected graphs and an order isomorphism on their vertex orders. Isomorphism is an equivalence relation; we call its equivalence classes _isomorphism classes_. All members of an isomorphism class must have the same number of vertices, so we can talk about this number for an isomorphism class rather than an individual graph. An _induced subgraph_\(G[S]\) of an ordered graph \(G=(V,E,<)\), defined by a subset \(S\subseteq V\), is another ordered graph \((S,E_{S},<_{S})\) where \(E_{S}\) is the subset of \(E\) consisting of edges with both endpoints in \(S\), and \(<_{S}\) is the restriction of \(<\) to \(S\). We consider here only finite graphs: \(V\) and \(E\) must be finite sets. With that assumption, there can be only finitely many isomorphism classes that have a given number of vertices. However, we still speak about _classes_ of graphs, rather than _sets_ of graphs, to emphasize that we are not restricting the vertices of the graphs to belong to any specific universal set, such as the natural numbers or the points of the plane. **Definition 18**.: A class of ordered graphs is _hereditary_ if it contains every induced subgraph of a graph in the class. 
It is _small_ if there exists a number \(c\) such that the number of isomorphism classes of \(n\)-vertex graphs in the class is \(O(c^{n})\). The following is central to our proof that strict outerconfluent graphs have bounded twin-width. Although it is one of the key results in the theory of twin-width, we are not aware of previous uses of this lemma to prove bounded twin-width of natural classes of graphs, rather than using more direct constructions. **Lemma 19** (Bonnet et al. [6]).: _Every small hereditary class of ordered graphs has bounded twin-width. A hereditary class of (unordered) undirected graphs has bounded twin-width if and only if its graphs can be ordered to form a small hereditary class of ordered graphs._ Additionally, we need an algorithmic version of this result: **Lemma 20** (Bonnet et al. [6]).: _For every small hereditary class of ordered graphs there is a polynomial-time algorithm for constructing twin-width decompositions of bounded width for the ordered graphs in the class._ ### Ordering outerconfluent graphs To apply Lemma 19 and Lemma 20 to outerconfluent graphs, we need to describe these as ordered graphs, rather than merely as graphs. There is an obvious ordering to use for their vertices, the ordering around the boundary of the disk on which these graphs are drawn; this is a cyclic ordering rather than a linear ordering, but that is a mere technicality. **Definition 21**.: Define an _ordered strict outerconfluent drawing_ to be a strict outerconfluent drawing within a specified oriented topological disk, together with a choice of one vertex of the drawing to be the start of its linear order. For technical reasons we require all tracks and junctions of the drawing to be interior to the disk, rather than touching its boundary at non-vertex points as some tracks from Fig. 1 do; this does not restrict the class of graphs that may be drawn in this way. As a special case we allow drawings with no vertices, tracks, or junctions, representing the empty graph, despite the inability to choose a starting vertex in this case. The _ordered graph_ of an ordered strict outerconfluent drawing is the ordered graph \(G=(V,E,<)\) for which \((V,E)\) is the undirected graph depicted in the drawing, and \(<\) is the clockwise ordering of vertices around the boundary of the disk of the drawing, starting from the designated starting vertex. A _strict outerconfluent ordered graph_ is any graph that is the ordered graph of an ordered strict outerconfluent drawing. Two ordered strict outerconfluent drawings are _topologically equivalent_ if there is a smooth homeomorphism of the plane that maps each vertex, track and junction of one drawing to a corresponding vertex, track, or junction of the other drawing, preserving the clockwise orientation of the vertices around the disk and preserving the choice of starting vertices. **Observation 22**.: _The ordered graphs of topologically equivalent ordered strict outerconfluent drawings are isomorphic ordered graphs._ **Definition 23**.: If \(D\) is an ordered strict outerconfluent drawing, an _induced subdrawing_\(D[S]\), for a given subset \(S\) of the vertices of the drawing, is obtained from \(D\) by the following steps: * Remove from \(D\) all vertices that do not belong to \(S\). * Remove from \(D\) all tracks and junctions that do not belong to smooth curves connecting pairs of vertices in \(S\). 
* While any remaining junction has exactly two tracks meeting at it (necessarily forming a locally smooth curve at that junction), remove the junction and replace the two tracks by their union, so that all remaining junctions form the meeting point of three or more tracks. * If \(S\) is non-empty, select the starting vertex of the ordered induced drawing to be the vertex of \(S\) that appears earliest in the vertex ordering of \(D\). **Lemma 24**.: _If \(G\) is the ordered graph of an ordered strict outerconfluent drawing \(D\), and \(S\) is any subset of the vertices of \(G\), then the induced subgraph \(G[S]\) is the ordered graph of the induced subdrawing \(D[S]\)._ Proof.: The removal of vertices, tracks, and junctions from \(D\) leaves only the vertices in \(G[S]\), and is defined in a way that does not change the existence of smooth curves between these vertices. The replacement of pairs of tracks by their union also does not affect the existence of smooth curves between pairs of vertices, as each such curve can only use both replaced tracks, or neither. The choice of starting vertex in the induced subdrawing is made in a way that causes the vertex ordering of the subdrawing to be the induced ordering of the vertex ordering of \(D\). Because \(D\) is assumed to be strict, it has no multiple adjacencies between vertices, nor loops from a vertex to itself. Removing tracks from \(D\) to form \(D[S]\) cannot create new multiple adjacencies or loops, so this remains true in \(D[S]\). Additionally, the removal of unused tracks and junctions from \(D\) to form \(D[S]\), and the merger of tracks at two-track junctions, ensures that in \(D[S]\) the technical requirements of having no unused tracks or junctions, and having at least three tracks at each junction, are maintained. **Corollary 25**.: _The ordered strict outerconfluent graphs are a hereditary class of ordered graphs._ Proof.: Each such graph has an ordered outerconfluent drawing representing it, and each of its induced subgraphs comes from the corresponding induced subdrawing by Lemma 24. Therefore, each induced subgraph has a drawing representing it, and remains in the same class of graphs. ### Smallness To prove that the ordered strict outerconfluent graphs form a small class, we combine a general principle (beyond the scope of this paper to formalize) that the number of planar diagrams with a given number of features is singly exponential in the number of features, and a result from Eppstein et al. [12] that certain strict confluent drawings have a linear number of features. We begin with a bound on the number of planar diagrams, as we need it here. **Definition 26**.: A _plane graph_ is a graph together with a non-crossing drawing in the plane: a point for each vertex, a smooth curve connecting the two endpoints of each edge, and no points of intersection between edge curves other than at their shared endpoints. Two plane graphs are _topologically equivalent_ if there is a homeomorphism of the plane mapping each feature of one to a corresponding feature of the other. A plane graph is _maximal_ if it is not possible to add any more edges to the graph and corresponding edge curves to the drawing, connecting pairs of existing vertices that were not previously connected. A _face_ of a plane graph is a connected component of the topological space formed by removing the vertices and edge curves from the plane. The following facts are standard in topological graph theory: **Lemma 27**.: _Every planar graph has a plane drawing. 
The maximal plane graphs on \(n\) vertices, for \(n\geq 3\), have exactly \(3n-6\) edges and exactly \(2n-4\) faces, each bounded by three edges of the graph. \(2n-5\) of these faces are bounded, and one is unbounded. Their graphs, the maximal planar graphs, each have exactly \(4n-8\) equivalence classes of drawings, under topological equivalence, where an equivalence class is determined by the choice of which triangle in the graph is to be the outer face and how it is to be oriented._ **Lemma 28** (Turan [20]).: _The number of isomorphism classes of planar graphs with \(n\) vertices is at most \(2^{12n}\)._ **Corollary 29**.: _The number of equivalence classes of plane graphs, under topological equivalence, is at most \(c^{n}\) for some \(c>0\)._ Proof.: By Turan's lemma, the number of maximal planar graphs is at most \(2^{12n}\), from which it follows that the number of topological equivalence classes of maximal plane graphs is at most \((4n-8)2^{12n}\). Every plane graph can be obtained by removing some subset of the \(3n-6\) edges of a maximal plane graph, so the number of topological equivalence classes of plane graphs is at most \((4n-8)2^{12n}2^{3n-6}\). The bounds stated above are far from tight, but this is unimportant for our results. To apply Turan's lemma to strict outerconfluent drawings, we need to transform them into plane graphs. **Definition 30**.: A _face-vertex incidence_ of a plane graph \(G\) is a pair of a vertex of \(G\) and a face of \(G\) that has that vertex on its boundary. Given an ordered strict outerconfluent drawing \(D\), we define a _planification_ of \(D\) to be a tuple \((G,o,s,S)\), where \(G\) is a plane graph, \(o\) and \(s\) are vertices of \(G\), and \(S\) is a subset of the face-vertex incidences of \(G\), constructed as follows: * The vertices of \(G\) consist of the vertices and junctions of \(D\), and an additional vertex, \(o\), placed outside the disk in which \(D\) is drawn. * Vertex \(s\) is the start vertex in the ordering of \(D\). * The edge curves of \(G\) consist of the tracks of \(D\), together with additional curves, drawn outside the disk in which \(D\) is drawn and disjoint from each other, connecting \(o\) to each vertex of \(D\). Two vertices are adjacent in \(G\) exactly when they are connected by one of these curves. (Note that, in a strict outerconfluent drawing, it is impossible for two or more tracks to connect the same two vertices or junctions, as this would in all cases result in a loop or an unused track.) * \(S\) consists of the _sharp angles_ of \(D\), in the sense described by Eppstein et al. [12]: they are the vertex-face incidences where the vertex of \(G\) is a junction of \(D\), at which the two tracks on either side of the face at that vertex do not have a smooth union. Necessarily, \(S\) includes all but two of the vertex-face incidences at each junction. Two tracks at any junction have a smooth union if and only if, in the cyclic ordering of tracks at that junction, the two non-sharp angles separate the two tracks from each other. 
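For concreteness, the tuple \((G,o,s,S)\) of Definition 30 can be stored combinatorially as in the following minimal sketch (our own illustration; the data that makes \(G\) a plane graph rather than an abstract graph, such as a rotation system, is elided here):

```
from dataclasses import dataclass
from typing import FrozenSet, Hashable, Set

@dataclass(frozen=True)
class Planification:
    vertices: FrozenSet[Hashable]            # vertices of G, including o
    edges: FrozenSet[FrozenSet[Hashable]]    # tracks plus the o-curves
    o: Hashable                              # apex placed outside the disk
    s: Hashable                              # start vertex of the ordering
    sharp: FrozenSet[tuple]                  # (junction, face) incidences S

    def junctions(self) -> Set[Hashable]:
        # In a planification, the junctions of D are exactly the
        # vertices of G that are neither o nor neighbors of o.
        nbrs_of_o = {next(iter(e - {self.o}))
                     for e in self.edges if self.o in e}
        return set(self.vertices) - nbrs_of_o - {self.o}

# Smallest example: a single track between two vertices, no junctions.
P = Planification(
    vertices=frozenset({"o", "u", "v"}),
    edges=frozenset({frozenset({"u", "v"}),          # the track
                     frozenset({"o", "u"}), frozenset({"o", "v"})}),
    o="o", s="u", sharp=frozenset())
assert P.junctions() == set()
```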
As we show, this completely encodes the combinatorial information in a strict outerconfluent drawing, in the following sense: **Lemma 31**.: _Let \((G,o,s,S)\) be any planification of any ordered strict outerconfluent drawing \(D\), and let \((G^{\prime},o^{\prime},s^{\prime},S^{\prime})\) be any tuple of a plane graph topologically equivalent to \(G\), the vertices corresponding to \(o\) and \(s\) in \(G^{\prime}\) under the topological equivalence, and the subset of vertex-face incidences corresponding to \(S\) under the topological equivalence. Then from \((G^{\prime},o^{\prime},s^{\prime},S^{\prime})\) we can construct a strict outerconfluent drawing \(D^{\prime}\) that is topologically equivalent to \(D\)._ Proof.: Define a junction of \(G^{\prime}\) to be any of its vertices that is not a neighbor of \(o\); these are the vertices that correspond to junctions of \(D\) under the topological equivalence of \(G\) and \(G^{\prime}\). Find disjoint neighborhoods of each junction, interior to the disk of the drawing; these exist because of the restriction that \(D\) cannot touch the boundary of the disk except at its vertices. Within each neighborhood, modify \(G^{\prime}\) (preserving topological equivalence as a plane drawing) so that the edge-curves meeting at each junction form sharp or smooth angles according to the information given in \(S^{\prime}\). (Because of the technical restriction on our drawings, each junction has a neighborhood interior to the disk of the drawing, within which this modification may be performed.) Consider the result as a confluent drawing, with \(o\) and its incident edges removed, neighbors of \(o\) as its vertices, and non-neighbors of \(o\) as junctions, embedded in a disk whose boundary lies in the faces of \(G^{\prime}\) incident with \(o\). Order the vertices of the drawing clockwise around this disk starting with \(s\). Then the result is a confluent drawing, mapped from drawing \(D\) and its planification \(G\) by a homeomorphism of the plane (the homeomorphism that maps \(G\) to \(G^{\prime}\), composed with its local modifications at the junctions). This homeomorphism takes each vertex, track, or junction of \(D\) to a corresponding vertex, track, or junction of the resulting confluent drawing, and preserves the smoothness or lack thereof of unions of tracks. Therefore, it is a topological equivalence of confluent drawings. Thus, we can count topological equivalence classes of strict outerconfluent drawings by using Turan's lemma to count their planifications. However, to apply Turan's lemma, we need to know how many junctions and tracks there can be. Fortunately, this has already been bounded: **Lemma 32** (Eppstein et al. [12]).: _Every strict outerconfluent graph with \(n\) vertices has a strict outerconfluent drawing with at most \(n-3\) junctions and at most \(3n-6\) tracks._ Putting these results together we have the main result of this section: **Lemma 33**.: _The ordered strict outerconfluent graphs form a small class of ordered graphs._ Proof.: By Lemma 31, the number of equivalence classes of \(n\)-vertex ordered strict outerconfluent graphs under isomorphism of ordered graphs is at most the number of equivalence classes of planifications of drawings of these graphs, under topological equivalence of planifications. 
Given an \(n\)-vertex ordered strict outerconfluent graph, let \(D\) be a strict outerconfluent drawing of it with at most \(n-3\) junctions and at most \(3n-6\) tracks, known to exist by Lemma 32, and let \((G,o,s,S)\) be its planification. Each junction has as many vertex-face incidences as it has track-junction incidences; each track contributes two such incidences, one at each endpoint, but in the worst case each vertex of the given graph takes up at least one track endpoint (if it were an isolated vertex the number of junctions and tracks would be smaller) so the number of vertex-face incidences at junctions is at most \(5n-12\). The number of vertices in \(G\) is one plus the number of vertices and junctions in \(D\), at most \(2n-2\), so by Corollary 29 the number of choices for the plane graph \(G\) (under topological equivalence) is singly exponential in \(n\). The number of choices for \(o\) and \(s\) is at most \(2n-2\). The number of choices for \(S\) is at most \(2^{5n-12}\). Multiplying these numbers of choices together gives a singly exponential number of planifications, under topological equivalence, and therefore a singly exponential number of ordered strict outerconfluent graphs, under ordered isomorphism. ### Twin-width **Theorem 2**.: _The strict outerconfluent graphs have bounded twin-width. If the ordering of the vertices along the boundary of a strict outerconfluent drawing of one of these graphs is given, a twin-width decomposition for it of bounded width can be constructed in polynomial time._ Proof.: This follows from Corollary 25 and Lemma 33, under which assigning them their boundary orderings produces a hereditary small class of ordered graphs, Lemma 19, under which hereditary small classes of ordered graphs have bounded twin-width, and Lemma 20, under which twin-width decompositions of bounded width for hereditary small classes of ordered graphs can be found in polynomial time. ## 5 Discussion We have shown that the clique-width of strict outerconfluent graphs is unbounded, but our lower bound proves only sublogarithmic clique-width. It would be of interest to determine how quickly the clique-width can grow, as a function of the number of vertices. In the other direction, we have shown that the twin-width of these graphs is bounded, but because our proof goes through a counting argument it does not provide a direct construction of a low-twin-width decomposition, and the bound that it provides on twin-width is large. It would be of interest to find an alternate proof with a better bound on twin-width. It is natural to try to extend our twin-width bound to more general classes of confluent graphs. The full class of all confluent graphs is out of reach, because it includes the interval graphs [9], and these do not form a small class: even when counting the \(n\)-vertex interval graphs as unordered, undirected graphs their number is exponential in \(n\log n\) [22]. We remark that this bound, together with our methods for converting counting problems on confluent drawings to plane graphs, can be used to show that some confluent drawings require \(\Omega(n\log n)\) tracks; we omit the details. The strict confluent drawings produce a small class of unordered graphs by the same reasoning as Lemma 33, but we do not know of a natural ordering for these graphs under which they are hereditary. 
On the other hand, the (non-strict) outerconfluent drawings naturally form a hereditary class of ordered graphs, by the same reasoning as Corollary 25, but we do not know whether they are small. Many of the other known width parameters are either unbounded when clique-width is unbounded, or bounded when twin-width is bounded. Thus, our results settle whether these parameters are bounded on the strict outerconfluent graphs. However, it may be of interest to consider other width parameters in connection with other forms of confluent drawing. ## Acknowledgements This research was supported in part by NSF grant CCF-2212129.
2306.05662
Consensus ALADIN: A Framework for Distributed Optimization and Its Application in Federated Learning
This paper investigates algorithms for solving distributed consensus optimization problems that are non-convex. Since Typical ALADIN (Typical Augmented Lagrangian based Alternating Direction Inexact Newton Method, T-ALADIN for short) [1] is a well-performing algorithm for distributed optimization problems that are non-convex, directly adopting T-ALADIN for consensus problems is a natural approach. However, T-ALADIN typically results in high communication and computation overhead, which makes such an approach far from efficient. In this paper, we propose a new variant of the ALADIN family, coined consensus ALADIN (C-ALADIN for short). C-ALADIN inherits all the good properties of T-ALADIN, such as the local linear or super-linear convergence rate and the local convergence guarantees for non-convex optimization problems; besides, C-ALADIN offers unique improvements in terms of communication efficiency and computational efficiency. Moreover, C-ALADIN includes a reduced version that, in comparison with Consensus ADMM (Alternating Direction Method of Multipliers) [3], shows significant convergence performance, even without the help of second-order information. We also propose a practical version of C-ALADIN, named FedALADIN, that seamlessly serves the emerging federated learning applications, which expands the reach of our proposed C-ALADIN. We provide numerical experiments to demonstrate the effectiveness of C-ALADIN. The results show that C-ALADIN brings significant improvements in convergence performance.
Xu Du, Jingzhe Wang
2023-06-09T04:08:26Z
http://arxiv.org/abs/2306.05662v1
Consensus ALADIN: A Framework for Distributed Optimization and Its Application in Federated Learning ###### Abstract This paper investigates algorithms for solving distributed consensus optimization problems that are non-convex. Since Typical ALADIN (Typical Augmented Lagrangian based Alternating Direction Inexact Newton Method, T-ALADIN for short) [1] is a well-performing algorithm for distributed optimization problems that are non-convex, directly adopting T-ALADIN for consensus problems is a natural approach. However, T-ALADIN typically results in high communication and computation overhead, which makes such an approach far from efficient. In this paper, we propose a new variant of the ALADIN family, coined consensus ALADIN (C-ALADIN for short). C-ALADIN inherits all the good properties of T-ALADIN, such as the local linear or super-linear convergence rate and the local convergence guarantees for non-convex optimization problems; besides, C-ALADIN offers unique improvements in terms of _communication efficiency_ and _computational efficiency_. Moreover, C-ALADIN includes a reduced version that, in comparison with Consensus ADMM (Alternating Direction Method of Multipliers) [3], shows significant convergence performance, even without the help of second-order information. We also propose a practical version of C-ALADIN, named FedALADIN, that seamlessly serves the emerging federated learning applications, which expands the reach of our proposed C-ALADIN. We provide numerical experiments to demonstrate the effectiveness of C-ALADIN. The results show that C-ALADIN brings significant improvements in convergence performance. Distributed Consensus Optimization, Algorithm Efficiency, Convergence Analysis, Federated Learning ## 1 Introduction In recent years, distributed optimization algorithms have received a lot of attention due to developments in numerical optimal control [4], smart grid [5], wireless communication [6], game theory [7], and machine learning [8]. In the field of distributed optimization algorithm design, solving distributed non-convex problems efficiently has long been a central goal. To deal with non-convexity, in this paper, we follow this direction and propose a novel algorithmic framework for distributed non-convex consensus optimization. ### _The Road to Consensus ALADIN_ We start by introducing distributed optimization (DO for short) problems. DO problems are generally formulated in the fashion of mathematical programming, where separable objectives are linearly coupled by \(m\) equality constraints. Formally, they can be described as follows: \[\begin{split}\min_{\xi_{i}\in\mathbb{R}^{n_{i}}}& \sum_{i=1}^{N}f_{i}(\xi_{i})\\ {\rm s.t.}&\sum_{i=1}^{N}A_{i}\xi_{i}=b\ |\mu.\end{split} \tag{1}\] Here, the coupling matrices \(A_{i}\in\mathbb{R}^{m\times n_{i}}\) and the coupling parameter \(b\in\mathbb{R}^{m}\) are given. The dimensions \(n_{i}\) of the private variables \(\xi_{i}\) are potentially different. \(\mu\in\mathbb{R}^{m}\) indicates the corresponding dual variable of the coupling constraints. When the objectives \(f_{i}\) are convex, there are some classical algorithms that can be used to solve the problem, such as dual decomposition (DD) [9], [10], and ADMM [3]. As a special case of DO, distributed consensus (DC) optimization faces the same challenges, such as the lack of convergence theory for non-convex cases. 
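As a concrete illustration of this coupled format (our own toy construction; the paper does not fix a particular choice of \(A_{i}\)), the consensus constraints \(x_{i}=z\) that appear below can be written as \(\sum_{i}A_{i}\xi_{i}=b\) by treating \(z\) as one extra agent's variable:

```
import numpy as np

def consensus_coupling(N, n):
    """Encode x_i = z (i = 1..N) in the coupled form sum_i A_i xi_i = b.

    Agents 1..N own x_i in R^n; an extra 'agent' N+1 owns z.
    Block row i of the constraints reads x_i - z = 0, so the number
    of constraint blocks grows linearly in N.
    """
    m = N * n
    A = []
    for i in range(N):                       # A_i multiplying x_i
        Ai = np.zeros((m, n))
        Ai[i * n:(i + 1) * n, :] = np.eye(n)
        A.append(Ai)
    A.append(np.tile(-np.eye(n), (N, 1)))    # A_{N+1} multiplying z
    return A, np.zeros(m)

A, b = consensus_coupling(N=3, n=2)
v = np.ones(2)                               # any common value x_i = z = v
residual = sum(Ai @ v for Ai in A[:-1]) + A[-1] @ v
assert np.allclose(residual, b)              # the constraints are satisfied
```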
The main difference between DO and DC is that DC has a global variable to which all the private variables will converge (detailed descriptions will be given in Section 2.1). As a milestone in DC research, [11] shows that Consensus ADMM [3], under some assumptions, has a linear convergence rate for DC problems that are strongly convex. We refer the reader to [12] for a survey with more details. Notice that, similar to Consensus ADMM, current algorithms in the area of DC such as DGD [13] and EXTRA [14] have convergence guarantees only for convex problems. However, many practical problems [4], especially those met in federated learning (FL) [15], are non-convex. When meeting such problems, the above algorithms cannot provide satisfactory solutions in general. To solve such non-convex problems, in the literature, T-ALADIN is a state-of-the-art algorithm that can provide a theoretical local convergence guarantee. Technically, T-ALADIN can be regarded as a successful combination of ADMM and sequential quadratic programming (SQP). Details can be found in Subsection 2.2. To date, T-ALADIN has several elegant successors [16], [17], [18] and has already shown effectiveness in many applications [16, 17, 20, 21]. From the above facts, to solve distributed consensus problems that are non-convex, directly adopting T-ALADIN seems very natural. However, such a trivial approach meets the following challenges: _first_, T-ALADIN will bring a large number of constraints into the coupled QP step (details can be found in (8)), and the dimension of the corresponding dual variable \(\mu\) will be extremely large; _second_, the T-ALADIN structure inevitably depends on the uploading of first and second order information (details can be found in (7)) from the agents and the downloading of the updated primal and dual variables, which incurs a huge communication complexity; _third_, in T-ALADIN, a large-scale coupled QP has to be solved exactly, which results in a heavy computational workload. In this paper, we propose C-ALADIN for meeting the aforementioned challenges. C-ALADIN addresses the challenges as follows: _First_, instead of the coupled QP solved in T-ALADIN, a consensus QP is solved in C-ALADIN; _second_, we improve the upload and download communication efficiency by designing decoding strategies on both the agent and master sides. In detail, on the uploading side, we find that the first and second order information is already determined by the local optimizer. Such an observation, in conjunction with the approximation techniques of Broyden-Fletcher-Goldfarb-Shanno (BFGS) [2], enables the master to recover the Hessian approximation matrices. It avoids uploading Hessian matrices directly from the agents. We name the above techniques _Consensus BFGS ALADIN_. Later, in a reduced version named _Reduced Consensus ALADIN_, we simply use an identity matrix for large-scale computation problems. On the downloading side, inspired by the KKT (Karush-Kuhn-Tucker) optimality condition of the coupled QP, we allow the agents to recover the dual variables without their being broadcast. This can be realized by decoding such variables with the help of the global variable; _third_, in C-ALADIN, the computational bottlenecks come from solving a large-scale sparse consensus QP that plays a key role in coordinating information. Inspired by the KKT technique mentioned above, we find an equivalent form of the large-scale sparse consensus QP in C-ALADIN. Such a form can significantly reduce the burden of computing the corresponding KKT matrix. 
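As a quick numerical check of the upload-side decoding just described (a toy instance of ours with a quadratic local loss, anticipating the identity that appears in (12) below):

```
import numpy as np

# Local subproblem: minimize f_i(x) + lambda^T x + (rho/2)||x - z||^2
# with f_i(x) = 0.5 * ||x - c||^2, whose minimizer is available in
# closed form; the master then decodes grad f_i(x+) from x+ alone.
rho, c = 2.0, np.array([1.0, -1.0])
z, lam = np.array([0.5, 0.0]), np.array([0.1, -0.2])
x_plus = (c + rho * z - lam) / (1.0 + rho)   # exact local minimizer
grad_decoded = rho * (z - x_plus) - lam      # master-side decode
assert np.allclose(grad_decoded, x_plus - c) # equals grad f_i(x_plus)
```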
Based on our proposed C-ALADIN, we then develop a convergence analysis, which works for both convex and non-convex cases. In order to expand the application scope of the C-ALADIN algorithm family, we next show how FL can benefit from it. ### _Federated Learning via Consensus ALADIN_ FL, as a framework that aims to train a relatively universal model with data from different devices without transmitting the original data directly, involves many DC problems that are in general either convex or non-convex. In fact, there are several existing efforts [8, 15, 22, 23, 24, 25] on solving DC problems in FL. However, such algorithms typically suffer from a lack of theoretical analysis in the non-convex case and from unsatisfactory convergence rates. To achieve both, we adopt C-ALADIN in FL. Though C-ALADIN shows promising convergence performance, directly integrating C-ALADIN with FL is non-trivial. Specifically, all current members of C-ALADIN rely on uploading the local primal variables to the master, which is not secure. Moreover, because of the high-dimensional nature of FL, second-order information cannot be used, which limits the power of _Consensus BFGS ALADIN_. Observing these challenges, based on _Reduced Consensus ALADIN_, we design a novel variant member of C-ALADIN, named FedALADIN, that seamlessly meets the requirements of FL. In summary, the key contributions of this paper are as follows: a) We introduce the notion of the consensus QP. b) We propose a novel efficient algorithm family, C-ALADIN, that has rigorous convergence guarantees and consists of _Consensus BFGS ALADIN_ and _Reduced Consensus ALADIN_. c) We propose a novel proof framework to perform the convergence analysis of C-ALADIN. d) We propose FedALADIN for FL. e) We perform numerical experiments on both C-ALADIN and FedALADIN. The results show that both C-ALADIN and FedALADIN bring significant improvements in convergence performance. ### _Organization_ The rest of this paper is organized as follows. In Section 2, we provide mathematical preliminaries, including fundamentals of Consensus ADMM and T-ALADIN. In Section 3, we present a new algorithm named C-ALADIN. Moreover, in Section 4, we show that a reduced version of C-ALADIN, named FedALADIN, can be applied to FL. Later, the convergence theory of C-ALADIN is established in Section 5. Finally, we show the numerical results in Section 6. In Section 7, we provide a literature review. Section 8 concludes this paper. ## 2 Preliminaries In this section, we provide the formal fundamentals of Consensus ADMM and T-ALADIN. ### _FL via Consensus ADMM_ Assume that we have \(N\) clients1 and each of those has a local dataset \(\mathcal{D}_{i}\) where \(i\in\{1,\ldots,N\}\). Here, the loss function of client \(i\) is defined as Footnote 1: In numerical optimization, one who solves the sub-problem is called an agent, and in FL it is called a client. In this paper, we use the two notions interchangeably. \[f_{i}(z)=\alpha_{i}\sum_{t_{i}\in\mathcal{D}_{i}}l_{i}(z;t_{i}), \tag{2}\] where \(l_{i}(z;t_{i}):\mathbb{R}^{n\times|t_{i}|}\rightarrow\mathbb{R}\) is a mapping for measuring the prediction error of the global variable \(z\in\mathbb{R}^{n}\). Moreover, the \(\alpha_{i}\)s are positive weights with \(\sum_{i=1}^{N}\alpha_{i}=1\). The main goal of machine learning is to solve the following joint optimization problem: \[\min_{z\in\mathbb{R}^{n}}F(z)=\sum_{i=1}^{N}f_{i}(z). 
\tag{3}\] However, the \(\mathcal{D}_{i}\)s usually belong to different clients and cannot be shared with each other. To deal with this, instead of solving Problem (3), FL solves the reformulated Problem (4) in a distributed way. \[\begin{split}\min_{x_{i},z\in\mathbb{R}^{n}}& \sum_{i=1}^{N}f_{i}(x_{i})\\ &\mathrm{s.t.}& x_{i}=z\mid\lambda_{i}.\end{split} \tag{4}\] Here \(x_{i}\) denotes the local primal variable of agent \(i\) and \(z\) indicates the primal global variable. By using the Lagrange multipliers (dual variables) \(\lambda_{i}\), the corresponding Lagrangian function can be expressed as \[\begin{split}\mathcal{L}(x_{i},z,\lambda_{i})=& \sum_{i=1}^{N}f_{i}(x_{i})\\ &+\sum_{i=1}^{N}\lambda_{i}^{\top}(x_{i}-z)+\sum_{i=1}^{N}\frac{ \rho}{2}\|x_{i}-z\|^{2}.\end{split} \tag{5}\] Here, \(\rho\) is a given positive penalty parameter. From (5), in FedADMM, the local primal and dual variables can be updated as in Algorithm 1 with learning rate \(\eta_{i}\). Note that the uploaded information \(w_{i}\) from the client side of Algorithm 1 is a linear combination of the local primal and dual variables, which is a secure way of protecting clients' local information. ``` Initialization: Initial guess of global model \(z=0\), local models \(x_{i}=0\), and dual variables \(\lambda_{i}^{\text{ADMM}}=0\). Set the total number of rounds \(T\) and penalty parameter \(\rho\). For \(t=1\dots T\) Clients: // In parallel For \(i=1\dots N\) Download \(z\) from the server Locally update \(w_{i}\leftarrow\texttt{ClientUpdate}(z,i)\) Upload \(w_{i}\) to the server End Server: \(z=\frac{1}{N}\sum_{i=1}^{N}w_{i}\). End ClientUpdate(\(z,i\)): Input: Local epoch number \(E_{i}\), client learning rate \(\eta_{i}\). For \(e=1\dots E_{i}\) \(x_{i}=x_{i}-\eta_{i}\left(\nabla f_{i}(x_{i})+\lambda_{i}^{\text{ADMM}}+\rho \left(x_{i}-z\right)\right)\) End \(\lambda_{i}^{\text{ADMM}}=\lambda_{i}^{\text{ADMM}}+\rho(x_{i}-z)\). \(w_{i}=x_{i}+\frac{1}{\rho}\lambda_{i}^{\text{ADMM}}\) return: \(w_{i}\) ``` **Algorithm 1**FedADMM: Consensus ADMM for FL **Remark 1**.: _The private variables \(x_{i}\) can be updated by applying any approximation technologies such as decentralized linearized alternating direction method of multipliers (DLM) [26] or decentralized quadratically approximated alternating direction method of multipliers (DQM) [27]._ **Remark 2**.: _As pointed out in [12, Section IV], a Consensus ADMM variation [11] was specifically designed for solving Problem (4) instead of Problem (1) and should be considered a relatively independent algorithm._ ### _T-ALADIN in A Nutshell_ T-ALADIN is the first distributed optimization algorithm that can generally solve non-convex DO problems (1). In the first step, Algorithm 2 has a similar operation to ADMM and gets new local optimizers from each client. In the second step, we evaluate the gradients \(g_{i}\) and positive definite Hessian approximations \(B_{i}\) at the \(\xi_{i}^{+}\)s. In the third step, we solve a large-scale QP for coordination by using the \(g_{i}\)s and \(B_{i}\)s from the second step. In the end, we send the updated primal and dual variables to each client in the fourth step. The main difference between ADMM and T-ALADIN is that the latter updates the global dual variable \(\mu\) by solving the constrained coupled QP (Equation (8)). Moreover, under the assumptions of the linear independence constraint qualification (LICQ) and the second-order sufficient condition (SOSC), Algorithm 2 has local convergence guarantees for distributed non-convex problems [28]. 
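For intuition about step 3 of Algorithm 2, the coupled QP (8) can be solved by eliminating the primal variables and solving a Schur-complement system for the new dual. The following sketch is our own (assuming dense, invertible \(B_{i}\)), not the paper's implementation:

```
import numpy as np

def coupled_qp_step(B, g, A, xi_plus, b):
    """Solve the coupled QP (8): stationarity gives
    B_i dxi_i + g_i + A_i^T mu = 0, so dxi_i = -B_i^{-1}(g_i + A_i^T mu);
    substituting into the coupling constraint yields the linear system
    S mu = sum_i A_i (xi_i^+ - B_i^{-1} g_i) - b for the new dual mu."""
    Binv_g = [np.linalg.solve(Bi, gi) for Bi, gi in zip(B, g)]
    S = sum(Ai @ np.linalg.solve(Bi, Ai.T) for Ai, Bi in zip(A, B))
    rhs = sum(Ai @ (xi - w) for Ai, xi, w in zip(A, xi_plus, Binv_g)) - b
    mu = np.linalg.solve(S, rhs)
    dxi = [-np.linalg.solve(Bi, gi + Ai.T @ mu)
           for Bi, gi, Ai in zip(B, g, A)]
    return dxi, mu

# Tiny random instance; the coupling constraint holds after the step.
rng = np.random.default_rng(0)
m, dims = 2, [3, 4]
A = [rng.standard_normal((m, n)) for n in dims]
B = [2.0 * np.eye(n) for n in dims]
g = [rng.standard_normal(n) for n in dims]
xi_plus = [rng.standard_normal(n) for n in dims]
b = rng.standard_normal(m)
dxi, mu = coupled_qp_step(B, g, A, xi_plus, b)
assert np.allclose(sum(Ai @ (xi + d)
                       for Ai, xi, d in zip(A, xi_plus, dxi)), b)
```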
### _T-ALADIN in A Nutshell_

T-ALADIN is the first distributed optimization algorithm that can generally solve the non-convex DO problem (1). In the first step, Algorithm 2 performs an operation similar to ADMM and obtains a new local optimizer from each client. We then evaluate the gradients \(g_{i}\) and positive definite Hessian approximations \(B_{i}\) at the \(\xi_{i}^{+}\)s. In the third step, we solve a large-scale QP for coordination using the \(g_{i}\)s and \(B_{i}\)s from the second step. Finally, in the fourth step, we send the updated primal and dual variables to each client. The main difference between ADMM and T-ALADIN is that the latter updates the global dual variable \(\mu\) by solving the constrained coupled QP (Equation (8)). Moreover, under the linear independence constraint qualification (LICQ) and the second order sufficient condition (SOSC), Algorithm 2 has local convergence guarantees for distributed non-convex problems [28].

Note that T-ALADIN tries to collect all the good properties of SQP and ADMM. There are two main extreme cases:

* When \(\rho\rightarrow\infty\), the first step of Algorithm 2 is redundant and the whole structure is equivalent to SQP.
* When \(\rho\to 0\), Algorithm 2 is equivalent to a combination of DD and Newton's method.

```
Initialization: Initial guess of primal and dual variables \((y_{i},\mu)\).
Repeat:
1. Parallelly solve the local nonlinear programming (NLP) problems without subsystem coupling:
\[\xi_{i}{}^{+}=\operatorname*{arg\,min}_{\xi_{i}}f_{i}(\xi_{i})+\mu^{\top}A_{i}\xi_{i}+\frac{\rho}{2}\|\xi_{i}-y_{i}\|^{2}.\] (6)
2. Evaluate the Hessian approximation and gradient at \(\xi_{i}{}^{+}\):
\[\left\{\begin{array}{l}B_{i}\approx&\nabla^{2}f_{i}(\xi_{i}^{+})\succ 0,\\ g_{i}=&\nabla f_{i}(\xi_{i}^{+}).\end{array}\right.\] (7)
3. Solve the coupled QP on the master side:
\[\begin{split}\min_{\Delta\xi_{i}}&\sum_{i=1}^{N}\frac{1}{2}\Delta\xi_{i}^{\top}B_{i}\Delta\xi_{i}+g_{i}^{\top}\Delta\xi_{i}\\ &\mathrm{s.t.}\sum_{i=1}^{N}A_{i}(\xi_{i}^{+}+\Delta\xi_{i})=b\mid\mu^{+}.\end{split}\] (8)
4. Download:
\[\left\{\begin{array}{l}\mu\leftarrow\mu^{+}\\ y_{i}\leftarrow\xi_{i}^{+}+\Delta\xi_{i}\end{array}\right.\] (9)
```
**Algorithm 2** Typical ALADIN

**Remark 3**.: _To the best of our knowledge, although a convex LASSO problem has been solved with T-ALADIN [18, Section 5.1], it is a two-objective optimization problem. Besides, no concept like consensus ALADIN has been formally proposed by others._

## 3 Consensus ALADIN

In Subsection 3.1, we propose the consensus QP. In Subsection 3.2, we detail our communication-efficient design. In Subsection 3.3, we provide the techniques that improve computational efficiency. In Subsection 3.4, with the techniques proposed in the above subsections, we formally describe C-ALADIN with its two variants, namely Consensus BFGS ALADIN and Reduced Consensus ALADIN.

### _From Coupled QP to Consensus QP_

Recall from Subsection 1.1 that T-ALADIN struggles with solving Problem (4) because of the coupled equality constraints in Equation (8). Thus, the first step towards making T-ALADIN work on (4) is to reconstruct Equation (8), which yields our consensus QP, formally shown as follows: \[\begin{split}\min_{\Delta x_{i},z}&\sum_{i=1}^{N}\left(\frac{1}{2}\Delta x_{i}^{\top}B_{i}\Delta x_{i}+g_{i}^{\top}\Delta x_{i}\right)\\ \mathrm{s.t.}&\Delta x_{i}+x_{i}^{+}=z\;|\lambda_{i}.\end{split} \tag{10}\] It is easy to see that, in (10), by introducing a global variable \(z\) and coupling all the \(x_{i}\)s to \(z\), the number of coupled equality constraints is reduced to \(O(N)\), which is lower than the \(O(N^{2})\) that comes from the direct adoption of T-ALADIN (see Figures 1 and 2). Note that, in Equation (10), we have one global primal variable \(z\) and \(N\) dual variables, in contrast to the formulation of Equation (8). After replacing Equation (8) with Equation (10), Algorithm 2 starts working in a consensus fashion. We name this C-ALADIN.

**Remark 4**.: _Regarding the number of coupling constraints of T-ALADIN, we want to stress that \(O(N^{2})\), corresponding to Figure 1, describes the worst case when directly adopting T-ALADIN to solve DC problems.
As for the best case, though T-ALADIN can work under \(O(N)\) coupling constraints like those of C-ALADIN, it needs a fine-grained design of the \(A_{i}\)s, which, from the user's perspective, hinders the ease of practical adoption of T-ALADIN._

### _Improving the Communication Efficiency of C-ALADIN_

In this subsection, we jointly improve the upload (Subsection 3.2.1) and download (Subsection 3.2.2) communication efficiency of C-ALADIN.

#### 3.2.1 Improving Upload Communication Efficiency

In the C-ALADIN framework, instead of solving Problem (6), we solve the following decoupled augmented loss function (11) with the local dual variable \(\lambda_{i}\): \[{x_{i}}^{+}=\operatorname*{arg\,min}_{x_{i}}f_{i}(x_{i})+\lambda_{i}^{\top}x_{i}+\frac{\rho}{2}\|x_{i}-z\|^{2}. \tag{11}\] In order to avoid uploading the gradient and the Hessian approximation (7) directly, we choose to decode the first and second order information on the master side, as detailed in (12): \[\begin{cases}g_{i}(x_{i}^{+})=\;\rho(z-x_{i}^{+})-\lambda_{i}\quad\text{((sub)gradient)},\\ s_{i}(x_{i}^{+},x_{i}^{-})=\;x_{i}^{+}-x_{i}^{-},\\ y_{i}(x_{i}^{+},x_{i}^{-})=\;g_{i}(x_{i}^{+})-g_{i}^{-},\\ B_{i}^{+}=\;B_{i}-\frac{B_{i}s_{i}s_{i}^{\top}B_{i}}{s_{i}^{\top}B_{i}s_{i}}+\frac{y_{i}y_{i}^{\top}}{s_{i}^{\top}y_{i}}\quad\text{(BFGS update)}.\end{cases} \tag{12}\] We want to stress several key designs behind Equation (12). First, we suppose that the augmented NLP (11) can be solved exactly. By applying the Clarke sub-differential of \(f_{i}\) at \(x_{i}^{+}\), we have \(g_{i}=\rho(z-x_{i}^{+})-\lambda_{i}\in\partial f_{i}\). This means that the (sub)gradient can be decoded from \(x_{i}^{+}\) without being transmitted. Second, by using the differences of the local private variables and of the local (sub)gradients, the BFGS Hessian approximation can also be decoded by the master. In order to ensure the positive definiteness of the local BFGS matrices, we adopt the strategy of _damped BFGS_ [2, Page 537]; that is, we modify the local gradient difference \(y_{i}\) as \[y_{i}=y_{i}+\theta(B_{i}s_{i}-y_{i})\] with the tuning parameter \[\theta=\frac{0.2(s_{i})^{\top}B_{i}s_{i}-(s_{i})^{\top}y_{i}}{(s_{i})^{\top}B_{i}s_{i}-(s_{i})^{\top}y_{i}}\] if \[(y_{i})^{\top}s_{i}\leq\frac{1}{5}(s_{i})^{\top}B_{i}s_{i}.\] Note that if \(\theta=1\), then \(B_{i}^{+}=B_{i}\). The damping thus ensures that the curvature \((s_{i})^{\top}y_{i}\) along the direction \(s_{i}\) never falls below one fifth of \((s_{i})^{\top}B_{i}s_{i}\). Modified versions of BFGS may also work. For example, to address the storage problem of the BFGS Hessian approximation, _limited memory BFGS_ (L-BFGS) may be a promising solution.

**Remark 5**.: _If we are dealing with a single objective problem, instead of computing the inverse of \(B_{i}^{+}\) directly, we have the following closed form:_ \[\begin{split}(B^{k+1})^{-1}=&(B^{k})^{-1}+\frac{(s^{\top}y+y^{\top}(B^{k})^{-1}y)(ss^{\top})}{(s^{\top}y)^{2}}\\ &-\frac{(B^{k})^{-1}ys^{\top}+sy^{\top}(B^{k})^{-1}}{s^{\top}y}.\end{split}\] _It cannot be applied directly if we have a summation of \(B_{i}\)s._

#### 3.2.2 Improving Download Communication Efficiency

The Lagrangian function of Equation (10) reads \[\begin{split}\mathcal{L}^{\text{QP}}(\Delta x_{i},z,\lambda_{i})=&\left(\sum_{i=1}^{N}\frac{1}{2}\Delta x_{i}^{\top}B_{i}\Delta x_{i}+g_{i}^{\top}\Delta x_{i}\right)\\ &+\left(\sum_{i=1}^{N}\lambda_{i}^{\top}(\Delta x_{i}+x_{i}^{+}-z)\right).\end{split} \tag{13}\]

Fig. 1: Full connection (worst-case scenario).
Fig. 2: Private-master coupling.

The KKT system of (13) can then be expressed as the following three equations, \[\begin{cases}\frac{\partial\mathcal{L}^{\text{QP}}}{\partial\Delta x_{i}}=B_{i}\Delta x_{i}+g_{i}+\lambda_{i}=0,\\ \frac{\partial\mathcal{L}^{\text{QP}}}{\partial\lambda_{i}}=\Delta x_{i}+x_{i}^{+}-z=0,\\ \frac{\partial\mathcal{L}^{\text{QP}}}{\partial z}=-\sum_{i=1}^{N}\lambda_{i}=0,\end{cases} \tag{14}\] which implies \(\Delta x_{i}=z-x_{i}^{+}\) and \(\lambda_{i}=B_{i}(x_{i}^{+}-z)-g_{i}\). This shows that the agents can decode the dual variables \(\lambda_{i}\) from the global variable \(z\) without their being transmitted. In summary, in C-ALADIN, the agents only upload their private variable updates \(x_{i}^{+}\) to the master, while the master broadcasts the global variable (aggregated model) \(z\). In this way, neither the Hessians nor the gradients need to be uploaded, and the dual variables need not be downloaded. Improving the communication efficiency as above is not the end; to further enhance our algorithm, we next boost its computational efficiency.

### _Improving the Computational Efficiency of C-ALADIN_

To improve the computational efficiency, a straightforward approach is to resort to existing QP solvers: solvers based on active set methods, such as qpOASES [29], MOSEK [30], and GUROBI [31]; solvers based on interior point methods, such as CVXGEN [32] and OOQP [33]; and solvers based on ADMM or operator splitting, such as OSQP [34]. However, such solvers ignore the special structure of our input (10). Therefore, a customized QP solver is needed to improve the computational efficiency. Next, we show the technical details. The global primal variable can be updated by a cheap operation, as stated in the following theorem.

**Theorem 1**.: _With the Hessian approximations and the gradients of the agents decoded via Equation (12), the global variable \(z\) can be updated as in Equation (15)._23

Footnote 2: Assume that the matrix inverse operation can be performed on the master side; instead of an explicit inverse, _conjugate gradient descent_ can also be applied here.

Footnote 3: Suppose that for iterations \(k\) with \(\log_{\mathcal{K}}(k)\in\mathbb{N}\)[19], the Hessian matrices can optionally be uploaded from the agents to the master. The decomposition of a positive definite matrix can be prepared before the consensus QP step (10).

\[z_{\text{BFGS}}^{+}=\left(\sum_{i=1}^{N}B_{i}^{+}\right)^{-1}\left(\left(\sum_{i=1}^{N}B_{i}^{+}x_{i}^{+}\right)-\left(\sum_{i=1}^{N}g_{i}\right)\right). \tag{15}\]

Proof.: The linear system (14) can be expressed in the following dense form: \[\underbrace{\begin{bmatrix}\mathcal{B}&\mathbf{I}&\mathbf{0}\\ \mathbf{I}&\mathbf{0}&-\mathcal{I}\\ \mathbf{0}^{\top}&(-\mathcal{I})^{\top}&\mathbf{0}\end{bmatrix}}_{M_{\text{KKT}}\in\mathbb{R}^{(2N+1)n\times(2N+1)n}}\begin{bmatrix}\Delta x\\ \lambda\\ z\end{bmatrix}=\begin{bmatrix}-\mathcal{G}\\ -x^{+}\\ \mathbf{0}\end{bmatrix} \tag{16}\] with \[\mathcal{B}=\begin{bmatrix}B_{1}&0&\cdots&0\\ 0&B_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&B_{N}\end{bmatrix},\quad\mathbf{I}=\begin{bmatrix}I&0&\cdots&0\\ 0&I&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&I\end{bmatrix},\] \[\mathcal{I}=\begin{bmatrix}I&I&\cdots&I\end{bmatrix}^{\top}.\] Directly solving QP (10) is equivalent to solving the linear system (16). We note that the roadblock to making C-ALADIN computationally efficient is the heavy computational workload incurred by inverting the large-scale matrix \(M_{\text{KKT}}\).
Inspired by the _Schur complement_, we proceed as follows. From the first row of Equation (16), we have \[\Delta x=-\mathcal{B}^{-1}(\lambda+\mathcal{G}). \tag{17}\] Then, plugging (17) into the second row of Equation (16) gives Equation (18): \[\mathcal{B}^{-1}(\lambda+\mathcal{G})+\mathcal{I}z=x^{+}\Rightarrow\lambda=\mathcal{B}(x^{+}-\mathcal{I}z)-\mathcal{G}. \tag{18}\] Next, substituting Equation (18) into the third row of Equation (16), we obtain \[(-\mathcal{I})^{\top}\mathcal{B}(x^{+}-\mathcal{I}z)+(\mathcal{I})^{\top}\mathcal{G}=0. \tag{19}\] The update of the global variable \(z\) can then be expressed as follows from Equation (19): \[z_{\text{BFGS}}^{+}= \underbrace{\left(\mathcal{I}^{\top}\mathcal{B}\mathcal{I}\right)^{-1}}_{\mathcal{K}\in\mathbb{R}^{n\times n}}\left(\mathcal{I}^{\top}\mathcal{B}x^{+}-(\mathcal{I})^{\top}\mathcal{G}\right)\] \[= \left(\sum_{i=1}^{N}B_{i}^{+}\right)^{-1}\left(\left(\sum_{i=1}^{N}B_{i}^{+}x_{i}^{+}\right)-\left(\sum_{i=1}^{N}g_{i}\right)\right).\] As an extension, if the \(B_{i}\)s are set to \(\rho I\), Equation (15) reduces to \[z_{\rho}^{+}=\frac{1}{N}\sum_{i=1}^{N}\left(x_{i}^{+}-\frac{g_{i}}{\rho}\right). \tag{20}\] Here, the update (20) has almost the same computational complexity as Consensus ADMM, since no inverse of an aggregated Hessian matrix is needed. Both ways of updating the global variable ((15) and (20)) avoid computing \(\Delta x_{i}\) and \(\lambda_{i}\) together. In this way, the burden of the large-scale matrix inversion in Equation (16) is avoided.

**Remark 6**.: _With Equation (20), the primal increment \(\Delta x_{i}\) and the dual \(\lambda_{i}\) can then be decoded by the agents via_ \[\begin{cases}\Delta x_{i}=z-x_{i}^{+},\\ \lambda_{i}=\rho(x_{i}^{+}-z)-g_{i}\end{cases} \tag{21}\] _in the next iteration._

By combining the techniques proposed in the above three subsections, we obtain two variants of C-ALADIN, illustrated in the next subsection.

### _Algorithm Structure_

In this subsection, by combining the techniques described in Subsections 3.1-3.3, we propose two algorithms. The first, named _Consensus BFGS ALADIN_, benefits from the BFGS Hessian approximation (with Equation (15)). The second, named _Reduced Consensus ALADIN_ (with Equation (20)), can work without second-order information, at the cost of a possibly degraded convergence rate. We detail Consensus BFGS ALADIN in Algorithm 3. _Reduced Consensus ALADIN_ can easily be obtained from Algorithm 3 by replacing \(x_{i}^{+}\), \(\lambda_{i}\), \(g_{i}\) and \(z\) with the ones defined by the following equation: \[\begin{cases}\lambda_{i}=\rho(x_{i}-z)-g_{i},\\ x_{i}{}^{+}=\operatorname*{arg\,min}_{x_{i}}f_{i}(x_{i})+\lambda_{i}^{\top}x_{i}+\frac{\rho}{2}\|x_{i}-z\|^{2},\\ g_{i}=\rho(z-x_{i}^{+})-\lambda_{i},\\ z=\frac{1}{N}\sum_{i=1}^{N}\left(x_{i}^{+}-\frac{g_{i}}{\rho}\right).\end{cases} \tag{22}\] Note that both algorithms belong to the class of C-ALADIN.

**Remark 7**.: _As a supplement, we analyze the difference between Reduced Consensus ALADIN and Consensus ADMM in Appendix A._

To expand the reach of our proposed C-ALADIN to the DC problems arising in FL, we next introduce a novel way of applying Reduced Consensus ALADIN in this area.
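As a quick numerical sanity check of Theorem 1, the following self-contained NumPy sketch compares the closed-form update (15) with a direct solve of the KKT system (16); all matrices and vectors are random and purely illustrative.

```python
import numpy as np

# Verify that the closed-form global update (15) matches the solution of the
# full KKT system (16) on a small random instance.
N, n = 5, 4
rng = np.random.default_rng(0)

Bs = []                                   # random positive definite B_i
for _ in range(N):
    M = rng.standard_normal((n, n))
    Bs.append(M @ M.T + n * np.eye(n))
gs = [rng.standard_normal(n) for _ in range(N)]   # gradients g_i
xs = [rng.standard_normal(n) for _ in range(N)]   # local iterates x_i^+

# Closed-form global update, Equation (15).
z_closed = np.linalg.solve(sum(Bs), sum(B @ x for B, x in zip(Bs, xs)) - sum(gs))

# Full KKT system (16): unknowns stacked as [dx_1..dx_N, lam_1..lam_N, z].
dim = (2 * N + 1) * n
M_kkt = np.zeros((dim, dim))
rhs = np.zeros(dim)
for i in range(N):
    r = i * n
    M_kkt[r:r+n, i*n:(i+1)*n] = Bs[i]                 # B_i dx_i
    M_kkt[r:r+n, (N+i)*n:(N+i+1)*n] = np.eye(n)       # + lam_i = -g_i
    rhs[r:r+n] = -gs[i]
    r2 = (N + i) * n
    M_kkt[r2:r2+n, i*n:(i+1)*n] = np.eye(n)           # dx_i
    M_kkt[r2:r2+n, 2*N*n:] = -np.eye(n)               # - z = -x_i^+
    rhs[r2:r2+n] = -xs[i]
    M_kkt[2*N*n:, (N+i)*n:(N+i+1)*n] = -np.eye(n)     # -sum_i lam_i = 0

z_kkt = np.linalg.solve(M_kkt, rhs)[2*N*n:]
print(np.allclose(z_closed, z_kkt))  # True: (15) agrees with solving (16) directly
```

The closed form only requires a single \(n\times n\) solve, while the direct approach factors a \((2N+1)n\times(2N+1)n\) matrix, which is exactly the cost saving argued above.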
## 4 FedALADIN

To meet the DC problems in FL, directly adopting Consensus BFGS ALADIN or Reduced Consensus ALADIN is a straightforward approach. However, two challenges arise: first, owing to the high dimension of the private variables \(x_{i}\), we cannot benefit from second-order information; second, when solving the DC problems in FL, existing algorithms do not carefully determine the number of training _epochs_, which makes the local solutions inexact. The latter prevents the direct adoption of our techniques, which assume exact local solutions. To address these two challenges, in this section we carefully design a variant of Reduced Consensus ALADIN, named FedALADIN, that works well for the problems in FL. We sketch the key design of FedALADIN as follows: following the conventions in Algorithm 1 and observing the structure of Equation (20), each client transmits the quantity in Equation (25) instead of \(x_{i}\): \[w_{i}=\left(x_{i}-\frac{g_{i}}{\rho}\right). \tag{25}\] Next, we detail FedALADIN in Algorithm 4. Different from Algorithm 1, Algorithm 4 obtains the local optimizer using the dual variable \(\lambda_{i}\) decoded from the previous global model \(z\). Then we evaluate the (sub)gradients as in Equation (24). Later, we encode the transmitted data \(w_{i}\) with Equation (25). After receiving the \(w_{i}\)s from each client, the server aggregates them as in FedADMM.

```
Initialization: Initial guess of global model \(z=0\), local models \(x_{i}^{-}=0\), gradients \(g_{i}=0\) and dual variables \(\lambda_{i}=0\). Set the total number of rounds \(T\) and the penalty parameter \(\rho\).
For \(t=1\dots T\)
  Clients: // In parallel
  For \(i=1\dots N\)
    Download \(z\) from the server
    Locally update \(w_{i}\leftarrow\texttt{ClientUpdate}(z,i)\)
    Upload \(w_{i}\) to the server
  End
  Server: \(z=\frac{1}{N}\sum_{i=1}^{N}w_{i}\).
End

ClientUpdate(\(z,i\)):
  Input: Local epoch number \(E_{i}\), client learning rate \(\eta_{i}\).
  \(\lambda_{i}=\rho(x_{i}^{-}-z)-g_{i}\)
  For \(e=1\dots E_{i}\)
    \(x_{i}=x_{i}-\eta_{i}\left(\nabla f_{i}(x_{i})+\lambda_{i}+\rho\left(x_{i}-z\right)\right)\)
  End
  \(g_{i}=\nabla f_{i}(x_{i})\)
  \(w_{i}=x_{i}-\frac{g_{i}}{\rho}\)
  return: \(w_{i}\)
```
**Algorithm 4** FedALADIN

Note that the major difference between FedALADIN and FedADMM lies in the way the dual variables are updated. More importantly, the gradient evaluation (24) is symmetric to the dual update of Equation (45). However, FedALADIN benefits from the reduced QP operation compared with the former. In terms of structure, all the existing algorithms in FL are special cases of FedALADIN. In the next section, we aim to establish the global and local convergence theory of C-ALADIN.
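For comparison, here is the corresponding minimal NumPy sketch of Algorithm 4, parallel to the FedADMM sketch above; it differs only in decoding the dual variable from the previous round and in encoding \(w_{i}\) via Equation (25). Again, the quadratic losses and all parameters are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of Algorithm 4 (FedALADIN) on hypothetical quadratic losses
# f_i(x) = 0.5*||x - zeta_i||^2, so that grad f_i(x) = x - zeta_i.
rng = np.random.default_rng(1)
N, n, rho, eta, E, T = 10, 3, 1.0, 0.05, 20, 100
zetas = rng.standard_normal((N, n)) * 5

z = np.zeros(n)
x = np.zeros((N, n))   # x_i^- from the previous round
g = np.zeros((N, n))   # cached (sub)gradients g_i

for _ in range(T):
    ws = []
    for i in range(N):
        lam = rho * (x[i] - z) - g[i]          # decode dual from previous round
        for _ in range(E):                     # E inexact local gradient steps
            x[i] = x[i] - eta * ((x[i] - zetas[i]) + lam + rho * (x[i] - z))
        g[i] = x[i] - zetas[i]                 # gradient evaluation
        ws.append(x[i] - g[i] / rho)           # encode w_i, Equation (25)
    z = np.mean(ws, axis=0)                    # server aggregation, as in FedADMM

print(np.linalg.norm(z - zetas.mean(axis=0))) # should shrink toward zero
```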
## 5 Convergence Analysis

In this section, we are interested in the convergence behavior of C-ALADIN, which consists of the following three parts:

* Global convergence of Reduced Consensus ALADIN for convex problems (Subsection 5.1).
* Global linear convergence rate of Reduced Consensus ALADIN for Lipschitz continuous or strongly convex problems (Subsection 5.2).
* Local convergence analysis of non-convex problems with C-ALADIN (Subsection 5.3).

Fig. 3: Consensus ALADIN family.

**Initialization:** choose \(\rho>0\), initial guess \((\lambda_{i},z,B_{i}\succ 0)\) (or set \(B_{i}=\rho I\)). **Repeat:** 1. Each agent optimizes its own variable \(x_{i}\) locally and transmits it to the master \[{x_{i}}^{+}=\operatorname*{arg\,min}_{x_{i}}f_{i}(x_{i})+\lambda_{i}^{\top}x_{i}+\frac{\rho}{2}\|x_{i}-z\|^{2}\] (23) with \(\lambda_{i}=B_{i}(x_{i}^{-}-z)-g_{i}\). 2. Decode the gradients and Hessians of the sub-problems at the master side. a) The master decodes the (sub)gradient and the BFGS Hessian from each \({x_{i}}^{+}\): \[\begin{cases}g_{i}(x_{i}^{+})=\rho(z-x_{i}^{+})-\lambda_{i}&\text{(local (sub)gradient evaluation)}\,,\\ s_{i}(x_{i}^{+},x_{i}^{-})=x_{i}^{+}-x_{i}^{-}&\text{(difference of private variables)}\,,\\ y_{i}(x_{i}^{+},x_{i}^{-})=g_{i}(x_{i}^{+})-g_{i}^{-}&\text{(difference of local (sub)gradients)}\,.\end{cases}\] (24) b) Modify the local gradient difference based on the following condition: \[\begin{cases}y_{i}=y_{i}+\theta(B_{i}s_{i}-y_{i})\text{ where }\theta=\frac{0.2(s_{i})^{\top}B_{i}s_{i}-(s_{i})^{\top}y_{i}}{(s_{i})^{\top}B_{i}s_{i}-(s_{i})^{\top}y_{i}},&\text{if }(y_{i})^{\top}s_{i}\leq\frac{1}{5}(s_{i})^{\top}B_{i}s_{i},\\ y_{i}=y_{i},&\text{otherwise}.\end{cases}\] c) BFGS Hessian approximation evaluation: \[B_{i}^{+}=\ B_{i}-\frac{B_{i}s_{i}s_{i}^{\top}B_{i}}{s_{i}^{\top}B_{i}s_{i}}+\frac{y_{i}y_{i}^{\top}}{s_{i}^{\top}y_{i}}.\] 3. The master solves the following coupled QP with the updated \(x_{i}^{+}\), \(g_{i}\) and \(B_{i}\): \[z_{\text{BFGS}}^{+}=\left(\sum_{i=1}^{N}B_{i}^{+}\right)^{-1}\left(\left(\sum_{i=1}^{N}B_{i}^{+}x_{i}^{+}\right)-\left(\sum_{i=1}^{N}g_{i}(x_{i}^{+})\right)\right).\] Broadcast the global model \(z\) to the agents.

**Algorithm 3** Consensus BFGS ALADIN

The following equations will be used several times in the proofs below; they are provided here for convenience.
\[g_{i} =\rho(z-x_{i}^{+})-\lambda_{i}, \tag{27}\] \[\lambda_{i}^{+} =\rho(x_{i}^{+}-z^{+})-g_{i},\] (28) \[\sum_{i=1}^{N}\lambda_{i} =0,\] (29) \[z^{+} =\frac{1}{N}\sum_{i=1}^{N}\left(x_{i}^{+}-\frac{g_{i}}{\rho}\right),\] (30) \[\sum_{i=1}^{N}x_{i}^{+} =\frac{N}{2}(z^{+}+z),\] \[2(a-c)^{\top}(a-b) =\|a-c\|^{2}-\|b-c\|^{2}+\|a-b\|^{2}. \tag{31}\] Note that the unnumbered identity \(\sum_{i=1}^{N}x_{i}^{+}=\frac{N}{2}(z^{+}+z)\) is obtained by plugging Equation (27) into (30) and using (29). Equation (31) comes from [11]. The following lemma will also be useful in our proofs.

**Lemma 1**.: _With the updates of Reduced Consensus ALADIN, the local primal update is related to the local dual and global primal variables in the following way:_ \[x_{i}^{+}=\frac{\lambda_{i}^{+}-\lambda_{i}}{2\rho}+\frac{z^{+}+z}{2}. \tag{32}\]

Proof.: From Equation (28), \[\lambda_{i}^{+} =\rho(x_{i}^{+}-z^{+})-g_{i}\] \[\overset{(27)}{=}\rho(x_{i}^{+}-z^{+})-\rho(z-x_{i}^{+})+\lambda_{i}\] \[=\rho(2x_{i}^{+}-z^{+}-z)+\lambda_{i}\] \[\Longleftrightarrow x_{i}^{+}=\frac{\lambda_{i}^{+}-\lambda_{i}}{2\rho}+\frac{z^{+}+z}{2}.\]

### _Global Convergence of the Convex Case_

The following global convergence proof relies on _Lyapunov stability theory_ from the control community; the relationship between the two is clarified in Appendix B. We assume that the sub-functions \(f_{i}\) are closed, proper, and strictly convex. To establish the global convergence theory of Reduced Consensus ALADIN, we introduce the following _Lyapunov function_[35] with the global minimizer \(z^{*}\): \[\mathscr{L}(z,\lambda)=\frac{1}{\rho}\sum_{i=1}^{N}\|\lambda_{i}-\lambda_{i}^{*}\|^{2}+\rho N\|z-z^{*}\|^{2}. \tag{33}\] Note that the choice of Lyapunov function is not unique. Next, we prove the global convergence of Reduced Consensus ALADIN by showing that the Lyapunov function is monotonically decreasing.

**Theorem 2**.: _Suppose the \(f_{i}\)s are strictly convex and Problem (4) has a solution \(z^{*}\); then_ \[\mathscr{L}(z,\lambda)-\mathscr{L}(z^{+},\lambda^{+})\geq\alpha\left(\left\|x_{i}^{+}-z^{*}\right\|\right)\geq 0 \tag{34}\] _holds by applying Reduced Consensus ALADIN. Here, \(\alpha\) is a class \(\mathcal{K}\) function [35]._

Proof.: See Appendix C.

According to Theorem 2, the global convergence of Reduced Consensus ALADIN can be established. In order to prove the convergence of the sequence \((z,\lambda)\) to the global optimal solution pair \((z^{*},\lambda^{*})\), we need to show its uniqueness.

**Theorem 3**.: _We assume that Theorem 2 holds, which yields_ \[\begin{cases}\lim_{k\to\infty}z^{k}=z^{*}\\ \lim_{k\to\infty}\lambda^{k}=\lambda^{*},\end{cases} \tag{35}\] _where \(k\) denotes the index of iterations._

Proof.: See Appendix D.

The above two theorems show the convergence of \((z,\lambda)\). Next, we show that \(x_{i}\) is also convergent.

**Theorem 4**.: _If Theorem 2 holds, then we have \(x_{i}\to z^{*}\)._

Proof.: See Appendix E.

From the above three theorems, the convergence of the sub-gradients of the agents can also be easily established.

**Theorem 5**.: _We assume that Theorems 2, 3, and 4 hold jointly. Then, \(g_{i}\) converges to \(-\lambda_{i}^{*}\) globally._

Proof.: See Appendix F.

Note that the global convergence proof only requires strict convexity of the objectives, without smoothness or strong convexity assumptions.
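As an illustration of Theorem 2, the following NumPy sketch runs the exact Reduced Consensus ALADIN updates (22) on toy strictly convex quadratics \(f_{i}(x)=\frac{1}{2}\|x-\zeta_{i}\|^{2}\) (illustrative data, not from the paper) and checks that the Lyapunov function (33) is non-increasing along the iterates.

```python
import numpy as np

# Exact Reduced Consensus ALADIN (22) on f_i(x) = 0.5*||x - zeta_i||^2;
# we monitor the Lyapunov function (33) along the iterates.
rng = np.random.default_rng(2)
N, n, rho, T = 8, 2, 2.0, 40
zetas = rng.standard_normal((N, n)) * 3

z_star = zetas.mean(axis=0)        # global minimizer z^*
lam_star = zetas - z_star          # lambda_i^* = -g_i^* = zeta_i - z^*

z = np.zeros(n)
lam = np.zeros((N, n))
prev = np.inf
for _ in range(T):
    # closed-form local argmin of f_i(x) + lam_i^T x + (rho/2)||x - z||^2
    x = (zetas - lam + rho * z) / (1.0 + rho)
    g = rho * (z - x) - lam                    # (sub)gradient decode, cf. (27)
    z = np.mean(x - g / rho, axis=0)           # global update, cf. (20)/(30)
    lam = rho * (x - z) - g                    # dual update, cf. (28)
    V = np.sum((lam - lam_star) ** 2) / rho + rho * N * np.sum((z - z_star) ** 2)
    assert V <= prev + 1e-9, "Lyapunov function (33) should be non-increasing"
    prev = V
print("final Lyapunov value:", prev)           # decays toward zero
```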
Like T-ALADIN, in case the \(f_{i}(x_{i})\)s are merely convex rather than strictly convex, C-ALADIN guarantees that the solutions converge to an optimal set instead of a single optimal solution, which is analyzed theoretically in Theorem 6.

**Theorem 6**.: _Suppose that \(\mathbb{Z}^{*}\) denotes the set of optimal primal solutions and \(\mathbf{\Lambda}_{i}^{*}\) represents the optimal dual set, with \(\lambda_{i}^{*}\in\mathbf{\Lambda}_{i}^{*}\); then \(z\) converges to \(\mathbb{Z}^{*}\) globally,_ \[\lim_{z^{*}\in\mathbb{Z}^{*}}\|z-z^{*}\|=0.\]

Proof.: Here the second auxiliary function in Equation (56) reduces to \[G(\xi)=\sum_{i=1}^{N}f_{i}(\xi_{i})+\sum_{i=1}^{N}(\xi_{i}-z^{*})^{\top}\lambda_{i}^{*} \tag{36}\] with \(\lambda_{i}^{*}\in\mathbf{\Lambda}_{i}^{*}\), where \(\mathbf{\Lambda}_{i}^{*}\) is the optimal set of each dual variable, which means that the optimal dual variables are not unique. Hence, Theorem 3 is not used in this case. However, if we apply the proof of Theorem 2 again here, the monotone decrease of the Lyapunov function still holds. In this case, we have a result similar to [36].

Note that no global convergence rate for convex optimization has been discussed in the T-ALADIN research [36]. As an additional contribution compared with the latter, in the next subsection we derive the convergence rate of the convex case for Reduced Consensus ALADIN under some extra technical assumptions.

### _Global Linear Convergence Rate Analysis_

Next, we prove a Q-linear convergence rate [2] under an additional \(m_{f}\)-strong-convexity or \(\omega_{f}\)-smoothness assumption.

**Theorem 7**.: _Suppose that \(\sum_{i=1}^{N}f_{i}(x_{i})\) is \(m_{f}\)-strongly convex, and there exists a \(\delta>0\) such that_ \[\delta\mathscr{L}(z^{+},\lambda^{+})\leq 4m_{f}\sum_{i=1}^{N}\left\|x_{i}^{+}-z^{*}\right\|^{2}, \tag{37}\] _then C-ALADIN converges Q-linearly to the unique optimal solution with rate \(\left(\frac{1}{\sqrt{1+\delta}}\right)\)._

Proof.: See Appendix G.

For Reduced Consensus ALADIN, we find that the conditions of \(m_{f}\)-strong convexity and \(\omega_{f}\)-smoothness play symmetric roles, and a Q-linear convergence result can also be established.

**Corollary 1**.: _Suppose that \(\sum_{i=1}^{N}f_{i}(x_{i})\) is \(\omega_{f}\)-smooth and convex. We then have a result similar to Theorem 7: if there exists a \(\delta>0\) such that_ \[\delta\mathscr{L}(z^{+},\lambda^{+})\leq\frac{4}{\omega_{f}}\sum_{i=1}^{N}\left\|g_{i}-g_{i}^{*}\right\|^{2}, \tag{38}\] _then Reduced Consensus ALADIN also converges Q-linearly to a unique optimal solution with rate \(\left(\frac{1}{\sqrt{1+\delta}}\right)\)._

Proof.: From the definition of \(\omega_{f}\)-Lipschitz continuity [2], \[\|g_{i}-g_{i}^{*}\|\leq\omega_{f}\|x_{i}^{+}-z^{*}\|, \tag{39}\] the following inequality can be obtained: \[\frac{1}{\omega_{f}}\sum_{i=1}^{N}\|g_{i}-g_{i}^{*}\|^{2}\leq\sum_{i=1}^{N}\left(x_{i}^{+}-z^{*}\right)^{\top}\left(g_{i}-g_{i}^{*}\right). \tag{40}\] Note that the right-hand side is the same expression as that of Equation (82). Hence, the rest of the proof is similar to that of Theorem 7 and is omitted here.

Therefore, Reduced Consensus ALADIN needs either \(m_{f}\)-strong convexity or \(\omega_{f}\)-Lipschitz continuity to establish the global Q-linear convergence theory.
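To unpack what the Q-linear rate means for the iterates, assume, as in the argument of Appendix G, that the Lyapunov decrease of Theorem 2 can be lower-bounded by \(4m_{f}\sum_{i=1}^{N}\|x_{i}^{+}-z^{*}\|^{2}\) (this is where strong convexity enters); combined with (37) this gives
\[\mathscr{L}(z,\lambda)-\mathscr{L}(z^{+},\lambda^{+})\geq 4m_{f}\sum_{i=1}^{N}\left\|x_{i}^{+}-z^{*}\right\|^{2}\geq\delta\,\mathscr{L}(z^{+},\lambda^{+})\quad\Longrightarrow\quad\mathscr{L}(z^{+},\lambda^{+})\leq\frac{1}{1+\delta}\,\mathscr{L}(z,\lambda).\]
Iterating and using \(\rho N\|z^{k}-z^{*}\|^{2}\leq\mathscr{L}(z^{k},\lambda^{k})\) from (33) yields \(\|z^{k}-z^{*}\|\leq\sqrt{\mathscr{L}(z^{0},\lambda^{0})/(\rho N)}\left(\frac{1}{\sqrt{1+\delta}}\right)^{k}\), i.e., the rate claimed in Theorem 7.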
In the above two subsections, we showed that the global convergence of Reduced Consensus ALADIN has properties similar to those of ADMM. Different from the latter, in this subsection we present a local convergence analysis for the non-convex case of C-ALADIN.

### _Local Convergence Analysis of the Non-convex Case_

The following convergence analysis depends on the assumption that the \(f_{i}\)s are twice continuously differentiable in a neighborhood of a local minimizer \(z^{*}\). To benefit from the theory of SQP [2, Chapter 18], in the corresponding convergence analysis of C-ALADIN we introduce \(\gamma\) as the upper bound of the Hessian approximation error, \(\left\|B_{i}-\nabla^{2}f_{i}(x_{i})\right\|\leq\gamma\).

**Theorem 8**.: _If the \(f_{i}\)s are non-convex, C-ALADIN can still converge for sufficiently large \(\rho\), with convergence rates that differ depending on \(\gamma\)._

Proof.: See Appendix H.

Next, we show the numerical performance of C-ALADIN.

## 6 Numerical Experiments

In this section, we illustrate the numerical performance of the proposed algorithms on both distributed optimization (Subsection 6.1) and learning (Subsection 6.2).

### _Case Studies on Distributed Consensus Optimization_

In this subsection, all algorithms are implemented with CasADi v3.5.5 and IPOPT [37]. The first case is a convex consensus least-squares problem: \[\begin{split}\min_{x_{i},z}&\ \frac{1}{2}\sum_{i=1}^{N}\|x_{i}-\zeta_{i}\|_{2}^{2}\\ \mathrm{s.t.}&\ x_{i}=z\ |\lambda_{i}.\end{split} \tag{41}\] Here, \(x_{i}\in\mathbb{R}^{100}\) and \(N=200\). The measured data \(\zeta_{i}\) are drawn from the Gaussian distribution \(\mathcal{N}(0,25)\). In this setting, Problem (41) has \(20100\) primal variables and \(20000\) dual variables, which makes it a large-scale optimization problem. In our implementation, the learning rates \(\eta_{i}\) of FedSGD are set to \(0.01\), while the other compared algorithms update the local primal variables exactly with CasADi. Moreover, the hyper-parameter \(\rho\) is set to \(10^{2}\) for the other algorithms. Note that all the initial values of the primal and dual variables are set to zero vectors. In the optimization framework, we assume that all the other algorithms solve their sub-problems exactly with a local optimizer. Under the same setting, a distributed non-convex optimization problem can easily be implemented as (42). Note that, excluding the second term of the objective function, the non-convex optimization problem reduces directly to the convex one (41). \[\begin{split}\min_{x_{i},z}&\ \sum_{i=1}^{N}\frac{1}{2}\left(\|x_{i}^{a}-\zeta_{i}^{a}\|_{2}^{2}+\|x_{i}^{b}-\zeta_{i}^{b}\|_{2}^{2}\right)\\ &\ \mathrm{s.t.}\ \ x_{i}=z\ |\lambda_{i}.\end{split} \tag{42}\]

### _Case Studies on FL_

Different from the standard Consensus ADMM, the hyperparameters \(\rho_{i}\) in [25] are set to different values, which means that the learning rate of each sub-problem is affected differently. As in [25], here we first compare the performance of the algorithms on a convex linear regression problem with local objectives (43) and non-i.i.d. data: \[f_{i}(x)=\sum_{t\in\mathcal{D}_{i}}\frac{1}{2d_{i}}((a_{i}^{t})^{\top}x-b_{i}^{t})^{2}. \tag{43}\] Here \(a_{i}^{t}\in\mathbb{R}^{100}\) and \(b_{i}^{t}\in\mathbb{R}\) are the \(t\)-th data sample of client \(i\). We set \(x\in\mathbb{R}^{100}\) and \(N=100\). Another example is a non-convex logistic regression problem with sub-objectives (44): \[f_{i}(x)=\frac{1}{|\mathcal{D}_{i}|}\sum_{t\in\mathcal{D}_{i}}\left(\ln\left(1+e^{(a_{i}^{t})^{\top}x}\right)-b_{i}^{t}(a_{i}^{t})^{\top}x\right)+\frac{\lambda}{2}\|x\|^{2}. \tag{44}\] Here \(a\in\mathbb{R}^{1024}\), \(b\in\{0,1\}\), \(\lambda=0.001\), \(x\in\mathbb{R}^{1024}\), again with \(100\) clients. Importantly, each client has a participation rate, which is set to \(0.1\). For the remaining technical details, we refer to [25, Section 5].
For fairness, we make no other modifications. Figures 6 and 7 show the numerical comparison on the linear regression and logistic regression problems, respectively. Here, FedADMM1 represents the method proposed in [25], which updates the dual variables before the global aggregation, while FedADMM2 follows the update order of [38]. The convergence performance of the two schemes is very different on the logistic regression problem. It has to be pointed out that, in the optimization setting, there is only a small difference in performance between the two ADMM variations mentioned above, but in this experiment we can see that they behave completely differently. This once again illustrates the fundamental difference between exact and inexact search of the local primal variables. While this paper does not address this issue theoretically for the C-ALADIN family, we point out that it is an open question worthy of attention. For further discussion, as shown in Figures 8 and 9, we find that when the hyperparameter \(\rho\) is set to the same value for all algorithms (\(\rho=0.5\) for linear regression and \(\rho=0.05\) for logistic regression), with a learning rate \(\eta=0.01\) for all the algorithms, the convergence performance of FedALADIN is far better than that of the existing algorithms, at least in these two examples. In addition, we found that FedADMM2 does not converge stably in the first several iterations.

## 7 Related Work

Existing distributed convex optimization algorithms can be roughly divided into two types: primal decomposition (PD) [39, 40, 41] and DD (also called Lagrangian decomposition). We refer the reader to [42, 43, 44, 45] for more details. PD partitions the problem in a lower-upper level fashion, where the upper-level problem accounts for the lower-level problems through their optimal value functions, which means that it controls the private variables directly. In contrast, in the DD structure the higher-level problem influences the lower-level ones by using dual variables (shadow prices). To the best of our knowledge, only a few works have studied the theoretical comparison between PD and DD. Numerically, however, [43, 44] showed that DD achieves a better convergence rate than PD, whereas the stability of convergence is the opposite in some applications. A discussion can be found in [46, Section I]. The efforts in PD and DD can be further categorized into two fashions, namely exact search and inexact search. Exact search comes from the optimization community, while inexact search is drawn from the FL community. In the PD family, representative algorithms with exact search are DGD [13] and EXTRA [14]. On the inexact search side, FedSGD [15], FedAvg [8], FedProx [22] and FedDANE [23] were proposed. In the DD family, current techniques, consisting of Consensus ADMM [38] and DD, only focus on distributed convex problems with exact search. In contrast, our proposed C-ALADIN has guarantees for non-convex problems. On the inexact search side, the state-of-the-art algorithm is FedADMM [24, 25, 47]. Our proposed FedALADIN extends this line of work by showing a more stable convergence performance than FedADMM.
## 8 Conclusion \(\&\) Outlook

This paper proposed a novel distributed consensus algorithm family, named C-ALADIN, that is efficient in solving non-convex problems.

\begin{table} \begin{tabular}{|c|c|c|} \hline Methods & Primal Decomposition & Dual Decomposition \\ \hline Exact Search & DGD [13], EXTRA [14], NIDS [48] & Consensus ADMM [3], DD [10], **C-ALADIN** \\ \hline Inexact Search & FedSGD [15], FedDANE [23], FedProx [22], FedAvg [8] & FedADMM [25, 47], **FedALADIN** \\ \hline \end{tabular} \end{table} TABLE I: Existing and our proposed algorithms for DC.

Fig. 6: Linear regression with different \(\rho\) (**convex** problem). Fig. 7: Logistic regression with different \(\rho\) (**non-convex** problem).

Fig. 8: Linear regression with the same \(\rho\) (**convex** problem). Fig. 9: Logistic regression with the same \(\rho\) (**non-convex** problem).

Within the framework of C-ALADIN, we proposed an efficient structure for both communication and computation. Based on this framework, depending on whether second-order information is used, two variants of C-ALADIN were proposed, named Consensus BFGS ALADIN and Reduced Consensus ALADIN, respectively. Finally, to serve the FL community, we compared a variant of Reduced Consensus ALADIN, named FedALADIN, with existing methods in FL; it performs well in several case studies. Other variants of C-ALADIN will be considered in the future to accommodate different types of optimization problems. More importantly, a convergence theory with locally inexact search in the C-ALADIN family is still lacking; such a theoretical supplement would help the algorithm be applied to more complex neural networks.
2307.13329
$L^2$-growth property for wave equations with higher derivative terms
We consider the Cauchy problems in the whole space for wave equations with higher derivative terms. We derive sharp growth estimates of the $L^2$-norm of the solution itself in the case of space dimensions 1 and 2. By imposing a weighted $L^1$ condition on the initial velocity, we obtain lower and upper bound estimates of the solution itself. In three or more dimensions, we observe that the $L^2$-growth behavior of the solution never occurs in the ($L^2 \cap L^1$)-framework of the initial data.
Ryo Ikehata, Xiaoyan Li
2023-07-25T08:43:15Z
http://arxiv.org/abs/2307.13329v1
# \(L^{2}\)-growth property for wave equations with higher derivative terms

###### Abstract

We consider the Cauchy problems in \(\mathbf{R}^{n}\) for wave equations with higher derivative terms. We derive sharp growth estimates of the \(L^{2}\)-norm of the solution itself in the cases \(n=1\) and \(n=2\). By imposing a weighted \(L^{1}\) condition on the initial velocity, we obtain lower and upper bound estimates of the solution itself. For the case \(n\geq 3\), we observe that the \(L^{2}\)-growth behavior of the solution never occurs in the \((L^{2}\cap L^{1})\)-framework of the initial data.

Footnote 0: 2010 Mathematics Subject Classification. Primary 35L05; Secondary 35B40, 35C20, 35E15.

## 1 Introduction

We consider the Cauchy problem of the wave equation with a higher derivative term: \[u_{tt}-\Delta u-\Delta u_{tt}=0,\quad(t,x)\in(0,\infty)\times\mathbf{R}^{n}, \tag{1.1}\] \[u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x),\ x\in\mathbf{R}^{n}. \tag{1.2}\] Here, we assume, for the moment, \([u_{0},u_{1}]\in H^{1}(\mathbf{R}^{n})\times H^{1}(\mathbf{R}^{n})\). Concerning the existence of a unique energy solution to problem (1.1)-(1.2), by the Lumer-Phillips theorem one finds that problem (1.1)-(1.2) has a unique mild solution \[u\in C^{1}([0,\infty);H^{1}(\mathbf{R}^{n}))\] satisfying the energy conservation law \[E(t)=E(0),\quad t\geq 0, \tag{1.3}\] where the total energy \(E(t)\) of the solution to problem (1.1)-(1.2) is defined by \[E(t):=\frac{1}{2}\left(\|u_{t}(t,\cdot)\|_{L^{2}(\mathbf{R}^{n})}^{2}+\|\nabla u_{t}(t,\cdot)\|_{L^{2}(\mathbf{R}^{n})}^{2}+\|\nabla u(t,\cdot)\|_{L^{2}(\mathbf{R}^{n})}^{2}\right).\] References [15] and [24] are helpful for the details of these arguments in Section 2 below. The Strichartz and \(L^{p}\)-\(L^{q}\) estimates for (free) wave equations \[u_{tt}-\Delta u=0\] are powerful and particularly well-known estimation formulas for the solution of the free wave equation itself (see [30, 31, 4, 27, 25, 28] and the references therein); however, when one tries to obtain the \(L^{2}\) estimate of the solution itself, one finds that such estimates may be critical and somewhat far from the best possible, especially in low dimensions. Furthermore, it is important to recognize that the low-dimensional case, coupled with the infeasibility of Hardy-type inequalities, requires more delicate treatment than the high-dimensional case. Among such series of estimates, most \(L^{2}\) estimates are, in some sense, critical and difficult to derive. On the other hand, since the optimal growth-in-time estimate of the \(L^{2}\) norm of the solution in low dimensions was recently obtained in [17, 21] for the strongly damped wave equation \[u_{tt}-\Delta u-\Delta u_{t}=0, \tag{1.4}\] studies observing the best growth estimates for various wave-related equations have been reported one after another (see [2, 5, 6, 8, 13, 22, 23]). Among them, Ikehata's estimates of the optimal \(L^{2}\) norm for free wave and plate solutions form one of the most important fundamental results of these studies (see [19, 20]). The equation (1.4) consists, in a sense, of the wave part plus a third-order derivative term. The purpose of this study is to contribute to the \(L^{2}\) estimates of the solution itself when higher derivative terms are added to the free wave equation.
We believe that the results claimed below are completely novel for this type of equation. Incidentally, there are some references [11, 12] related to the \(L^{2}\) behavior of the solution itself for plate equations with higher derivative terms and dissipative terms. Although the form of the equation changes slightly, there is a lot of recent interesting literature on the subject of infinite-time blowup, some of which is listed as [5, 7, 9, 18, 29]. By the way, although the equation (1.1) is related to the so-called generalized IMBq equation: \[u_{tt}-\Delta u_{tt}-\Delta u=\Delta f(u), \tag{1.5}\] it is natural to imagine that the result for the linear homogeneous case \(f(u)=0\) in particular has some influence on the behavior of the solution of the equation (1.5), for example, a scattering result as in [33, Theorem 1.2], and thus it is well worth considering (1.1). The relevant research on equation (1.5) can be explored in [32, 33] and the references therein. Furthermore, for a study of the coupled Schrödinger and IMBq equations, we refer to [1], [26] and the references therein. It would be interesting to explore whether the effect of the singularity found in (1.1) in this paper persists under such couplings, but this is left as a future issue. It should be emphasized that, as can be seen from some of the preceding papers ([33] and the references therein), it seems to make particular sense from a physics perspective to deal with lower dimensions as well. Before introducing our theorems, we present the following notation.

**Notation.** Throughout this paper, \(\|\cdot\|_{q}\) stands for the usual \(L^{q}(\mathbf{R}^{n})\)-norm. For simplicity of notation, in particular, we use \(\|\cdot\|\) instead of \(\|\cdot\|_{2}\). We also introduce the following weighted functional spaces: \[L^{1,\gamma}(\mathbf{R}^{n}):=\left\{f\in L^{1}(\mathbf{R}^{n})\ \bigm{|}\|f\|_{1,\gamma}:=\int_{\mathbf{R}^{n}}(1+|x|^{\gamma})|f(x)|dx<+\infty\right\}.\] The Fourier transform \(\mathcal{F}_{x\to\xi}(f)(\xi)\) of \(f(x)\) is defined by \[\mathcal{F}_{x\to\xi}(f)(\xi)=\hat{f}(\xi):=\int_{\mathbf{R}^{n}}e^{-ix\cdot\xi}f(x)dx,\quad\xi\in\mathbf{R}^{n},\] as usual with \(i:=\sqrt{-1}\), and \(\mathcal{F}_{\xi\to x}^{-1}\) expresses its inverse Fourier transform. We denote the surface area of the \(n\)-dimensional unit ball by \(\omega_{n}:=\int_{|\omega|=1}d\omega\). For each \(n=1,2\), we set \[I_{0,n}:=\|u_{1}\|_{L^{2}(\mathbf{R}^{n})}+\|u_{1}\|_{L^{1}(\mathbf{R}^{n})}.\] For observing an essential part of the growth property of the solution itself, we treat only the case of trivial initial amplitude, \(u_{0}(x)\equiv 0\). Our first result is concerned with the optimal growth property in the case of \(n=1\).

**Theorem 1.1**: _Let \(n=1\), \(u_{0}=0\) and \(u_{1}\in H^{1}({\bf R})\). Then, the solution \(u(t,x)\) to problem (1.1)-(1.2) satisfies the following properties under the additional regularity assumptions on the initial data:_ \[\|u(t,\cdot)\|_{L^{2}({\bf R})}\leq C_{1}I_{0,1}\sqrt{t},\qquad\mbox{if}\ \ u_{1}\in L^{1}({\bf R}),\] \[C_{2}\left|\int_{\bf R}u_{1}(x)dx\right|\sqrt{t}\leq\|u(t,\cdot)\|_{L^{2}({\bf R})},\qquad\mbox{if}\ \ u_{1}\in L^{1,\gamma}({\bf R}),\ \gamma\in(\frac{1}{2},1],\] _for \(t\gg 1\), where \(C_{j}>0\)\((j=1,2)\) are constants depending only on the space dimension and \(\gamma\in(\frac{1}{2},1]\)._

Our next result concerns the case of \(n=2\).

**Theorem 1.2**: _Let \(n=2\), \(u_{0}=0\) and \(u_{1}\in H^{1}({\bf R}^{2})\)._
_Then, the solution \(u(t,x)\) to problem (1.1)-(1.2) satisfies the following properties under the additional regularity assumptions on the initial data:_ \[\|u(t,\cdot)\|_{L^{2}({\bf R}^{2})}\leq C_{1}I_{0,2}\sqrt{\log t},\quad\mbox{if}\ \ u_{1}\in L^{1}({\bf R}^{2}),\] \[C_{2}\left|\int_{{\bf R}^{2}}u_{1}(x)dx\right|\sqrt{\log t}\leq\|u(t,\cdot)\|_{L^{2}({\bf R}^{2})},\qquad\mbox{if}\ \ u_{1}\in L^{1,\gamma}({\bf R}^{2}),\ \gamma\in(0,1],\] _for \(t\gg 1\), where \(C_{j}>0\)\((j=1,2)\) are constants depending only on the space dimension and \(\gamma\)._

**Remark 1.1**: From these results, it seems quite natural to choose initial data such that \((-\Delta)^{-\frac{1}{2}}u_{1}\in L^{2}({\bf R}^{n})\) in (for example) [33] to get global-in-time solutions of the equation (1.5).

**Remark 1.2**: Problem (1.1)-(1.2) is already studied in [33, (2.23) of Theorem 2.1], where the so-called \(L^{2}\)-\(L^{2}\) bounded estimate of the solution to problem (1.1)-(1.2) is discussed. The growth estimate just derived comes from an \((L^{2}\cap L^{1})\)-\(L^{2}\) type estimate of the solution.

Let us explain where the unique difficulty of this problem arises, compared to previous studies, in observing the \(L^{2}\) estimates of the solution itself. In particular, the difficulty is more pronounced in the two-dimensional treatment. As in the usual treatment, in the Fourier space \({\bf R}^{n}_{\xi}\) the problem (1.1)-(1.2) and its solution \(u(t,x)\) can be transformed into the following ODE with parameter \(\xi\in{\bf R}^{n}_{\xi}\): \[(1+|\xi|^{2})w_{tt}+|\xi|^{2}w=0,\quad t>0,\quad\xi\in{\bf R}^{n}_{\xi}, \tag{1.6}\] \[w(0,\xi)=0,\quad w_{t}(0,\xi)=w_{1}(\xi),\quad\xi\in{\bf R}^{n}, \tag{1.7}\] where \(w_{1}(\xi):=\hat{u}_{1}(\xi)\) and \(w(t,\xi):=\hat{u}(t,\xi)\). Moreover, one can easily solve the problem (1.6)-(1.7) (formally) as follows: \[w(t,\xi)=\frac{\sin(tf(|\xi|))}{f(|\xi|)}w_{1}(\xi), \tag{1.8}\] where \[f(r):=\frac{r}{\sqrt{1+r^{2}}}. \tag{1.9}\] First, note that the proof of Theorem 1.1 in one dimension can be handled with the same strategy as the method in [19], but in the proof of Theorem 1.2 in two dimensions the method of [19] cannot be applied directly. In [19], one uses the fact that the range of the function \(f(r)=r\) is the half-line \([0,\infty)\), because the free wave equation is studied there. In contrast, the range of the function \(f(r)\) defined in (1.9) is the bounded interval \([0,1)\), which is one of the factors requiring a decidedly different treatment, and thus a new problem arises. In conclusion, even with the addition of higher derivative terms, as in the free wave case a certain singularity, expressed in the growth estimates, is included in the solution itself in the lower dimensions \(n=1,2\). Therefore, it must be handled with sufficient delicacy. For the proof of Theorem 1.2, we only use the method coming from [16, 17] and integration by parts. The use of integration by parts is inspired by [8, Proposition A.1], which is an improvement of [19, 20]. The following three basic facts will be used throughout this paper. We set (possibly \(L=1\)) \[L:=\sup_{\theta\neq 0}\left|\frac{\sin\theta}{\theta}\right|<+\infty. \tag{1.10}\] Furthermore, let \(\delta_{0}\in(0,1)\) be a real number such that \[\left|\frac{\sin\theta}{\theta}\right|\geq\frac{1}{2} \tag{1.11}\] for all \(\theta\in(0,\delta_{0}]\). We also prepare the fundamental inequality \[|a+b|^{2}\geq\frac{1}{2}|a|^{2}-|b|^{2} \tag{1.12}\] for all \(a,b\in\mathbf{C}\). The paper is organized as follows.
In Section 2, the well-posedness of problem (1.1)-(1.2) is shown. In Section 3, we derive the lower bound estimates of the \(L^{2}\)-norm of solutions, and in Section 4 we obtain the upper bound estimates of the \(L^{2}\)-norm of solutions. By combining the results obtained in Sections 3 and 4, Theorems 1.1 and 1.2 are proved at a stroke. In Section 5, we consider the higher-dimensional case as an additional remark.

## 2 The well-posedness of the solution

This section is concerned mainly with the well-posedness of problem (1.1)-(1.2). A natural way to cope with this problem is to adopt semigroup theory. For the reader's convenience, we outline the proof based on the ideas coming from [15] and [24]. We denote \(v=u_{t}\) and \(A=-\Delta\). It follows from (1.1) that \[(I+A)v_{t}=-Au,\] where \(I\) is the identity operator on \(H^{1}(\mathbf{R}^{n})\). Setting \[U=\begin{pmatrix}u\\ v\end{pmatrix},\quad U_{0}=\begin{pmatrix}u_{0}\\ u_{1}\end{pmatrix},\quad\mathcal{A}=\begin{pmatrix}0&I\\ -P&0\end{pmatrix},\quad P=-(I+A)^{-1}A,\] we have \[\begin{cases}\frac{d}{dt}U=\mathcal{A}U,\\ U(0,x)=U_{0}.\end{cases} \tag{2.1}\] Here \(\mathcal{D}(P)\) is defined by \[\mathcal{D}(P)=\{u\in H^{1}(\mathbf{R}^{n}):\text{There exists }y_{u}\in H^{1}(\mathbf{R}^{n})\text{ such that}\] \[(A^{\frac{1}{2}}u,A^{\frac{1}{2}}\phi)=(A^{\frac{1}{2}}y_{u},A^{\frac{1}{2}}\phi)+(y_{u},\phi),\ \ \forall\phi\in H^{1}(\mathbf{R}^{n})\}, \tag{2.2}\] where \((\cdot,\cdot)\) denotes the inner product of \(L^{2}(\mathbf{R}^{n})\): \[(f,g)=\int_{\mathbf{R}^{n}}f(x)g(x)dx,\quad f,g\in L^{2}(\mathbf{R}^{n}).\] Note that \(\mathcal{D}(P)\) is not empty because \(0\in\mathcal{D}(P)\) when we take \(y_{u}=0\). If \(u\in\mathcal{D}(P)\), then the element \(y_{u}\in H^{1}(\mathbf{R}^{n})\) for which (2.2) holds is unique. Indeed, suppose that there exist \(y_{u}^{1}\) and \(y_{u}^{2}\in H^{1}(\mathbf{R}^{n})\) satisfying \[(A^{\frac{1}{2}}u,A^{\frac{1}{2}}\phi)=(A^{\frac{1}{2}}y_{u}^{1},A^{\frac{1}{2}}\phi)+(y_{u}^{1},\phi) \tag{2.3}\] \[(A^{\frac{1}{2}}u,A^{\frac{1}{2}}\phi)=(A^{\frac{1}{2}}y_{u}^{2},A^{\frac{1}{2}}\phi)+(y_{u}^{2},\phi) \tag{2.4}\] for every \(\phi\in H^{1}(\mathbf{R}^{n})\). Taking \(z=y_{u}^{1}-y_{u}^{2}\) and combining (2.3) and (2.4) yields \[(A^{\frac{1}{2}}z,A^{\frac{1}{2}}\phi)+(z,\phi)=0.\] Letting \(\phi=z\), we conclude that \(z=0\) in \(H^{1}({\bf R}^{n})\). The above arguments imply that the linear operator \(P:u\mapsto y_{u}\) is well defined for each \(u\in{\cal D}(P)\). Next we prove that \({\cal D}(P)=H^{1}({\bf R}^{n})\); that is, for every \(u\in H^{1}({\bf R}^{n})\) there exists \(y_{u}\in H^{1}({\bf R}^{n})\) such that (2.2) holds. First, for each \(u\in H^{1}({\bf R}^{n})\), we define the bounded linear functional \(F_{u}:H^{1}({\bf R}^{n})\to{\bf R}\) as follows: \[<F_{u},\phi>=(A^{\frac{1}{2}}u,A^{\frac{1}{2}}\phi),\ \ \ \ \forall\phi\in H^{1}({\bf R}^{n}). \tag{2.5}\] By the Riesz representation theorem, there exists a unique \(y_{u}\in H^{1}({\bf R}^{n})\) such that \[<F_{u},\phi>=(y_{u},\phi)_{H^{1}({\bf R}^{n})},\ \ \ \ \forall\phi\in H^{1}({\bf R}^{n}), \tag{2.6}\] where \((\cdot,\cdot)_{H^{1}({\bf R}^{n})}\) denotes the inner product of the Sobolev space \(H^{1}({\bf R}^{n})\), which is equivalent to \[(f,\ g)_{H^{1}({\bf R}^{n})}=(A^{\frac{1}{2}}f,A^{\frac{1}{2}}g)+(f,g). \tag{2.7}\] It follows from (2.5), (2.6) and (2.7) that \[(A^{\frac{1}{2}}u,A^{\frac{1}{2}}\phi)=(A^{\frac{1}{2}}y_{u},A^{\frac{1}{2}}\phi)+(y_{u},\phi),\ \ \forall\phi\in H^{1}({\bf R}^{n}).
\tag{2.8}\] Therefore, we have \({\cal D}(P)=H^{1}({\bf R}^{n})\) and we can define \(P(u)=y_{u}\) on \(H^{1}({\bf R}^{n})\). Taking the Fourier transform of both sides of (2.8) leads to \[\int_{{\bf R}^{n}}|\xi|^{2}\hat{u}\overline{\hat{\phi}}\ d\xi=\int_{{\bf R}^{n}}(|\xi|^{2}+1)\hat{y}_{u}\overline{\hat{\phi}}\ d\xi.\] Due to the arbitrariness of \(\phi\), we obtain \(|\xi|^{2}\hat{u}=(|\xi|^{2}+1)\hat{y}_{u}\), and then \[\hat{y}_{u}=\frac{|\xi|^{2}}{1+|\xi|^{2}}\hat{u}. \tag{2.9}\] Before giving the following lemma, we define the Hilbert space \[{\cal H}:=H^{1}({\bf R}^{n})\times H^{1}({\bf R}^{n})\] equipped with the inner product \[\big{(}[y_{1},z_{1}],[y_{2},z_{2}]\big{)}_{\cal H}:=(A^{\frac{1}{2}}y_{1},A^{\frac{1}{2}}y_{2})+(y_{1},y_{2})+(A^{\frac{1}{2}}z_{1},A^{\frac{1}{2}}z_{2})+(z_{1},z_{2}).\]

**Lemma 2.1**: _The operator_ \[{\cal A}=\begin{pmatrix}0&I\\ -P&0\end{pmatrix}:{\cal H}\to{\cal H}\] _generates a strongly continuous semigroup \(T(t)\) on \({\cal H}\)._

_Proof._ For each \(U=[u,v]\in H^{1}({\bf R}^{n})\times H^{1}({\bf R}^{n})\), it follows from (2.9) that \[\big{(}{\cal A}U,U\big{)}_{\cal H} =\big{(}[v,-P(u)],\ [u,v]\big{)}_{\cal H}\] \[=(A^{\frac{1}{2}}v,A^{\frac{1}{2}}u)+(v,u)-(A^{\frac{1}{2}}P(u),A^{\frac{1}{2}}v)-(P(u),v)\] \[=\int_{{\bf R}^{n}}(1+|\xi|^{2})\hat{v}\overline{\hat{u}}\ d\xi-\int_{{\bf R}^{n}}\frac{|\xi|^{4}+|\xi|^{2}}{|\xi|^{2}+1}\hat{u}\overline{\hat{v}}\ d\xi\] \[=\int_{{\bf R}^{n}}\hat{v}\overline{\hat{u}}\ d\xi+\int_{{\bf R}^{n}}|\xi|^{2}(\hat{v}\overline{\hat{u}}-\hat{u}\overline{\hat{v}})\ d\xi\] \[=\int_{{\bf R}^{n}}\hat{v}\overline{\hat{u}}\ d\xi+2i\int_{{\bf R}^{n}}|\xi|^{2}{\rm Im}(\hat{v}\overline{\hat{u}})\ d\xi. \tag{2.10}\] This yields \[{\rm Re}\big{(}{\cal A}U,U\big{)}_{\cal H}={\rm Re}\int_{{\bf R}^{n}}\hat{v}\overline{\hat{u}}\ d\xi\leq\frac{1}{2}\big{(}U,U\big{)}_{\cal H},\] which implies
Finally, we conclude by the Lumer-Phillips theorem applied to the operator \({\cal A}=\frac{1}{2}{\cal I}+{\cal B}\) defined on \({\cal H}\) that the operator \({\cal A}\) generates a strongly continuous contraction semigroup on Hilbert space \({\cal H}\), which completes the proof because the operator \(\frac{1}{2}{\cal I}\) is bounded in \({\cal H}\), and \({\cal B}\) is m-dissipative in \({\cal H}\) (see [14, Theorem 6.4]). \(\Box\) By semigroup theory of linear operators and Lemma 2.1, the well-posedness of the equation (1.1)-(1.2) can be obtained directly. **Theorem 2.1**: _Supposing initial data \([u_{0},u_{1}]\in H^{1}({\bf R}^{n})\times H^{1}({\bf R}^{n})\), the equation (1.1)-(1.2) admits a unique mild solution denoted by_ \[[u,\partial_{t}u]=T(t)[u_{0},u_{1}]\] _with regularity_ \[u\in C^{1}([0,\infty);H^{1}({\bf R}^{n})).\] ## 3 \(L^{2}\)-lower bound estimates of the solutions In this section, we derive the lower bound estimates in the case of \(n=1,2\). In particular, the estimates from below in the two-dimensional case require a delicate discussion to avoid the inherent difficulties of the problem. Now, let us give the lower bound estimates for \(\|w(t,\cdot)\|\) in the case of \(n=1\), where \(w(t,\xi)\) is defined in (1.8). At this first moment, one can assume that the initial velocity \(u_{1}\in C^{\infty}_{0}({\bf R})\) by density. Let \(t>\delta_{0}\), \(n\geq 1\) and denote a subset \(L_{0}\) of \({\bf R}^{n}_{\xi}\) by \[L_{0}:=\{\xi\in{\bf R}^{n}_{\xi}\,:\,|\xi|\leq\frac{\delta_{0}}{\sqrt{t^{2}- \delta_{0}^{2}}}\}. \tag{3.1}\] Note that the function \(f(r)\) defined in (1.9) is monotone increasing in \([0,\infty)\), \(f(0)=0\), \(\{f(r)\,|\,0\leq r<\infty\}=[0,1)\), and \[\xi\in L_{0}\quad\Longleftrightarrow\quad tf(|\xi|)\in[0,\delta_{0}].\] Therefore, using (1.11) one can proceed the estimate of an essential part of the solution \(w(t,\xi)\) for \(n=1\) as follows: \[I_{l}(t):=\int_{L_{0}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}d\xi\geq\frac{t ^{2}}{4}\int_{L_{0}}d\xi=\frac{t^{2}}{2}\frac{\delta_{0}}{\sqrt{t^{2}-\delta_ {0}^{2}}}\geq C_{0}t \tag{3.2}\] for \(t\gg 1\), where \(C_{0}>0\) is an universal constant. Let us decompose the initial data \(w_{1}(\xi)\) in the Fourier space \[w_{1}(\xi)=P+(A(\xi)-iB(\xi)),\quad\xi\in{\bf R}^{n}_{\xi},\quad(n\geq 1) \tag{3.3}\] where \[P:=\int_{{\bf R}^{n}}u_{1}(x)dx,\] \[A(\xi):=\int_{{\bf R}^{n}}(\cos(x\xi)-1)u_{1}(x)dx,\quad B(\xi):=\int_{{\bf R} ^{n}}\sin(x\xi)u_{1}(x)dx.\] It is known (see [16]) that with some constant \(M>0\) one has \[|A(\xi)-iB(\xi)|\leq M|\xi|^{\gamma}\|u_{1}\|_{1,\gamma},\quad\xi\in{\bf R}^{ n}_{\xi}, \tag{3.4}\] when \(u_{1}\in L^{1,\gamma}({\bf R}^{n})\) and \(\gamma\in(0,1]\). Then, it follows from (1.12) and (3.3) with \(n=1\) that \[J_{1}(t) :=\int_{L_{0}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|w_{1}(\xi )|^{2}d\xi\] \[\geq\frac{P^{2}}{2}\int_{L_{0}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(| \xi|)}d\xi-\int_{L_{0}}|A(\xi)-iB(\xi)|^{2}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(| \xi|)}d\xi\] \[=:\frac{P^{2}}{2}I_{l}(t)-R_{l}(t). \tag{3.5}\] By (3.4), \(R_{l}(t)\) is estimated as follows for \(\gamma\in(1/2,1]\) \[R_{l}(t) \leq M^{2}\|u_{1}\|_{1,\gamma}^{2}\int_{L_{0}}\frac{r^{2\gamma}}{ f(r)^{2}}d\xi\] \[=M^{2}\|u_{1}\|_{1,\gamma}^{2}\int_{0}^{\frac{\delta_{0}}{\sqrt{t ^{2}-\delta_{0}^{2}}}}(1+r^{2})r^{2(\gamma-1)}dr\] \[\leq\frac{CM^{2}}{2\gamma-1}\|u_{1}\|_{1,\gamma}^{2}t^{-(2\gamma- 1)}, \tag{3.6}\] where \(r=|\xi|\), the constant \(C>0\), and \(t\gg 1\). 
Next, we discuss the proof of the two-dimensional case. We rely on integration by parts together with a trick initiated in [19]. Using (1.12) and (3.3), we start with the following integral involving the trick function \(e^{-|\xi|^{2}}\): \[\|w(t,\cdot)\|^{2} =\int_{\mathbf{R}^{2}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi\] \[\geq\int_{\mathbf{R}^{2}}e^{-|\xi|^{2}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|P+(A(\xi)-iB(\xi))|^{2}d\xi\] \[\geq\frac{1}{2}P^{2}\int_{\mathbf{R}^{2}}e^{-|\xi|^{2}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}d\xi-\int_{\mathbf{R}^{2}}e^{-|\xi|^{2}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}\left(M^{2}\|u_{1}\|_{1,\gamma}^{2}|\xi|^{2\gamma}\right)d\xi\] \[\geq\frac{1}{2}P^{2}\int_{\mathbf{R}^{2}}e^{-|\xi|^{2}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}d\xi-M^{2}\|u_{1}\|_{1,\gamma}^{2}\int_{\mathbf{R}^{2}}e^{-|\xi|^{2}}\frac{|\xi|^{2\gamma}}{f^{2}(|\xi|)}d\xi\] \[=:\frac{1}{2}P^{2}T(t)-M^{2}\|u_{1}\|_{1,\gamma}^{2}U(t). \tag{3.7}\] Incidentally, this crucial idea of using a trick function was initiated in the paper [19]; of course, there are other possible choices for this function. Now, \(U(t)\) can be estimated as follows: \[\frac{U(t)}{\omega_{2}} =\int_{0}^{\infty}e^{-r^{2}}r^{2\gamma-1}dr+\int_{0}^{\infty}e^{-r^{2}}r^{2\gamma+1}dr\] \[=\frac{1}{\gamma}\int_{0}^{\infty}e^{-r^{2}}r^{2\gamma+1}dr+\int_{0}^{\infty}e^{-r^{2}}r^{2\gamma+1}dr\] \[=:K_{0}.\] Note that \(K_{0}>0\) is finite for \(\gamma\in(0,1]\); this is precisely where the trick function \(e^{-|\xi|^{2}}\) plays its role. Thus, from (3.7) one has \[\|w(t,\cdot)\|^{2}\geq\frac{1}{2}P^{2}T(t)-M^{2}\|u_{1}\|_{1,\gamma}^{2}\omega_{2}K_{0},\quad t>0. \tag{3.8}\] Finally, let us estimate the main term \(T(t)\). Since \[2\sin^{2}(tf(r))=1-\cos(2tf(r)),\] we get for \(t>1\) \[T(t) \geq\frac{\omega_{2}}{2}\int_{1/t}^{1}e^{-r^{2}}\frac{2\sin^{2}(tf(r))}{f^{2}(r)}rdr\] \[=\frac{\omega_{2}}{2}\int_{1/t}^{1}e^{-r^{2}}\frac{r}{f^{2}(r)}dr-\frac{\omega_{2}}{2}\int_{1/t}^{1}e^{-r^{2}}\frac{r}{f^{2}(r)}\cos(2tf(r))dr\] \[=:\frac{\omega_{2}}{2}T_{1}(t)-\frac{\omega_{2}}{2}T_{2}(t). \tag{3.9}\] Here, we see that \[T_{1}(t)=\int_{1/t}^{1}e^{-r^{2}}\frac{1+r^{2}}{r}dr\geq\int_{1/t}^{1}e^{-r^{2}}\frac{1}{r}dr\geq e^{-1}\int_{1/t}^{1}\frac{1}{r}dr=e^{-1}\log t,\quad t>1. \tag{3.10}\] \(T_{2}(t)\) can be decomposed into two parts: \[T_{2}(t) =\int_{1/t}^{1}e^{-r^{2}}r^{-1}\cos(2tf(r))dr+\int_{1/t}^{1}e^{-r^{2}}r\cos(2tf(r))dr\] \[=:R_{1}(t)+R_{2}(t). \tag{3.11}\] \(R_{2}(t)\) can be estimated easily by \[|R_{2}(t)|\leq\int_{1/t}^{1}e^{-r^{2}}rdr=\frac{1}{2}(e^{-\frac{1}{t^{2}}}-e^{-1})\leq\frac{1}{2},\quad t>1. \tag{3.12}\] In order to estimate \(R_{1}(t)\), we perform an integration by parts. This idea is inspired by [8, Proposition A.1].
Indeed, since \[\cos(2tf(r))=\frac{1}{2f^{\prime}(r)t}\left(\frac{d}{dr}\sin(2tf(r))\right),\] \[f^{\prime}(r)=\frac{1}{(1+r^{2})\sqrt{1+r^{2}}},\] \[R_{1}(t)=\frac{1}{2t}\int_{1/t}^{1}\frac{e^{-r^{2}}}{r}(1+r^{2})\sqrt{1+r^{2}}\frac{d}{dr}\sin(2tf(r))dr=:\frac{1}{2t}K(t), \tag{3.13}\] \(K(t)\) can be estimated by \[K(t) =\left[\frac{e^{-r^{2}}}{r}(1+r^{2})\sqrt{1+r^{2}}\sin(2tf(r))\right]_{1/t}^{1}\] \[\quad-\int_{1/t}^{1}\frac{d}{dr}\left(\frac{e^{-r^{2}}}{r}(1+r^{2})^{\frac{3}{2}}\right)\sin(2tf(r))dr\] \[=:K_{1}(t)+K_{2}(t). \tag{3.14}\] It is obvious that \[|K_{1}(t)|\leq C_{1}+C_{2}t \tag{3.15}\] with some constants \(C_{1}>0\) and \(C_{2}>0\). Next, we should check the upper bound estimate for \(|K_{2}(t)|\). In fact, it holds that \[\frac{d}{dr}\left(\frac{e^{-r^{2}}}{r}(1+r^{2})^{\frac{3}{2}}\right)=-\frac{e^{-r^{2}}}{r^{2}}(2r^{2}+1)(1+r^{2})^{\frac{3}{2}}+3e^{-r^{2}}(1+r^{2})^{\frac{1}{2}},\] \[K_{2}(t) =\int_{1/t}^{1}\frac{e^{-r^{2}}}{r^{2}}(2r^{2}+1)(1+r^{2})^{\frac{3}{2}}\sin(2tf(r))dr-3\int_{1/t}^{1}e^{-r^{2}}(1+r^{2})^{\frac{1}{2}}\sin(2tf(r))dr\] \[=:K_{2,1}(t)-K_{2,2}(t). \tag{3.16}\] Simple calculations yield \[|K_{2,2}(t)|\leq 3\int_{1/t}^{1}e^{-r^{2}}(1+r^{2})^{\frac{1}{2}}dr\leq 3e^{-\frac{1}{t^{2}}}\sqrt{2}(1-\frac{1}{t})\leq C_{3} \tag{3.17}\] for some constant \(C_{3}>0\). Meanwhile, \[|K_{2,1}(t)|\leq\int_{1/t}^{1}e^{-r^{2}}r^{-2}(2r^{2}+1)(1+r^{2})^{\frac{3}{2}}dr\leq 6\sqrt{2}e^{-\frac{1}{t^{2}}}\int_{1/t}^{1}r^{-2}dr\leq C_{4}t \tag{3.18}\] for \(t\gg 1\) and some constant \(C_{4}>0\). Summarizing (3.11)-(3.18), the estimate for \(T_{2}(t)\) is given by \[|T_{2}(t)|\leq\frac{1}{2t}(C_{1}+C_{2}t+C_{3}+C_{4}t),\quad t\gg 1. \tag{3.19}\] Therefore, by (3.8), (3.9), (3.10) and (3.19), one can state the following lemma.

**Lemma 3.2**: _Let \(n=2\), and \(\gamma\in(0,1]\). Assume \(u_{1}\in L^{1,\gamma}(\mathbf{R}^{2})\). Then, it holds that_ \[\|w(t,\cdot)\|^{2}\geq CP^{2}\log t,\quad t\gg 1.\]
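The logarithmic growth of Lemma 3.2 can be observed in the same way (an illustration only, under the same assumed form of \(f\)); the quantity computed is \(T(t)\) from (3.7):

```python
import numpy as np
from scipy.integrate import trapezoid

f = lambda r: r / np.sqrt(1.0 + r**2)   # assumed form of f, cf. (1.9)

def T(t, rmax=30.0, n=2_000_001):
    """T(t) from (3.7), written in polar coordinates on R^2."""
    r = np.linspace(1e-8, rmax, n)
    g = np.exp(-r**2) * np.sin(t * f(r))**2 / f(r)**2 * r
    return 2.0 * np.pi * trapezoid(g, r)

for t in [1e2, 1e3, 1e4]:
    print(t, T(t) / np.log(t))          # the ratio stays bounded: log t growth
```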
## 4 \(L^{2}\)-upper bound estimates of the solution

In this section, we derive the upper bound estimate of \(\|u(t,\cdot)\|\) as \(t\to\infty\). We use the \(L^{2}\)-regularity together with the \(L^{1}\)-regularity of the initial velocity. We first treat the one-dimensional case. It also suffices to derive the desired upper bound estimate by assuming \(u_{1}\in C_{0}^{\infty}({\bf R}^{n})\). From (1.8) and (1.9), we can proceed with the estimate as follows: \[\|w(t,\cdot)\|^{2} =\int_{{\bf R}_{\xi}}|\frac{\sin(tf(|\xi|))}{f(|\xi|)}|^{2}|w_{1}(\xi)|^{2}d\xi\] \[=\int_{L_{0}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi+\int_{{\bf R}_{\xi}\setminus L_{0}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi\] \[=:L_{1}(t)+L_{2}(t), \tag{4.1}\] where the set \(L_{0}\) with \(n=1\) is defined in (3.1). Then, from (1.10) it holds that \[L_{1}(t)\leq L^{2}t^{2}\omega_{1}\|u_{1}\|_{1}^{2}\int_{0}^{A(t)}dr=L^{2}t^{2}\omega_{1}\|u_{1}\|_{1}^{2}A(t)\leq C_{1}\|u_{1}\|_{1}^{2}t \tag{4.2}\] for \(t\gg 1\), where \(C_{1}>0\) is a constant, and \[A(t):=\frac{\delta_{0}}{\sqrt{t^{2}-\delta_{0}^{2}}}. \tag{4.3}\] Meanwhile, for \(L_{2}(t)\) one has \[L_{2}(t) \leq\int_{{\bf R}_{\xi}\setminus L_{0}}\frac{1+|\xi|^{2}}{|\xi|^{2}}|w_{1}(\xi)|^{2}d\xi\] \[=\int_{{\bf R}_{\xi}\setminus L_{0}}\frac{1}{|\xi|^{2}}|w_{1}(\xi)|^{2}d\xi+\int_{{\bf R}_{\xi}\setminus L_{0}}|w_{1}(\xi)|^{2}d\xi\] \[\leq\omega_{1}\|u_{1}\|_{1}^{2}\int_{A(t)}^{\infty}r^{-2}dr+\|u_{1}\|^{2}=\omega_{1}\|u_{1}\|_{1}^{2}\frac{1}{A(t)}+\|u_{1}\|^{2}\] \[\leq C_{2}\|u_{1}\|_{1}^{2}t+\|u_{1}\|^{2} \tag{4.4}\] for \(t\gg 1\), where \(C_{2}>0\) is a constant. Thus, from (4.2) and (4.4) one can derive the desired estimate as the following lemma.

**Lemma 4.1**: _Let \(n=1\), and \(u_{1}\in L^{1}({\bf R})\cap L^{2}({\bf R})\). Then, it holds that_ \[\|w(t,\cdot)\|^{2}\leq C(\|u_{1}\|^{2}+\|u_{1}\|_{1}^{2}t),\quad t\gg 1.\]

Next, let us give the upper bound estimate for \(n=2\) based on the decomposition below: \[\|w(t,\cdot)\|^{2} =\int_{{\bf R}_{\xi}^{2}}|\frac{\sin(tf(|\xi|))}{f(|\xi|)}|^{2}|w_{1}(\xi)|^{2}d\xi\] \[=\int_{L_{0}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi+\int_{{\bf R}_{\xi}^{2}\setminus L_{0}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi\] \[=:G_{1}(t)+G_{2}(t), \tag{4.5}\] where the set \(L_{0}\) with \(n=2\) is defined in (3.1). It also suffices to assume \(u_{1}\in C_{0}^{\infty}({\bf R}^{2})\) in the derivation of the upper bound estimate. We first treat \(G_{1}(t)\) to get the estimate such that \[G_{1}(t)\leq L^{2}t^{2}\omega_{2}\|u_{1}\|_{1}^{2}\int_{0}^{A(t)}rdr=2^{-1}L^{2}\omega_{2}\|u_{1}\|_{1}^{2}\delta_{0}^{2}\frac{t^{2}}{t^{2}-\delta_{0}^{2}}\leq C_{3}\|u_{1}\|_{1}^{2} \tag{4.6}\] with some constant \(C_{3}>0\) and \(t\gg 1\), where one uses the fact (1.10). For the estimate of \(G_{2}(t)\), it must be decomposed into three parts as follows in order to get the desired growth estimate: \[G_{2}(t)=g_{1}(t)+g_{2}(t)+g_{3}(t):=\left(\int_{|\xi|\geq C(t)}+\int_{C(t)\geq|\xi|\geq B(t)}+\int_{B(t)\geq|\xi|\geq A(t)}\right)\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi, \tag{4.7}\] where \(t\gg 1\), \(A(t)\) is already defined in (4.3), and \[B(t):=\frac{\delta_{0}}{\sqrt{t-\delta_{0}^{2}}},\] \[C(t):=\frac{\delta_{0}}{\sqrt{\log t-\delta_{0}^{2}}}.\] Now, we see that \[g_{1}(t)\leq\frac{1}{f^{2}(C(t))}\int_{|\xi|\geq C(t)}|w_{1}(\xi)|^{2}d\xi\leq\frac{\log t}{\delta_{0}^{2}}\|u_{1}\|^{2},\quad t\gg 1. \tag{4.8}\] Recall that the function \(f(r)\) is monotone increasing on \([0,\infty)\). Next, about \(g_{2}(t)\) we have \[g_{2}(t) \leq\omega_{2}\|u_{1}\|_{1}^{2}\int_{B(t)}^{C(t)}\frac{1+r^{2}}{r}dr=\omega_{2}\|u_{1}\|_{1}^{2}\left[\log r+\frac{1}{2}r^{2}\right]_{r=B(t)}^{r=C(t)}\] \[=\omega_{2}\|u_{1}\|_{1}^{2}\left(\log C(t)-\log B(t)+\frac{1}{2}(C(t)^{2}-B(t)^{2})\right)\] \[=\omega_{2}\|u_{1}\|_{1}^{2}\left(\frac{1}{2}\log(t-\delta_{0}^{2})-\frac{1}{2}\log(\log t-\delta_{0}^{2})+\frac{\delta_{0}^{2}}{2}(\frac{1}{\log t-\delta_{0}^{2}}-\frac{1}{t-\delta_{0}^{2}})\right)\] \[\leq C_{4}\|u_{1}\|_{1}^{2}\log t \tag{4.9}\] with some constant \(C_{4}>0\) and \(t\gg 1\).
Thirdly, \(g_{3}(t)\) can be estimated similarly to (4.9): \[g_{3}(t) \leq\omega_{2}\|u_{1}\|_{1}^{2}\int_{A(t)}^{B(t)}\frac{1+r^{2}}{r}dr\] \[=\omega_{2}\|u_{1}\|_{1}^{2}\left[\log r+\frac{1}{2}r^{2}\right]_{r=A(t)}^{r=B(t)}\] \[=\omega_{2}\|u_{1}\|_{1}^{2}\left(\frac{1}{2}\log(t^{2}-\delta_{0}^{2})-\frac{1}{2}\log(t-\delta_{0}^{2})+\frac{\delta_{0}^{2}}{2}(\frac{1}{t-\delta_{0}^{2}}-\frac{1}{t^{2}-\delta_{0}^{2}})\right)\] \[\leq C_{5}\|u_{1}\|_{1}^{2}\log t \tag{4.10}\] with some constant \(C_{5}>0\) and \(t\gg 1\). Therefore, the following lemma is a direct consequence of (4.5), (4.6), (4.7), (4.8), (4.9) and (4.10).

**Lemma 4.2**: _Let \(n=2\), and \(u_{1}\in L^{1}(\mathbf{R}^{2})\cap L^{2}(\mathbf{R}^{2})\). Then, it holds that_ \[\|w(t,\cdot)\|^{2}\leq C(\|u_{1}\|^{2}+\|u_{1}\|_{1}^{2})\log t,\quad t\gg 1.\]

Finally, the proofs of Theorems 1.1 and 1.2 are direct consequences of Lemmas 3.1, 3.2, 4.1 and 4.2, and the Plancherel Theorem.

## 5 \(L^{2}\)-upper bound estimates for the case \(n\geq 3\)

In this section, we give some comments in the case of \(n\geq 3\). In this case, the \(L^{2}\)-increasing property never holds, which is verified below by the following calculations. In fact, from (1.8) and (1.9) we can proceed with the estimate as follows: \[\|w(t,\cdot)\|^{2} =\int_{{\bf R}_{\xi}^{n}}|\frac{\sin(tf(|\xi|))}{f(|\xi|)}|^{2}|w_{1}(\xi)|^{2}d\xi\] \[=\int_{M_{0}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi+\int_{{\bf R}_{\xi}^{n}\setminus M_{0}}\frac{\sin^{2}(tf(|\xi|))}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi\] \[=:M_{1}(t)+M_{2}(t), \tag{5.1}\] where the set \(M_{0}\) with \(n\geq 3\) is defined by \[M_{0}:=\{\xi\in{\bf R}_{\xi}^{n}\,:\,|\xi|\leq\frac{1}{\sqrt{3}}\}. \tag{5.2}\] Note that \[\xi\in M_{0}\quad\Longleftrightarrow\quad f(|\xi|)\leq\frac{1}{2}.\] Then, it holds that \[M_{1}(t)\leq\int_{M_{0}}\frac{1}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi\leq\omega_{n}\|u_{1}\|_{1}^{2}\int_{0}^{\frac{1}{\sqrt{3}}}(1+r^{2})r^{n-3}dr\leq C_{1,n}\|u_{1}\|_{1}^{2} \tag{5.3}\] for \(t\geq 0\), where \(C_{1,n}>0\) is a constant. Meanwhile, \(M_{2}(t)\) can be estimated by \[M_{2}(t)\leq\int_{{\bf R}_{\xi}^{n}\setminus M_{0}}\frac{1}{f^{2}(|\xi|)}|w_{1}(\xi)|^{2}d\xi\leq 4\int_{{\bf R}_{\xi}^{n}\setminus M_{0}}|w_{1}(\xi)|^{2}d\xi\leq C_{2,n}\|u_{1}\|^{2} \tag{5.4}\] for \(t\geq 0\), where \(C_{2,n}>0\) is a constant. Thus, by (5.1), (5.3) and (5.4), one can obtain the following desired estimate.

**Lemma 5.1**: _Let \(n\geq 3\), and \(u_{1}\in L^{1}({\bf R}^{n})\cap L^{2}({\bf R}^{n})\). Then, it holds that_ \[\|u(t,\cdot)\|^{2}=\|w(t,\cdot)\|^{2}\leq C(\|u_{1}\|^{2}+\|u_{1}\|_{1}^{2}),\quad t\gg 1.\]

**Remark 5.1**: Even if a non-trivial initial value \(u_{0}(x)\) is added, the result remains the same because of the easy estimate \[\int_{{\bf R}^{n}}\cos^{2}(f(|\xi|)t)|\hat{u}_{0}(\xi)|^{2}d\xi\leq\int_{{\bf R}^{n}}|\hat{u}_{0}(\xi)|^{2}d\xi=\|u_{0}\|^{2}.\] So, the growth property never occurs for \(n\geq 3\). In this sense, \(n=1,2\) are exceptional.
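Numerically, the contrast with \(n=1,2\) is immediate (an illustration only, under the same assumed form of \(f\), with \(|w_{1}(\xi)|^{2}\) modelled by a Gaussian profile):

```python
import numpy as np
from scipy.integrate import trapezoid

f = lambda r: r / np.sqrt(1.0 + r**2)   # assumed form of f, cf. (1.9)

def norm2(t, dim, rmax=30.0, n=2_000_001):
    """||w(t,.)||^2 in radial coordinates, with |w_1|^2 modelled by e^{-r^2}."""
    area = {1: 2.0, 2: 2.0 * np.pi, 3: 4.0 * np.pi}[dim]
    r = np.linspace(1e-8, rmax, n)
    g = np.sin(t * f(r))**2 / f(r)**2 * np.exp(-r**2) * r**(dim - 1)
    return area * trapezoid(g, r)

for t in [1e1, 1e2, 1e3]:
    print(t, norm2(t, dim=3))           # stays O(1) for n = 3, unlike n = 1, 2
```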
_Acknowledgement._ This paper was written during Xiaoyan Li's stay as an overseas researcher at Hiroshima University from 12 December, 2022 to 11 December, 2023 under Ikehata's supervision as a host researcher. The work of the first author (Ryo Ikehata) was supported in part by Grant-in-Aid for Scientific Research (C) 20K03682 of JSPS. The work of the second author (Xiaoyan Li) was financially supported in part by the China Scholarship Council (Grant No. 202206160071).

**Declarations**

**Data availability** Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

**Conflict of interest** The authors declare that they have no conflict of interest.
2307.14446
Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach
Few-shot semantic segmentation (FSS) offers immense potential in the field of medical image analysis, enabling accurate object segmentation with limited training data. However, existing FSS techniques heavily rely on annotated semantic classes, rendering them unsuitable for medical images due to the scarcity of annotations. To address this challenge, multiple contributions are proposed: First, inspired by spectral decomposition methods, the problem of image decomposition is reframed as a graph partitioning task. The eigenvectors of the Laplacian matrix, derived from the feature affinity matrix of self-supervised networks, are analyzed to estimate the distribution of the objects of interest from the support images. Secondly, we propose a novel self-supervised FSS framework that does not rely on any annotation. Instead, it adaptively estimates the query mask by leveraging the eigenvectors obtained from the support images. This approach eliminates the need for manual annotation, making it particularly suitable for medical images with limited annotated data. Thirdly, to further enhance the decoding of the query image based on the information provided by the support image, we introduce a multi-scale large kernel attention module. By selectively emphasizing relevant features and details, this module improves the segmentation process and contributes to better object delineation. Evaluations on both natural and medical image datasets demonstrate the efficiency and effectiveness of our method. Moreover, the proposed approach is characterized by its generality and model-agnostic nature, allowing for seamless integration with various deep architectures. The code is publicly available at \href{https://github.com/mindflow-institue/annotation_free_fewshot}{\textcolor{magenta}{GitHub}}.
Sanaz Karimijafarbigloo, Reza Azad, Dorit Merhof
2023-07-26T18:33:30Z
http://arxiv.org/abs/2307.14446v1
# Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach

###### Abstract

Few-shot semantic segmentation (FSS) offers immense potential in the field of medical image analysis, enabling accurate object segmentation with limited training data. However, existing FSS techniques heavily rely on annotated semantic classes, rendering them unsuitable for medical images due to the scarcity of annotations. To address this challenge, multiple contributions are proposed: First, inspired by spectral decomposition methods, the problem of image decomposition is reframed as a graph partitioning task. The eigenvectors of the Laplacian matrix, derived from the feature affinity matrix of self-supervised networks, are analyzed to estimate the distribution of the objects of interest from the support images. Secondly, we propose a novel self-supervised FSS framework that does not rely on any annotation. Instead, it adaptively estimates the query mask by leveraging the eigenvectors obtained from the support images. This approach eliminates the need for manual annotation, making it particularly suitable for medical images with limited annotated data. Thirdly, to further enhance the decoding of the query image based on the information provided by the support image, we introduce a multi-scale large kernel attention module. By selectively emphasizing relevant features and details, this module improves the segmentation process and contributes to better object delineation. Evaluations on both natural and medical image datasets demonstrate the efficiency and effectiveness of our method. Moreover, the proposed approach is characterized by its generality and model-agnostic nature, allowing for seamless integration with various deep architectures. The code is publicly available at GitHub.

Keywords: Few-shot Learning, Medical Segmentation, Self-supervised

## 1 Introduction

Computer vision tasks such as localization and segmentation, which require a detailed understanding of image structure, can achieve good results when approached with fully-supervised deep learning methods. Although the success of supervised deep learning methods depends heavily on the availability of a large amount of well-annotated data [5, 3], collecting and annotating such data is costly and challenging, as it must be performed manually by a domain expert. The other equally problematic challenge with fully-supervised models is their inflexibility when confronted with new classes of segmentation targets (e.g., different and novel lesion types) [21, 12]. This is a significant challenge, as training a new model for every new segmentation class is impractical and time-consuming. To address the aforementioned problems, few-shot semantic segmentation (FSS) has been proposed. The core concept of FSS is to effectively minimize the requirement for extensive annotation, enabling precise predictions of unobserved classes using only a limited number of guiding examples. By capitalizing on FSS, a model can create a discriminative representation of a previously unknown class using a small set of labeled examples (support). This acquired representation can then be employed to accurately predict the outcomes for unlabeled examples (query), without the need for any model retraining. This approach significantly alleviates the annotation burden and empowers the model to swiftly generalize and adapt to new unseen classes (e.g., new lesions). Several approaches have been proposed to tackle the FSS problem.
One approach involves the use of a mask average pooling strategy, which effectively removes irrelevant features based on information from the support masks [32]. Another improvement proposed by Wang et al. [29] is the introduction of a novel prototype alignment regularization between support and query images, resulting in better generalizability for new classes. Additionally, in other recent works [31], researchers have utilized deep attention mechanisms to learn attention weights between support and query images, enabling improved label propagation. In spite of the promising outcomes observed in applying few-shot learning paradigms to the segmentation of natural images [33], their utilization in medical image segmentation remains limited. This limitation is due to the scarcity of annotated classes, which hinders the network's ability to generalize and to avoid overfitting [30]. The concept of few-shot segmentation on medical images was initially introduced by [19]. The authors proposed the use of adversarial learning to segment brain images, leveraging only one or two labeled brain images, drawing inspiration from successful semi-supervised approaches [25]. Feyjie et al. [8] introduced a novel approach that incorporates a semi-supervised mechanism within the conventional few-shot learning framework. This approach leverages the availability of abundant unlabeled datasets to predict skin lesion masks for previously unseen samples. In recent work, to further benefit from unlabelled data, Ouyang et al. [21] proposed a self-supervised few-shot semantic segmentation (FSS) framework called SSL-ALPNet to segment medical images by utilizing superpixel-based pseudo-labels as supervision signals. This method also improved the segmentation accuracy using an adaptive local prototype pooling module. Xiao et al. [30] proposed a Siamese few-shot network for medical image segmentation and used a grid attention module to enhance semantic information localization. Ding et al. [6] designed a self-supervised few-shot network to segment medical images. They introduced a Cycle-Resemblance Attention module to effectively capture the pixel-wise relations between query and support medical images. Despite the incorporation of semi-supervised and self-supervised techniques within these strategies to optimize the training procedure of the model, the presence of annotated data remains indispensable during the inference stage for accurate query mask prediction. To mitigate this requirement, we undertake an exploration of the role played by self-supervised techniques in facilitating the acquisition of object representation within a conventional few-shot context. Specifically, we draw inspiration from the accomplishments of few-shot segmentation methods in natural images, which rely on the episodic training paradigm. In our approach (depicted in Figure 1), we aim to eliminate the need for extensive annotation by leveraging the eigenvectors of the Laplacian matrix derived from the feature affinity matrix of self-supervised networks. This allows us to effectively capture the global representation of the object of interest in the support image. By integrating this concept into the standard few-shot segmentation framework, we propose an end-to-end network that leverages support guidance to predict the query mask. In order to enhance the decoding process of the query image by leveraging the information from the support image, we propose to incorporate large kernel attention along with multi-scale attention gate modules.
These modules effectively highlight pertinent features and intricate details, resulting in an enhanced segmentation process.

## 2 Proposed Method

Figure 1: The overview of our annotation-free FSS model.

### Problem Formulation

In the context of standard FSS, our approach involves three main datasets: a training set denoted as \(D_{train}=\{(X_{i}^{t},Y_{i}^{t})\}_{i=1}^{N_{train}}\), a support set denoted as \(D_{support}=\{(X_{i}^{s},Y_{i}^{s})\}_{i=1}^{N_{support}}\), and a test set denoted as \(D_{test}=\{(X_{i}^{q})\}_{i=1}^{N_{test}}\). Here, \(X_{i}\) and \(Y_{i}\) represent the input image and corresponding binary mask, respectively. Each dataset contains a total of \(N\) images, specified by \(N_{train}\), \(N_{support}\), and \(N_{test}\), involving \(C\) distinct classes. Notably, the classes are shared between the support and test sets but are disjoint from the training set, denoted as \(\{C_{train}\}\cap\{C_{support}\}=\emptyset\). The objective of few-shot learning is to train a neural network \(f_{(\theta,\gamma)}(\cdot)\) on the training set, enabling it to accurately segment a new class \(c\notin C_{train}\) in the test set based on \(k\) reference samples from \(D_{support}\). Here, \(\theta\) and \(\gamma\) represent the learnable parameters of the encoder and decoder respectively. To reproduce this procedure, training on the base dataset \(D_{train}\) follows the episodic learning paradigm introduced in [27], where each episode entails a \(c\)-way \(k\)-shot learning task. Specifically, each episode is created by sampling two components. Firstly, we construct a support training set for each class \(c\), denoted as \(D^{\mathcal{S}}_{train}=\{(X^{t}_{s},Y^{t}_{s}(c))\}_{s=1}^{k}\subset D_{train}\), where \(Y^{t}_{s}(c)\) represents the binary mask corresponding to the support image \(X^{t}_{s}\) for class \(c\). Secondly, we create a query set \(D^{\mathcal{Q}}_{train}=\{(X^{t}_{q},Y^{t}_{q}(c))\}\subset D_{train}\), where \(X^{t}_{q}\) is the query image and \(Y^{t}_{q}(c)\) is the corresponding binary mask for class \(c\). In order to estimate the segmentation mask of a given class \(c\) in the query image, the model leverages the support training set and the query image. This process can be expressed as \(\hat{Y}^{t}_{q}(c)=f_{(\theta,\gamma)}(D^{\mathcal{S}}_{train},X^{t}_{q})\). More specifically, in our approach we utilize an encoder module to encode the support and query images, resulting in feature representations denoted as \(f_{s}\in\mathbb{R}^{W\times H\times M}\) and \(f_{q}\in\mathbb{R}^{W\times H\times M}\), respectively. Here, \(W\), \(H\), and \(M\) represent the width, height, and feature dimensionality in the feature space, respectively. In the subsequent step, we employ a hierarchical approach to acquire the class prototypes through a self-supervision strategy, in contrast to the prevailing literature [2, 10], which utilizes the support mask \(Y_{s}\) to filter out the support prototype. We will provide a full explanation of our hierarchical prototype estimation process in the next sections.
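To make the episodic protocol above concrete, a minimal sketch of \(c\)-way \(k\)-shot episode construction is given below; the helper name and dataset layout are illustrative assumptions, not the authors' code:

```python
import random

def sample_episode(pairs_by_class: dict, c: int = 1, k: int = 1):
    """pairs_by_class maps a class label to a list of (image, mask) pairs."""
    classes = random.sample(sorted(pairs_by_class), c)
    episode = []
    for cls in classes:
        drawn = random.sample(pairs_by_class[cls], k + 1)
        support, query = drawn[:k], drawn[k]   # k support pairs and one query pair
        episode.append({"class": cls, "support": support, "query": query})
    return episode
```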
### Hierarchical Prototypes

In the realm of few-shot learning, the support prototype assumes a pivotal role as a representative reference for a specific class, greatly influencing the model's ability to generalize and accurately predict unseen instances. By encapsulating the fundamental characteristics of a class, the support prototype empowers the model with the capacity to make informed predictions. Our study introduces a novel approach for generating a hierarchical support prototype using spectral decomposition, eliminating the need for a support mask. Initially, we extract the support representation denoted as \(f_{s}\in\mathbb{R}^{W\times H\times M}\) by leveraging an encoder module \(f_{(\theta)}(\cdot)\). This support representation is derived from different parts of the encoder module, including combinations of various layers such as hyper-columns [10]. Our experimental findings, consistent with previous research [2], reveal that incorporating features from both shallow and deep layers of the encoder network produces favorable results. This approach captures multi-level and multi-scale representations while preserving global contextual object features. Subsequently, we construct an affinity matrix based on pixel correlations. By setting the affinity threshold to zero, our focus lies on aggregating similar features rather than anti-correlated ones. The resulting feature affinities, denoted as \(W_{\text{s}}\in\mathbb{R}^{HW\times HW}\), encompass semantic information at both coarse and low-level resolutions. Utilizing \(W_{\text{s}}\), we compute the eigenvectors of its Laplacian matrix \(L=D^{-1/2}(D-W)D^{-1/2}\) to decompose an image into soft segments, represented as \(y_{0},\cdots,y_{n-1}=\text{eigs}(L)\). Among these eigenvectors, we pay particular attention to the remaining ones \(y_{>0}\) since the first eigenvector \(y_{0}\) is constant, corresponding to an eigenvalue \(\lambda_{0}=0\). To identify the support object, akin to prior studies [16], we examine the Fiedler eigenvector \(y_{1}\) of \(L\) and discretize it by considering its sign, resulting in a binary image segmentation. By creating a bounding box around the smaller region, which is more likely to represent the foreground object rather than the background, we establish an alternative to the support mask. This bounding box serves the purpose of hierarchically filtering the support representation to generate the support prototype \(f^{\prime}_{s}\).
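A minimal sketch of this support-object localization is given below (NumPy; the feature layout, the small regularization constant, and the box extraction are illustrative assumptions based on the description above):

```python
import numpy as np

def fiedler_box(feats: np.ndarray, H: int, W: int):
    """feats: (H*W, M) self-supervised features of the support image."""
    aff = np.maximum(feats @ feats.T, 0.0)            # affinities, thresholded at zero
    d = aff.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(d + 1e-8)
    lap = np.eye(H * W) - d_isqrt[:, None] * aff * d_isqrt[None, :]  # D^{-1/2}(D-W)D^{-1/2}
    _, vecs = np.linalg.eigh(lap)                     # eigenvalues in ascending order
    fiedler = vecs[:, 1].reshape(H, W)                # Fiedler eigenvector y_1
    mask = fiedler > 0                                # discretize by sign
    if mask.sum() > mask.size / 2:                    # keep the smaller region as foreground
        mask = ~mask
    ys, xs = np.nonzero(mask)
    return mask, (ys.min(), xs.min(), ys.max(), xs.max())  # mask and bounding box
```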
### Decoder

In our network architecture, we incorporate a decoder module consisting of four blocks. Each block follows a specific sequence of operations. Firstly, we employ the cross-LKA (CLKA) module to effectively integrate the query representation with the support prototype. This module aids in capturing meaningful relationships between the query and prototype, enhancing the overall feature fusion process. Subsequently, we utilize the multi-scale attention gate mechanism to combine the output of the CLKA module with the up-sampled features obtained from the previous decoder layer. The multi-scale attention gate (MS-AG) facilitates the selective integration of relevant spatial information from different scales, promoting the refinement of the feature representation. In the next subsections, we will elaborate on cross-LKA and MS-AG in more detail.

Figure 2: The overview of the proposed **CLKA** and **MS-AG** modules. In each block of the decoder network, we include both CLKA and MS-AG for conditioning the query representation based on the support prototype.

#### Large Kernel Attention (LKA)

The attention mechanism, also known as an adaptive selection process, has the ability to identify discriminative features while disregarding noisy responses with respect to the input features. Generating an attention map that signifies the importance of various parts is one of the key roles of the attention mechanism. There are two well-known attention mechanisms, and each one has its own pros and cons. The first one is the self-attention mechanism [7], which has the potential to discover long-range dependencies; however, it has some drawbacks (e.g., it ignores channel adaptability, has high quadratic complexity for high-resolution images, and neglects the 2D structure of images). The second one is large kernel convolution [22], which can establish relevance and produce an attention map. Nevertheless, employing large-kernel convolutions introduces substantial computational overhead and increases the number of parameters. To address the mentioned limitations and leverage the advantages of both self-attention and large kernel convolutions, the large kernel attention (LKA) approach is proposed in [9], which decomposes a large kernel convolution operation to capture long-range relationships. In our study, we extend the idea of the LKA for distilling the discriminative representation derived from the support prototype into the query representation, to condition the prediction of the query mask on the support distribution. To this end, we first fuse the support prototype \(f^{\prime}_{S}\) and the query representation \(f_{q}\) with learnable parameters (modeled as a 3D convolution) followed by a non-linear activation function. This operation enables the network to encode the prior knowledge obtained from the support representation into the query features, estimating the representation of the object of interest and eliminating the background noise. Next, to capture long-range dependencies in an efficient way, we follow the LKA approach. Regarding \(C\) as the number of channels, a \(C\times C\) convolution can be decomposed into a \(\lceil\frac{C}{d}\rceil\times\lceil\frac{C}{d}\rceil\) depth-wise dilation convolution (a spatial long-range convolution) with dilation \(d\), a \((2d-1)\times(2d-1)\) depth-wise convolution (a spatial local convolution), and a \(1\times 1\) convolution (a channel convolution). Therefore, long-range relationships can be extracted within a feature space, and the attention map is generated with low computational complexity and few parameters. The large kernel attention (LKA) module is written as \[\text{Attention}=\text{Conv}_{1\times 1}(\text{DW-D-Conv}(\text{DW-Conv}(F(f^{\prime}_{s},f_{q})))) \tag{1}\] \[\text{Output}=\text{Attention}\otimes F(f^{\prime}_{s},f_{q}) \tag{2}\] where \(F(f^{\prime}_{s},f_{q})\in\mathbb{R}^{C\times H\times W}\) denotes the fused support-query representation (the 3D convolutional aggregation of support and query) and \(\text{Attention}\in\mathbb{R}^{C\times H\times W}\) is the attention map. Also, \(\otimes\) indicates the element-wise product, and the value of the attention map represents the importance of each feature. Unlike conventional attention methods, the LKA approach does not use an additional normalization function such as sigmoid or SoftMax. The overall process is depicted in Figure 2.
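A minimal PyTorch sketch of the resulting cross-LKA block, following (1)-(2), is given below; the concrete kernel sizes (a \(5\times 5\) depth-wise convolution followed by a \(7\times 7\) depth-wise convolution with dilation 3) and the fusion by concatenation plus a \(1\times 1\) convolution are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CrossLKA(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fuse = nn.Sequential(nn.Conv2d(2 * dim, dim, 1), nn.GELU())           # F(f'_s, f_q)
        self.dw_conv = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)               # DW-Conv
        self.dwd_conv = nn.Conv2d(dim, dim, 7, padding=9, dilation=3, groups=dim)  # DW-D-Conv
        self.channel = nn.Conv2d(dim, dim, 1)                                      # Conv_{1x1}

    def forward(self, f_s: torch.Tensor, f_q: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([f_s, f_q], dim=1))
        attention = self.channel(self.dwd_conv(self.dw_conv(fused)))  # no sigmoid/softmax
        return attention * fused                                      # element-wise product, cf. (2)

x = torch.randn(2, 64, 25, 25)
print(CrossLKA(64)(x, x).shape)   # torch.Size([2, 64, 25, 25])
```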
#### Multi-scale Attention Gate (MS-AG)

The main purpose of AGs is to mitigate irrelevant features in background regions by employing a grid-attention technique that considers the spatial information of the image [20]. To achieve this objective, we initiate the fusion process by combining the feature representation obtained from the decoder block \(x_{d}^{l-1}\) with the output of the CLKA module \(x_{e}^{l}\). This fusion is accomplished using a 1\(\times\)1 convolution operation, which combines the two sets of information into a unified representation. Next, to model multi-scale representations we employ Atrous convolution in our attention gate module. Atrous convolution, also referred to as dilated convolution, is a technique that expands the kernel size of a filter without increasing the parameter count or computational load. By introducing \(r-1\) zeros between consecutive filter values, the kernel size of a \(k\times k\) filter is effectively enlarged to \(k_{Atrous}=k+(k-1)(r-1)\). Using this multi-scale attention mechanism allows the model to more precisely determine the importance of each region and effectively manage its impact on the final outcome. The multi-scale attention gate \(MS\text{-}AG(\cdot)\) can be formulated as follows: \[q_{att}(x_{e},x_{d})=C_{at}\left(\sigma_{1}\left(BN\left(C_{e}(x_{e})\right)+BN\left(C_{d}(x_{d})\right)\right)\right) \tag{3}\] \[MS\text{-}AG(x_{e},x_{d})=x_{d}*\sigma_{2}\left(BN\left(C\left(q_{att}(x_{e},x_{d})\right)\right)\right) \tag{4}\] where \(\sigma_{1}(\cdot)\) refers to ReLU, and \(\sigma_{2}(\cdot)\) corresponds to the Sigmoid activation function. \(C_{e}(\cdot)\), \(C_{d}(\cdot)\), and \(C(\cdot)\) indicate the channel-wise \(1\times 1\) convolution operation. \(BN(\cdot)\) denotes the batch normalization operation and \(C_{at}(\cdot)\) shows the Atrous convolution operation. \(x_{d}\) and \(x_{e}\) represent the up-sampled and skip connection features, respectively. Figure 2 illustrates the overall process.
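A matching PyTorch sketch of the multi-scale attention gate of (3)-(4) is given below; the dilation rates and the reduction to a single-channel gate are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MSAG(nn.Module):
    def __init__(self, dim: int, rates=(1, 2, 4)):
        super().__init__()
        self.c_e = nn.Sequential(nn.Conv2d(dim, dim, 1), nn.BatchNorm2d(dim))  # BN(C_e(x_e))
        self.c_d = nn.Sequential(nn.Conv2d(dim, dim, 1), nn.BatchNorm2d(dim))  # BN(C_d(x_d))
        self.relu = nn.ReLU(inplace=True)                                      # sigma_1
        self.atrous = nn.ModuleList(
            nn.Conv2d(dim, dim, 3, padding=r, dilation=r) for r in rates)      # C_at, multi-scale
        self.gate = nn.Sequential(nn.Conv2d(len(rates) * dim, 1, 1),
                                  nn.BatchNorm2d(1), nn.Sigmoid())             # sigma_2(BN(C(.)))

    def forward(self, x_e: torch.Tensor, x_d: torch.Tensor) -> torch.Tensor:
        q = self.relu(self.c_e(x_e) + self.c_d(x_d))           # cf. (3)
        q = torch.cat([conv(q) for conv in self.atrous], dim=1)
        return x_d * self.gate(q)                               # gated skip features, cf. (4)

x_e, x_d = torch.randn(2, 64, 50, 50), torch.randn(2, 64, 50, 50)
print(MSAG(64)(x_e, x_d).shape)   # torch.Size([2, 64, 50, 50])
```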
## 3 Experiments

### Dataset

In this study, the FSS-1000 dataset is utilized to assess the effectiveness of our method in analyzing natural images. Additionally, to examine the network's ability to generalize to medical images, we evaluate its performance on the publicly accessible \(PH^{2}\) dataset, specifically designed for skin lesion segmentation.

**FSS-1000:** The FSS-1000 class dataset [14] is a significantly large-scale dataset specifically tailored for few-shot segmentation tasks. It encompasses a total of 1000 classes, with each class consisting of 10 images accompanied by their corresponding pixel-level ground truth annotations. The official training split, comprising 760 classes, is utilized as the primary dataset for training purposes. On the other hand, the testing set, comprising 240 classes, is used for inference.

\(PH^{2}\) **dataset:** The \(PH^{2}\) dataset [17] consists of 200 RGB dermoscopic images of melanocytic lesions, including 80 common nevi, 80 atypical nevi, and 40 melanomas. The dataset was provided by the Dermatology Service of Hospital Pedro Hispano in Matosinhos, Portugal. The resolution of the images is 768\(\times\)560 pixels, but in our work we resized them to 224\(\times\)224 pixels. In our experimental setup, we follow the same setting suggested in [8] to evaluate our method.

### Implementation Details

In our implementation, a ResNet50 backbone network with ImageNet pre-trained weights is used. Feature extraction is performed by extracting features from the last convolutional layer at each encoder block of the backbone network. This feature extraction approach yields four pyramidal layers (\(P=4\)). To ensure consistency, we set the spatial sizes of both support and query images to \(400\times 400\) pixels, resulting in \(H,W=400\). Consequently, we obtain the following spatial sizes for each pyramidal layer: \(H_{1},W_{1}=100\), \(H_{2},W_{2}=50\), \(H_{3},W_{3}=25\), and \(H_{4},W_{4}=13\). As a result, our decoder component consists of four blocks, where each block involves fusing the support prototype with the query representation, as illustrated in Figure 1. The entire network is implemented using the PyTorch framework and optimized using the Adam optimizer, with a learning rate of \(1e-3\). To prevent the pre-trained backbone networks from learning class-specific representations from the training data, we freeze the encoder weights.

### Evaluation Metrics

For the FSS-1000 benchmark, we adopt the mean intersection over union (mIoU) as our evaluation metric. To assess the performance of our network on the skin dataset, we compare our method against the unsupervised \(k\)-means clustering method, as well as SOTA self-supervised methods such as DeepCluster [4], IIC [11], and the spatial guided self-supervised strategy (SGSCN) [1]. Our evaluation methodology follows the guidelines outlined in [1]. To evaluate the efficacy of our network, we employ three evaluation metrics: the Dice similarity coefficient (DSC), the Hammoud distance (HM), and the XOR metric.

### Results

**FSS-1000:** We commence our evaluation of the proposed model on the FSS-1000 dataset, considering two distinct settings. In the first setting, the inference process incorporates the support mask to guide the network. We compare our results with recent few-shot methods, including DoG [2], PFENet [26], and HSNet [18]. The results for 1-shot and 5-shot scenarios are summarized in Table 1b. Remarkably, our models set new benchmarks in terms of performance while maintaining a minimal number of learnable parameters. When the support annotation is included at inference time, our 1-shot and 5-shot results exhibit substantial improvements of 15.3% and 14.9% in mIoU, respectively, compared to the baseline OSLSM method. Furthermore, compared to the recent SOTA approaches, HSNet [18] and DAN [28], our strategy achieves promising results. In the second setting, we conduct additional experiments without including support annotation. As outlined in our proposed method, we estimate the support distribution through spectral decomposition. Notably, our model performs exceptionally well even without annotation, as evident from Table 1a. In the 1-shot scenario, our model achieves a notable mIoU improvement of 19.7% over the FSS-baseline method. In addition, using the same setting, our method is able to obtain superior performance compared to HSNet [18]. Some challenging cases are visualized in Figure 3.
### Ablation Study The proposed architecture incorporates two key modules: the CLKA, and the MS-AG module in the decoding path. These modules are designed to facilitate the feature representation and adaptively fuse support information into the query representation. In order to assess the impact and contribution of each module on the generalization performance, we conducted experiments where we selectively Figure 3: Sample of prediction results of the proposed method on the FSS-1000 and \(PH^{2}\) datasets, employing a one-shot setting. removed individual modules, as outlined in Table 0(c). The experimental results highlight the significance of each module in the overall architecture. Specifically, removing any of the modules from the network leads to a noticeable decrease in performance. Notably, when the CLKA module is removed, the impact of support prior knowledge diminishes, resulting in a clear drop in performance. Similarly, replacing the MS-AG with simple concatenation results in a performance drop. However, by including the MS-AG module, our model tends to reduce the number of wrong predictions and isolated false positives. ## 4 Conclusion Our study presents a novel approach for addressing few-shot semantic segmentation on medical images in the absence of annotated data. We reframe the problem as a graph partitioning task and leverage the eigenvectors of the Laplacian matrix derived from self-supervised networks to effectively model the Support representation and capture the underlying distribution. Within the standard FSS framework, we predict the query mask by utilizing the learned support distribution. Furthermore, we introduce the hierarchical LKA module to enrich the feature representation and improve the decoding process. #### 4.0.1 Acknowledgment This work was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG)- project number 455548460. \begin{table} \end{table} Table 1: (a) Comparison of IoU on the FSS-1000 dataset. (b) Comparative performance of the proposed method against the SOTA approaches on the PH\({}^{2}\) dataset. (c) Contribution of each module on the model performance.
2303.09686
Effect of Haptic Assistance Strategy on Mental Engagement in Fine Motor Tasks
This study investigates the effect of haptic control strategies on a subject's mental engagement during a fine motor handwriting rehabilitation task. The considered control strategies include an error-reduction (ER) and an error-augmentation (EA), which are tested on both dominant and non-dominant hand. A non-invasive brain-computer interface is used to monitor the electroencephalogram (EEG) activities of the subjects and evaluate the subject's mental engagement using the power of multiple frequency bands (theta, alpha, and beta). Statistical analysis of the effect of the control strategy on mental engagement revealed that the choice of the haptic control strategy has a significant effect (p < 0.001) on mental engagement depending on the type of hand (dominant or non-dominant). Among the evaluated strategies, EA is shown to be more mentally engaging when compared with the ER under the non-dominant hand.
Hemanth Manjunatha, Shrey Pareek, Amirhossein H. Memar, Thenkurussi Kesavadas, Ehsan T. Esfahani
2023-03-16T23:10:40Z
http://arxiv.org/abs/2303.09686v1
# Effect of Haptic Assistance Strategy on Mental Engagement in Fine Motor Tasks

###### Abstract

This study investigates the effect of haptic control strategies on a subject's mental engagement during a fine motor handwriting rehabilitation task. The considered control strategies include an error-reduction (ER) and an error-augmentation (EA), which are tested on both dominant and non-dominant hand. A non-invasive brain-computer interface is used to monitor the electroencephalogram (EEG) activities of the subjects and evaluate the subject's mental engagement using the power of multiple frequency bands (theta, alpha, and beta). Statistical analysis of the effect of the control strategy on mental engagement revealed that the choice of the haptic control strategy has a significant effect (p \(<\) 0.001) on mental engagement depending on the type of hand (dominant or non-dominant). Among the evaluated strategies, EA is shown to be more mentally engaging when compared with the ER under the non-dominant hand.

Haptic systems, Rehabilitation, Mental engagement, EEG

For the same level of therapy intensity, there was no significant difference between conventional and robotic rehabilitation outcomes. It is very well established that the outcome of robotic rehabilitation depends on two main factors: the intensity of the exercise and the mental engagement (active participation) of the patient in the therapy.[8, 9] The training strategies in haptic-based rehabilitation mainly fall into two categories: 1) _error-reduction (ER)_, which decreases the performance error by providing active assistance so that the patient can perform the rehabilitation tasks better, and 2) _error-augmentation (EA)_, which increases the task difficulty to evoke a higher voluntary involvement of the patient in accomplishing the goal. Despite the wide implementation of both ER and EA strategies, there is still a lack of agreement on which strategy evokes more clinically-significant outcomes after training.[10, 11] In this regard, Youlin et al.[10] performed a comprehensive review of the effect of ER and EA strategies in enhancing upper extremity performance and post-stroke recovery. Their review suggested that the EA strategy was statistically more effective than conventional repetitive practice in both motor recovery and task performance. They even reported a statistically significant improvement in motor performance using EA when compared with ER. However, neither EA nor ER evoked clinically significant changes in motor recovery and function.[10, 11, 12] Regardless of the type of haptic training strategy (EA or ER), the outcome of haptic-based therapy may not necessarily be superior to manual therapy unless there is active engagement from the patient.[8, 13] For instance, in ER, a high level of assistance may render the task too easy, causing the loss of patient engagement and failure to learn the motor primitives.[14] Conversely, in EA, the task can become very difficult to the point that it induces anxiety, causing the patient to give up rehabilitation training at an early stage. Therefore, quantification of patient engagement has a pivotal role in the success of robotic rehabilitation. This quantification is even more critical in home-based therapy as the clinician may not be physically present to encourage the patients to perform the rehabilitation tasks. To maximize the patient's engagement during the rehabilitation, it is, therefore, necessary to adaptively modify the interaction parameters.[9]
Fig. 1 demonstrates the general framework of such an adaptive rehabilitation system, where interaction parameters (e.g., controller type/gains, visualization, feedback, task type, and intensity) are modified depending on the level of the patient's engagement and performance to encourage his/her active involvement. To evaluate the rehabilitation performance as well as the engagement level, subjective (physician evaluations) and data-driven approaches can be used. It should be noted that the adaptation algorithm requires a proper understanding of the relationship between the subject's engagement and the system's interaction parameters. Two types of engagement, bio-mechanical and psycho-physiological (cognitive), are usually considered in rehabilitation systems for extracting the state of the patient. Although the importance of both engagement types is very well understood for the success of rehabilitation,[9, 15] the majority of the adaptive rehabilitation techniques only consider bio-mechanical measures such as applied force[16] or muscle activity[17] due to the ease of use. As a result, the relationship between bio-mechanical engagement and the rehabilitation control parameters is much better studied and understood[9] compared to cognitive engagement. This knowledge gap is mostly due to the difficulty in real-time estimation of mental engagement. In this regard, electroencephalogram (EEG) activities have attracted much attention in quantifying mental engagement, as they provide direct insight into the subject's cognition during rehabilitation[18] with very good temporal resolution. Such quantification has been suggested as a continuous outcome measure during rehabilitation[19, 20] to change the interaction parameters to the optimum levels. In this context, to the best of our knowledge, there is still no study investigating the effect of the control strategy type (EA or ER) on patient mental engagement during fine motor tasks.

Figure 1: Typical framework of an adaptive haptic rehabilitation strategy. The highlighted boxes indicate the scope of this work, which is studying the effect of the haptic control strategy on mental engagement.

The scope of our work within the adaptive rehabilitation strategy is highlighted in Fig. 1. To reiterate, we are using EEG as a passive assessment method to quantify the engagement under different haptic control strategies, namely error augmentation and error reduction. In this paper, we investigate the above-mentioned knowledge gap by studying the mental engagement of subjects during haptic-based fine motor writing tasks. For this purpose, our design of the experiment considers three types of haptic control strategies: ER, EA, and baseline (free control - no haptic assistance). For passive mental engagement quantification, we use an EEG-based index and explore two research questions: i) which type of haptic control (EA v/s ER) evokes higher engagement in subjects, and ii) whether the subject's engagement depends on the hand (dominant or non-dominant) used in the haptic-based rehabilitation task. A Generalized Linear Mixed Model (GLMM) is used to conduct the statistical study on the effect of control strategy (EA v/s ER) and hand type (dominant v/s non-dominant) on the engagement level of the subjects. This study explores an objective assessment of the effect of haptic-control strategy on mental engagement during fine motor tasks. Understanding the relation between mental engagement and haptic-control strategy can facilitate the design of adaptive haptic controllers that can promote the patient's engagement in home-based rehabilitation. Such controllers can leverage the mental engagement of the patients as a metric to adapt the level and type (EA v/s ER) of assistance being supplied to achieve better rehabilitation outcomes.
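For reference, the statistical analysis can be sketched as follows; the synthetic data layout is an illustrative assumption, and a linear mixed-effects model with a per-subject random intercept is used here as a stand-in for the GLMM described above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per trial with an engagement index value.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "engagement": rng.normal(0.8, 0.1, 120),
    "strategy": np.tile(["free", "ER", "EA"], 40),
    "hand": np.tile(np.repeat(["dominant", "non-dominant"], 3), 20),
    "subject": np.repeat([f"s{i}" for i in range(10)], 12),
})

model = smf.mixedlm("engagement ~ C(strategy) * C(hand)", data=df,
                    groups=df["subject"])       # random intercept per subject
print(model.fit().summary())                    # fixed effects and interaction term
```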
### Engagement in robotic rehabilitation

In this section, we briefly highlight the importance of mental engagement in motor learning and the outcomes of robotic rehabilitation, and then discuss the approaches with which the engagement can be evaluated. There are multiple studies providing evidence that, in robot-assisted rehabilitation, passive movements are insufficient to improve motor recovery [21] unless they are coupled with active movements and engagement [22]. For instance, Lynch et al. [21] conducted randomized controlled trials with 32 patients who received continuous passive movements. They reported a positive trend in the motor learning of the patients but no significant differences when compared to patients performing therapist-supervised self-range of motion. Moreover, it is suggested that motor rehabilitation is a form of learning [23, 24] that can be enhanced by active patient participation/engagement [23]. Recent research in robotic rehabilitation is therefore focused on maximizing the patient's engagement by providing assistance as needed and adapting the rehabilitation procedure according to the patient's intent [25, 26, 27]. Blank et al. have provided a comprehensive review of the importance and the approaches for promoting patient engagement in robot-assisted stroke rehabilitation [8]. Bartur et al. [19] evaluated the relationship between the single-channel Brain Engagement Index (BEI) measured in terms of EEG and temporary functional changes induced during the standard rehabilitation sessions. The clinical study included 18 post-stroke patients, with treatment sessions averaging 35 minutes, each followed by a 30-second evaluation period. The study demonstrated that when BEI is higher, the temporary functional improvement due to the treatment session is also better. Trujillo et al. [28] assessed the relationship between qualitative EEG measures and the motor recovery outcome in chronic stroke patients in robot-assisted rehabilitation. Ten stroke patients with upper limb deficits were recruited for the study. Clinical assessment was done by a physical therapist after one month of treatment using the Fugl-Meyer Assessment (FMA). The study showed that qualitative EEG measures were indeed correlated with motor recovery in chronic patients. Along with the patient's mental engagement, the patient's intention to move is also used to assist or challenge as needed. For instance, Marquez-Chin et al. [25] used EEG-triggered functional electrical stimulation therapy to treat the upper-limb reaching motion of a 64-year-old stroke patient. The study reported a clinically significant improvement in the Fugl-Meyer Assessment Upper Extremity score as well as a moderate improvement in the Functional Independence Measure Self-Care subscore. On the same lines, Sullivan et al. [27] conducted a multi-year clinical study involving EEG-based movement-intention detection for elbow flexion/extension rehabilitation. The results indicated that increasing the patient's engagement through intention detection enhanced the effectiveness of the robotic rehabilitation system.
The above studies signify the importance of the patient's active mental engagement, along with intensive repetitive movements (facilitated by a robot), on the outcome of robotic rehabilitation. Nonetheless, objective quantification of mental engagement is still not straightforward due to its multi-dimensional nature. Mental engagement includes different aspects of emotion, cognition, and motivation of the subject, making it an intricate and complicated feature, often associated with vigilance and alertness [29]. To measure cognitive engagement, researchers have explored various modalities such as electrocardiography (ECG) [30], galvanic skin response (GSR) [31], electromyography (EMG) [32, 33], and pupillometry [34]. However, these modalities have certain drawbacks that limit their interpretability and scalability. For instance, the temporal resolution of ECG and GSR is poor and cannot relate directly to the stimuli. EMG signals cannot distinguish between passive and active movements of the patients [35]. Eye features are sensitive to lighting conditions and are more relevant in visually oriented tasks [36]. Also, pupillometry measures visual attention [37] (the covert aspect of attention), which represents the level of visual information-gathering and provides vital information when the task difficulty is modulated by visual complexity. However, it contains less information about cognitive engagement when the task is not predominantly visually oriented. Alternatively, recent advances in BCI have made it possible to measure cognitive engagement, which can potentially address the aforementioned problems [38]. In particular, using EEG has recently drawn significant attention [19, 39, 40]. Many researchers have quantified mental engagement in terms of EEG-based engagement indices [41, 42]. These engagement indices are calculated using ratios of powers of different EEG frequency bands. The most common frequency bands used are Theta (\(\theta\): 4-7 Hz), Alpha (\(\alpha\): 8-13 Hz), and Beta (\(\beta\): 14-35 Hz) [43]. Lubar et al. [44] suggested the \(\beta/\theta\) ratio averaged over all electrodes as an indicator of engagement. Pope et al. [45] and Freeman et al. [43] improved that index by including the contribution of the alpha rhythm as \(\beta/(\theta+\alpha)\) and suggested that it captured cognitive processes such as information gathering and attention. Berka et al. [46] supported the use of the \(\beta/(\theta+\alpha)\) index by demonstrating that it correlates with sustained attention, information gathering, and visual scanning. Gevins and Smith [47] introduced a new index, \(\theta/\alpha\), considering the theta and the alpha activities in the frontal and parietal cortices, respectively, as a reflection of performance in demanding tasks. Yamada and Fumio [48] used only frontal mid-line \(\theta\) power as an indicator of mental demand and focused attention. Within the alpha band (\(1/\alpha\)), lower and higher alpha have been found to reflect attention and task processing, respectively [49]. In our study, we used a psycho-physiological baseline task to evaluate the robustness of the above-mentioned engagement indices. Out of the five indices, \(\beta/(\theta+\alpha)\) was selected for the subsequent analysis.
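A minimal sketch of computing the selected \(\beta/(\theta+\alpha)\) index from a single EEG channel is given below; the band edges follow the text, while the Welch parameters and sampling rate are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def engagement_index(eeg: np.ndarray, fs: float) -> float:
    """beta / (theta + alpha) from a single-channel EEG segment."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-second Welch windows
    def band_power(lo: float, hi: float) -> float:
        sel = (freqs >= lo) & (freqs <= hi)
        return trapezoid(psd[sel], freqs[sel])
    theta = band_power(4.0, 7.0)
    alpha = band_power(8.0, 13.0)
    beta = band_power(14.0, 35.0)
    return beta / (theta + alpha)

print(engagement_index(np.random.randn(30 * 256), fs=256.0))  # dummy 30 s segment
```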
In this paper, the same approach has been adopted for home-based robotic rehabilitation to recover a subject's writing skills. We have designed a writing simulation environment to study motor learning (trajectory tracking) of a subject with haptic assistance. As shown in Fig. 2, the experimental setup consists of three main components: (1) a virtual environment to simulate the writing task, (2) a haptic device for assistance/resistance, and (3) an EEG headset to record the subject's cognitive activity.

Figure 2: Experiment setup and writing simulator with haptics.

### Simulation Environment

A writing simulation environment (Fig. 2) is developed using the Unity3D interface, in which the end-effector of the haptic device acts as the writing stylus controller [4]. It allows the therapist to draw a template of interest (Fig. 2) on the screen, which is then used as the reference trajectory to be followed by the patient. In cases where the subject deviates from the desired trajectory, a controlled force is applied by the haptic device to correct the trajectory. To achieve high-fidelity haptic rendering (and thus better haptic assistance), the sampling rates of the simulation system (100 Hz) and the haptic device (1 kHz) are synchronized by resampling the trajectory data using B-Spline interpolation. With this approach, a continuous function of the discretely sampled reference trajectory can be generated, which can be used for designing the controllers, as explained in the subsequent sections. Further, the B-Spline interpolation can be differentiated analytically to obtain a reference velocity. To determine the tracking error (\(\mathbf{e}\)) between the desired position (\(\mathbf{x_{d}}\)) and the current position of the subject (\(\mathbf{x}\)), a virtual 'no-error-zone' (Fig. 3a) is constructed around the trajectory. For instance, consider a simple case of tracking a straight line from one target (\(P_{1}\)) to another (\(P_{2}\)). The no-error-zone can be imagined as a rectangle with a width of \(w\) and a length equal to the distance between the targets. While the end-effector is inside the rectangle, the tracking error is set to zero; otherwise, the error is defined as the minimum distance from the subject's position (\(\mathbf{x}\)) to the edge of the rectangle (\(\mathbf{x_{d}}\)).

### Haptic Interface

A 6 degrees-of-freedom (6 revolute joints: 3 actuated and 3 passive) Geomagic® Touch™ is used to provide force feedback to the user. The force feedback capability, small form factor, lower cost, and 3-dimensional workspace make it a viable choice for home-based rehabilitation. The device is capable of applying a maximum force of 3.3 N and samples at a rate of 1000 Hz. However, the range of forces it can apply is limited. The simulation environment employs three control strategies: i) _Free_, ii) _Error-Reduction (ER) strategy_, and iii) _Error-Augmentation (EA) strategy_. A detailed explanation of each control strategy is given in the succeeding sections. The haptic interaction with the simulation environment (haptic rendering) is modeled as a mass-spring-damper system. The robot provides an assistive or resistive force as the subject uses the system. The force command is generated based on a PD controller described by (1).

\[\mathbf{e}(t)=\mathbf{x}_{d}(t)-\mathbf{x}(t)\]
\[\mathbf{u}(t)=K_{p}\mathbf{e}(t)-K_{d}\mathbf{\dot{x}}(t) \tag{1}\]

where \(K_{p}\) and \(K_{d}\) denote the proportional and derivative gains, respectively.
\(\mathbf{u}\) is the control input provided by the robot to generate the haptic feedback. The control gains \(K_{p}\) and \(K_{d}\) determine the degree of robotic assistance or resistance, which can be adjusted based on the subject's response. The derivative gain term (\(K_{d}\)) is set to a small positive value to simulate the sensation of moving through a lightly viscous environment. The desired velocity (\(\mathbf{\dot{x}_{d}}(t)\)) in (1) is set to zero; hence the derivative term reduces to \(-K_{d}\mathbf{\dot{x}}(t)\).

#### 2.2.1 Free Strategy

Free mode refers to the unassisted paradigm in the writing simulator. In this strategy, the robot applies no assistive/resistive force to the subject's hand. The subject is solely responsible for controlling the cursor motion. In this case, the proportional gain (\(K_{p}\)) of the control law is set to zero. This mode serves as a baseline for evaluating the subject's progress. The derivative gain (\(K_{d}\)) is fixed at 0.02 across all modes to simulate a friction sensation analogous to writing on a real piece of paper.

#### 2.2.2 Error-Reduction (ER) Strategy

In this strategy, as long as the subject remains inside the no-error-zone, the robot offers no assistance. If the subject moves outside the no-error-zone, the robot applies an assistive force towards the closest point, \(\mathbf{x}_{d}(t)\), on the rectangle's edge (Fig. 3a). The closest point on the spline is calculated as (2). In certain cases, the closest point may lie on an untraversed section of the trajectory. In such cases, the closest point is chosen by searching over the points up to index \(k+n\), where \(k\) is the index of the current closest point \(\mathbf{x}_{d}(t)\) and \(n\) is the length of the search space (in this experiment, \(n=5\)).

Figure 3: (a) Schematic representation of the virtual no-error-zone and (b) templates used for the fine motor tasks.

\[\mathbf{x}_{d}(t)=\mathbf{x}_{l}(t)-\frac{w}{2}(\mathbf{x}_{l}(t)-\mathbf{x}(t)) \tag{2}\]

The control law that generates the assistive force for ER, based on the error (\(\mathbf{e}(t)\)) and the current position (\(\mathbf{x}(t)\)), is given as (3),

\[\mathbf{u}(t)=\begin{cases}K_{p}\mathbf{e}(t)-K_{d}\dot{\mathbf{x}}(t),&\text{if }d>\frac{w}{2}\\ 0,&\text{otherwise}\end{cases} \tag{3}\]

where \(d\) is the Euclidean distance between the current point \(\mathbf{x}(t)\) and the closest point on the line, \(\mathbf{x}_{l}(t)\). The \(K_{p}\) gain is set to a value of 1. This value was chosen experimentally to ensure that sufficient assistance is provided to the subject without causing instability in the system.

#### 2.2.3 Error-Augmentation (EA) Strategy

EA is similar in conception to ER, with the exception that in the case of deviations from the no-error-zone, the haptic device forces the user away from the trajectory. In other words, \(K_{p}\) is replaced by \(-K_{p}\) in (3). The subject then needs to guide the cursor back into the no-error-zone against the resistive force of the haptic device. The calculations for the closest point are the same as described for ER. The \(K_{p}\) gain is set to a value of \(-1\). As with the previous case, this value was chosen experimentally to ensure that the resistance applied by the robot could be countered by the subjects without any discomfort while maintaining controller stability.
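Taken together, the three strategies differ only in the handling of the proportional term in (1) and (3). The following minimal sketch illustrates the force command computed at each control tick, assuming 2-D NumPy vectors and a precomputed closest point; the function and variable names are ours, not those of the actual implementation:

```python
import numpy as np

def haptic_force(x, x_dot, x_d, d, w, mode, kp=1.0, kd=0.02):
    """One control tick of eqs. (1) and (3).

    x, x_dot : current end-effector position and velocity (2-D arrays)
    x_d      : closest point on the no-error-zone edge (assumed precomputed)
    d        : Euclidean distance from x to the closest point on the line
    mode     : "free", "er" (assist toward x_d) or "ea" (push away from it)
    """
    damping = -kd * x_dot                 # light viscous feel in every mode
    if mode == "free" or d <= w / 2:      # inside the no-error-zone: no spring force
        return damping
    e = x_d - x                           # tracking error
    gain = kp if mode == "er" else -kp    # EA flips the sign of the K_p term
    return gain * e + damping
```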
### Human Subject Study

In an experiment approved by the Institutional Review Board (IRB# 770128-1), ten subjects (six males and four females) were recruited from the students of the University at Buffalo. Participants' ages ranged from 19 to 26 years (Mean = 24, SD = 2.1), and they had normal or corrected-to-normal vision. The subjects were instructed to write one of the ten reference templates of the same size (Fig. 3b) using the simulator, tracing it 10 times in each control mode. The templates were chosen and modified from the Visual-Motor Integration textbook [50]. These simple templates were chosen to maintain the difficulty level of the task in an acceptable range and avoid any potential mental overload leading to a loss of engagement. All the subjects first performed the writing task using their dominant hand and then switched (with different figures) to their non-dominant hand. Under both the dominant and non-dominant hand, all three control strategies were tested. The order of the control strategies and the templates was pseudo-randomized (Fig. 4).

Figure 4: Experiment procedure flowchart. The order of the haptic control strategy and the choice of the shape were randomized for all subjects. All subjects were trained on their dominant hand and then the non-dominant one.

Before the experiment, the subjects were briefed about the writing task and practiced on the writing simulator using the Free strategy. A reference trajectory was generated by the subject at the beginning of the experiment, which was used to calculate the errors of subsequent trajectories. During the experiment, the EEG activity of the subjects was recorded. EEG signals were recorded using the B-Alert X10 wireless headset (Advanced Brain Monitoring, Carlsbad, CA, USA) from 9 locations. These channels were F3, Fz, F4, C3, Cz, C4, P3, POz, and P4, according to the international 10/20 system. We ensured that the electrode impedance was below 40 k\(\Omega\).

### EEG Signal Analysis

EEG signals were band-pass filtered (0.1-70 Hz) and then transmitted from the headset via a Bluetooth link to a nearby PC at a 256 Hz sampling rate. Artifacts caused by eye blinks and muscle contractions were removed using independent component analysis with the infomax algorithm in EEGLAB [51]. We visually examined 2-D scalp component maps to remove signal sources corresponding to eye movements and non-cognitive activities. After removal, the components were projected back to obtain an artifact-free EEG signal. Furthermore, relative and absolute power spectral densities were extracted using Welch's method from 1-second epochs with a 50% overlapping Hamming window. The features of 3 frequency bands, namely theta (4-7 Hz), alpha (8-13 Hz), and beta (14-35 Hz), were used to extract five engagement indices commonly used in mental engagement analysis. This workflow is presented in Fig. 5.

### Engagement Index Selection

In this study, we considered five engagement indices (Table 1), of which only one was retained for subsequent analysis. Even though EEG-based engagement indices provide an objective measure of mental engagement, these measurements depend on individual differences and the type of task. Hence, to select an engagement index that is robust to task variation and individual differences, a baseline task was conducted on twenty-two subjects (not participating in the primary human subject study), and the aforementioned five engagement indices were evaluated. The baseline included three tasks: the Three-Choice Vigilance Task (3CVT), Eyes Open (EO), and Eyes Closed (EC), corresponding to high engagement, low engagement, and relaxed wakefulness, respectively.
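As an illustration of this pipeline, the band powers and the resulting index can be computed from the artifact-free signal with Welch's method; a minimal sketch, assuming SciPy and the parameters reported above (the array shape and helper names are ours):

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 7), "alpha": (8, 13), "beta": (14, 35)}

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over the [lo, hi] Hz band (last axis = frequency)."""
    sel = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[..., sel], freqs[sel], axis=-1)

def engagement_index(eeg, fs=256):
    """beta / (alpha + theta), averaged over all electrodes.

    eeg: array of shape (n_channels, n_samples), artifact-free EEG.
    """
    # 1 s epochs with a 50% overlapping Hamming window, as in the paper
    freqs, psd = welch(eeg, fs=fs, window="hamming",
                       nperseg=fs, noverlap=fs // 2)
    p = {b: band_power(freqs, psd, lo, hi) for b, (lo, hi) in BANDS.items()}
    return (p["beta"] / (p["alpha"] + p["theta"])).mean()
```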
For each index, we conducted a paired Student's t-test between the averaged scores of 3CVT and EO. The index with the strongest discriminating power (lowest p-value) was selected as the engagement index for subsequent analysis. A significance level of 1% (2-tailed) was used for all the comparisons. All the considered indices demonstrated statistically significant results (\(p<0.01\)) (see Table 1) except for \(1/\alpha\) and \(\beta/\alpha\). Among the indices that showed significant results, the ratio \(\beta/(\alpha+\theta)\) was retained for the rest of the study, as it is a widely used measure of engagement.

Figure 5: EEG analysis pipeline to calculate the engagement index.

### Statistical Analysis

To study the effect of haptic control strategy and hand type on the engagement level, we have used generalized linear mixed models (GLMM). The GLMM is an extension of the general linear model that considers both fixed and random effects. It is widely used for the analysis of grouped data, as it can model the differences between groups as a random effect. The GLMM formulation is given by equation (4), where \(\mathbf{\beta}\) are the fixed-effect coefficients, \(\mathbf{u}\) are the random-effect coefficients, and \(\mathbf{X}\), \(\mathbf{Z}\) are the model matrices for the fixed and random effects, respectively.

\[\mathbf{ln}(\mathbf{y})=\mathbf{X}\mathbf{\beta}+\mathbf{Z}\mathbf{u}+\mathbf{\varepsilon} \tag{4}\]

The GLMM also allows the response variable, \(\mathbf{y}\), to follow distributions other than Gaussian. Throughout the study, we have used the natural-log-transformed engagement index, as the log-normal distribution fitted the engagement index well (see Section 3.2). The log-transformation of \(\mathbf{y}\) does not alter the conclusions of the GLMM because the natural log is a monotonic function. In our study, we always consider each subject as a random effect on the intercept to account for individual differences. This consideration allows us to study the general trend in the main population from which the samples (subjects) are selected, and not just the specific samples.

## 3 Results and Discussion

Before presenting the results of the statistical analysis, it is crucial to determine whether the subjects received two different force distributions from the ER and EA controllers. This provides a validation of the haptic control implementation and also gives insights into the kinetic behavior of the dominant and the non-dominant hand. Moreover, we need to ensure that the levels of force chosen are such that the task is neither too difficult nor too easy for the subjects, which may lead to low engagement in either case. Also, we need to ensure that our experimental design maintains the same level of bio-mechanical engagement (reflected in the interaction force, tracing speed, and error) to avoid its confounding effect. Hence, in Section 3.1, we first provide results on the kinesthetic aspects of our experiment: hand trajectory, force applied by the haptic device, error distribution in the trajectory, and average tracking speed, to validate our experimental design. Section 3.2 concludes with GLMM results for the engagement index under the dominant and the non-dominant hand using different control strategies.

### Kinesthetic Results of the Writing Task

Fig. 6 shows the two-dimensional position data for one of the subjects performing the writing task under different haptic control strategies. The spread of traces is larger in the non-dominant hand when compared to the dominant hand.
This is because the subjects are already familiar with fine motor movement in their dominant hands. The EA strategy forces are distributed towards the positive side, and the ER strategy forces are distributed towards the negative side, which suggests that the control strategies behaved as intended (Fig. 7a). As the writing task is planar, we neglect the force in the z-direction.

\begin{table}
\begin{tabular}{c c c}
**Index** & **Location** & **p-value** \\ \hline
\(\beta/(\alpha+\theta)\) & Avg. over all electrodes & \(<\)**0.01** \\ \hline
\(\theta/\alpha\) & Avg. frontal midline \(\theta\) and avg. parietal \(\alpha\) & \(<\)**0.01** \\ \hline
\(1/\alpha\) & Avg. over parietal electrodes & 0.025 \\ \hline
\(\beta/\theta\) & Avg. over frontal electrodes & \(<\)**0.01** \\ \hline
\(\beta/\alpha\) & Avg. over parietal and occipital electrodes & 0.792 \\
\end{tabular}
\end{table}
Table 1: The list of the most common engagement indices used in the literature. The effectiveness of each index in distinguishing between the baseline experiments is shown in terms of p-values.

The magnitudes of the EA and ER strategy forces are significantly different from the no-force condition under both the dominant and non-dominant hand, which is expected. However, the magnitudes of the forces in EA and ER are not significantly different from each other, which indicates that the main factor that changes is the strategy type, not the magnitude of the force itself. In terms of the average tracking speed, there was no significant difference between the control strategies under the dominant and non-dominant hands (Fig. 7b). A plausible reason is the simplicity of the drawing figures, together with the fact that force was applied only to correct errors in the trajectory rather than to guide the subject along the trajectory. In terms of tracking error, the error in the baseline task is not significantly different from the tracking error in the dominant hand or the non-dominant hand. A probable reason is that the participants were healthy subjects who are familiar with the writing task. Another probable reason might be the size of the shapes used for tracking. The shapes were neither large nor complicated enough to be challenging for healthy subjects and induce significant errors. This constraint on the size of the shapes and the error zone comes from the design decision to control the bio-mechanical engagement and study only active mental engagement. Also, there is no significant difference between EA and ER within the dominant or non-dominant hand (Fig. 8). The non-significant difference in tracking error under different control strategies also explains the non-significant difference in the magnitude of the force distributions (Fig. 7a). In terms of variance (in tracking error), the variance in the non-dominant hand (0.088) is higher than in the dominant hand (0.06). This may be due to the subjects' higher familiarity with the fine motor control task using the dominant hand than the non-dominant hand. A similar error distribution across the two control strategies signifies that the control range chosen in the experiment is not so high as to disengage or distract the subject. This is important because providing too much assistance can reduce the subject's involvement, and they may fail to learn the motor primitives needed to complete a task. Conversely, challenging the patient beyond a certain level might distract the subjects from the motor task itself, thereby leading to loss of engagement.
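For reference, the tracking-error convention used in these comparisons (zero inside the no-error-zone, distance to the zone's edge outside; cf. Section 2.1) can be sketched for the straight-line case as follows. This is a simplification of the spline-based implementation, with names of our choosing:

```python
import numpy as np

def tracking_error(x, p1, p2, w):
    """No-error-zone error for the straight segment p1 -> p2.

    Returns 0 inside the rectangle of width w around the segment,
    otherwise the distance from x to the rectangle's edge."""
    seg = p2 - p1
    # parameter of the closest point on the segment, clipped to [0, 1]
    t = np.clip(np.dot(x - p1, seg) / np.dot(seg, seg), 0.0, 1.0)
    x_l = p1 + t * seg               # closest point on the reference line
    d = np.linalg.norm(x - x_l)      # distance from the subject's position
    return 0.0 if d <= w / 2 else d - w / 2
```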
### Statistical Analysis Results of Engagement Level

For generalized linear mixed models, it is important to consider the distribution of the response variable in order to fit the correct distribution. In this study, four different distributions, namely Weibull, Gamma, Gaussian, and Log-normal, were fitted to the engagement index for each subject to identify the best distribution fit. To quantify the goodness of fit, the Akaike information criterion (AIC) was used. The AIC was calculated for each distribution fit and averaged over all the subjects (Table 2). The Log-normal distribution on average (over subjects) gave the lowest AIC value, followed by the Gamma and Gaussian (normal) distributions. Consequently, the Log-normal distribution was used for the rest of the linear mixed models.

Figure 6: Tracking data (1 unit \(\approx\) 350 mm) for the (a) dominant and (b) non-dominant hand using different control strategies.

\begin{table}
\begin{tabular}{c c c c c} \hline
**Distribution** & **Weibull** & **Gamma** & **Gaussian** & **Log normal** \\ \hline
**Average AIC** & -1538.259 & -2463.218 & -1820.818 & **-2676.88** \\ \hline
\end{tabular}
\end{table}
Table 2: Average (over subjects) Akaike information criterion (AIC) of different distributions fitted to the engagement index.

Figure 8: (a) Tracking error distribution for the dominant, and (b) non-dominant hand under ER and EA control strategies.

Figure 7: **(a)** Comparison of the interaction forces using the three control strategies for both the dominant and non-dominant hand. All comparisons are significantly different. **(b)** Comparison of the average tracking speed using different control strategies under the dominant and non-dominant hand. All comparisons are statistically insignificant. The interaction force is measured in Newtons (N), and the tracking speed is measured in Unity units per second (uu/s).

Many researchers have shown that the beta, theta, and alpha rhythms are negatively correlated with task engagement and alertness [43, 45]. Concerning the engagement index, numerous studies have shown that beta oscillations have a vital role in attention-related processes [52]. The beta oscillations are normally associated with sensorimotor processing [53]. The beta power (sensorimotor cortex) decreases during the preparation and execution of movement but increases after the movement is completed. Jenkinson and Perter [54] proposed that the level of beta activity is inversely proportional to the likelihood that a new voluntary action will need to be processed and performed. They hypothesized that net dopamine levels and beta activity are inversely related [55]. Theta oscillations in the engagement index \(\beta/(\alpha+\theta)\), in general, are correlated with memory and emotional regulation (both positive and negative emotions). Also, the desynchronization of lower alpha activity is associated with the attention process [55]. Jensen et al. [56] suggested that frontal theta oscillations are observed during increased workload, indicating sustained attention to new information. Thus, an increase in \(\theta\) activity indicates an increase in the attention process. Consequently, the index \(\beta/(\alpha+\theta)\) is negatively correlated with the engagement level. The remaining GLMM results are interpreted based on this consideration. Moving forward, we provide a general model with all the factors. The factors considered are the control type, the magnitude of the force, and the hand type.
The reason for considering only the magnitude of the force is that the sense of direction is already encoded in the control type. Within this model, the hand type is used as a random slope effect. This consideration accounts for dominant versus non-dominant differences, as well as the variation of the templates used with those hands. If any difference is found, separate mixed models are constructed under the dominant and non-dominant hand to study the effect of the control strategies on engagement. It should be noted that the GLMM considers one level of each of the predictors as the reference. For the general model, all the comparisons are made taking the dominant hand and the no-force control type as the references. Under this general model, the GLMM (Table 3) revealed a significant effect of controller type, hand type, and the interaction of controller type with hand type under the non-dominant hand. Thus, we can conclude that the control type affects the engagement level differently depending on the hand type. Consequently, we constructed two different GLMMs under the dominant and non-dominant hands to study the effect of the control strategy. Fig. 9 further highlights the above results under the dominant and non-dominant hand across different subjects. The regression shows a negative slope for the non-dominant hand and a positive slope for the dominant hand. This difference can be explained by the fact that the dominant hand is well versed in fine motor tasks such as writing due to extensive practice [57], resulting in less mental attention being required to execute simple motor tasks. In fact, using PET functional imaging studies, Grafton et al. [58] have shown the recruitment of widespread frontal and temporal regions in the brain during non-dominant hand motor learning, which has been identified as a source of the higher consumption of attentional resources observed in ERP studies of non-dominant hand motor learning [59]. Moreover, the dominant hand has been shown to have more motor units than the non-dominant hand [60, 61]. The non-dominant hand does not enjoy such an advantage; thus, some difference is expected across control strategies. This is highlighted further in the following GLMM model under the non-dominant hand. The GLMM revealed a significant difference in engagement level in the non-dominant hand when the EA control strategy was used.

\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Effect** & **Predictors** & **Estimates** & **Confidence Interval** & **p-value** \\ \hline
\multirow{4}{*}{Main} & Intercept & -0.75 & [-0.96 & -0.59] & \(<\)**0.001** \\ & Error Augmentation (EA) & 0.27 & [0.14 & 0.41] & \(<\)**0.001** \\ Main & Error Reduction (ER) & 0 & [-0.16 & 0.16] & 0.987 \\ Main & Force & 0.02 & [-0.13 & 0.18] & 0.749 \\ Main & Non Dominant & 0.16 & [0.09 & 0.24] & \(<\)**0.001** \\ Interaction & EA \(\times\) Force & -0.18 & [-0.39 & 0.01] & 0.065 \\ Interaction & EA \(\times\) Non Dominant & -0.42 & [-0.63 & -0.21] & \(<\)**0.001** \\ Interaction & ER \(\times\) Non Dominant & -0.12 & [-0.37 & 0.11] & 0.281 \\ Interaction & Force \(\times\) Non Dominant & 0.01 & [-0.21 & 0.24] & 0.886 \\ \hline \hline
\end{tabular}
Other interaction terms are not reported due to insignificant effects.
\end{table}
Table 3: GLMM results with all the factors and the engagement index.

The EA strategy decreased the engagement index by 0.14 units (Table 4) with respect to the baseline (no-force) strategy. Note that a decrease in the engagement index signifies an increase in the engagement level. With ER, the engagement index decreased by 0.11.
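The per-hand models of Table 4 amount to a linear mixed model on the log-transformed index with a per-subject random intercept. A minimal sketch, assuming statsmodels and a hypothetical long-format file whose column names are ours:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per 1 s epoch, with columns
# subject, control ("free" / "er" / "ea"), and engagement_index.
df = pd.read_csv("engagement_nondominant.csv")
df["log_ei"] = np.log(df["engagement_index"])   # log-normal response

# Control strategy as a fixed effect (no-force "free" as the reference level)
# and a per-subject random intercept to account for individual differences.
model = smf.mixedlm("log_ei ~ C(control, Treatment('free'))",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())  # estimates comparable in form to Table 4
```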
As discussed before, the \(\beta/(\alpha+\theta)\) engagement index is negatively correlated with the engagement level; thus, the EA strategy induces more engagement compared to the ER and baseline strategies under the non-dominant hand. We repeated the above model considering the ER strategy as the reference level; this reveals any differences between the EA and ER strategies. There was a significant difference (p\(<\)0.05) under the EA strategy when compared to the ER strategy under the non-dominant hand. Under the dominant hand, the engagement index increased by 0.14 using EA and decreased by 0.01 under ER when compared to the baseline strategy. A line of reasoning for this trend can be that subjects are already familiar with fine motor tasks in their dominant hand. Hence, the chance of making a significant mistake is very low, so any force applied might not lead to more engagement in the task. Conversely, the non-dominant hand has less dexterity for performing fine motor tasks such as writing. Hence, a different level of engagement was observed only under the non-dominant hand. As a consolidation, Fig. 10 shows the results for one of the subjects with the different control strategies and hand types. Under the dominant hand, the engagement index in EA is significantly higher than in the baseline (no force/free) strategy (Fig. 10a). However, in the non-dominant hand, the engagement indices under EA and ER are significantly lower than under the baseline strategy (Fig. 10b). Concretely, EA induces more engagement when compared to ER under the non-dominant hand. The observation of higher engagement in EA is also in agreement with the motor adaptation principle, which indicates that the neural signals that drive motor adaptation are generated by kinematic errors during movement [62, 63, 64].

\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Hand Type** & **Predictors** & **Estimates** & **Confidence Interval** & **p-value** \\ \hline
\multirow{3}{*}{**Non Dominant**} & Intercept & -0.59 & [-0.71 & -0.47] & \(<\)**0.001** \\ & Error Augmentation (EA) & -0.14 & [-0.18 & -0.11] & \(<\)**0.001** \\ & Error Reduction (ER) & -0.11 & [-0.15 & -0.08] & \(<\)**0.001** \\ \hline
\multirow{3}{*}{**Dominant**} & Intercept & -0.77 & [-0.87 & -0.68] & \(<\)**0.001** \\ & Error Augmentation (EA) & 0.14 & [0.11 & 0.18] & \(<\)**0.001** \\ & Error Reduction (ER) & -0.01 & [-0.02 & 0.04] & 0.507 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: The results of the linear mixed model of the engagement index with respect to the no-force control.

Figure 9: Trend of the log-transformed engagement index with respect to the variation of force. Positive force indicates the EA strategy and negative force indicates the ER strategy.

Another important measure of attention is the Sensorimotor Rhythm (SMR) in the sensorimotor region. The SMR has been used in the study of psychomotor efficiency and attention-related tasks [65]. In this context, Gruzelier et al. [66] provide a scoping review of neuro-feedback procedures using SMR modulation and beta band inhibition for improving attention, mood, and memory. In this study, we used the average (log-transformed) SMR power calculated at C3 and C4 between 13 Hz and 15 Hz to study the attention level under different haptic control strategies. Table 5 presents the results of the general linear mixed model of the sensorimotor rhythm power with respect to the baseline (no force).
Under the non-dominant hand, the SMR power using the ER and EA control strategies is significantly higher than the baseline (no force). However, under the EA strategy, the SMR power is slightly higher. This result agrees with the engagement index results presented in Table 4 (Fig. 10). Under the dominant hand, the SMR of the ER and EA strategies is significantly higher than the baseline (no force). This result conflicts with the previous results. Hence, a conclusive result cannot be drawn about the effect of control strategies under the dominant hand. The same can be seen in the \(\mu\) rhythm topographic map for one of the subjects (Fig. 11). The power is comparable between the no-force conditions under the dominant and non-dominant hand. However, the power is significantly higher in error reduction as well as in error augmentation when compared to baseline, which reflects the results of the linear mixed model (Table 5).

\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Hand Type** & **Predictors** & **Estimates** & **Confidence Interval** & **p-value** \\ \hline
\multirow{4}{*}{**Non Dominant**} & Intercept & -27.54 & [-27.92 & -27.16] & \textless{}**0.001** \\ & Error Augmentation (EA) & 0.11 & [0.05 & 0.18] & \textless{}**0.001** \\ & Error Reduction (ER) & 0.09 & [0.03 & 0.15] & **0.005** \\ \hline
\multirow{4}{*}{**Dominant**} & Intercept & -27.63 & [-28.04 & -27.21] & \textless{}**0.001** \\ & Error Augmentation (EA) & 0.23 & [0.16 & 0.29] & \textless{}**0.001** \\ & Error Reduction (ER) & 0.19 & [0.13 & 0.25] & \textless{}**0.001** \\ \hline \hline
\end{tabular}
\end{table}
Table 5: The results of the general linear mixed model of SMR power with respect to the no-force control.

Figure 10: Significant differences between engagement indices using different control strategies under the (a) dominant hand and (b) non-dominant hand (the asterisk denotes a significant difference, p \(<\) 0.01).

The limitations of the study are as follows. The controller gains were chosen based on the limitations of the haptic device: the device used in the study cannot apply forces greater than 3.3 N, so a very large force could not be applied. As a result, we could not establish any significant difference in bio-mechanical engagement parameters such as force and tracking error. Also, the participants in the experiments were healthy subjects who performed the experiments well, even with force from the haptic device. Consequently, the results above may differ if the system is used with clinical patients. Nonetheless, even though no significant difference in bio-mechanical engagement was observed, a significant difference was observed in active mental engagement.

## 4 Conclusion and Future Directions

This paper studies the effect of different control strategies on the mental engagement of subjects while performing fine motor control tasks using a haptic device with force feedback. An experiment is designed in which ten subjects perform a writing task using a haptic device that can either assist (error reduction) or challenge (error augmentation) the subject when they deviate from the reference trajectory. During the experiment, the subject's brain activity (EEG) is monitored to quantify mental engagement. The ratio of the beta power (from all electrodes) to the alpha-plus-theta power, \(\beta/(\alpha+\theta)\), is used as the measure of engagement.
With this setup, the study explores two research questions: i) which type of haptic control evokes higher engagement in subjects, and ii) whether the subject's engagement depends on the hand (dominant or non-dominant) used. General linear mixed models are used to identify the influence of the different control strategies on the engagement level of the subject. When all the factors (hand type, control type, and force magnitude) were considered, hand type had a significant effect. Consequently, two GLMMs were constructed, corresponding to the dominant and non-dominant hand. Under the non-dominant hand, the engagement under EA and ER is statistically higher than under the baseline control strategy; under the EA strategy, the engagement is slightly higher still. Along with the EEG engagement index, the Sensorimotor Rhythm (SMR) power is also analyzed. The SMR power is significantly higher for the EA and ER strategies when compared to the baseline task under the non-dominant hand. The same trend is observed under the dominant hand. However, the shapes considered in the study are comprised of simple arcs and lines, which can render the writing task too easy under the dominant hand. So, to arrive at more conclusive results under the dominant hand, more complex shapes need to be studied. Even though general linear mixed models have enough power to identify the effect of the control strategy, increasing the number of subjects could further bolster the study outcomes. Future work should investigate the relationship between higher levels of force feedback and the engagement of the subject in the task. This is vital because making a task more challenging may cause the subjects to give up. Understanding such relationships facilitates adaptive assistance based on the mental engagement of the subject during fine motor tasks.

Figure 11: \(\mu\) rhythm power over the scalp under different control strategies and hand types. The power under the force conditions is significantly higher than under the baseline condition for both the dominant and non-dominant hand.

## 5 Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant Nos. 1502287 and 1502339.
2305.15121
Beyond Individual Input for Deep Anomaly Detection on Tabular Data
Anomaly detection is vital in many domains, such as finance, healthcare, and cybersecurity. In this paper, we propose a novel deep anomaly detection method for tabular data that leverages Non-Parametric Transformers (NPTs), a model initially proposed for supervised tasks, to capture both feature-feature and sample-sample dependencies. In a reconstruction-based framework, we train an NPT to reconstruct masked features of normal samples. In a non-parametric fashion, we leverage the whole training set during inference and use the model's ability to reconstruct the masked features to generate an anomaly score. To the best of our knowledge, this is the first work to successfully combine feature-feature and sample-sample dependencies for anomaly detection on tabular datasets. Through extensive experiments on 31 benchmark tabular datasets, we demonstrate that our method achieves state-of-the-art performance, outperforming existing methods by 2.4% and 1.2% in terms of F1-score and AUROC, respectively. Our ablation study further proves that modeling both types of dependencies is crucial for anomaly detection on tabular data.
Hugo Thimonier, Fabrice Popineau, Arpad Rimmel, Bich-Liên Doan
2023-05-24T13:13:26Z
http://arxiv.org/abs/2305.15121v6
# Beyond Individual Input for Deep Anomaly Detection on Tabular Data

###### Abstract

Anomaly detection is crucial in various domains, such as finance, healthcare, and cybersecurity. In this paper, we propose a novel deep anomaly detection method for tabular data that leverages Non-Parametric Transformers (NPTs), a model initially proposed for supervised tasks, to capture both feature-feature and sample-sample dependencies. In a reconstruction-based framework, we train the NPT model to reconstruct masked features of normal samples. We use the model's ability to reconstruct the masked features during inference to generate an anomaly score. To the best of our knowledge, our proposed method is the first to combine both feature-feature and sample-sample dependencies for anomaly detection on tabular datasets. We evaluate our method on an extensive benchmark of tabular datasets and demonstrate that our approach outperforms existing state-of-the-art methods based on both the F1-Score and AUROC. Moreover, our work opens up new research directions for exploring the potential of NPTs for other tasks on tabular data.

## 1 Introduction

Anomaly detection is a critical task that aims to identify samples that deviate from a pre-defined notion of normality within a dataset. Traditional approaches to anomaly detection characterize the _normal_1 distribution almost exclusively using samples considered as _normal_, and flag data points as anomalies based on their deviation from this distribution. Anomaly detection (AD) is especially useful for applications involving imbalanced datasets, where standard supervised methods may fail to achieve satisfactory performance [51]. Those applications include fraud detection [19], intrusion detection in cybersecurity [27], astronomy [34], medical diagnosis [6], and data cleaning to remove samples that may hinder the performance of machine learning models.

Footnote 1: The term _normal_ here relates to the concept of normality in opposition to _abnormal_.

Anomaly detection encompasses both unsupervised and supervised methods. In most real-world scenarios, labeled datasets that differentiate normal samples from anomalies are unavailable or costly to obtain. To address this, efficient anomaly detection methods must be robust to dataset contamination, where the training set is predominantly composed of normal samples but also includes anomalies. However, when labeled data is available, one can consider a supervised approach to create a training set consisting solely of _normal_ samples, thereby indirectly incorporating label information into the anomaly detection model.

Many general AD methods tend to work well on tasks that involve unstructured data (_e.g._, natural language processing or computer vision), such as [41; 47; 25; 36; 37; 21; 26]. However, recent work [4; 32; 43] has revealed that the best-performing methods for tabular data involve models tailored to consider the particular structure of this data type. Anomaly detection methods for structured data typically use either _feature-feature_ or _sample-sample_ dependencies to identify anomalies. For instance, in [43], the authors assume a class-dependent relationship between a subset of variables in a sample's feature vector and the rest of its variables. The authors thus propose a contrastive learning framework to detect anomalies based on this assumption. Another recent method [48] identifies anomalies in tabular datasets by focusing on sample-sample dependencies.
This approach uses a variational autoencoder to estimate the _normal_ distribution and subsequently computes the influence of _normal_ samples on validation samples to construct an anomaly score. Both approaches have demonstrated competitive results for anomaly detection in tabular datasets. Recent work on supervised deep learning methods for tabular data [42; 1; 11; 46; 23] has also highlighted the importance of considering the particular structure of tabular data. In particular, in [23; 46], the authors emphasize the significance of considering both feature-feature and sample-sample dependencies for supervised regression and classification problems on tabular data. Based on the latter observation, we formulate the hypothesis that not only are feature-feature relations class-dependent, as supported by [43], but **sample-sample dependencies are also class-dependent** and can be used to identify anomalies. In particular, since interactions between samples are learned exclusively using _normal_ samples in the anomaly detection setup, they should be especially discriminative in identifying anomalies during inference. To test this hypothesis, we employ Non-Parametric Transformers (NPT) [23], first proposed for supervised tasks on tabular datasets. NPTs leverage two novel attention mechanisms to capture these relations between samples and between features: Attention Between Datapoints (ABD) and Attention Between Attributes (ABA). We show that NPTs are particularly relevant for flagging anomalies, in line with recent work [16] demonstrating the effectiveness of new deep learning architectures, such as transformers [49], for anomaly detection on tabular data. We experiment on an extensive benchmark of tabular datasets to demonstrate the capacity of our proposed approach to detect anomalies and compare our performance to existing AD methods. We obtain state-of-the-art results in terms of detection accuracy. We also test the robustness of our approach to dataset contamination and give evidence that it can serve for unsupervised anomaly detection when the training set contamination is not too severe. The present work offers the following contributions:

* We put forward the first deep anomaly detection method to combine feature-feature and sample-sample dependencies.
* Our reconstruction-based method shows state-of-the-art anomaly detection capacity on an extensive benchmark of tabular datasets.
* Our reconstruction-based method shows robustness to small data contamination.

## 2 Related works

Identifying samples that do not belong to a _normal_ distribution has been discussed in the literature under several denominations: anomaly detection, novelty detection, or outlier detection [38]. One can summarize approaches to tackle this problem in four non-exhaustive categories: density estimation, one-class classification, reconstruction-based methods, and self-supervised approaches.

**Density Estimation.** The most straightforward approach to detecting samples that do not belong to a distribution is to estimate the distribution directly and to measure the likelihood of a sample under the estimated distribution. Several approaches found in the literature have considered using non-parametric density estimation methods to estimate the density of the _normal_ distribution, such as KDE [30], GMM [35], or copulas as in COPOD [24]. Other approaches have also focused on local density estimation to detect outliers, such as the Local Outlier Factor (LOF) [5].
In inference, one flags as anomalies the samples that lie in low-probability regions under the estimated distribution.

**Reconstruction-Based Methods.** Other methods have consisted in learning to reconstruct samples that belong to the _normal_ distribution. In this framework, the model's incapacity to reconstruct a sample correctly serves as a proxy measure of abnormality. A high reconstruction error indicates that a sample does not belong to the estimated _normal_ distribution. Those approaches can involve PCA [18] or neural networks such as diverse types of autoencoders [50; 31; 6; 21], or GANs [39; 40].

**One-Class Classification.** The term _one-class classification_ was coined in [28] and describes identifying anomalies without directly estimating the _normal_ density. One-class classification involves discriminative models which directly estimate a decision boundary. For instance, in kernel-based approaches [41; 47], the authors propose to characterize the support of the _normal_ samples in a Hilbert space and to flag as anomalies the samples that lie outside of the estimated support. Similarly, recent work has extended this approach by replacing kernels with deep neural networks [36]. In the latter approach, neural networks must be constrained in their architectures to avoid model collapse, _i.e._, mapping all _normal_ samples to a single value when minimizing a one-class loss. Thus, in [7], the authors proposed regularization techniques to alleviate this issue. In [13], the authors proposed a method that involves generating, in the course of training, synthetic anomalous samples in order to learn a classifier on top of the one-class representation. Parallel to that, [25; 17] proposed one-class classification approaches using tree-based methods. Their methods rely on the assumption that anomalies are easier to _isolate_: samples can be flagged as anomalies when they are isolated from the rest of the dataset close to the root of the isolation trees.

**Self-Supervised Approaches.** Recent methods have also considered self-supervision as a means to identify anomalies. In [4], the authors apply several affine transformations to each sample and train a classifier to identify, from the transformed samples, which transformation was applied. The classifier only learns to discriminate between transformations using _normal_ transformed samples: assuming this problem is class-dependent, the classifier should fail to identify the transformation applied to anomalies. In [32], the authors propose a contrastive framework in which samples are transformed using neural mappings and are embedded in a latent semantic space using an encoder. The objective is to learn transformations so that transformed samples still share similarities with their untransformed counterparts while different transformations are easily distinguishable. The contrastive loss then serves as the anomaly score in inference. Similarly, [43] also proposes a contrastive framework in which samples are identified as anomalies based on their inter-feature relations. Other self-supervised approaches, such as [45; 33], have focused on representation learning to foster the performance of one-class classification models.

**Attention Mechanisms.** First introduced in [49], the concept of attention has become ubiquitous in the machine learning literature.
Scholars have successfully applied transformers to a broad range of tasks, including computer vision, _e.g._, image generation with the Image Transformer [29] or image classification with the Vision Transformer (ViT) [10], natural language processing, _e.g._, Masked Language Models (MLM) such as BERT [9], and classification tasks on structured datasets [46; 23].

**Deep Learning for Tabular Data.** Despite the effectiveness of deep learning models for numerous tasks involving unstructured data, non-deep models remain the prevalent choice for machine learning tasks such as classification and regression on tabular data [14; 44]. However, in recent years scholars have shown that one can successfully resort to deep learning methods for various tasks on tabular datasets. For instance, in [42], the authors discuss how regularization is crucial in training a deep learning model tailored for tabular data. Hence, they propose a new regularization loss to accommodate the variability between features. Similarly, [20] shows that correctly selecting a combination of regularization techniques can suffice for a Multi-Layer Perceptron (MLP) to compete with GBDT. Finally, [46; 23] propose deep learning models based on attention mechanisms that rely on feature-feature, feature-label, sample-sample, and sample-label attention. Both models achieve competitive results on several baseline datasets and emphasize the role of sample-sample interactions in classifying samples correctly.

## 3 Method

In this section, we discuss the learning objective used to optimize the parameters of our model; we then briefly present the mechanisms involved in Non-Parametric Transformers [23], the core model used in our approach; and finally, we present NPT-AD, our method to derive an anomaly score.

### Learning Objective

Reconstruction-based approaches for anomaly detection involve training a model to accurately reconstruct _normal_ samples while failing to reconstruct anomalous samples. Such methods effectively identify anomalies by exploiting differences in the underlying data distributions between _normal_ and anomalous samples. Let \(\mathcal{D}_{train}=\{\mathbf{x}_{i}\in\mathbb{R}^{d}\}_{i=1}^{n}\) represent the training set composed of \(n\) _normal_ samples with \(d\) features. Standard reconstruction-based approaches consider the task of learning a mapping \(\phi_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) to minimize a reconstruction loss. The parameters \(\theta\in\Theta\) are optimized to reconstruct each sample \(\mathbf{x}\in\mathbb{R}^{d}\) in the training set with minimal error. Formally, the overall objective can be expressed as

\[\min_{\theta\in\Theta}\sum_{\mathbf{x}\in\mathcal{D}_{train}}d(\mathbf{x},\phi_{\theta}(\mathbf{x})), \tag{1}\]

where \(d(\mathbf{x},\phi_{\theta}(\mathbf{x}))\) measures how well the model reconstructs sample \(\mathbf{x}\). The latter is often set to be a distance measure such as the Euclidean distance. The AD method proposed in [43] employs a masking strategy that maximizes the mutual information between each sample and its masked-out part by minimizing a contrastive loss. Recently, [22] demonstrated how stochastic masking [9] also maximizes mutual information, thereby establishing a link between the method of [43] and stochastic masking. In stochastic masking, each entry in a sample vector \(\mathbf{x}\in\mathbb{R}^{d}\) is masked with probability \(p_{mask}\), and the objective task is to predict the masked-out features from the unmasked features.
Formally, let \(m\in\mathbb{R}^{d}\) be a binary vector taking value \(1\) when the corresponding entry in \(\mathbf{x}\) is masked, let \(\mathbf{x}^{m}=\{x_{j}:m_{j}=1\}\) represent the masked entries of sample \(\mathbf{x}\), and let \(\mathbf{x}^{o}=\{x_{j}:m_{j}=0\}\) denote the complement of \(\mathbf{x}^{m}\), composed of the observed features of sample \(\mathbf{x}\). In this framework, the objective in eq. 1 is modified to

\[\min_{\theta\in\Theta}\sum_{\mathbf{x}\in\mathcal{D}_{train}}d(\mathbf{x}^{m},\phi_{\theta}(\mathbf{x}^{o})), \tag{2}\]

where \(\phi_{\theta}(\mathbf{x}^{o})\) denotes the masked features of sample \(\mathbf{x}\) as reconstructed by the model. Our proposed approach leverages the entire dataset in a non-parametric manner to reconstruct masked features. This method considers feature-feature interactions and also captures relationships between samples to optimize the reconstruction objective. Let \(\mathbf{X}\in\mathbb{R}^{n\times d}\) denote the dataset matrix, consisting of \(n\) training samples with \(d\) features. We introduce the matrix equivalents of \(m\), \(\mathbf{x}^{m}\), and \(\mathbf{x}^{o}\), denoted as \(\mathbf{M}\), \(\mathbf{X}^{M}\), and \(\mathbf{X}^{O}\), respectively, all in \(\mathbb{R}^{n\times d}\). The reconstruction objective described in eq. 2 can then be reformulated as

\[\min_{\theta\in\Theta}\sum_{\mathbf{x}\in\mathcal{D}_{train}}d\left(\mathbf{x}^{m},\phi_{\theta}\left(\mathbf{x}^{o}\mid\mathbf{X}^{O}\right)\right). \tag{3}\]
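For illustration, the masked-reconstruction objective of eqs. (2)-(3) reduces, for numerical features, to a squared error evaluated on the masked entries only. A minimal PyTorch sketch, where the model interface and the masking rate are our assumptions:

```python
import torch

def masked_reconstruction_loss(model, x, p_mask=0.15):
    """Stochastic-masking objective on a batch x of shape (n, d).

    `model(x_obs, m)` is assumed to return an (n, d) reconstruction of the
    masked entries, conditioning on all rows of the batch (as an NPT does)."""
    m = (torch.rand_like(x) < p_mask).float()   # 1 = masked entry
    x_obs = x * (1.0 - m)                       # observed part; masked entries zeroed
    x_hat = model(x_obs, m)
    # squared error on masked entries only, normalized by the number of masks
    return (m * (x_hat - x) ** 2).sum() / m.sum().clamp(min=1.0)
```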
### Non-parametric transformer (NPT)

We resort to the Non-Parametric Transformer (NPT) [23] as the core model for our approach, denoted as \(\phi_{\theta}\) in Section 3.1. NPT involves both attention between features and attention between samples, allowing it to capture feature-feature and sample-sample dependencies. More precisely, two mechanisms involved in NPTs allow anomalies to be identified: Attention Between Datapoints (ABD) and Attention Between Attributes (ABA). Both attention mechanisms rely on multi-head self-attention (MHSA), which was first introduced in the natural-language processing literature [3, 9, 49]. We discuss MHSA more thoroughly in App. A and only detail in this section the two mechanisms put forward in [23]. As an input, NPT receives both the dataset and a masking matrix \((\mathbf{X},\mathbf{M})\in\mathbb{R}^{n\times d}\times\mathbb{R}^{n\times d}\). Before feeding the input to the NPT, we pass each of the \(n\) data samples through a linear embedding layer to obtain an \(e\)-dimensional embedding for each feature. Thus, as an input, NPT receives a representation \(\mathbf{H}^{0}\in\mathbb{R}^{n\times d\times e}\). A sequence of MHSA layers is applied to the input, alternating between ABA and ABD. The model then outputs a prediction \(\widehat{\mathbf{X}}\in\mathbb{R}^{n\times d}\) for the masked features while keeping the unmasked features unchanged.

**Attention Between Datapoints (ABD).** This is the key feature that differentiates NPT from standard transformer models. This mechanism captures **pairwise relations between data samples** [23]. Consider as an input to the ABD layer the previous layer representation \(\mathbf{H}^{(\ell)}\in\mathbb{R}^{n\times d\times e}\), flattened to \(\mathbb{R}^{n\times h}\) where \(h=d\cdot e\). Then, NPT applies MHSA, as seen in equation 12 in appendix A, between the data samples' flattened representations \(\{\mathbf{H}_{i}^{(\ell)}\in\mathbb{R}^{1\times h}|i\in 1,\dots,n\}\).

\[\text{ABD}(\mathbf{H}^{(\ell)})=\text{MHSA}(\mathbf{H}^{(\ell)})=\mathbf{H}^{(\ell+1)}\in\mathbb{R}^{n\times h} \tag{4}\]

After applying ABD, the data representation is reshaped to its original dimension in \(\mathbb{R}^{n\times d\times e}\).

**Attention Between Attributes (ABA).** As already discussed, NPT alternates between ABD and ABA layers. ABA layers should help learn per-data-sample representations complementing the inter-sample representations [23]. In contrast with ABD, ABA consists in applying MHSA independently to each row in \(\mathbf{H}^{(\ell)}\), _i.e._, to each data sample's intermediate representation \(\mathbf{H}^{(\ell)}_{i}\in\mathbb{R}^{d\times e},i\in\{1,\ldots,n\}\).

\[\text{ABA}(\mathbf{H}^{(\ell)})=\underset{\text{axis-}n}{\text{stack}}\left(\text{MHSA}(\mathbf{H}^{(\ell)}_{1}),\ldots,\text{MHSA}(\mathbf{H}^{(\ell)}_{n})\right)\in\mathbb{R}^{n\times d\times e} \tag{5}\]

### Anomaly score

We directly derive the anomaly score from the loss optimized during training. For numerical features, the loss corresponds to the squared difference between the reconstructed feature and its actual value; for categorical features, we use the cross-entropy loss. The anomaly score relies on our model's capacity to reconstruct masked features correctly and assumes that the model should better reconstruct _normal_ samples. Two reasons support this assumption. First, relations between features are class-dependent, as supported by [43]; having observed only _normal_ samples in the training phase, the model should be unable to leverage the learned feature-feature interactions to properly reconstruct anomalies. Second, the sample-sample interactions seen by the model only correspond to interactions between _normal_ samples, making it difficult to successfully exploit interactions between _normal_ samples and anomalies. As detailed in Figure 1, we consider \(m\) \(d\)-dimensional deterministic mask vectors that designate which of the \(d\) features of _each_ validation sample will be hidden. We set the maximum number of features to be masked simultaneously to \(r\), and construct \(m=\sum_{k=1}^{r}\binom{d}{k}\) masks. Each mask is applied to each validation sample \(\mathbf{z}\in\mathcal{D}^{val}\) to obtain \(m\) different masked versions \(\{\mathbf{z}^{(1)},\ldots,\mathbf{z}^{(m)}\}\) of the original sample \(\mathbf{z}\). We use the whole unmasked training set2 \(\mathcal{D}^{train}\) to predict the masked features of each sample for each of the \(m\) mask vectors and construct the anomaly score for a validation sample \(\mathbf{z}\) as

Footnote 2: For large datasets, we resort to a random subsample of the training set for computational reasons.

\[\text{NPT-AD}(\mathbf{z};\mathcal{D}^{train})=\frac{1}{m}\sum_{k=1}^{m}\mathcal{L}_{features}(\mathbf{z}^{(k)};\mathcal{D}^{train}), \tag{6}\]

where \(\mathcal{L}_{features}(\mathbf{z}^{(k)};\mathcal{D}^{train})\) designates the loss for the sample \(\mathbf{z}\) with mask \(k\). We also considered other forms of aggregation, such as the maximum loss over all masks.

Figure 1: NPT-AD Inference Pipeline. In step (a), mask \(j\) is applied to each validation sample. We construct a matrix \(\mathbf{X}\) composed of the masked validation samples and the whole _unmasked_ training set. In step (b), we feed \(\mathbf{X}\) to the Non-Parametric Transformer (NPT), which tries to reconstruct the masked features for each validation sample. On top of the learned feature-feature interactions, NPT will use the unmasked training samples to reconstruct the masked features. In step (c), we compute the reconstruction error, which we later aggregate into the NPT-AD score.
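A minimal sketch of this scoring procedure (eq. 6 and Figure 1); the model's forward interface, which we assume returns one reconstruction loss per row, is our assumption rather than the actual NPT API:

```python
import itertools
import torch

def build_masks(d, r):
    """All binary masks hiding 1..r of the d features (m = sum_k C(d, k))."""
    masks = []
    for k in range(1, r + 1):
        for idx in itertools.combinations(range(d), k):
            mask = torch.zeros(d)
            mask[list(idx)] = 1.0
            masks.append(mask)
    return torch.stack(masks)  # shape (m, d)

@torch.no_grad()
def npt_ad_score(model, z, x_train, masks):
    """Mean masked-reconstruction loss of validation sample z over all masks.

    z: (d,) validation sample; x_train: (n, d) unmasked training set;
    `model(X, M)` is assumed to return one reconstruction loss per row."""
    losses = []
    for m in masks:
        X = torch.cat([z.unsqueeze(0), x_train], dim=0)       # masked z + train set
        M = torch.cat([m.unsqueeze(0),
                       torch.zeros_like(x_train)], dim=0)     # mask only row 0
        losses.append(model(X, M)[0])                         # loss of z under mask m
    return torch.stack(losses).mean()
```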
## 4 Experiments

**Datasets.** We experiment on an extensive benchmark of tabular datasets following previous work [43]. The benchmark comprises four datasets widely used in the anomaly detection literature and a wide range of tabular datasets used in [43]. The first group of datasets comprises two small medical datasets, namely Arrhythmia and Thyroid, and two larger cyber-intrusion datasets, KDD and KDDRev. The second group of datasets, the "Multi-dimensional point datasets", is obtained from the Outlier Detection DataSets (ODDS)3 and contains 28 datasets (notice that, following [43], we omit the datasets Heart and Yeast). See App. B for more detail on the datasets' characteristics.

Footnote 3: http://odds.cs.stonybrook.edu/

**Experimental settings.** Per the literature [54, 4], we construct the training set with a random subsample of the _normal_ samples representing 50% of the _normal_ samples, and we concatenate the remaining 50% with the entire set of anomalies to constitute the validation set. Following previous work [4, 43], the decision threshold for the NPT-AD score is chosen such that the number of predicted anomalies is equal to the number of existing anomalies. We report the results in Tables 1, 2, and 6 in App. C. Most metrics are obtained from [43], apart from NeuTraL-AD [32], which we trained using their official code made available online. We evaluate the different methods using both the F1-Score (\(\uparrow\)) and AUROC (\(\uparrow\)) metrics. We compare our method to both recent deep methods, namely GOAD [4], DROCC [12], NeuTraL-AD [32], and the contrastive approach proposed in [43], and classical non-deep methods such as Isolation Forest [25], KNN, RRCF [15], and COPOD [24]. We refer the reader to [43] for implementation details of the non-deep models. Notice that for DROCC [12], GOAD [4], and NeuTraL-AD [32], we report in Table 1 the architecture that obtained the highest mean F1-Score. The metrics obtained for the other architectures are detailed in Tables 8, 9, and 10 in App. C. The mean rank, provided in Tables 1 and 2, was computed including each architecture of each approach. Following the literature, we report the average metrics over 20 runs for the ODDS datasets, except for the larger datasets (KDD and KDDRev), for which we report an average over \(10\) runs due to computational limitations. Our model was trained for each dataset on 4 or 8 Nvidia V100 GPUs (16 GB/32 GB), depending on the dataset dimension. For each dataset, we considered the same NPT architecture composed of \(4\) layers alternating between Attention Between Datapoints and Attention Between Attributes, with \(4\) attention heads. Per [23], we consider a row-wise feed-forward (rFF) network with one hidden layer, a 4x expansion factor, and GeLU activation, and we also include dropout with \(p=0.1\) for both attention weights and hidden layers. We used LAMB [52] with \(\beta=(0.9,0.999)\) as the optimizer and also included a Lookahead [53] wrapper with slow update rate \(\alpha=0.5\) and \(k=6\) steps between updates, as in [23]. Similarly, following [23], we consider a flat-then-anneal learning rate schedule: flat at the base learning rate for 70% of the steps, then annealing following a cosine schedule to 0 by the end of the training phase; we set gradient clipping at 1.
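The flat-then-anneal schedule has a simple closed form; a minimal sketch of the learning-rate multiplier, usable for instance with `torch.optim.lr_scheduler.LambdaLR`:

```python
import math

def flat_then_anneal(step, total_steps, flat_frac=0.7):
    """LR multiplier: 1.0 for the first 70% of steps, then a cosine
    anneal to 0 by the end of training."""
    flat_steps = int(flat_frac * total_steps)
    if step < flat_steps:
        return 1.0
    progress = (step - flat_steps) / max(1, total_steps - flat_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))
```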
We chose \(r\) in accordance with the masking probability \(p_{mask}\) used during training and the total number of features \(d\). We hypothesized that an excessively high value of \(r\) for a low \(p_{mask}\) would pollute the anomaly score with reconstructions too challenging for the model, leading to high reconstruction errors for both _normal_ samples and anomalies. We detail in App. B.2 the varying hyperparameters used for each dataset in our experiments. Notice that for most datasets, the hyperparameters remain unchanged. Variations of the hyperparameters are motivated by a swifter convergence of the training loss or by computational costs for larger datasets. Each experiment can be replicated using the code made available on github4.

Footnote 4: [https://github.com/hugothimonier/NPT-AD/](https://github.com/hugothimonier/NPT-AD/)

**Results.** As seen in tables 1 and 2, our model surpasses existing methods on most datasets by a significant margin in terms of the F1-Score. Moreover, our approach displays the highest mean F1-Score and mean rank over all datasets out of the \(17\) tested approaches. The approach of [43] achieves the second-highest mean F1-Score and mean rank over all datasets. Also, our approach displays a smaller variance than competing methods, except for COPOD, which performs significantly worse than our approach regarding the F1-Score. The smaller variance could originate from the fact that our model uses, in a non-parametric fashion, the training set in the inference phase. This contributes to flattening the variations in the anomaly score attributed to discrepancies in the model's weights between runs. We also display in table 6 in App. C the AUROC for the same experiments and observe that we obtain the highest mean AUROC while also displaying a smaller variance than the other tested approaches.

## 5 Discussion

### Training set contamination

Real-life anomaly detection applications often involve contaminated training sets; anomaly detection models must therefore be robust to small levels of dataset contamination. We experimented using a synthetic dataset to evaluate how much NPT-AD suffers from dataset contamination compared to recent deep AD methods. We constructed a synthetic dataset using two perfectly separable distributions for _normal_ and anomaly samples. Our training set contained \(900\) _normal_ samples, and we kept aside \(100\) anomaly samples that we could add to the training set. We considered \(11\) different training sets with contamination shares ranging from \(0\)% to \(10\)% with a \(1\)% step, while keeping the validation set constant with a fixed composition of \(10\)% anomalies and \(90\)% _normal_ samples. We display the results of this experiment in Figure 2, in which we show how the performance of NPT-AD varies as the contamination share increases, in comparison with NeuTraL-AD [32], GOAD [4] and the internal contrastive approach of [43]. We did not include DROCC [12] in the latter figure since its large error bars made the graph difficult to read. We display the figure containing all five approaches, including DROCC [12], in Figure 3 in App. C. Our experimental results show that, as expected, the performance of NPT-AD deteriorates as the proportion of anomalies in the training set \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & DROCC & GOAD & NeuTraL-AD & Internal Cont. & NPT-AD \\ & (abalone) & (thyroid) & (arrhyth.)
& & \\ \hline Wine & \(63.0\pm 20.0\) & \(67.0\pm 9.4\) & \(78.2\pm 4.5\) & \(\mathbf{90.0}\pm 6.3\) & \(72.5\pm 7.7\) \\ Lymphho & \(65.0\pm 5.0\) & \(68.3\pm 13.0\) & \(20.0\pm 18.7\) & \(86.7\pm 6.0\) & \(\mathbf{94.2}\pm 7.9\) \\ Glass & \(14.5\pm 11.1\) & \(12.7\pm 3.9\) & \(9.0\pm 4.4\) & \(\mathbf{27.2}\pm 10.6\) & \(\mathbf{26.2}\pm 10.9\) \\ Vertebral & \(\mathbf{9.3}\pm 6.1\) & \(\mathbf{16.3}\pm 9.6\) & \(3.8\pm 1.2\) & \(\mathbf{26.0}\pm 7.7\) & \(20.3\pm 4.8\) \\ Wbc & \(9.0\pm 6.2\) & \(\mathbf{66.2}\pm 2.9\) & \(60.9\pm 5.6\) & \(\mathbf{67.6}\pm 3.6\) & \(\mathbf{67.3}\pm 1.7\) \\ Ecoli & N/A & \(61.4\pm 31.7\) & \(7.0\pm 7.1\) & \(70.0\pm 7.8\) & \(\mathbf{77.7}\pm 0.1\) \\ Ionosph. & \(76.9\pm 2.8\) & \(83.4\pm 2.6\) & \(90.6\pm 2.4\) & \(\mathbf{93.2}\pm 1.3\) & \(\mathbf{92.7}\pm 0.6\) \\ Arrhyth. & \(37.1\pm 6.8\) & \(52.0\pm 2.3\) & \(59.5\pm 2.6\) & \(\mathbf{61.8}\pm 1.8\) & \(60.4\pm 1.4\) \\ Breastw & \(93.0\pm 3.7\) & \(\mathbf{96.0}\pm 0.6\) & \(91.8\pm 1.3\) & \(\mathbf{96.1}\pm 0.7\) & \(95.7\pm 0.3\) \\ Pima & \(66.0\pm 4.1\) & \(66.0\pm 3.1\) & \(60.3\pm 1.4\) & \(59.1\pm 2.2\) & \(\mathbf{68.8}\pm 0.6\) \\ Vowels & \(66.2\pm 8.8\) & \(31.1\pm 4.2\) & \(10.0\pm 6.2\) & \(\mathbf{90.8}\pm 1.6\) & \(88.7\pm 1.6\) \\ Letter & \(55.6\pm 3.6\) & \(20.7\pm 1.7\) & \(5.7\pm 0.8\) & \(62.8\pm 2.4\) & \(\mathbf{71.4}\pm 1.9\) \\ Cardio & \(49.8\pm 3.2\) & \(\mathbf{78.6}\pm 2.5\) & \(45.5\pm 4.3\) & \(71.0\pm 2.4\) & \(\mathbf{78.1}\pm 0.1\) \\ Seismic & \(19.1\pm 0.9\) & \(24.1\pm 1.0\) & \(11.8\pm 4.3\) & \(20.7\pm 1.9\) & \(\mathbf{26.2}\pm 0.7\) \\ Musk & \(99.4\pm 1.5\) & \(\mathbf{100.0}\pm 0.0\) & \(99.0\pm 0.0\) & \(\mathbf{100.0}\pm 0.0\) & \(\mathbf{100.0}\pm 0.0\) \\ Speech & \(4.3\pm 2.0\) & \(4.8\pm 2.3\) & \(4.7\pm 1.4\) & \(5.2\pm 1.2\) & \(\mathbf{9.3}\pm 0.8\) \\ Thyroid & \(72.7\pm 3.1\) & \(72.5\pm 2.8\) & \(69.4\pm 1.4\) & \(76.8\pm 1.2\) & \(\mathbf{77.0}\pm 0.6\) \\ Abalone & \(17.9\pm 1.3\) & \(57.6\pm 2.2\) & \(53.2\pm 4.0\) & \(\mathbf{68.7}\pm 2.3\) & \(59.7\pm 0.1\) \\ Optdigits & \(30.5\pm 5.2\) & \(0.3\pm 0.3\) & \(16.2\pm 7.3\) & \(\mathbf{66.3}\pm 10.1\) & \(\mathbf{62.0}\pm 2.7\) \\ Satimage2 & \(4.8\pm 1.6\) & \(90.7\pm 0.7\) & \(92.3\pm 1.9\) & \(92.4\pm 0.7\) & \(\mathbf{94.8}\pm 0.8\) \\ Satellite & \(52.2\pm 1.5\) & \(64.2\pm 0.8\) & \(71.6\pm 0.6\) & \(73.2\pm 1.6\) & \(\mathbf{74.6}\pm 0.7\) \\ Pendigits & \(11.0\pm 2.6\) & \(40.1\pm 5.0\) & \(69.8\pm 8.7\) & \(82.3\pm 4.5\) & \(\mathbf{92.5}\pm 1.3\) \\ Ananthyr. & \(64.2\pm 3.3\) & \(50.3\pm 6.3\) & \(44.1\pm 2.3\) & \(45.4\pm 1.8\) & \(57.7\pm 0.6\) \\ Mnist & N/A & \(66.9\pm 1.3\) & \(84.8\pm 0.5\) & \(85.9\pm 0.0\) & \(\mathbf{71.8}\pm 0.3\) \\ Mammo. 
& \(32.6\pm 2.1\) & \(33.7\pm 6.1\) & \(19.2\pm 2.4\) & \(29.4\pm 1.4\) & \(\mathbf{43.6}\pm 0.5\) \\ Shuttle & N/A & \(73.5\pm 5.1\) & \(97.9\pm 0.2\) & \(\mathbf{98.4}\pm 0.1\) & \(\mathbf{98.2}\pm 0.3\) \\ Mullcross & N/A & \(99.7\pm 0.8\) & \(96.3\pm 10.5\) & \(\mathbf{100.0}\pm 0\) & \(\mathbf{100.0}\pm 0.0\) \\ Forest & N/A & \(0.1\pm 0.2\) & \(51.6\pm 8.2\) & \(44.0\pm 4.1\) & \(\mathbf{58.0}\pm 10\) \\ Kdd & N/A & \(79.6\pm 3.9\) & \(96.9\pm 2.0\) & \(\mathbf{99.4}\pm 0.1\) & \(98.7\pm 0.3\) \\ Kddrev & N/A & \(98.0\pm 0.1\) & \(96.5\pm 1.5\) & \(\mathbf{99.2}\pm 0.3\) & \(98.5\pm 0.1\) \\ \hline mean & \(33.6\) & \(55.9\) & \(53.9\) & \(69.7\) & \(\mathbf{71.2}\) \\ mean std & \(4.6\) & \(4.2\) & \(3.9\) & \(2.9\) & \(\mathbf{2.0}\) \\ mean rank & \(10.7\) & \(7.6\) & \(9.0\) & \(3.2\) & \(\mathbf{3.0}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Deep Models: Anomaly Detection F1-Score (\(\uparrow\)). We perform a 5% T-test to test whether the difference between the highest metrics for each dataset is statistically significant.

increases. For contamination shares lower than 2% (resp. \(4\)%), the F1-Score (resp. AUROC) remains close to its maximum value of \(100\)%. However, the F1-Score and AUROC deteriorate significantly for higher contamination levels while displaying a higher standard deviation. When anomalies constitute \(10\)% of the training set, our approach achieves an average F1-Score slightly lower than \(50\)% and an average AUROC of \(87\)%. We observe that NPT-AD suffers less from dataset contamination than [43] and DROCC [12] for both the F1-Score and AUROC. We also notice that DROCC [12] and the approach proposed in [43] are particularly sensitive to dataset contamination regarding the F1-Score in comparison with NeuTraL-AD [32], GOAD [4] and NPT-AD, even for low contamination shares. Finally, this experiment also highlights that NeuTraL-AD [32] appears significantly more robust than the other tested deep methods to training set contamination, even for large contamination values.

### Sample-sample dependencies ablation study

To investigate the impact of sample-sample dependencies on the effectiveness of our proposed model in detecting anomalies, we conduct an ablation study by shuffling the columns of the unmasked training samples used to reconstruct the test samples. This process prevents the NPT from attending to other samples for reconstructing the masked features, as discussed in [23]. We perform this study on a subset of datasets and present the results in Table 3. Our findings indicate a significant decrease in F1-score for the tested datasets, while the AUROC is less affected.
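One plausible reading of this shuffling ablation is sketched below; the variable names and tensor layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shuffle_context_columns(train_X: np.ndarray, seed: int = 0) -> np.ndarray:
    # Permute each column of the unmasked training context independently.
    # Feature-wise marginal distributions are preserved, but the rows no
    # longer correspond to real samples, so attending to them conveys no
    # usable sample-sample information.
    rng = np.random.default_rng(seed)
    shuffled = train_X.copy()
    for j in range(shuffled.shape[1]):
        shuffled[:, j] = rng.permutation(shuffled[:, j])
    return shuffled
```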
The smaller variation of the AUROC is in line with [8] in which authors highlight that the AUROC may fail to account for \begin{table} \begin{tabular}{l l l l l l l} \hline \hline Method & COPOD & IForest & KNN & PIDForest & RRCF & NPT-AD \\ \hline Wine & \(60.0{\pm}4.5\) & \(64.0{\pm}12.8\) & \(\mathbf{94.0}{\pm}4.9\) & \(50.0{\pm}6.4\) & \(69.0{\pm}11.4\) & \(72.5{\pm}7.3\) \\ Lympho & \(85.0{\pm}5.0\) & \(71.7{\pm}7.6\) & \(80.0{\pm}11.7\) & \(70.0{\pm}0.0\) & \(36.7{\pm}18.0\) & \(\mathbf{94.2}{\pm}7.9\) \\ Glass & \(11.1{\pm}0.0\) & \(11.1{\pm}0.0\) & \(11.1{\pm}9.7\) & \(8.9{\pm}6.0\) & \(15.6{\pm}13.3\) & \(\mathbf{26.2}{\pm}10.9\) \\ Vertebral & \(1.7{\pm}1.7\) & \(13.0{\pm}3.8\) & \(10.0{\pm}4.5\) & \(12.0{\pm}5.2\) & \(8.0{\pm}4.8\) & \(\mathbf{20.3}{\pm}4.8\) \\ Wbc & \(\mathbf{71.4}{\pm}0.0\) & \(70.0{\pm}3.7\) & \(63.8{\pm}2.3\) & \(65.7{\pm}3.7\) & \(54.8{\pm}6.1\) & \(67.3{\pm}1.7\) \\ Ecoli & \(25.6{\pm}11.2\) & \(58.9{\pm}22.2\) & \(\mathbf{77.8}{\pm}3.3\) & \(25.6{\pm}11.2\) & \(28.9{\pm}11.3\) & \(\mathbf{77.7}{\pm}0.1\) \\ Ionosphere & \(70.8{\pm}1.8\) & \(80.8{\pm}2.1\) & \(88.6{\pm}1.6\) & \(67.1{\pm}3.9\) & \(72.0{\pm}1.8\) & \(\mathbf{92.7}{\pm}0.6\) \\ Arrhythmia & \(58.2{\pm}1.4\) & \(60.9{\pm}3.3\) & \(\mathbf{61.8}{\pm}2.2\) & \(22.7{\pm}2.5\) & \(50.6{\pm}3.3\) & \(60.4{\pm}1.4\) \\ Breastw & \(96.4{\pm}0.6\) & \(\mathbf{97.2}{\pm}0.5\) & \(96.0{\pm}0.7\) & \(70.6{\pm}7.6\) & \(63.0{\pm}1.8\) & \(95.7{\pm}0.3\) \\ Pima & \(62.3{\pm}1.1\) & \(\mathbf{69.6}{\pm}1.2\) & \(65.3{\pm}1.0\) & \(65.9{\pm}2.9\) & \(55.4{\pm}1.7\) & \(68.8{\pm}0.6\) \\ Vowels & \(4.8{\pm}1.0\) & \(25.8{\pm}4.7\) & \(64.4{\pm}3.7\) & \(23.2{\pm}3.2\) & \(18.0{\pm}4.6\) & \(\mathbf{88.7}{\pm}1.6\) \\ Letter & \(12.9{\pm}0.7\) & \(15.6{\pm}3.3\) & \(45.0{\pm}2.6\) & \(14.2{\pm}2.3\) & \(17.4{\pm}2.2\) & \(\mathbf{71.4}{\pm}1.9\) \\ Cardio & \(65.0{\pm}1.4\) & \(73.5{\pm}4.1\) & \(67.6{\pm}0.9\) & \(43.0{\pm}2.5\) & \(43.9{\pm}2.7\) & \(\mathbf{78.1}{\pm}0.1\) \\ Seismic & \(29.2{\pm}1.3\) & \(\mathbf{73.9}{\pm}1.5\) & \(30.6{\pm}1.4\) & \(29.2{\pm}1.6\) & \(24.1{\pm}3.2\) & \(26.2{\pm}0.7\) \\ Musk & \(49.6{\pm}1.2\) & \(52.0{\pm}15.3\) & \(\mathbf{100.0}{\pm}0.0\) & \(35.4{\pm}0.0\) & \(38.4{\pm}6.5\) & \(\mathbf{100}{\pm}0.0\) \\ Speech & \(3.3{\pm}0.0\) & \(4.9{\pm}1.9\) & \(5.1{\pm}1.0\) & \(2.0{\pm}1.9\) & \(3.9{\pm}2.8\) & \(\mathbf{9.3}{\pm}0.8\) \\ Thyroid & \(30.8{\pm}0.5\) & \(\mathbf{78.9}{\pm}2.7\) & \(57.3{\pm}1.3\) & \(72.0{\pm}3.2\) & \(31.9{\pm}4.7\) & \(77.0{\pm}0.6\) \\ Abalone & \(50.3{\pm}6.4\) & \(53.4{\pm}1.7\) & \(43.4{\pm}4.8\) & \(58.6{\pm}1.6\) & \(36.9{\pm}6.4\) & \(\mathbf{59.7}{\pm}0.1\) \\ Optdigits & \(3.0{\pm}0.3\) & \(15.8{\pm}4.3\) & \(\mathbf{90.0}{\pm}1.2\) & \(22.5{\pm}16.8\) & \(1.3{\pm}0.7\) & \(62.0{\pm}2.7\) \\ Satimage2 & \(77.9{\pm}0.9\) & \(86.5{\pm}1.7\) & \(93.8{\pm}1.2\) & \(35.5{\pm}0.4\) & \(47.9{\pm}3.4\) & \(\mathbf{94.8}{\pm}0.8\) \\ Satellite & \(56.7{\pm}0.2\) & \(69.6{\pm}0.5\) & \(\mathbf{76.3}{\pm}0.4\) & \(46.9{\pm}3.7\) & \(55.4{\pm}1.3\) & \(74.6{\pm}0.7\) \\ Pendigits & \(34.9{\pm}0.6\) & \(52.1{\pm}6.4\) & \(91.0{\pm}1.4\) & \(44.6{\pm}5.3\) & \(16.3{\pm}2.6\) & \(\mathbf{92.5}{\pm}1.3\) \\ Annthyroid & \(31.5{\pm}0.5\) & \(57.3{\pm}1.3\) & \(37.8{\pm}0.6\) & \(\mathbf{65.4}{\pm}2.7\) & \(32.1{\pm}0.8\) & \(57.7{\pm}0.6\) \\ Mnist & \(38.5{\pm}0.4\) & \(51.2{\pm}2.5\) & \(69.4{\pm}0.9\) & \(32.6{\pm}5.7\) & \(33.5{\pm}1.7\) & \(\mathbf{71.8}{\pm}0.3\) \\ Mammo. 
& \(\mathbf{53.4}{\pm}0.9\) & \(39.0{\pm}3.3\) & \(38.8{\pm}1.5\) & \(28.1{\pm}4.3\) & \(27.1{\pm}1.9\) & \(43.6{\pm}0.5\) \\ Shuttle & \(96.0{\pm}0.0\) & \(96.4{\pm}0.8\) & \(97.3{\pm}0.2\) & \(70.7{\pm}1.0\) & \(32.0{\pm}2.2\) & \(\mathbf{98.2}{\pm}0.3\) \\ Mullcross & \(66.0{\pm}0.1\) & \(99.1{\pm}0.5\) & \(\mathbf{100.0}{\pm}0.0\) & \(67.4{\pm}2.1\) & & \\ \hline \hline \end{tabular} \end{table} Table 2: Non-deep models: Anomaly Detection F1-Score (\(\uparrow\)).

precision or recall variations in the presence of imbalanced classes. This analysis demonstrates the importance of leveraging sample-sample dependencies for effective anomaly detection on tabular data.

## 6 Limitations and Conclusion

**Limitations.** As with most non-parametric models, NPT-AD tends to display higher complexity than parametric approaches. NPT-AD can scale well for datasets with a reasonable number of features \(d\); however, for large values of \(d\), our approach involves a high computational cost in terms of memory and time. This cost originates from the complexity of NPT [23] itself and from how the anomaly score is derived.

**Conclusion.** In this work, we have proposed a novel deep anomaly detection method designed explicitly for tabular datasets. To the best of our knowledge, our approach is the first to utilize both feature-feature and sample-sample dependencies to identify anomalies. Using an extensive benchmark of tabular datasets, our experiments have demonstrated the effectiveness of our approach, outperforming existing state-of-the-art methods in terms of F1-score and AUROC. Our experiments further demonstrate the robustness of our method to small levels of training set contamination. This work emphasizes the importance of leveraging sample-sample dependencies to detect anomalies on tabular datasets effectively. Overall, our work invites further exploration of the potential of NPTs for other tasks on tabular data.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & Mammo. & Glass & BreastW & Pendigits \\ \hline \(\Delta F1\) & \(-1.0\) & \(-9.6\) & \(-0.5\) & \(-2.8\) \\ \(\Delta\)AUROC & \(-0.1\) & \(-0.1\) & \(-0.1\) & \(-0.1\) \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study. Variation of the F1-Score and AUROC when preventing NPT from attending to sample-sample interactions. Average difference over 20 runs. All hyperparameters are kept unchanged.

Figure 2: Training set contamination impact on the F1-Score and AUROC. Each model was trained \(5\) times for each contamination share. The architecture used for NPT-AD is the same as for all experiments (see section 4). The NPT was trained for \(100\) epochs with batch size equal to the dataset size, with learning rate \(0.01\), optimizer LAMB [52] with \(\beta=(0.9,0.999)\), per-feature embedding dimension \(16\), \(r\) set to \(1\), and masking probability \(p_{mask}=0.15\). NeuTraL-AD [32] and GOAD [4] were trained with the hyperparameters used for the thyroid dataset in their original papers, and [43] with the default parameters of its implementation.

**Acknowledgment.** This work was granted access to the HPC resources of IDRIS under the allocation 2023-101424 made by GENCI. This research publication is supported by the Chair "Artificial intelligence applied to credit card fraud detection and automated trading" led by CentraleSupelec and sponsored by the LUSIS company. The authors would also like to thank Gabriel Kasmi for his helpful advice and feedback and Julien Despois for proofreading the final manuscript.
2301.08487
Effect of multiple scattering on the Transmission spectra and the Polarization phase curves for Earth-like Exoplanets
It is the most appropriate time to characterize the Earth-like exoplanets in order to detect biosignature beyond the Earth because such exoplanets will be the prime targets of big-budget missions like JWST, Roman Space Telescope, HabEx, LUVOIR, TMT, ELT, etc. We provide models for the transmission spectra of the Earth-like exoplanets by incorporating effects of multiple scattering. For this purpose we numerically solve the full multiple-scattering radiative transfer equations instead of using Beer-Bouguer-Lambert's law that doesn't include the diffuse radiation due to scattering. Our models demonstrate that the effect of this diffuse transmission radiation can be observationally significant, especially in the presence of clouds. We also calculate the reflection spectra and polarization phase curves of Earth-like exoplanets by considering both cloud-free and cloudy atmospheres. We solve the 3D vector radiative transfer equations numerically and calculate the phase curves of albedo and disk-integrated polarization by using appropriate scattering phase matrices and integrating the local Stokes vectors over the illuminated part of the disks along the line of sight. We present the effects of the globally averaged surface albedo on the reflection spectra and phase curves as the surface features of such planets are known to significantly dictate the nature of these observational quantities. Synergic observations of the spectra and phase curves will certainly prove to be useful in extracting more information and reducing the degeneracy among the estimated parameters of terrestrial exoplanets. Thus, our models will play a pivotal role in driving future observations.
Manika Singla, Aritra Chakrabarty, Sujan Sengupta
2023-01-20T09:32:19Z
http://arxiv.org/abs/2301.08487v1
# Effect of multiple scattering on the Transmission spectra and the Polarization phase curves for Earth-like Exoplanets

###### Abstract

It is the most appropriate time to characterize the Earth-like exoplanets in order to detect biosignature beyond the Earth because such exoplanets will be the prime targets of big-budget missions like JWST, Roman Space Telescope, HabEx, LUVOIR, TMT, ELT, etc. We provide models for the transmission spectra of the Earth-like exoplanets by incorporating effects of multiple scattering. For this purpose we numerically solve the full multiple-scattering radiative transfer equations instead of using Beer-Bouguer-Lambert's law that doesn't include the diffuse radiation due to scattering. Our models demonstrate that the effect of this diffuse transmission radiation can be observationally significant, especially in the presence of clouds. We also calculate the reflection spectra and polarization phase curves of Earth-like exoplanets by considering both cloud-free and cloudy atmospheres. We solve the 3D vector radiative transfer equations numerically and calculate the phase curves of albedo and disk-integrated polarization by using appropriate scattering phase matrices and integrating the local Stokes vectors over the illuminated part of the disks along the line of sight. We present the effects of the globally averaged surface albedo on the reflection spectra and phase curves as the surface features of such planets are known to significantly dictate the nature of these observational quantities. Synergic observations of the spectra and phase curves will certainly prove to be useful in extracting more information and reducing the degeneracy among the estimated parameters of terrestrial exoplanets. Thus, our models will play a pivotal role in driving future observations.

planets and satellites: atmospheres -- atmospheric effects -- transmission spectroscopy -- reflection spectroscopy -- radiative transfer -- polarization -- scattering

Manika Singla, Aritra Chakrabarty, Sujan Sengupta

## 1 Introduction

Over 5000 extra-solar planets have been detected to date, and many techniques are being developed to study their atmospheres in detail. Techniques such as reflection, transmission and emission photometry and spectroscopy (Tinetti, 2006; Seager, 2010) help in characterizing the planetary atmospheres. Characterizing the terrestrial exoplanets is extremely challenging because of their very small size and low planet-to-star flux ratio (Selsis et al., 2008; Rice, 2014). We will only be able to detect biosignatures on extra-terrestrial planets unambiguously if we can precisely characterize the Earth-sized planets which are in the circumstellar habitable zone of their host stars (Huang, 1959, 1960; Whitmire et al., 1991; Kasting et al., 1993; Kopparapu et al., 2013; Torres et al., 2015; Kane et al., 2016; Fujii et al., 2018; Covone et al., 2021). The presence of biosignatures like oxygen, water, methane, etc. signals a high chance of the presence of life on these planets. The presence of oxygen and ozone is the result of an extended biomass production through oxygenic photosynthesis (Owen, 1980; Sagan et al., 1993; Selsis et al., 2002; Selsis, 2004; Segura et al., 2007; Seager, 2008; Selsis et al., 2008; Scharf, 2009; Grenfell et al., 2014; Fujii et al., 2018; Claudi & Alei, 2019). When an exoplanet transits the host star, a fraction of the starlight passes through the planetary atmosphere.
The radiation interacts with the atmosphere through scattering and absorption, which imprint spectral fingerprints on the transmitted flux. Model transmission spectra for terrestrial exoplanets have previously been presented by Ehrenreich et al. (2006); Kaltenegger & Traub (2009); Palle et al. (2009, 2010); Yan et al. (2015); Wunderlich et al. (2019, 2020); Lin et al. (2021); Gialluca et al. (2021); Madden and Kaltenegger (2020), etc. In these models, only the total extinction of the incident stellar flux is considered, by using the Beer-Bouguer-Lambert law (Tinetti et al., 2013). These models, although they add the scattering opacity to the true absorption opacity, do not incorporate the angular redistribution of the transmitted photons due to scattering in the planetary atmosphere. Sengupta et al. (2020) considered the in- and out-scattering for hot Jupiter-like exoplanets while modeling the transmission spectra. In the present work, we have calculated the transmission spectra of the Earth-like planets by solving the multiple-scattering radiative transfer equations using discrete space theory (Peraiah and Grant, 1973), following Sengupta et al. (2020). We demonstrate that diffuse transmission radiation due to scattering can affect the overall broad continua of the transmission spectra, especially when the single-scattering albedo increases in the presence of clouds. When incoming stellar flux hits the solid planetary surface, some fraction of it gets reflected, absorbed or transmitted, depending on the wavelength and the angle of incidence of the incident stellar radiation (Seager, 2010; Selsis et al., 2008). Study of the reflection spectra and phase curves can add to the information obtained from transmission spectroscopy. Moreover, these techniques can be used to characterize planets with arbitrary orbital alignment with respect to the line of sight. Previously, Sagan et al. (1993) obtained the reflection spectra of the Earth using the observations by the Galileo satellite. Also, Kawashima and Rugheimer (2019); Batalha et al. (2019); Segura et al. (2005); Kaltenegger et al. (2007); Kitzmann et al. (2010, 2010); Rugheimer et al. (2013); Rugheimer and Kaltenegger (2018), among many others, have calculated the reflection spectra for Earth-like exoplanets and also studied the effects of clouds on the spectra. In this paper, we present new model reflection spectra for the Earth-like exoplanets for the sake of completeness of our investigation of the atmospheres of these planets. These model spectra have been calculated by using the same numerical method mentioned above. Several polarimetric techniques are also increasingly being used for the study of exoplanetary atmospheres. Polarimetric studies of planets were initiated with the observation of Solar system objects and continue to this day (Coffeen, 1969; Coffeen and Gehrels, 1969; Hall and Riley, 1974; Michalsky and Stokes, 1977; West et al., 1983; Joos and Schmid, 2007, etc.). Mallama (2009) characterized the terrestrial exoplanets based on the phase curves of some solar system planets. Stam et al. (2003); Stam (2003); Rossi et al. (2018), etc. studied the polarization spectra of extra-solar planets. By studying the polarization profiles, we can extract information about the atmospheric as well as physical properties, such as the cloud distribution, the mean size of the cloud particulates, and rotation-induced oblateness,
as demonstrated by Sengupta and Krishan (2001); Sengupta and Maiti (2006); Sengupta (2008); Sengupta and Marley (2009, 2010, 2011, 2016); Sengupta (2016) for the case of brown dwarfs and self-luminous exoplanets. In addition, phase-dependent polarization of the reflected planetary radiation can help constrain the atmospheric composition, including biosignatures, and surface constituents such as ocean, ice, and forest, and thus provide evidence of a habitable environment on exoplanets (Zubko et al., 2008; Kedziora-Chudczer and Bailey, 2010; Rossi et al., 2017). Traces of exomoons are also being searched for by means of polarization (Sengupta and Marley, 2016; Molina et al., 2017, 2018). The reflected light can be polarized because of the various scattering processes, which depend on the types of scatterers and the scattering mechanism (Seager, 2010). Linear polarization signals from starlight reflected from horizontally inhomogeneous Earth-like planets are presented in Karalidi and Stam (2012). Groot et al. (2020); Rossi and Stam (2018) studied the linearly or circularly polarized signals from the sunlight reflected from the model Earth. Polarization signals from starlight reflected by Earth-like exoplanets have been studied by Stam (2008); Fauchez et al. (2017); Wei and Zhong-quan (2017); Munoz (2018); Sterzik et al. (2019); Patty et al. (2021); Gordon et al. (2022), among others. Wang et al. (2019) have used PARASOL data to calculate the variation of the disk-integrated polarization. Karalidi et al. (2011, 2012); Sterzik and Manev (2020) have modeled the polarized signal from the clouds on exoplanets, and Zugger et al. (2010, 2011) from the exoplanetary oceans and atmospheres. Stam and Hovenier (2005) have estimated the errors in the calculated phase functions and albedos of planets if polarization is neglected. Detecting the polarization signals of the reflected radiation is, however, extremely difficult because of the very low signal-to-noise (S/N) ratio as compared to that of the Solar-system planets. Some of the upcoming telescopes will unravel the polarimetric properties of the Earth and the extra-solar planets. LOUPE (Lunar Observatory for Unresolved Polarimetry of the Earth), a small spectropolarimeter, is being developed to observe the Earth from the Moon as an exoplanet (Klindzic et al., 2021; Karalidi et al., 2021), and the ELF (Exo-Life Finder) telescope (Berdyugina et al., 2018) will be used for the direct detection of exoplanet biosignatures. Other big-budget missions like HabEx, LUVOIR, the Roman Space Telescope, etc. will also have imaging polarimetric facilities. These missions will additionally have coronagraphic instruments onboard, which will allow us to detect polarimetric signals directly from the exoplanets in the habitable zones. Most of the above-mentioned polarization models either use the Monte Carlo method or solve the 1-D vector radiative transfer equations and invoke a generalized spherical harmonic expansion to integrate the scattering polarization over the visible disk. In the present work we calculate the azimuth-dependent intensity vectors by solving the 3-D vector radiative transfer equations. The disk-integrated flux and polarization are estimated by integrating the intensity vector at each local point over the illuminated disk.
The 1-D version of the same numerical code has also been used to solve the radiative transfer equations in their vector form for the calculation of the polarized spectra of rotation-induced oblate self-luminous exoplanets and cloudy brown dwarfs (Sengupta and Marley, 2009, 2010; Marley and Sengupta, 2011; Sengupta, 2016; Sengupta, 2018). However, in order to calculate the polarization over the rotation-induced oblate disk of the object, the spherical harmonic expansion method was used in those works. The scalar version of the same code has also been used to calculate the transmission spectra of hot Jupiters (Sengupta et al., 2020; Chakrabarty and Sengupta, 2020). Chakrabarty and Sengupta (2021) have presented polarization models for hot Jupiters by solving the 3D vector radiative transfer equations. In the present work we employ the same methodology to calculate the polarization for the Earth-like exoplanets. In the next section, we discuss the necessary inputs used to calculate the transmission spectra and the reflection spectra for Earth-like exoplanets. In sections 3.1 and 3.2, we present the results for the transmission and the reflection spectra. Vector phase curve models are presented in section 3.3. In section 4, we analyze and discuss the results and finally, in the last section, we present the conclusions of this work.

## 2 Atmospheric Models for Earth-like Exoplanets

We present models for the transmission spectra, reflection spectra and the phase curves of geometric albedo and linear polarization for the Earth-like exoplanets orbiting around Sun-like stars. For calculating the reflection and the transmission spectra as well as the scattering polarization, we take the atmospheric chemical composition for the modern Earth-like exoplanets from Kawashima and Rugheimer (2019) and the opacity data, i.e. the absorption and scattering cross-sections for all the molecules included in the atmospheric composition of the planet, from the PICASO database (Batalha et al., 2020). The observed temperature-pressure profile of the Earth's atmosphere is considered for the calculations. We consider two types of atmosphere in all of our model calculations: cloudy and cloud-free. In the case of cloudy atmospheres, we consider very thin clouds or haze, and we have used an approximate Rayleigh model to express the effect of these clouds/haze following Sing et al. (2016); Kempton et al. (2017), etc. In the case of transmission spectra, we have included thin clouds with scattering cross-sections (\(\sigma\)) equal to 100, 200 and 400 times the scattering cross-section (\(\sigma_{R}\)) of the dominant atmospheric constituent, i.e. nitrogen in this case. The cloud deck and base have been fixed at \(5\times 10^{2}\) Pa and \(5\times 10^{3}\) Pa. For the case of reflection spectra, we have considered the cloud position between the pressure levels of \(1\times 10^{3}\) Pa and \(5\times 10^{4}\) Pa with a scattering cross-section equal to 400 times the scattering cross-section of nitrogen gas. The cloud position (considering 100% coverage) for the case of the reflection spectra is kept at deeper layers of the atmosphere, while for the case of the transmission spectra, the clouds are considered at the upper layers of the atmosphere. This is because transmission spectra probe only the outer atmosphere; the two techniques thus probe complementary portions of the atmosphere in terms of altitude. A terrestrial exoplanet is usually expected to have water clouds in its upper atmosphere.
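As a sketch of this cloud/haze parameterization, the snippet below scales a Rayleigh-like \(\lambda^{-4}\) cross-section by the enhancement factors quoted above; the normalization `sigma0` and reference wavelength are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

def rayleigh_cross_section(wavelength_um, sigma0=1.0, lambda0_um=0.35):
    # Rayleigh scattering scales as lambda^-4; sigma0 is an illustrative
    # normalization (e.g., the N2 cross-section at the reference
    # wavelength lambda0), not a value from the paper.
    return sigma0 * (lambda0_um / np.asarray(wavelength_um)) ** 4

def haze_cross_section(wavelength_um, enhancement=400):
    # Thin cloud/haze as enhanced Rayleigh scattering: 100, 200, or 400
    # times the N2 cross-section, applied only between the cloud deck and
    # cloud base pressure levels quoted above.
    return enhancement * rayleigh_cross_section(wavelength_um)
```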
For large cloud particulates, the Mie scattering phase matrix is appropriate to describe the angular distribution of photons before and after scattering. But for small cloud particles, the Rayleigh phase matrix serves the purpose reasonably well. In the present work we have used the Rayleigh phase matrix for water droplets (Sengupta and Maiti, 2006).

## 3 Results

### The Transmission Spectra

Using the atmospheric models described in Section 2, we have presented models of the transmission spectra of the Earth-like exoplanets. Studying the absorption lines in the transmission spectra can directly tell us about the volatiles and biosignatures present in their atmospheres. But such interpretation requires accurate models of the broad continua of the spectra, especially in the visible wavelengths and in the presence of clouds. Also, an accurate model can help us understand how the presence of clouds can suppress the absorption lines, since detecting these absorption lines of the Earth-like planets is already extremely challenging. Following Sengupta et al. (2020), we solve the multiple-scattering radiative transfer equation for diffuse reflection and transmission, which is given as

\[\mu\frac{dI(\tau,\mu,\lambda)}{d\tau}=I(\tau,\mu,\lambda)-\frac{\omega}{2}\int_{-1}^{1}p(\mu,\mu^{\prime})I(\tau,\mu^{\prime},\lambda)\mathrm{d}\mu^{\prime}-\frac{\omega}{4}Fe^{-\tau/\mu_{0}}p(\mu,\mu_{0}). \tag{1}\]

Here, I(\(\tau\),\(\mu\),\(\lambda\)) is the specific intensity of the transmitted radiation along our line of sight, \(\omega\) is the single scattering albedo, F is the incident stellar flux along our line of sight (along the direction -\(\mu_{0}\)), p(\(\mu\),\(\mu^{\prime}\)) is the scattering phase function and \(\tau\) is the optical depth along the line of sight. The detailed formalism and numerical technique are described in Sengupta et al. (2020). The surface albedo of the planet is not considered in this case, as transmission spectra predominantly convey information about the upper atmosphere. Figure 1 presents the transmission depth with and without the diffusion of radiation due to scattering. When the diffusion due to scattering is not considered, especially at the longer wavelengths where the values of \(\omega\) are extremely low (\(\omega\approx 0\)), we can use the Beer-Bouguer-Lambert law instead of solving the radiative transfer equations. Note that even when the diffuse radiation due to scattering is not incorporated, the total atmospheric optical depth is determined by both the absorption and the scattering opacities.

Figure 1: Transmission depth of the Earth-like exoplanets with (solid) and without (dashed) diffuse scattering for cloudy and cloud-free atmospheres (see Section 2). Scattering opacity is, however, included even in the case of the cloud-free atmosphere.
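For reference, here is a minimal sketch of the pure-extinction (Beer-Bouguer-Lambert) transit depth against which the diffuse-scattering calculation is compared; the exponential optical-depth profile is an illustrative assumption, not the paper's atmospheric model.

```python
import numpy as np

def transit_depth_extinction_only(r_planet, r_star, scale_height, tau0, n=2000):
    # Beer-Bouguer-Lambert baseline: an annulus of atmosphere at impact
    # parameter b blocks a fraction (1 - exp(-tau(b))) of the starlight;
    # photons diffusely scattered into the line of sight (the integral and
    # source terms of eq. 1) are ignored here.
    b = np.linspace(r_planet, r_planet + 10.0 * scale_height, n)
    tau = tau0 * np.exp(-(b - r_planet) / scale_height)  # illustrative profile
    blocked = np.trapz(2.0 * np.pi * b * (1.0 - np.exp(-tau)), b)
    return (np.pi * r_planet**2 + blocked) / (np.pi * r_star**2)
```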
### The Reflection Spectra

While orbiting the host star, an exoplanet reflects a part of the incident starlight depending on its orbital phase, orbital inclination, and of course, the atmospheric and surface constituents. The study of the reflected light from the planets helps us to probe deeper into their atmospheres and to detect their surface as well as cloud properties. It also reduces the degeneracy among the estimated parameters when combined with the results from the study of transmission spectra. We solve the scalar one-dimensional multiple-scattering radiative transfer equation to model the geometric albedo (see, for example, Batalha et al. (2019)) and the full-phase (fully illuminated disk) reflection spectra of the Earth-like planets using the atmospheric model explained in Section 2. The equation is somewhat similar to Equation 1 but follows a different (radial) geometry to calculate the layerwise optical depths and single-scattering albedos. Surface albedo, which depends on the surface composition, contributes to the overall reflectivity of rocky planets. For example, if the whole surface is covered with snow, the surface albedo is the highest, i.e., around 0.9, and hence it contributes heavily to the geometric albedo; but if the whole surface is covered with ocean, the surface albedo is much lower, around 0.06, thus contributing much less to the geometric albedo (Kaltenegger et al., 2007). For the present Earth-like exoplanets, we take the surface albedo to be 0.14 at all wavelengths, where the surface components are 70% ocean, 2% coast, and 28% land, which in turn is divided into 30% grass, 30% trees, 9% granite, 9% basalt, 15% snow, and 7% sand (Kaltenegger et al., 2007). The surface reflection is assumed to be Lambertian, i.e., isotropic in nature. We calculate the surface albedo by summing all the components' albedos multiplied by their respective fractions of the planetary surface coverage. Figure 2 shows the reflection spectra and the geometric albedo of cloudy and cloud-free Earth-like planets with annotations of the oxygen and water absorption lines.

Figure 2: Reflected spectra (top panel) and geometric albedo (bottom panel) for the Earth-like exoplanets orbiting around Sun-like stars at a resolution of 300. The blue plot is for the clear sky while the orange plot is for the cloudy atmosphere. The absorption lines of O\({}_{2}\) and H\({}_{2}\)O are shown.

### The Phase Curve Models

The reflected light observable from the planets depends on their orbital phase, and the study of these phase curves conveys valuable information about the atmospheres and surfaces of the Earth-like exoplanets. The orbital phase (\(\alpha_{\rm orb}\)) is \(0^{o}\) when the maximum area of the illuminated disk is viewed and \(180^{o}\) when the minimum or no illuminated part of the planetary disk is viewed. However, modeling these phase curves is cumbersome and requires us to invoke three-dimensional radiative transfer models, as explained by Chakrabarty & Sengupta (2021). Here, we solve the 3-D vector radiative transfer equation to calculate both the albedo (total reflectivity of the disk) phase curves and the disk-integrated polarization phase curves. The partial illumination of a planetary disk yields a net non-zero disk-integrated scattering polarization of the reflected light. A study of this polarization can provide information about the atmospheric clouds in detail, the surface composition, and also the light absorbers present in the atmospheres (Chakrabarty & Sengupta, 2021). We assume the incident starlight to be unpolarized, and the polarization of the planet's reflected light is solely caused by the scattering processes. We ignore polarization due to strong magnetic fields, if any. The state of polarization of each beam of light after scattering is determined by the scattering phase matrices, which depend on the scattering mechanism. We follow the methods prescribed by Chakrabarty & Sengupta (2021) to solve the vector radiative transfer equations and calculate the phase dependent reflected flux and the polarization (\(P\)) averaged over the illuminated planetary disk. The corresponding atmospheric model is explained in Section 2.
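To make the disk-integration step concrete, the sketch below numerically integrates the reflected intensity of a Lambertian sphere over the illuminated, visible part of the disk at phase angle \(\alpha\), and evaluates the single-scattering Rayleigh degree of polarization, which peaks at unity at \(\alpha=90^{o}\). It is a toy stand-in for the full 3-D vector radiative transfer solution, and the star/observer geometry is an illustrative choice.

```python
import numpy as np

def lambert_disk_flux(alpha, n=400):
    # Integrate mu0 * mu over the part of a Lambertian sphere that is both
    # illuminated (mu0 > 0) and visible (mu > 0) at phase angle alpha; the
    # observer is along +z and the star direction is rotated by alpha.
    theta = np.linspace(0.0, np.pi, n)
    phi = np.linspace(-np.pi, np.pi, 2 * n)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    normal = np.stack([np.sin(th) * np.cos(ph),
                       np.sin(th) * np.sin(ph),
                       np.cos(th)])
    to_star = np.array([np.sin(alpha), 0.0, np.cos(alpha)])
    to_observer = np.array([0.0, 0.0, 1.0])
    mu0 = np.clip(np.einsum("i...,i->...", normal, to_star), 0.0, None)
    mu = np.clip(np.einsum("i...,i->...", normal, to_observer), 0.0, None)
    dA = np.sin(th) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    return np.sum(mu0 * mu * dA)

def rayleigh_single_scattering_polarization(alpha):
    # Degree of linear polarization for single Rayleigh scattering at
    # scattering angle pi - alpha; equals 1 at alpha = 90 degrees.
    c2 = np.cos(alpha) ** 2
    return (1.0 - c2) / (1.0 + c2)

# The numerical flux ratio reproduces the analytic Lambert phase law
# Phi(alpha) = (sin(alpha) + (pi - alpha) * cos(alpha)) / pi.
alpha = np.pi / 3
print(lambert_disk_flux(alpha) / lambert_disk_flux(0.0),
      (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi)
```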
We studied the effect of surface albedo on the overall disk albedo and polarization, as depicted in Figure 3. For the rest of the calculations, we considered the value of the surface albedo to be 0.14, as explained in Section 3.2. Figures 4 and 5 show the total albedo and the disk polarization (\(P\)) at \(\lambda=0.6\) \(\mu\)m and \(\lambda=1\) \(\mu\)m for both the cloud-free and cloudy atmospheres, considering multiple scattering of the incident radiation. Figure 6 shows the same for the visible wavelength (\(\lambda=0.6\) \(\mu\)m), considering only single scattering. These phase curves can be detected with the next-generation polarimetric missions, which will use their coronagraphic instruments to resolve the Earth-like exoplanets from their host stars. The observable flux ratio, i.e., the ratio of the observable reflected flux from the planet to the observable starlight, is shown in the figures. This indicates the contrast required by those instruments to directly detect the reflection spectra from such planets. Since the flux ratios are of the order of parts per billion (ppb), detecting the polarization of the planets without resolving them separately amidst the stellar glare will be impossible with the current technology and is hence not shown in the figures.

## 4 Analysis and Discussion

The transmission depth for modern Earth-like exoplanets orbiting around Sun-like stars at wavelengths up to 2.0 \(\mu\)m is shown in Figure 1. We can see that the transit depth reduces with the inclusion of diffuse radiation due to scattering, as explained by Sengupta et al. (2020). The transmission depth increases with the inclusion of clouds, as the cloud particles block the flux transmitted through the atmosphere. Clouds also suppress the absorption features of the molecules at shorter wavelengths. The effect of diffusion due to scattering on the broadband continuum can be significant with respect to the levels of the individual absorption features, especially in the presence of atmospheric clouds. This calls for more accurate modeling of the transmission spectra by solving the complete radiative transfer equation. Otherwise, the detection of the biosignature features of the Earth-like planets can be ambiguous and may be erroneous. Of course, at longer wavelength regions, all the plots are found to merge because the effect of scattering is negligible. The relatively stronger absorption lines such as O\({}_{2}\) and H\({}_{2}\)O are easily detectable in the reflection spectra (see Figure 2). Clearly, the geometric albedo increases with decreasing wavelength because of the dominance of Rayleigh scattering (\(\propto\frac{1}{\lambda^{4}}\)) at shorter wavelengths. Also, because of the increased back-scattering of the incident stellar radiation, the presence of clouds significantly increases the geometric albedo and hence the reflected flux in the visible wavelength region. Figure 3 shows the variation of the albedo and the disk-averaged polarization (\(P\)) for different orbital phases at a fixed wavelength (\(\sim 0.6\mu\)m) and an orbital inclination angle of \(90^{o}\) (edge-on view) for different surface albedos. The value of the surface albedo depends on the surface composition of the planet, i.e., the amount of ocean cover, land cover, trees, ice, etc. The surface albedo of 0.9 corresponds to the case where the whole surface of the planet is covered with snow.
The intermediate surface albedo of 0.05 in the figure corresponds to the case where almost the whole surface is covered with ocean, and 0.1 corresponds to the case where half of the surface is covered with ocean and the remaining half with trees and grass. As the surface reflection is assumed to be Lambertian, it completely depolarizes the light that is reflected in the upward direction from the surface of the planet, i.e., from the bottom of the atmosphere (BOA) (Rossi et al., 2018). As a result, with an increase in the surface albedo, the overall albedo increases but the polarization (\(P\)) decreases. \(P\) is evidently found to peak at around the \(90^{o}\) orbital phase. The phase dependent polarization profile presented here is consistent with those presented by Sengupta & Maiti (2006) and Stam et al. (2003). Figures 4 and 5 show the phase dependent light curves at wavelengths of 0.6 \(\mu\)m and 1.0 \(\mu\)m, respectively, for a planet with two orbital inclinations, 45\({}^{o}\) and 90\({}^{o}\). Evidently, the peak-to-peak fluctuations of the light curves decrease with a decrease in the orbital inclination. Moreover, these figures also demonstrate the effects of clouds. The presence of clouds and a non-zero surface albedo increases the total albedo of the disk, as expected. Figure 6 shows the phase dependent light curve at a wavelength of 0.6 \(\mu\)m for the same orbital inclinations, considering only single scattering at each atmospheric layer. The effect of clouds on the albedo is the same as in the case of multiple scattering. But clouds do not affect the disk-averaged polarization because the angle of scattering at each layer is the same. We note that the peak polarization (i.e., at the 90\({}^{o}\) phase angle) is 1 for the clear sky as well as the cloudy atmosphere. This happens because we have approximated the effects of clouds with the Rayleigh phase matrix, and for the case of Rayleigh scattering, the degree of single-scattering polarization is 1 at a 90\({}^{o}\) phase angle (see Fig. 3 of Chakrabarty & Sengupta (2021)). Basically, the single-scattering approximation overestimates the observable polarization and underestimates the total albedo.

Figure 3: Effect of surface albedo on the phase curves of albedo (or flux ratio, F(\(\alpha_{\rm orb}\))/F\({}_{0}\)) and net polarization (\(P\)) integrated over the illuminated planetary disk at a wavelength of 0.6 \(\mu\)m and at an orbital inclination of 90\({}^{o}\). We have used a surface albedo of 0.14 for our calculations; a surface albedo of 0.9 corresponds to a snowball planet (a surface fully covered with snow).

Note that we see the opposite behaviour of the disk-averaged polarization with clouds in the case of hot Jupiters (see Fig. 15 of Chakrabarty & Sengupta (2021)). For hot Jupiters, the polarization depends on the single scattering albedo, but for the Earth-like exoplanets it depends on the single scattering albedo as well as the surface albedo, as explained in the next paragraph. However, to understand the total degree of polarization for a planet with a Lambertian surface with a non-zero surface albedo, we divide the total upward (towards us) radiation at the top of the atmosphere (TOA) into two streams: (i) the downward incident radiation that gets scattered back to the upward direction and gets polarized, especially at disk locations away from the substellar point (e.g., Stam et al., 2006; Chakrabarty & Sengupta, 2021), and (ii) the upward radiation from the BOA that gets transmitted in the same direction and is predominantly unpolarized.
For a cloud-free atmosphere, as the wavelength increases, the intensity of stream-i decreases since the single-scattering albedo of the atmosphere decreases, whereas the intensity of stream-ii remains almost constant, as we have assumed the same surface albedo at both wavelengths. Hence, the relative dominance of stream-ii increases at higher wavelengths, and the polarization (\(P\)) drops significantly at \(\lambda=1\) \(\mu\)m compared to that at \(\lambda=0.6\) \(\mu\)m. For the same reason, the total disk albedo at 0.6 \(\mu\)m is only slightly higher than that at 1 \(\mu\)m for the cloud-free case, which is also suggested by Figure 2.

Figure 4: The phase curves of albedo (or flux ratio, F(\(\alpha\))/F\({}_{0}\)) and polarization (\(P\)) integrated over the illuminated disk at a wavelength of 0.6 \(\mu\)m (visible) and at orbital inclination angles of 90\({}^{\circ}\) (solid) and 45\({}^{\circ}\) (dashed) for both cloud-free and cloudy atmospheres.

The effect of clouds is twofold. For a very low value of the surface albedo, as in the case of the gaseous planets, the presence of clouds increases the depolarization of the radiation due to multiple scattering, as the single-scattering albedo of the atmosphere increases (Chakrabarty & Sengupta, 2021). This causes the total disk polarization to drop in the presence of clouds while the disk albedo rises (see Figures 15-17 of Chakrabarty & Sengupta (2021)). On the other hand, for a rocky planet with a relatively high surface albedo, we find that stream-ii dominates over stream-i at the TOA in the absence of any cloud particles, which causes a low value of the disk polarization. However, the presence of a cloud layer tends to strengthen stream-i by reflecting more of the downward radiation back to the upward direction, and to weaken stream-ii by reflecting it back in the downward direction. As a result, the presence of clouds increases the overall disk-integrated degree of polarization and also increases the albedo of the disk. Thus, polarization serves as an indicator of the presence of clouds and helps us understand the thickness and the properties of the cloud layers when combined with the scalar spectrum of the planet.

## 5 Conclusions

Figure 5: Same as Figure 4 but at a wavelength of 1.0 \(\mu\)m (near infrared).

This paper focuses on the various existing techniques that can be used synergistically to characterize Earth-like exoplanets. We have demonstrated how the inclusion of diffuse radiation due to scattering can improve the model of transmission spectra over the traditional approach of invoking the Beer-Bouguer-Lambert law. The difference is significant with respect to the molecular absorption features that can serve as biosignatures. We have also presented the reflection spectra including a non-zero globally averaged surface albedo, and these spectra also carry information about the biosignatures and the volatiles. However, obtaining the transmission or reflection spectra of such small-sized planets with thin atmospheres is extremely challenging at present but will be possible in the era of the upcoming big-budget missions like HabEx, LUVOIR, TMT, ELT, etc. Our models will play a significant role in the habitability study of the Earth-like planets using transmission and reflection spectra and phase dependent linear polarization. In this paper we demonstrate that atmospheric clouds can significantly affect both the transmission and reflection spectra.
The use of polarimetry can allow us to study the properties of the clouds in great detail and reduce the overshadowing effects of clouds. The coronagraphic instruments of those upcoming missions will, in the coming decades, be able to directly image the Earth-like planets in the habitable zones around their host stars. Leveraging the polarimetric instruments in conjunction with these coronagraphic instruments, we will be able to conduct phase curve studies of such planets. Our vector phase curve models show the contrast required to resolve these planets from their host stars and also predict the maximum observable reflected flux and degree of polarization. Evidently, the surface albedo and the clouds significantly dictate the nature of the phase dependent light curves. Our approximate globally averaged Lambertian representation of the surface albedo has allowed us to simplify the calculations to some extent and to develop an understanding of the effect of the surface albedo on the reflection spectra and phase dependent light curves. However, in our upcoming work, we will consider individual surface components and their wavelength-dependent reflection matrices to calculate the spectra and the light curves more accurately.

Figure 6: Same as Figure 4 but with single scattering of the incident radiation.

Finally, our models should be useful in designing the instruments onboard the upcoming missions, selecting the science targets, as well as extracting the planetary properties from the spectra and phase curves, once obtained.
2306.14009
Boosting Multitask Learning on Graphs through Higher-Order Task Affinities
Predicting node labels on a given graph is a widely studied problem with many applications, including community detection and molecular graph prediction. This paper considers predicting multiple node labeling functions on graphs simultaneously and revisits this problem from a multitask learning perspective. For a concrete example, consider overlapping community detection: each community membership is a binary node classification task. Due to complex overlapping patterns, we find that negative transfer is prevalent when we apply naive multitask learning to multiple community detection, as task relationships are highly nonlinear across different node labeling. To address the challenge, we develop an algorithm to cluster tasks into groups based on a higher-order task affinity measure. We then fit a multitask model on each task group, resulting in a boosting procedure on top of the baseline model. We estimate the higher-order task affinity measure between two tasks as the prediction loss of one task in the presence of another task and a random subset of other tasks. Then, we use spectral clustering on the affinity score matrix to identify task grouping. We design several speedup techniques to compute the higher-order affinity scores efficiently and show that they can predict negative transfers more accurately than pairwise task affinities. We validate our procedure using various community detection and molecular graph prediction data sets, showing favorable results compared with existing methods. Lastly, we provide a theoretical analysis to show that under a planted block model of tasks on graphs, our affinity scores can provably separate tasks into groups.
Dongyue Li, Haotian Ju, Aneesh Sharma, Hongyang R. Zhang
2023-06-24T15:53:38Z
http://arxiv.org/abs/2306.14009v4
# Boosting Multitask Learning on Graphs through Higher-Order Task Affinities

###### Abstract.

Predicting node labels on a given graph is a widely studied problem with many applications, including community detection and molecular graph prediction. This paper considers predicting multiple node labeling functions on graphs simultaneously and revisits this problem from a multitask learning perspective. For a concrete example, consider overlapping community detection: each community membership is a binary node classification task. Due to complex overlapping patterns, we find that negative transfer is prevalent when we apply naive multitask learning to multiple community detection, as task relationships are highly nonlinear across different node labeling. To address the challenge, we develop an algorithm to cluster tasks into groups based on a higher-order task affinity measure. We then fit a multitask model on each task group, resulting in a boosting procedure on top of the baseline model. We estimate the higher-order task affinity measure between two tasks as the prediction loss of one task in the presence of another task and a random subset of other tasks. Then, we use spectral clustering on the affinity score matrix to identify task grouping. We design several speedup techniques to compute the higher-order affinity scores efficiently and show that they can predict negative transfers more accurately than pairwise task affinities. We validate our procedure using various community detection and molecular graph prediction data sets, showing favorable results compared with existing methods. Lastly, we provide a theoretical analysis to show that under a planted block model of tasks on graphs, our affinity scores can provably separate tasks into groups.

Multitask Learning; Boosting; Modeling Task Relationships

As we show, naively combining a task with other tasks does not necessarily help performance (see Figure 4, Section 3.2). Moreover, naively computing all pairwise affinities requires fitting \(T^{2}\) models given \(T\) tasks, which is costly even for tens of tasks. The main contribution of this paper is to design an efficient algorithm to cluster tasks into similar groups while accounting for higher-order transfer relationships. One can compare the performance of a multitask model, such as a GNN trained on all tasks, against several multitask GNNs, each trained for a task group. In Section 5, we show that this approach yields the best results among a wide set of baselines on various real-world datasets. Our method can be viewed as a _boosting_ procedure (Kipf and Welling, 2017) and can be used on top of any graph learning algorithm.

**Approach.** We outline the overall procedure; see also Figure 1 for an illustration. Given \(T\) tasks, we first compute a _task affinity score_ \(\theta_{i,j}\) for every pair of tasks \(i\) and \(j\). A higher value of \(\theta_{i,j}\) indicates that task \(j\) transfers better to task \(i\), while also accounting for the presence of other tasks. Conceptually, \(\theta_{i,j}\) is similar to the feature importance score in random forests when hundreds of other features are available. This _higher-order_ task affinity score can also be used to predict whether a set of tasks transfers positively or negatively to a target task. Given the affinity score matrix \([\theta_{i,j}]_{T\times T}\), we use a spectral clustering algorithm (Wang et al., 2017; Wang et al., 2018) to separate tasks into similar groups, which is more suitable for joint training.
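A minimal sketch of this pipeline follows. The `fit_and_evaluate` callable is a hypothetical stand-in for training a multitask GNN on a task subset and returning each task's held-out prediction score; following the convention that higher \(\theta_{i,j}\) means better transfer, the returned scores are taken as performance metrics rather than raw losses.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def estimate_affinities(T, n_subsets, alpha, fit_and_evaluate, seed=0):
    # Sample n random task subsets of size alpha, fit one MTL model per
    # subset, and average task i's score over the subsets containing both
    # i and j (cf. eq. 1 below), giving a T-by-T affinity matrix.
    rng = np.random.default_rng(seed)
    theta = np.zeros((T, T))
    counts = np.zeros((T, T))
    for _ in range(n_subsets):
        S = rng.choice(T, size=alpha, replace=False)
        scores = fit_and_evaluate(S)  # maps task i in S to f_i(S)
        for i in S:
            for j in S:
                theta[i, j] += scores[i]
                counts[i, j] += 1
    return theta / np.maximum(counts, 1)

def group_tasks(theta, n_groups):
    # Spectral clustering on the symmetrized affinity matrix; one
    # multitask model is then trained per task group.
    sym = (theta + theta.T) / 2
    return SpectralClustering(n_clusters=n_groups,
                              affinity="precomputed").fit_predict(sym)
```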
Specifically, our algorithm optimizes the sum of task affinity scores within groups through spectral clustering. Next, we describe the steps to estimate the affinity score matrix. A naive approach is to compute each entry individually, requiring \(O(T^{2})\) complexity. We design an efficient sampling procedure that only requires \(O(T)\) complexity. Concretely, we sample \(n=O(T)\) random subsets from \(\{1,2,\ldots,T\}\) of a fixed size \(\alpha\) (in practice, \(\alpha=5\) suffices). We fit an MTL model for each subset and evaluate its prediction loss for each task in the subset; let \(f_{i}(S)\) denote the prediction loss of task \(i\), given a subset \(S\subseteq\{1,2,\ldots,T\}\), which we evaluate on a holdout set. Thus, \(f_{i}(S)\) measures the information transfer from \(S\) to \(i\). Then, we compute \(\theta_{i,j}\) as the average among all subsets including \(i,j\):

\[\theta_{i,j}=\frac{1}{n_{i,j}}\Big{(}\sum_{1\leq k\leq n,\;\{i,j\}\subseteq S_{k}}f_{i}(S_{k})\Big{)},\ \ \text{for all}\ 1\leq i,j\leq T, \tag{1}\]

where \(n_{i,j}\) is the number of sampled subsets that include both \(i\) and \(j\). To rigorously justify the rationale behind our affinity scores, we conduct a theoretical analysis in a stochastic block model style setting, where tasks follow a well-separated structure. We prove that under this planted model, the affinity score matrix \([\theta_{i,j}]_{T\times T}\) exhibits a block-diagonal structure, with each block corresponding to one cluster. With this characterization, we show that the spectral clustering algorithm provably recovers the underlying task structure.

**Summary of Contributions.** The contribution of this work is threefold. First, we design a task affinity score to measure higher-order task relationships and estimate the scores with an efficient sampling procedure, which only requires fitting \(O(T)\) MTL models. Second, we propose a spectral clustering step to find task groups based on the affinity score matrix. We provide recovery guarantees for this clustering procedure, showing that the affinity scores can be used to provably group related tasks in a planted model. Third, we validate the benefit of our boosting approach using various community detection and molecular graph prediction datasets. The experimental results show that our approach improves test accuracy over various community detection and MTL baselines.

**Organization.** The rest of this paper is organized as follows. We first outline related work in Section 2. In Section 3, we provide empirical grounding for the claim that accounting for negative transfer among tasks is crucial for MTL on graphs. Our boosting procedure is described in Section 4, followed by a thorough empirical study of its performance in Section 5. Finally, in Section 6, we describe the theoretical analysis of our algorithm. In Appendix A, we provide complete proofs for our theoretical results. In Appendix B, we describe additional experimental results omitted from the main text.

## 2. Related Work

### Modeling task relationships in MTL

The importance of learning from a pool of disparate data sources is well-recognized in the data mining literature (Beng et al., 2017). However, naively combining several heterogeneous data sources can result in negative interference between their feature representations (Zhu et al., 2018). Researchers have designed methods to extract shared information from different tasks. For instance, explicit regularization applied to the representations of all tasks can help encourage information transfer (Liu et al., 2017).
Overview of our boosting procedure: (1) We sample random subsets of tasks, each subset containing a fixed number of tasks. (2) For each subset \(S_{k}\), for \(k=1,2,\ldots,n\), we fit a multitask learning (MTL) model on the combined data sets of all tasks in \(S_{k}\), using a graph neural network (GNN) as the shared encoder. After fitting the MTL model, we evaluate its prediction performance for each task \(i\in S_{k}\), denoted as \(f_{i}(S_{k})\). (3) We compute an affinity score \(\theta_{i,j}\) by averaging task \(i\)’s scores among all subsets as in equation (1), where \(n_{i,j}\) is the number of subsets including both \(i\), \(j\). This results in a \(T\) by \(T\) affinity matrix, denoted as \([\theta_{i,j}]_{T\times T}\). (4) We apply spectral clustering on this matrix to find clusters of task groups and fit one GNN for each task group. transfer (Liu et al., 2017). These regularization schemes can be rigorously justified for convex hypothesis classes (Zhu et al., 2017). For nonconvex hypothesis spaces such as graph neural networks, explicitly regularizing the feature spaces of all tasks is a non-trivial challenge (Wang et al., 2017; Chen et al., 2018). Ma et al. (Ma et al., 2018) introduce a mixture-of-experts model to capture the task relationships, with each expert being an MTL network. Yu et al. (Yu et al., 2019) design gradient-based similarity measures that can be efficiently computed using the cosine similarity of gradients during training. This can be extended to measure the similarity between two sets of tasks by averaging the gradient of tasks in each set. Recent work (Zhu et al., 2019; Yu et al., 2019) points out that first-order affinity measures deteriorate as a transferability measure when applied to a large set of tasks. **Task Grouping.** Instead of sharing layers and model parameters across all tasks, Kumar and Daume III (Kumar and Daume, 2019) proposes mitigating negative transfers by dividing tasks into several related groups. Our paper takes inspiration from datamodels (Wang et al., 2017), which extrapolates the outcome of deep networks as influence functions. In particular, Ilyas et al. (Ilyas et al., 2017) find that a linear regression method can accurately approximate the outcome of deep nets trained with a subset of samples on popular image benchmarks. Our results (e.g., Figure 5) show that the affinity scores can also accurately predict transfer types in multitask learning. ### Transferable graph neural networks Graph neural networks have emerged as widely used tools for graph learning. Ideally, we want to learn a powerful embedding for all downstream tasks (Zhu et al., 2017; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). Zhu et al. (Zhu et al., 2019) analyzes the transferability of GNN from graph \(A\) to graph \(B\) and highlights the complex correspondence between structural similarity and transferability between GNNs. Besides GNN, researchers have also observed negative interference while applying graph embedding to perform transfer learning on Graphs (Zhu et al., 2019). Ju et al. (Ju et al., 2019; Chen et al., 2018) show non-vacuous generalization bounds for graph neural networks in the fine-tuning setting using Hessian. Our paper expands on these prior works in two aspects. First, we consider a multi-label learning setting involving as many as 1000 tasks, whereas the work of Zhu et al. (Zhu et al., 2019) and Gritsenko et al. 
(Gritsenko et al., 2019) focuses on transfer learning from graph \(A\) to graph \(B\). Second, we consider multiple node prediction tasks on a single graph, which is different from graph pretraining (e.g., Hu et al. (Hu et al., 2019) and Qiu et al. (Qi et al., 2019)) and graph algorithmic reasoning (Zhu et al., 2019).

**Multitask Learning Applications for Graph-Structured Data.** Combining multiple graph learning tasks jointly can potentially enhance the performance of single tasks. Our results support this claim in the context of supervised overlapping community detection. Besides, we believe many graph learning tasks can be cast into the multitask learning framework. For instance, consider extracting entity relationships on knowledge graphs; each entity relation may be viewed as one task. Wang et al. (Wang et al., 2019) find that learning the dependencies of different relations through multitask representation learning can substantially improve the prediction performance. There has also been some study on the trade-off between fairness and accuracy in MTL (Zhu et al., 2019). It is conceivable that the new tools we have developed may benefit these related applications. This is a promising direction for future work.

### Overlapping community detection

Identifying community structures is one of the most widely studied problems in network science (Krizhevsky et al., 2014). A common approach to finding communities given a seed set is to measure the local connectivity of a subgraph using spectral graph properties (e.g., the conductance of a cut). Yang and Leskovec (Yang and Leskovec, 2015) describe an efficient algorithm using non-negative matrix factorization for finding overlapping communities. Whang et al. (Whang et al., 2016) find local clusters by identifying low-conductance sets near a seed. These approaches use the connectivity of edges to compute spectral properties. Besides, higher-order structures from hypergraphs are found to be useful for overlapping community detection (Chen et al., 2018; Chen et al., 2018). Lastly, community detection can be cast in the correlation clustering framework, which does not require specifying the number of communities (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018).

Our paper is closely related to the work of Chen et al. (Chen et al., 2018). The difference is that we formulate the problem of predicting community labeling via multitask learning, whereas Chen et al. (Chen et al., 2018) consider a multi-class classification setup. Our formulation is more suitable when we are dealing with a large number of overlapping communities. To the best of our knowledge, this is a novel perspective on community detection. Our results, compared with strong baselines including VERSE embedding (Wang et al., 2018) and BigClam (Bigig et al., 2018), suggest that modeling higher-order task relationships can significantly improve empirical performance for multitask learning.

## 3. Investigating Task Relationships

We investigate task relationships in the setting of overlapping community detection. We demonstrate that negative transfer is widespread across tasks and persists in large models. We show that in the higher-order regime, task relationships are not monotone or submodular. Motivated by these considerations, we propose a task grouping problem for conducting MTL on graphs.

### Setup and background

**Problem setup.** We conduct an empirical study using multiple overlapping community detection tasks as a concrete example.
Given a graph \(G=(V,E)\), we have a list of \(T\) communities as subgraphs of \(G\). Let \(C_{1},C_{2},\ldots,C_{T}\) denote the vertex sets of these communities. During training, we are given a vertex subset of each community as a seed set. For every \(i=1,2,\ldots,T\), deciding whether a node \(v\in V\) belongs to \(C_{i}\) is a binary classification task. Note that this formulation differs from supervised community detection (Chen et al., 2018) but is more suitable for overlapping community detection. This formulation is an example of multi-label learning, which is a special case of multi-task learning (see, e.g., Fig. 1b of Zhang and Yang (Zhang and Yang, 2019)). Casting multi-label learning into this more general formulation provides new perspectives in solving the problem.

* The prediction of membership in \(C_{i}\) is task \(i\), which is a binary classification task, given \(G\) and node features.
* There are \(T\) tasks in total, one for each community.

**Datasets.** We use social network datasets with known community labels, including the Amazon, YouTube, DBLP, and LiveJournal networks from SNAP (Wang et al., 2018). We use the 100 largest communities from each network and keep the subgraph induced by the nodes in the 100 communities. For each community detection task, we randomly sample 10% of the nodes from the community in the training set, together with 10% of nodes outside the community as negative samples. We randomly sample another 20% of nodes as the validation set and treat the rest as a test set. We evaluate the performance of each task using the F1 score on the test set. See Table 1 for detailed statistics of the four datasets.

**Models.** We consider an MTL model that consists of a single encoder to obtain shared representations and a separate prediction layer for each task. We train this model by minimizing the average loss over the training data of all the tasks. For our experiments in this section, we use the SIGN model (Zhou et al., 2017) as the encoder, which is more efficient to train than GCN. The encoder involves 3 layers, each with a fixed width of 256 neurons. Our choice of this encoder is without loss of generality, and our observations also apply to other encoders. We construct the node features from the VERSE embedding (Wang et al., 2017), which encodes personalized PageRank vectors known to be useful features for community detection (Beng et al., 2017).

**Negative transfer on graphs.** A common phenomenon with multitask learning is negative transfer (Zhu et al., 2017), meaning that combining one task with another worsens performance compared with training a task separately. We show that negative transfer occurs during MTL on graphs. We take 100 tasks from the YouTube dataset. First, we fix a randomly chosen task \(i\) as the target task and use the rest as source tasks. Then, we train a GNN for task \(i\), and 99 MTL models, each combining one source task with task \(i\). The performance gap between the MTL and STL models indicates the transferability from a source task to task \(i\). Figure 2 shows the results of this experiment, repeated over four randomly chosen target tasks. The bars above zero correspond to _positive transfers_ as MTL performance exceeds STL, while bars below zero correspond to _negative transfers_. We observe that both positive and negative transfers appear in all four settings.

**Structural differences.** Why do negative transfers happen during multitask learning on graphs?
A common belief in the MTL community is that this is due to differences in the task labels (Zhu et al., 2017; Wang et al., 2017; Wang et al., 2017). We argue that graph neural networks involve another kind of heterogeneity due to the graph diffusion process. We appeal to a connection between GNN propagation and personalized PageRank (PPR) (Zhu et al., 2017; Wang et al., 2017; Wang et al., 2017), positing that dramatically different PPR structures among communities will induce different graph diffusion for GNNs. In Figure 3, we visualize the PPR vectors of four randomly chosen tasks from the YouTube dataset. Within each subfigure, each row corresponds to the PPR vector of one node that belongs to a particular community. We plot the PPR vectors of a set of nodes from the same community. Clearly, PPR vectors differ dramatically for nodes from different communities, suggesting that the diffusion processes are highly heterogeneous. We also observe that tasks that yield positive transfers tend to have higher similarity between their PPR vectors. Detailed results are described in Appendix B.1.

**Will larger models address negative transfers?** A natural approach to address negative transfer is to increase the model size, but this does not account for the above structural differences. To verify this, we use the first target task from Figure 2 and select the source task in the rightmost bar with the strongest negative transfer. We gradually increase the number of neurons in the hidden layer from 32, 64, 128, 256, 512, 1024, to 2048, corresponding to larger model capacities. Figure 4(a) shows the results. We observe consistent negative transfers, i.e., the accuracy improvements stay negative. We have also observed the same result on a more powerful GNN with attention, i.e., GAMLP (Wang et al., 2017). See Appendix B.2 for these results.

### How do task relationships behave?

Next, we study the multitask learning performance involving more than two tasks. At the extreme, this would involve \(2^{T}\) combinations of task subsets. To be precise, given any subset of tasks \(S\subseteq\{1,\dots,T\}\), let \(f_{i}(S)\) denote the MTL performance of combining data from all tasks in \(S\), evaluated on task \(i\), for each \(i\in S\).

**Q1: Is \(f\) monotone?** To better understand how \(f\) behaves, we pick a target task \(t\) and measure \(f_{t}(\{t\})\). Then, we add new tasks to be combined with \(t\). We only add a task \(i\) if it is beneficial for task \(t\), i.e., \(f_{t}(\{t,i\})\geq f_{t}(\{t\})\). Figure 4(b) shows the result of applying the above setting to the first target task. We observe that after adding more than two positive source tasks, the MTL performance decreases. This shows that \(f_{t}(\cdot)\) is not monotone.

Figure 3. In each subfigure, we visualize the personalized PageRank vectors of a set of nodes in one community. They differ dramatically across non-overlapping communities.

Figure 2. This figure illustrates the widespread negative transfer effect among tasks by noting that MTL performance can dip below STL for four separate (randomly selected) target tasks. We fix a target task \(i\) for each plot, then randomly pick ten source tasks (out of 100) and for each source task \(j\) train an MTL model with \(i\) and \(j\); we report the MTL accuracy for \(i\) minus \(i\)’s STL accuracy. Thus, bars above zero indicate positive transfers from source to target tasks, while bars below zero indicate negative transfers.
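To make the pairwise experiment behind Figure 2 concrete, the following is a minimal sketch of how such transfer gaps can be computed. The helper `train_and_eval(tasks, target)`, which trains one model on the union of the given tasks and returns the target task's test F1-score, is our own hypothetical stand-in, not part of the paper's code.

```python
def transfer_gaps(target, source_tasks, train_and_eval):
    """Return MTL-minus-STL gaps for each source task.

    A positive gap indicates positive transfer to `target`;
    a negative gap indicates negative transfer (a bar below
    zero in Figure 2).
    """
    # Single-task (STL) baseline for the target task.
    stl_score = train_and_eval({target}, target)
    gaps = {}
    for source in source_tasks:
        # Pairwise MTL model combining the target with one source task.
        mtl_score = train_and_eval({target, source}, target)
        gaps[source] = mtl_score - stl_score
    return gaps
```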
**Q2: Is \(f\) submodular?** A function \(f(\cdot)\) is submodular if for any two subsets \(A\subseteq B\subseteq\{1,2,\dots,T\}\) and any single task \(x\), \(f(\{x\}\cup B)-f(B)\leq f(\{x\}\cup A)-f(A)\). Since we only add positive source tasks, it is natural to ask if this also mitigates negative transfer. In Figure 4(c), we find that adding positive source tasks does not always help, implying that \(f\) is not submodular. We explain this by the presence of negative source tasks, which diminishes the effect of positive tasks. The takeaway is that \(f\) is neither monotone nor submodular, motivating our approach to extrapolate \(f\) via sampling.

### Task grouping for multitask graph learning

Our goal is to obtain a set of networks where each network is trained on a subset of tasks. The objective is to optimize the overall performance of all tasks after combining the networks. To approach this problem, we consider finding subsets of tasks, namely task groups, such that the negative transfers between groups are minimized and the positive transfers of tasks within each group are maximized. We want to divide \(\{1,2,\dots,T\}\) into possibly overlapping subsets of tasks. Let \(\mathcal{S}\) denote the collection of subsets. Given \(\mathcal{S}\), the performance of task \(i\) is the highest one among all networks:

\[L_{i}(\mathcal{S})=\max_{X\in\mathcal{S}}f_{i}(X).\]

Thus, the overall performance of all tasks on a solution is \(\sum_{i=1}^{T}L_{i}(\mathcal{S})\). Suppose there is a limited inference budget \(b\), which is the number of MTL models we can deploy at inference time. We want to find at most \(b\) groups that achieve the highest overall performance \(\sum_{i=1}^{T}L_{i}(\mathcal{S})\). To address this problem, we would need to evaluate the multitask learning performance for all subsets of tasks, which amounts to a total of \(2^{T}\) combinations. Because task relationships are highly non-linear, we need a more efficient procedure to capture transfer relationships. More generally, finding the optimal task groups is NP-hard via a reduction from the set-cover problem (see, e.g., (Zhou et al., 2019)).

## 4. Our approach

We now present our approach to optimize multitask model performance through grouping tasks that utilizes higher-order task affinities. Recall the steps in our pipeline from Figure 1: (1, 2) repeatedly sample a random subset of tasks and evaluate the performance of a model that combines the tasks in each subset. (3) Average the multitask learning performances over subsets that involve two specific tasks, yielding the task affinity score. (4) Then, we use these task affinity scores to group tasks using a spectral clustering algorithm on the matrix of task affinity scores.

### Estimating higher-order task affinities

**Notations.** Suppose we are given a graph \(G=(V,E)\) where \(|V|=N\) and \(|E|=M\), with node features \(X\in\mathbb{R}^{N\times d}\). There are \(T\) semi-supervised tasks on the graph. For each task \(i\), we are given a set of nodes \(\hat{V}^{(i)}\) with known labels \(Y^{(i)}\). The goal of task \(i\) is to predict the labels for the rest of the nodes on the graph, \(V\setminus\hat{V}^{(i)}\). Note that the sets \(\hat{V}^{(1)},\dots,\hat{V}^{(T)}\) can either overlap or be disjoint from each other. The objective is to optimize the average prediction performance over the \(T\) tasks. Let \(\phi\) be the encoder network shared by all tasks. Let \(\psi_{1},\dots,\psi_{T}\) be prediction layers for tasks \(1,\dots,T\) that map feature vectors to task outputs.
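As a minimal illustration of this shared-encoder, per-task-head architecture, the PyTorch sketch below stands in for the paper's GNN encoder with a plain MLP over precomputed node features (the paper feeds VERSE embeddings as inputs); the class, its layer sizes, and its names are our own illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultitaskModel(nn.Module):
    """Shared encoder phi with one prediction head psi_i per task."""

    def __init__(self, in_dim: int, hidden_dim: int, num_tasks: int):
        super().__init__()
        # phi: encoder shared by all tasks (an MLP stand-in for a GNN).
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # psi_1, ..., psi_T: one binary-classification head per task.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in range(num_tasks)]
        )

    def forward(self, x: torch.Tensor, task_ids):
        h = self.encoder(x)
        # One logit vector per requested task, computed from shared features.
        return [self.heads[t](h).squeeze(-1) for t in task_ids]
```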
When the input is a graph, we consider using graph neural networks as the encoder \(\phi\). Given \(S\subseteq\{1,2,\dots,T\}\), let \(\phi^{(S)}\) and \(\psi^{(S)}_{i}\), for \(i\in S\), denote the model trained on the combined dataset of \(S\). We evaluate the prediction loss on each task's validation dataset. Let \(\bar{V}^{(i)}\) denote a set of nodes in \(V\), which is used as a validation set for task \(i\). Let \(\ell\) be an evaluation metric, e.g., the cross-entropy loss. We define the multitask learning performance for any \(i\in S\) as:

\[f_{i}(S)=\frac{1}{|\bar{V}^{(i)}|}\sum_{o\in\bar{V}^{(i)}}\ell\Big(\psi^{(S)}_{i}\big(\phi^{(S)}(X_{o};G)\big),Y^{(i)}_{o}\Big) \tag{2}\]

Figure 4. (a) We show that there exists a consistent negative transfer even after increasing the model size. (b) The MTL performance of a target task starts to decrease as we add more “positive” source tasks. (c) Under the presence of a negatively-interfering source task, the benefit of adding more “positive” tasks also diminishes.

**Measuring task affinity scores.** Our approach measures higher-order task affinities to indicate how well a task transfers to another task when combined in a subset. We show that such task affinity measures can be estimated by training \(n\) models, where \(n\) only needs to grow linearly in the number of tasks \(T\). Moreover, our measure gives a more accurate prediction of higher-order multitask transfer results than previous notions of task affinity. We view task affinity as a transferability measure from a source task to a target task. Given a task \(i\in\{1,\dots,T\}\) as a target task, denote the affinity of another task \(j\) to \(i\) as \(\theta_{i,j}\). To model the relations of higher-order transfers, we define the task affinity score \(\theta_{i,j}\) as the average MTL prediction loss \(f_{i}(S)\) on target task \(i\) over subsets that contain both task \(i\) and \(j\). We emphasize that the affinity scores account for the presence of other tasks. Also note that a higher value of \(\theta_{i,j}\) indicates higher usefulness of task \(j\) to task \(i\).

We estimate the task affinity scores through a sampling approach. Conceptually, this is analogous to graph embedding methods that optimize embeddings to approximate proximity scores. Similarly, we sample random subsets from tasks \(1\) to \(T\) and estimate the task affinity scores on the sampled subsets using this procedure (a minimal sketch of the estimation loop follows the list):

1. Sample \(n\) subsets from tasks \(1\) to \(T\), denoted as \(S_{1},\ldots,S_{n}\). We sample each subset from the uniform distribution over subsets with size \(\alpha\). In other words, among all subsets of \(\{1,2,\ldots,T\}\) with size \(\alpha\) (note there are \(\binom{T}{\alpha}\) of them), we pick one uniformly at random, with probability \(1/\binom{T}{\alpha}\).
2. Evaluate the prediction loss \(f_{i}(S)\) for every task \(i\in S\) and every subset \(S\in\{S_{1},\ldots,S_{n}\}\) by training a multitask model on \(S\).
3. Estimate the task affinity scores \(\theta_{i,j}\) by averaging the MTL performances \(f_{i}\) over subsets containing both task \(i\) and \(j\):

\[\theta_{i,j}=\frac{1}{n_{i,j}}\sum_{1\leq k\leq n:\,\{i,j\}\subseteq S_{k}}f_{i}(S_{k}), \tag{3}\]

where \(n_{i,j}\) is the number of subsets that contain both tasks \(i\) and \(j\). In particular, when \(i\) and \(j\) are the same, we set \(\theta_{i,i}\) as the average of \(f_{i}(S)\) over all \(S\) containing \(i\).
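The three steps above reduce to a short loop. The sketch below is our own minimal rendering of the procedure; `fit_and_eval(S)` is a hypothetical stand-in for training one MTL model on the task subset `S` and returning each task's validation score \(f_{i}(S)\) from equation (2).

```python
import random
import numpy as np

def estimate_affinities(T, n, alpha, fit_and_eval):
    """Estimate the higher-order affinity matrix [theta_{i,j}]_{T x T}.

    Requires fitting only n = O(T) MTL models. `fit_and_eval(S)` returns
    a dict {i: f_i(S)} for every task i in the subset S.
    """
    sums = np.zeros((T, T))
    counts = np.zeros((T, T))
    for _ in range(n):
        S = random.sample(range(T), alpha)   # uniform subset of size alpha
        scores = fit_and_eval(S)             # f_i(S) for every i in S
        for i in S:
            for j in S:                      # j == i covers the diagonal
                sums[i, j] += scores[i]
                counts[i, j] += 1
    # theta_{i,j} per equation (3); pairs never sampled together stay 0,
    # which in practice is avoided by taking n large enough.
    return sums / np.maximum(counts, 1)
```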
To summarize, the above procedure yields a \(T\) by \(T\) affinity score matrix, denoted as \(\Theta=[\theta_{i,j}]_{T\times T}\).

### Finding task groups by spectral clustering

Since the affinity scores serve as a proxy of higher-order task relationships, we optimize MTL performance by finding task groups with the highest task affinity scores within each group. Our task grouping algorithm, as described below, contains two major steps. The complete procedure is given in Algorithm 1.

**Input**: \(T\) tasks; training and validation sets of each task.
**Require**: The size of each subset \(\alpha\); number of sampled subsets \(n\); inference budget \(b\); multitask learning algorithm \(f\).
**Output**: \(b\) trained multitask models.

1. For \(k=1,\ldots,n\), sample a random subset \(S_{k}\) from \(\{1,2,\ldots,T\}\) with size \(\alpha\); evaluate \(\mathbf{f}(S_{k})\) following equation (2).
2. Estimate the task affinity scores \(\Theta\) following equation (3).
3. Generate task groups \(\mathbf{S}^{\star}=\{S_{1},\ldots,S_{b}\}\) by applying spectral clustering on a symmetric matrix constructed from \(\Theta\).
4. Train \(b\) multitask models, one for each task group \(S_{1},\ldots,S_{b}\).

**Algorithm 1** Task Grouping Using Higher-Order Task Affinities

First, we construct a transformed affinity score matrix for clustering. Since the sum of affinity scores between two tasks \(i\) and \(j\) within a group is \((\theta_{i,j}+\theta_{j,i})\), we define a symmetric matrix \(\mathbf{A}_{1}=(\mathbf{\Theta}+\mathbf{\Theta}^{\top})/2\). Additionally, for each group, we find auxiliary source tasks that yield positive transfer to the group. This is achieved by viewing the matrix \(\Theta\) as directional task relationships, with source tasks represented along the rows and target tasks along the columns. To find a set of source tasks that yield the highest affinity scores to a set of target tasks, it suffices to consider the symmetrized matrix \(\left[\begin{array}{cc}\mathbf{0}&\mathbf{\Theta}\\ \mathbf{\Theta}^{\top}&\mathbf{0}\end{array}\right]\). Thus, based on the affinity score matrix \(\Theta\), we construct a symmetric matrix:

\[\mathbf{A}=\left[\begin{array}{cc}\mathbf{A}_{1}&\mathbf{\Theta}\\ \mathbf{\Theta}^{\top}&\mathbf{0}\end{array}\right].\]

Second, we apply spectral clustering algorithms (e.g., Shi and Malik (Shi and Malik, 2018) and Ng et al. (Nguyen et al., 2019)) on \(\mathbf{A}\) and merge the clustered target and source tasks into one group in the final task grouping. Afterward, we train one multitask model for each group by combining all the data from that group.

## 5. Experiments

We now evaluate our approach empirically on various community detection and molecular graph data sets. First, we show that our task affinity scores can be estimated efficiently and used to predict negative transfers more accurately than first-order task affinities. Second, we apply our approach to overlapping community detection tasks on several datasets with ground-truth community labels: our approach outperforms the naive MTL by **3.98%** and task grouping baselines by **2.18%**. Third, we evaluate our approach on molecular graph prediction tasks and show a **4.6%** improvement over prior MTL methods. Lastly, we provide ablation studies to show that our approach is stable under various settings. The code for reproducing our experiments is available at [https://github.com/NerdsResearch/boosting-multitask-learning-on-graphs](https://github.com/NerdsResearch/boosting-multitask-learning-on-graphs).
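Before turning to the results, here is a simplified sketch of the grouping step. It assumes the affinity scores are oriented so that larger means stronger transfer (as when \(f_{i}\) is the negative validation loss), clusters only the symmetrized matrix \(\mathbf{A}_{1}\), and omits the auxiliary source-task block for brevity; the use of scikit-learn's spectral clustering and the nonnegativity shift are our implementation choices, not the paper's.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def group_tasks(theta: np.ndarray, b: int):
    """Split T tasks into b groups from the affinity matrix theta."""
    # Symmetrize: entry (i, j) reflects theta_{i,j} + theta_{j,i}.
    a1 = (theta + theta.T) / 2.0
    # Shift to nonnegative values, since spectral clustering with a
    # precomputed affinity expects a nonnegative similarity matrix.
    a1 = a1 - a1.min()
    labels = SpectralClustering(
        n_clusters=b, affinity="precomputed", random_state=0
    ).fit_predict(a1)
    # One task list per group; an MTL model is then trained per group.
    return [np.flatnonzero(labels == g).tolist() for g in range(b)]
```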
### Results for predicting negative transfers

**Experiment setup**. We use the task affinity scores \(\theta_{i,j}\) for predicting negative transfers as follows. Given a target task \(i\) and a subset of tasks \(S\) containing \(i\), we predict whether the subset \(S\) transfers negatively to task \(i\), i.e., whether the MTL prediction loss \(f_{i}(S)\) of training task \(i\) with subset \(S\) is worse than the STL loss of task \(i\). We set up the prediction as binary classification. For each task \(i\), the input feature for a subset \(S\) is the vector of task affinity scores of the tasks in \(S\) to task \(i\): \(\mathbbm{1}_{S}\circ[\theta_{i,1},\ldots,\theta_{i,T}]\). The label is whether the subset \(S\) transfers negatively to task \(i\). Then, we fit a logistic regression model that maps the features to the binary labels. We fit \(T\) models in total and evaluate the average F1-score over the \(T\) models. We evaluate the above prediction on the YouTube dataset with \(T=100\) tasks and estimate task affinities of order 5, 10, and 20 (which means that the size of sampled subsets is \(\alpha=5\), 10, or 20). We use the transfer results on \(n=2000\) task subsets to fit the logistic regression models and evaluate the predictions on 500 new task subsets that do not appear in training.

**Results.** First, we illustrate the convergence of the F1-score of negative transfer prediction when increasing the sample size \(n\), as shown on the right of Figure 5. We observe that with \(n\leq 2000=20T\), using higher-order task affinity scores predicts negative transfers with an \(F_{1}\)-score above 80%. This result consistently holds for sampling subsets of different sizes. Second, we compare our approach to two previous notions of affinity scores. One involves computing first-order task affinity through the effect of one task's gradient on another task's loss (Shi and Malik, 2018). Another approximates the higher-order task transfers by averaging the first-order task affinity scores (Shi and Malik, 2018). Figure 5 on the left shows that the \(F_{1}\)-score from the previous measures gradually gets worse; ours remains accurate for subsets of sizes ranging from 2 to 20.

**Run time analysis.** Next, we present the run time of our approach. Our approach requires training \(n\) networks, one for each random subset. We find that using \(n\leq 20T\) samples suffices for estimating the task affinity scores to convergence. In contrast, previous methods (Shi and Malik, 2018; Shi and Malik, 2018) estimate task affinities for every pair of tasks, resulting in training on \(O(T^{2})\) task pairs. Concretely, we report the run time of our approach in Figure 6, evaluated on a single NVIDIA RTX GPU. Compared with the two previous methods, our approach requires **3.7\(\times\)** less running time, averaged over the four data sets.

**Speed up training.** In practice, we can further speed up training with early stopping and downsampling. In our experiments, we have found that this leads to a significant speed-up, and the overhead on top of naive MTL is only up to 2-3\(\times\). In Table 5 of Appendix B.3, we report the running time of our approach incorporating these speed-up methods, as compared with all MTL baselines.

### Results for overlapped community detection

**Baselines.** We compare our approach with three classes of baseline approaches, which are selected with the goal of being representative in terms of relative improvement. The first class of methods is a set of popular methods for community detection, including:

* BigClam (Srivastava et al., 2015).
* Louvain clustering (Krizhevsky et al., 2014).
* Network embedding methods including Node2Vec (Krizhevsky et al., 2014) and VERSE (Vaswani et al., 2017). We use a logistic regression classifier on the node embedding for each community.
* GNN-based community detection methods including MinCutPool (Krizhevsky et al., 2014) and Deep Modularity Networks (Vaswani et al., 2017).

Second, we consider two baseline methods that optimize all tasks using a shared model:

* Naive MTL (Krizhevsky et al., 2014), which trains all tasks jointly in a shared model.
* Mixture-of-Experts (Wang et al., 2017), which trains multiple expert models on all tasks and uses a gating network to combine model outputs for each task in a weighted manner.

Third, we consider four task grouping baselines that find task groups and train a network on each group.

* Forward selection: Start from all groups empty. Enumerate all tasks, adding each task to every existing group and assigning it to the group resulting in the best average performance.
* Backward selection: Start from a first group containing all tasks, with the other groups empty. Enumerate all tasks in the first group, combining each task with the remaining groups and assigning it to the group resulting in the best average performance.
* First-order task affinity, which evaluates the effect of one task's gradients on another task's loss (Krizhevsky et al., 2014).
* Averaging the first-order task affinity scores to approximate higher-order task affinities (Vaswani et al., 2017).

We note that Fifty et al. (Krizhevsky et al., 2014) and Standley et al. (Vaswani et al., 2017) use a branch-and-bound algorithm to search for task groups but do not scale to one hundred tasks on these data sets. To compare with these two methods, we use their task affinity scores but apply our spectral clustering procedure to find task groups.

**Implementations.** For all MTL approaches, we use a 3-layer SIGN model as the encoder, with a width of 256 neurons. We use the VERSE embedding as the input node features. We compare approaches for splitting tasks into \(b=20\) groups. We use the same amount of model parameters for the other MTL baselines. In the evaluation, we report the macro F1-score of the predicted communities against ground-truth community labels on the test set. For our approach, we set the size of the subset as \(\alpha=10\) and the number of samples as \(n=2000\). We ablate the two parameters in Section 5.4 and find that the performance of our approach remains stable while varying them. We set the MTL performance metric \(f_{i}(S)\) as the (negative) cross-entropy loss on the validation set. We apply the spectral clustering algorithm as in (Vaswani et al., 2017; Wang et al., 2017) to find task groups on the symmetric adjacency matrix constructed from the task affinity scores.

**Results.** Table 1 reports the evaluation on four social networks with ground-truth community labels. First, we find that the VERSE embedding achieves the best performance among all the node embedding methods. Thus, we use the VERSE embedding as a node feature for conducting MTL on graph neural networks. Due to the space constraint, we report the results of other community detection methods in Appendix B.4.

* _Benefit of task grouping:_ Compared with methods that optimize a joint model on all tasks, task grouping consistently performs better than the naive MTL and Mixture-of-Experts. Our approach outperforms them by **3.98%** on average.
* _Benefit of modeling higher-order relationships:_ Compared with forward and backward selection, our approach achieves an average improvement of **2.18%** over the datasets. Moreover, we compare our approach with clustering by the two first-order task affinities. The results show that our approach outperforms them by **2.49%** on average. This validates the advantage of using higher-order task affinities over first-order task affinities. The results are shown in Table 6 of Appendix B.4.

Figure 5. We use the task affinity scores from tasks in a subset \(S\) to task \(i\) to predict whether training with subset \(S\) decreases the STL performance of task \(i\). Left: Compared with two first-order task affinity scores, our higher-order task affinity scores achieve a consistently better F1-score for predicting negative transfers when combining up to \(\alpha=20\) tasks. Right: The F1-score for predicting negative transfers converges when the number of sampled subsets \(n\) reaches 2000. Results consistently hold for different subset sizes.

Figure 6. Comparing the runtime for computing higher-order vs. first-order task affinity (Vaswani et al., 2017; Krizhevsky et al., 2014).

### Results for molecular graph prediction

Next, we apply our approach to molecular graph prediction tasks, including two multi-task regression data sets from TUDatasets (Zhou et al., 2017) and one multi-task classification dataset from OGB (Zhou et al., 2017). In the graphs, nodes represent 3D coordinates of atoms in molecules, and edges encode distances between atoms. Each task corresponds to predicting a specific molecular property. We use a 6-layer GIN model as the encoder, with a width of 64. We evaluate the mean absolute error (MAE) on the regression datasets and the average precision (AP) on the classification dataset. Table 2 compares our approach with MTL baselines, including naive MTL, Mixture-of-Experts, and forward/backward selection. We find that on these three data sets, our method still outperforms the baselines by a relative **4.6%** on average.

### Ablation studies

**Number of task groups \(b\).** We discuss how the number of task groups is determined in our approach. We hypothesize that a larger number of task groups gives greater flexibility and tends to have better performance. Ideally, we can generate \(T\) task groups, each for a particular target task, and select helpful source tasks for the target task in each group. We validate the hypothesis by varying the number of task groups between 5, 10, 20, and 100. The results validate that more groups achieve better performance. Interestingly, we also find that using 20 groups achieves comparable results to using 100 groups. Thus, we set \(b=20\) in our experiments. The details are reported in Appendix B.5.

**Subset size \(\alpha\).** Recall that we collect MTL prediction losses by sampling random subsets of a size \(\alpha\). We evaluate the performance of our approach by varying the size \(\alpha\in\{5,10,20\}\). First, we observe similar convergence results using different sizes, as shown in Figure 5. Next, we apply Algorithm 1 with different values of \(\alpha\). We notice that the performances are comparable. Using \(\alpha=10\) achieves slightly better performance than the other two. We posit that the reason why using a larger \(\alpha\) does not help is that the number of related tasks in our community detection data sets is limited.

**Number of samples \(n\).** We further explore how \(n\) affects the results of the algorithm.
Our observation in Figure 5 is that collecting \(n=20T\) samples is sufficient for the task affinity scores to converge. Meanwhile, using a smaller \(n\) can also achieve a near-80% F1-score for predicting negative transfers. Thus, we test the performance of Algorithm 1 by varying \(n\in\{1000,1500,2000\}\). We observe that using \(n=1000\) still achieves comparable performance to using \(n=2000\). The performance difference is within 0.5%.

\begin{table}
\begin{tabular}{l c c c c}
Dataset & Amazon & YouTube & DBLP & LiveJournal \\
Nodes & 3,225 & 16,751 & 57,368 & 18,433 \\
Edges & 20,524 & 104,513 & 420,122 & 1,397,580 \\
BigClam & 27.30 \(\pm\) 0.26 & 18.84 \(\pm\) 0.18 & 13.46 \(\pm\) 0.11 & 22.50 \(\pm\) 0.31 \\
Louvain clustering & 60.95 \(\pm\) 0.19 & 29.03 \(\pm\) 0.34 & 36.73 \(\pm\) 0.34 & 64.08 \(\pm\) 0.17 \\
Node2Vec & 39.05 \(\pm\) 0.10 & 32.44 \(\pm\) 0.18 & 28.72 \(\pm\) 0.10 & 50.40 \(\pm\) 0.29 \\
VERSE & 61.00 \(\pm\) 0.32 & 38.17 \(\pm\) 0.12 & 53.48 \(\pm\) 0.24 & 58.71 \(\pm\) 0.48 \\
Naive MTL & 87.08 \(\pm\) 4.04 & 43.42 \(\pm\) 2.24 & 67.95 \(\pm\) 2.28 & 82.56 \(\pm\) 3.21 \\
Multi-Gate MoE & 88.92 \(\pm\) 6.65 & 44.65 \(\pm\) 4.28 & 68.83 \(\pm\) 4.06 & 83.08 \(\pm\) 4.89 \\
Forward Select & 90.45 \(\pm\) 3.63 & 47.62 \(\pm\) 2.84 & 68.58 \(\pm\) 2.96 & 86.19 \(\pm\) 2.61 \\
Backward Select & 90.41 \(\pm\) 5.98 & 47.68 \(\pm\) 3.01 & 68.63 \(\pm\) 2.97 & 85.91 \(\pm\) 4.18 \\
**Alg. 1 (Ours)** & **92.66 \(\pm\) 4.85** & **49.62 \(\pm\) 2.26** & **70.68 \(\pm\) 2.65** & **88.43 \(\pm\) 2.70** \\
\end{tabular}
\end{table}
Table 1. Macro F1-score of community detection tasks on four social networks. We compare our approach with graph embedding methods, MTL optimization methods, and feature subset selection methods. For each experiment, we report the averaged result over three random seeds, including the standard deviations.

## 6. Theoretical analysis

In this section, we aim to develop a principled understanding of our higher-order affinity scores \([\theta_{i,j}]_{T\times T}\). To this end, we study a planted model in a theoretical setup, where the tasks are assumed to follow a block structure. We note that planted models have been widely used to study graph clustering (Beng et al., 2017). In this setting, we ask:

* Do our affinity scores provably capture higher-order task relationships?
* Could the affinity scores be used to successfully separate related tasks from each block?

We provide a positive answer to both questions in a theoretical setup, where the labels of each task have been drawn from a linear model. Our finding is that under the planted model, the affinity scores (here, averaged prediction losses) of two tasks from the same group will be provably lower than those of two tasks from different groups. To describe this result, we first formally introduce the setup.

**Setup.** Suppose we are learning \(T\) tasks. For each task \(i\), from 1 up to \(T\), let the node labels of this task be given by a vector \(\tilde{Y}^{(i)}\), all of which are observed on a fixed set of nodes whose feature matrix is denoted as \(\tilde{X}\). Let \(m\) denote the size of the observed set of nodes. We focus on regression tasks in the analysis.
\begin{table}
\begin{tabular}{l c c c}
Dataset & QM9 & Alchemy & OGB–molpcba \\
Metric & MAE (\(\downarrow\)) & MAE (\(\downarrow\)) & AP (\(\uparrow\)) \\
Graphs & 129,433 & 202,579 & 437,929 \\
Nodes per graph & 18.0 & 10.1 & 26.0 \\
Edges per graph & 18.6 & 10.4 & 28.1 \\
Tasks & 12 & 12 & 128 \\
Groups & 3 & 3 & 20 \\
Naive MTL & 0.081 \(\pm\) 0.003 & 0.103 \(\pm\) 0.001 & 27.03 \(\pm\) 0.23 \\
Multi-Gate MoE & 0.079 \(\pm\) 0.003 & 0.100 \(\pm\) 0.001 & 28.38 \(\pm\) 0.34 \\
Forward Select & 0.077 \(\pm\) 0.002 & 0.099 \(\pm\) 0.001 & 28.72 \(\pm\) 0.21 \\
Backward Select & 0.073 \(\pm\) 0.002 & 0.095 \(\pm\) 0.004 & 28.50 \(\pm\) 0.16 \\
**Alg. 1 (Ours)** & **0.067 \(\pm\) 0.003** & **0.090 \(\pm\) 0.001** & **29.73 \(\pm\) 0.12** \\
\end{tabular}
\end{table}
Table 2. Test performance on multitask molecular graph prediction datasets. We compare our approach with MTL optimization methods and feature subset selection methods. We report the averaged result over three random seeds, including the standard deviations.

Thus, the values of \(\tilde{Y}^{(i)}\) are all real values. To further simplify the analysis, we consider a one-layer linear graph diffusion layer as \(f(X,G)=P_{G}X\), where \(P_{G}\) (e.g., the normalized graph Laplacian matrix) denotes the diffusion matrix of the graph neural network, and \(X\) denotes the matrix of node features. We assume that \(X\) is drawn from an isotropic Gaussian distribution, and \(P_{G}\) is full rank. We measure the loss of this GNN against the label vector \(\tilde{Y}^{(i)}\) using the mean squared error (MSE):

\[\ell_{i}(W)=\frac{1}{m}\left\|\tilde{P}_{G}\tilde{X}W-\tilde{Y}^{(i)}\right\|_{2}^{2}, \tag{4}\]

where \(\tilde{P}_{G}\) denotes the propagation matrix restricted to the set of nodes in \(\tilde{X}\). Based on our algorithm, we first show that the relevance score \(\theta_{i,j}\) admits a structure that measures the distance between \(\tilde{Y}^{(i)}\) and \(\tilde{Y}^{(j)}\). When we sample a set of tasks \(S\subseteq\{1,2,\ldots,T\}\) with a total of \(\alpha\) tasks, we average their per-task losses as

\[\ell_{S}(W)=\frac{1}{\alpha}\sum_{i\in S}\ell_{i}(W). \tag{5}\]

**Notations.** We follow the convention of big-O notations for stating the result. Given two functions \(h(n)\) and \(h^{\prime}(n)\), we use \(h(n)=\mathrm{O}(h^{\prime}(n))\) or \(h(n)\lesssim h^{\prime}(n)\) to indicate that \(h(n)\leq C\cdot h^{\prime}(n)\) for some fixed constant \(C\) when \(n\) is large enough.

**Characterization of affinity scores.** Minimizing equation (5) over \(W\) leads to a closed-form solution for \(W\); let us denote it as \(\tilde{W}_{S}\), which we can then plug into task \(i\)'s loss \(\ell_{i}(\tilde{W}_{S})\). We then average the value of \(\ell_{i}(\tilde{W}_{S})\) over the subsets among \(S_{1},S_{2},\ldots,S_{n}\) that include both \(i\) and \(j\). This gives the relevance score \(\theta_{i,j}\):

\[\theta_{i,j}=\frac{1}{n_{i,j}}\sum_{1\leq k\leq n:\,\{i,j\}\subseteq S_{k}}\ell_{i}\big(\tilde{W}_{S_{k}}\big). \tag{6}\]

In the following result, we derive an explicit form of the score \(\theta_{i,j}\). For any matrix \(A\in\mathbb{R}^{m\times n}\), let \(A^{\dagger}\) be its Moore-Penrose inverse.

**Lemma 6.1**: _In the setting described above, let the projection matrix \(\tilde{\Sigma}\) be given by \(\tilde{\Sigma}=\tilde{P}_{G}\tilde{X}\big(\tilde{X}^{\top}\tilde{P}_{G}^{\top}\tilde{P}_{G}\tilde{X}\big)^{\dagger}\tilde{X}^{\top}\tilde{P}_{G}^{\top}\)._
_For any \(1\leq i,j\leq T\), we have that the relevance score \(\theta_{i,j}\) is equal to_

\[\theta_{i,j}=\frac{1}{n_{i,j}\cdot m}\sum_{1\leq k\leq n:\,\{i,j\}\subseteq S_{k}}\left\|\tilde{\Sigma}\cdot\left(\frac{1}{\alpha}\sum_{l\in S_{k}}\tilde{Y}^{(l)}\right)-\tilde{Y}^{(i)}\right\|_{2}^{2}. \tag{7}\]

Proof of Lemma 6.1. From equation (5), \(\tilde{W}_{S}\) is the quadratic minimizer of \(\ell_{S}(W)\), expressed as

\[\tilde{W}_{S}=\left(\tilde{X}^{\top}\tilde{P}_{G}^{\top}\tilde{P}_{G}\tilde{X}\right)^{\dagger}\tilde{X}^{\top}\tilde{P}_{G}^{\top}\Bigg(\frac{1}{\alpha}\sum_{l\in S}\tilde{Y}^{(l)}\Bigg).\]

We have that for any subset \(S\subseteq\{1,2,\ldots,T\}\),

\[\ell_{i}(\tilde{W}_{S})=\frac{1}{m}\left\|\tilde{\Sigma}\cdot\left(\frac{1}{\alpha}\sum_{l\in S}\tilde{Y}^{(l)}\right)-\tilde{Y}^{(i)}\right\|_{2}^{2}, \tag{8}\]

and for any \(1\leq i,j\leq T\), by definition (cf. equation (6)),

\[\theta_{i,j}=\frac{1}{n_{i,j}\cdot m}\sum_{1\leq k\leq n:\,\{i,j\}\subseteq S_{k}}\left\|\tilde{\Sigma}\cdot\left(\frac{1}{\alpha}\sum_{l\in S_{k}}\tilde{Y}^{(l)}\right)-\tilde{Y}^{(i)}\right\|_{2}^{2}. \tag{9}\]

Hence, the proof of equation (7) is completed.

**Block structure of affinity scores.** Next, we show that under a separation condition between the label vectors \(Y^{(1)},Y^{(2)},\ldots,Y^{(T)}\), our algorithm can provably separate the tasks into groups. More precisely, let \(\Sigma=P_{G}X(X^{\top}P_{G}^{\top}P_{G}X)^{\dagger}X^{\top}P_{G}^{\top}\).

**Assumption 6.2**: _Suppose that the label vectors can be separated into \(C\) groups such that:_

* _For any two different_ \(i\) _and_ \(j\) _within each group,_ \(\left\|\Sigma\big(Y^{(i)}-Y^{(j)}\big)\right\|_{2}\leq a\)_._
* _For any two different_ \(i^{\prime}\) _and_ \(j^{\prime}\) _from different groups,_ \(\left\|\Sigma\big(Y^{(i^{\prime})}-Y^{(j^{\prime})}\big)\right\|_{2}\geq b\)_._

Now we are ready to state a structural characterization of the affinity scores \([\theta_{i,j}]_{T\times T}\).

**Theorem 6.3**: _Suppose the features and the label vectors of every task are given based on the setup specified within this section and satisfy Assumption 6.2. Assume that the propagation matrix \(P_{G}\) is full rank. Let \(\epsilon\) be a fixed constant that does not grow with \(n\) and \(m\), and \(\delta\) be a value that is less than one._

_When \(m=O\big(d^{2}\log^{4}d\log^{2}(T\delta^{-1})\epsilon^{-2}\big)\), \(n=O\big(\log(T\delta^{-1})\epsilon^{-2}\big)\), and \(b^{2}-a^{2}\geq O\big(d^{4}\log^{6}d\log^{4}(T\delta^{-1})\epsilon^{-4}\big)\), then with probability at least \(1-\delta\) over the randomness of the training samples, the affinity scores \([\theta_{i,j}]_{T\times T}\) satisfy the following block structure: for any \(1\leq i,j,j^{\prime}\leq T\) such that \(i,j\) come from the same group, but \(i,j^{\prime}\) come from different groups, we have_

\[\theta_{i,j^{\prime}}-\theta_{i,j}\geq\frac{\epsilon}{2}. \tag{10}\]

Our result characterizes the block structure of the affinity scores. The affinity scores are lower within a block of tasks from the same group. On the other hand, the affinity scores are higher for any pair of tasks that cross two groups. Based on this characterization, it is clear that by applying a spectral clustering algorithm over \([\theta_{i,j}]_{T\times T}\), one could recover the group structures specified under Assumption 6.2.
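Lemma 6.1 is straightforward to check numerically. The following self-contained sketch draws a synthetic instance of the linear-diffusion setup and verifies that the loss of the subset minimizer \(\tilde{W}_{S}\) on task \(i\) matches the projection form of equation (8); all matrices, sizes, and the random seed here are arbitrary stand-ins we chose for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, alpha = 50, 5, 3
P = rng.normal(size=(m, m))                    # stand-in diffusion matrix
X = rng.normal(size=(m, d))                    # isotropic Gaussian features
Y = [rng.normal(size=m) for _ in range(alpha)] # labels of the tasks in S

PX = P @ X
y_bar = sum(Y) / alpha
# Minimizer of equation (5) via the Moore-Penrose pseudoinverse.
W_S = np.linalg.pinv(PX.T @ PX) @ PX.T @ y_bar
# Projection matrix Sigma from Lemma 6.1.
Sigma = PX @ np.linalg.pinv(PX.T @ PX) @ PX.T

i = 0
lhs = np.linalg.norm(PX @ W_S - Y[i]) ** 2 / m        # ell_i(W_S), eq. (4)
rhs = np.linalg.norm(Sigma @ y_bar - Y[i]) ** 2 / m   # projection form, eq. (8)
assert np.isclose(lhs, rhs)
```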
For future work, it would be interesting to further strengthen our analysis under relaxed assumptions and to extend it to more general losses and planted models. The complete proof can be found in Appendix A.

## 7. Conclusion

This paper studied multitask learning on graphs and designed a generic boosting procedure that improves MTL by finding related tasks and training them in groups. We first estimate higher-order task affinities by sampling random task subsets and evaluating multitask performances. Then, we find task groups by clustering the task affinity scores. Experiments show that our higher-order task affinity scores predict negative transfers from multiple tasks to one task. On various community detection data sets, our approach improves over previous MTL methods. The theoretical analysis further demonstrates that using our task affinity scores provably separates related and unrelated tasks.

Our work opens up interesting questions for future work. For example, can we incorporate various node or link prediction tasks to enhance community detection in multitask learning? Can we apply recent developments in correlation clustering (Shou et al., 2018; Wang et al., 2019) to determine the number of clusters?

**Acknowledgement.** Thanks to the anonymous referees for providing constructive feedback that led to significant improvement of our work. Thanks to Ruoxuan Xiong, Xiaojie Mao, and Yi Liu for fruitful discussions during various stages of this work. DL is supported by a start-up fund from Khoury College of Computer Sciences, Northeastern University.
2310.00942
Easier Said Than Done: The Failure of Top-Level Cybersecurity Advice for Consumer IoT Devices
Consumer IoT devices are generally assumed to lack adequate default security, thus requiring user action. However, it may not be immediately clear to users what action to take and how. This uncertainty begs the question of what the minimum is that the user-base can reliably be asked to do as a prompt to secure their devices. To explore this question, we analyze security actions advocated at a national level and how these connect to user materials for a range of specific devices. We identify four pieces of converging advice across three nation-level initiatives. We then assess the extent to which these pieces of advice are aligned with instruction materials for 40 different IoT devices across five device classes (including device manuals and manufacturer websites). We expose a disconnect between the advice and the device materials. A stunning finding is that there is not a single assessed device to which all four top pieces of converging advice can be applied. At best, the supporting materials for 36 of the 40 devices provide sufficient information to apply just two of the four pieces of advice, typically the installation and enabling of (auto)updates. As something of a contradiction, it is necessary for a non-expert user to assess whether expert advice applies to a device. This risks additional user burden and proxy changes being made without the proposed security benefits. We propose recommendations, including that governments and researchers alike should declare their own working models of IoT devices when considering the user view.
Veerle van Harten, Carlos Hernández Gañán, Michel van Eeten, Simon Parkin
2023-10-02T07:18:49Z
http://arxiv.org/abs/2310.00942v1
# Easier Said Than Done: The Failure of Top-Level Cybersecurity Advice for Consumer IoT Devices

###### Abstract

Consumer IoT devices are generally assumed to lack adequate default security, thus requiring user action. However, it may not be immediately clear to users what action to take and how. This uncertainty begs the question of what the minimum is that the user-base can reliably be asked to do as a prompt to secure their devices. To explore this question, we analyze security actions advocated at a national level and how these connect to user materials for a range of specific devices. We identify four pieces of converging advice across three nation-level initiatives. We then assess the extent to which these pieces of advice are aligned with instruction materials for 40 different IoT devices across five device classes (including device manuals and manufacturer websites). We expose a disconnect between the advice and the device materials. A stunning finding is that there is not a single assessed device to which all four top pieces of converging advice can be applied. At best, the supporting materials for 36 of the 40 devices provide sufficient information to apply just two of the four pieces of advice, typically the installation and enabling of (auto)updates. As something of a contradiction, it is necessary for a non-expert user to assess whether expert advice applies to a device. This risks additional user burden and proxy changes being made without the proposed security benefits. We propose recommendations, including that governments and researchers alike should declare their own working models of IoT devices when considering the user view.

## 1 Introduction

Currently, users of smart home devices (e.g., smart TVs, home appliances) are expected to configure and maintain these devices [1, 2]. Many national-level initiatives have emerged to advise consumers on how to secure their devices and to check security configurations. There has been a laudable drive to assess users' abilities to secure their smart devices (e.g., [3, 4, 5]). Conventionally, the usability of advice is assessed by conducting a study that tests users' ability to apply it. These kinds of research designs are in-depth, allowing only a small set of advice or recommendations to be tested. Further, usability studies typically test advice which has already been qualified as applicable. In the past few years, new studies have emerged that zoom out and look at the overall landscape of large corpora of security advice in the hundreds [6, 1], some specifically for IoT [7], evaluating properties such as clarity and actionability. Such work can answer questions about the properties of security advice as a phenomenon (and its over-production [6]), though it cannot determine if advocated actions can be applied in specific cases or environments; that can only be done by looking at the advice itself rather than involving users, since involving users is not feasible for thousands of pieces of advice. Between these two areas of work is a large gap which we need to address, especially for IoT security, to determine what it is that we should expect - and encourage - smart home device users to do to sufficiently secure their devices. These recommendations need to function for an enormous variety of devices. Here, we approximate the user experience by looking at support materials for specific devices.
This can scale to a larger set of devices than a classic user study can, while not being restricted to the advice text itself, as the large-scale studies are. To improve our understanding of IoT security advice, with its enormous heterogeneity of devices, we propose a study design that operates in the middle ground. Here we evaluate the current foundation of cybersecurity guidance for IoT devices and assess whether top pieces of advice aimed at the general public align with the properties of IoT devices as described by manufacturers and third-party sources. We select advice from three countries in the top 10 of the Network Readiness Index (NRI) of 2021 [8]: the United Kingdom, the United States, and the Netherlands. We investigate two research questions: (i) What is the baseline of convergent advice to users for securing consumer IoT devices?, and (ii) How does the identified baseline of convergent advice align with the content of support materials that users have access to, such as instruction manuals, videos, and organic search results?

We found divergence in what was advised to users, but also four convergent pieces of advice related to passwords and updates for IoT devices. We explore the degree to which these four pieces of convergent advice can be applied using the support materials for 40 IoT devices across five device classes (Sections 5 and 6). Instead of purchasing each device, we utilized a scalable approach by analyzing resources that average users have at their disposal, specifically the accompanying manual, quick guide, manufacturer websites, instruction videos and organic search queries (as a means to capture features which were not mentioned in these materials). The latter includes 746 browser search results, and 626 YouTube results (212 and 76 from manufacturers, respectively), which were analyzed manually. Our contributions are as follows:

* We assess four pieces of convergent public-level advice as supported by - and prompted by - the available device materials and online resources for a sample of 40 smart home devices. For the convergent pieces of public-level advice that were selected, we find that there is not a single device where we can apply all four pieces of advice. This means that as general advice, it is fundamentally not applicable and not fit-for-purpose, and hence is not correct advice for many devices. This is a finding that device-agnostic user studies and deep-dive, device-specific investigations have not exposed;
* We argue that it is not the case that any effort spent on security is worthwhile if it does not achieve its specific aim. We characterize a contradiction, wherein a user is required to have expertise about their device before they can understand whether security recommendations aimed at helping non-expert users apply to their devices;
* **Scalable Internet-driven analysis of diverse consumer IoT devices.** In exploring our aim of identifying the basic steps that users of consumer IoT devices can follow, we develop and utilize a method for examining available advice in relation to device features. It scales better than user studies and expert examinations with physical devices, while still being able to evaluate the actionability of advice for individual devices, rather than being confined to the advice text itself, as the large-scale studies of security recommendations have been. Given the great diversity of IoT devices, there is an urgency to find approaches that are accessible to a wider range of researchers, which are less costly and resource-intensive.
Here, we use the accessibility of online device advice to map the existence of specific security-related device features. We revisit our research questions in the Discussion (Section 7), including recommendations such as surfacing researchers' and policymakers' own mental models of IoT devices, and a push for consistent terminology to connect high-level advice to device features. We close the paper with Conclusions (Section 8).

## 2 Background and Related Work

Here we outline the security challenges faced by smart device owners, then turn to various lines of research on security advice.

### IoT threat landscape

The adoption of IoT devices in homes is increasing. Using the definition of Silverio-Fernandez et al. [9], we define an IoT device as a context-aware electronic device capable of performing autonomous computing and connecting to other devices for data exchange, in either a wired or wireless manner. Several issues with IoT devices have been identified, such as unauthorized data collection, surveillance, and hacking [10, 4, 11, 12, 3, 13]. Once attackers realized the potential to gain unauthorized access to such systems, they started experimenting with different ways to exploit related vulnerabilities. Numerous malicious scripts, tools, and malware emerged. Malware families such as Mirai and Gafgyt [14, 15, 16, 17] are well-known. Smart devices also have the potential to be co-opted within the home to monitor and control domestic partners, as tech-abuse [18]. Numerous governments have declared baseline expectations and design principles for IoT device security. Nevertheless, these efforts are not immediate, as there are many manufacturers and product types, and a lack of reliable data on the security practices of manufacturers [16].

### Advice as a prompt to take action

The security of consumer devices will continue to require the involvement of end users, for the foreseeable future [19, 6, 20, 21]. It has been observed that end users have a relatively limited understanding, and potentially erroneous or incomplete mental model, of smart devices and their associated data-processing activities [4, 11, 22, 13]. Where device users may appear to 'ignore' security advice, their attitude may actually be rational when factoring in their daily activities, and to what extent manufacturers provide adequate user support [23, 24, 12, 25, 26]. For example, prior to being prompted, only around half of the respondents in a study by Emami et al. [27] expressed privacy or security concerns; this number increased to almost all respondents once prompted about these topics. This suggests that privacy and security could be latent concerns for users and must be prompted in an appropriate way, 'from outside'. Many end users are interested in protecting their devices, but struggle due to lack of knowledge about security risks and protection methods [28, 27]. According to Haney et al. [11], end users perceive they have some responsibility in securing their IoT devices, but to do so successfully requires collaboration with manufacturers and governments/regulatory organizations [29]. This all suggests that users' success in securing their smart devices is to some extent reliant on the efforts of other stakeholders in the wider consumer device ecosystem. The adoption of security practices can be strongly encouraged by media, family, and peers offering cybersecurity advice [30, 31], especially when unfavorable security situations are depicted with relatable people [32, 33, 25].
Nonetheless, confronted with the overload of advice [6] and limits to user time and effort, end users may leave their devices in a less-than-secure state [34, 12, 22]. People have limited time to devote to security, depending on the perceived security benefit of applying a piece of advice [24, 25, 2]. Other reasons for users to reject cybersecurity advice include excessive marketing material, advice not seeming reliable, lack of trust in the advice source, or the user not yet having had a negative experience [29, 35].

### Quality of consumer advice

The quality and effects of the formulation of cybersecurity advice have been explored in several works. As Reeder et al. [1] point out, varying computing contexts make it challenging to derive helpful general advice. There is widespread cybersecurity advice, most of which is found online, resulting in an overload of disorganized advice [6]. Some pages give a sequential list of steps, assuming that the reader has a certain level of technical skill to determine how to perform them. In contrast, other pages may contain so much advice that it becomes overwhelming, making it difficult to know where to get started. As a response to this problem, Ion et al. [12] outline characteristics of sound security advice, while Turner et al. [2] focus more specifically on IoT devices and consider the quality of advice, how well it is written, and the extent to which it can reach end users. In line with this, [1] states that 'general advice' should be: Effective, Actionable, Consistent, and Concise. There, conciseness is discussed regarding the number of pieces of advice a person needs. The applicability of cybersecurity advice, however, is not taken into consideration. This issue is touched upon in only a limited sense [23, 4, 11], noting a lack of available information about the security features of IoT devices and about the provision of advice on cyber hygiene by manufacturers (concluding that manufacturers do not provide a comprehensive manual or support page). Smart home device users have also expressed dissatisfaction with the lack of support [36], struggling to find useful information on either the manual or support page of a device manufacturer to improve the security of their device. Regarding the applicability of advice for consumer IoT devices, [36] demonstrates the impact of uncertainty, with end users checking for a password on their IoT device; if they cannot find it, they still do not know if it exists. As demonstrated by Reeder et al. [1], good advice is not a universal truth as it is highly contextual; some pieces of advice may be effective for some people with particular computing environments, but not for others [36, 34, 29, 6, 1, 37]. In conclusion, there is an overproduction of advice, a diversity of smart home devices, and an expectation for smart home users to take action; there is a gap in qualifying which advice applies to which specific devices (and in what way), and in the extent to which users are supported to determine what they can do to secure their smart devices.

## 3 Public-level Advice - Methodology

This section addresses our first research question, 'what is the baseline of convergent advice to users for securing consumer IoT devices?'. We consider governments and public bodies as acting to provide broadly applicable yet workable advice to the public.

### Data selection

Using the top 10 of the Network Readiness Index (NRI) of 2021, we selected countries that offer specific IoT advice to citizens [8].
These are the United Kingdom, the United States, and the Netherlands (NL) (Dutch-language text is translated by the authors). We selected these three countries as representative of public-facing advice about IoT devices at nation-scale. Countries were also selected based on the authors having collective knowledge of the IoT and regulatory landscape of those countries. Per country, we explored the information provided for IoT devices and documented each distinct piece of cybersecurity advice. The UK and the Netherlands define IoT devices as any device that can be connected to the Internet. The US has an even broader definition, and defines an IoT device as:

US - "Any object or device that sends and receives data automatically through the Internet. This rapidly expanding set of "things" includes tags (also known as labels or chips that automatically track objects), sensors, and devices that interact with people and share information machine to machine. [38]."

### Data analysis

A 'codebook'-style thematic analysis [39] was conducted to compare the pieces of advice with each other. Pieces of advice were discussed at regular codebook meetings within the author team, to identify overlaps and discuss unclear cases. An inductive approach was applied in which, per country, all pieces of advice that were given on their governmental website regarding securing IoT devices were identified. In the next stage, the gathered pieces of advice per country were considered side-by-side by the first author. Where a sufficient overlap was identified between pieces of advice of different governments, these were clustered together as one convergent piece of advice. This process was discussed in iterations with the other authors. For example, the following three pieces of advice were clustered together to formulate the convergent advice "Change default password(s) to new strong password(s)":

UK - "Consider the factory set password a placeholder. You should immediately change it the moment you start using the new device. Otherwise, anyone who previously had access to the factory settings password can access your device." "You should make your passwords as un-guessable as possible for an outsider [40]."

US - "Some Internet-enabled devices are configured with default passwords to simplify setup. These default passwords are easily found online, so they don't provide any protection. Choose strong passwords to help secure your device [38]."

NL - "Wijzig het standaardwachtwoord en stel een sterk wachtwoord in [41]." (_"Change the default password and set up a strong password"_)

## 4 Public-level Advice - Results

Similar to the findings of [1] regarding online security advice, for consumer IoT devices we found a wide spread of security advice. A total of 30 pieces of security advice were uncovered (Table 1). Various definitions for passwords, updates, and the consequences of applying these were observed. For example, within the governmental advice, there were differences between countries in whether it was declared that a device could have one, or more, 'default' passwords. One explanation for this is that each country emphasizes different security aspects. The advice of the US, for example, focuses on securing the router. In contrast, the Dutch advice emphasizes the installation of updates through their campaign "Doe je updates" (_"Do your updates"_). Despite these different focus points, seven pieces of convergent advice were uncovered. We focused on advice about securing individual IoT devices.
Within this we note that although the router serves as a gatekeeper for the network, we do not consider it an IoT device. This focus resulted in putting to one side any converging advice that is not directly applicable to devices, and instead applies to the broader home network, for example, using a password manager and disabling use of the Universal Plug and Play (UPnP) protocol on the router. These pieces of advice can be considered additional measures users can take to secure their IoT devices, which play a vital role in improving the level of security of the IoT ecosystem, but are outside the scope of this research. Because we did not consider advice that obstructed actual use of the device, the advice to only connect the device to the Internet if necessary was also excluded, as that defeats the purpose of using the device securely. As a result, four converging pieces of public-level advice remained:

1. Change the default password to a new strong password.
2. Use different passwords for different devices.
3. Install updates.
4. Activate automated updates/set a periodic reminder in your calendar.

These pieces of advice were the most widely communicated top-down advice for each country and will be referred to as top pieces of advice for the remainder of this paper. An overview of the source text used for each piece of advice can be found in the Appendix. The four general pieces of convergent advice, or themes, are: to change the default password to a new strong password, to use different passwords for different devices, to install updates, and to activate automated updates.

\begin{table}
\begin{tabular}{l c c c}
\hline
**Advice** & **UK** & **US** & **NL** \\
\hline
**DEFAULT CREDENTIALS** & & & \\
Change default to strong password & ✓ & ✓ & ✓ \\
Do not reuse passwords & ✓ & ✓ & ✓ \\
Use a password manager & ✓ & ✓ & ✓ \\
Change default username & & ✓ & \\
**ROUTER** & & & \\
Disable UPnP & ✓ & ✓ & ✓ \\
Change default password router & & ✓ & ✓ \\
Don't use WPS & ✓ & ✓ & \\
Disable remote management & & ✓ & ✓ \\
Change SSID a.k.a. network name & ✓ & & \\
Install a network firewall & ✓ & & \\
Reduce wireless signal strength & ✓ & & \\
Turn off network when not in use & ✓ & & \\
Activate WPA2 & & ✓ & \\
Monitor for unknown device connections & & ✓ & \\
Use the router provided by the ISP & & ✓ & \\
Use router intended for small businesses & ✓ & & \\
\hline
**UPDATES** & & & \\
Install updates & ✓ & ✓ & ✓ \\
Activate automated updates & ✓ & ✓ & ✓ \\
**NETWORK CONNECTIVITY** & & & \\
Only connect device to internet if necessary & ✓ & ✓ & ✓ \\
Use an Ethernet cable instead of Wi-Fi & & ✓ & \\
**OTHER** & & & \\
Create unique accounts for each user & & ✓ & \\
Use multifactor authentication & ✓ & ✓ & \\
Enable encryption features & ✓ & ✓ & \\
Switch off sensors if not necessary & ✓ & & ✓ \\
Switch device off and not leave in standby & & ✓ & \\
Download apps from built-in app. stores & & ✓ & \\
Use antivirus software & ✓ & & \\
Install a firewall for IoT devices & ✓ & & \\
Regularly back up your data & ✓ & & \\
Remove unnecessary services and software & ✓ & & \\
\hline
\end{tabular}
\end{table}
Table 1: Governmental Advice Comparison - The above overview shows each piece of cybersecurity advice found per country. These pieces were grouped together when various government recommendations were deemed to sufficiently overlap. This resulted in certain pieces of advice receiving more than one checkmark.
These top pieces of advice also align with what security experts choose as the most essential advice, as identified in existing IoT research [23] (and, e.g., signposted in recent US government initiatives for smart home device security [42]), but also in general cybersecurity [6, 1]. This further implies that such advice is commonly held, and would be what reaches consumers from various community or expert channels (not just public-level advice). The public-level advice on the websites of the US and the UK gives a summary of the cybersecurity advice they deem essential on one dedicated page [38, 40]. In these pages, words are highlighted that, when clicked, redirect users to webpages that give more specific information about a particular subject. These forwarded pages contain general information, for example, on how to set a strong password aimed at computing devices and online services. The primary goal of the public-level advice of the selected countries is to make devices more resilient against outside threats. The threats that are mentioned are attackers, botnets, or cyber-criminals that try to break into the device with the goal of retrieving personal data, causing damage to the device, or using the device to attack other devices (in the case of a botnet attack). The UK advice also warns about risks of the misuse of personal information by the manufacturer of the device. In upcoming sections, we will further detail how these countries frame the different pieces of advice, and explore what this means for efforts to apply the advice.

### Changing default password

When looking at the piece of advice to change the default credentials, the selected countries generally refer to changing the default password(s) as soon as possible. Only the US also recommends changing the default username. Although all the selected countries use the word _default_ to refer to the password they deem important to be changed, the UK uses this term interchangeably with the words 'factory set password' across separate pages. The latter wording more strongly emphasizes that the advice refers to credentials set during device production. The advice of the US also suggests considering default passwords as already public, since they can be found online. In sum, users are advised to change a password on a device immediately, thereby also implying that there is a password for the device itself. As the main objective of the top pieces of advice is to make devices more resilient against outside threats, this seems to be a password that enables access over a network. What is striking is that the advice of the UK and NL seems to assume that there is one default password per device that needs to be changed, while the US speaks of multiple passwords per device. Similar differences can be observed within the scientific literature, where some works (e.g., [23, 14]) speak in terms of _the one_ default password, while others (e.g., [43, 4, 16]) more generally speak of default passwords. These interpretations suggest that each device has _at least_ one set of default credentials that can be changed, with little clarity as to how a user can be sure that a given password is the one relating to network access and not another one.

#### 4.1.1 Setting strong password

The second piece of advice is to change the default password to a strong password. All selected countries consider a password _strong_ if it contains upper- and lowercase letters, digits, and special characters and does not include personal information.
However, the minimum length differs, varying from 8 to 12 characters. Some advice, such as that of the UK and the US, discourages dictionary words and encourages random strings of letters and digits. In contrast, top pieces of NL advice include examples of sentences with dictionary words.

Footnote 1: Parallel advice in the UK also follows this approach, as at [https://www.ncsc.gov.uk/blog-post/three-random-words-or-thinkrandom-0](https://www.ncsc.gov.uk/blog-post/three-random-words-or-thinkrandom-0). This highlights that there may also be differences in advice available _within_ countries.

### Using different passwords

The reuse of passwords is strongly discouraged by all countries, emphasizing the importance of using 'unique' passwords for devices. While the US frames this in terms of what users should not do, stating not to reuse passwords, the UK and NL emphasize actions that a user should follow: use different passwords for different devices. When changing an existing password, the US and NL directly state to set a strong and unique password, while the UK speaks of creating an 'un-guessable password for an outsider.'

### Installing updates

As for installing updates, the security advice of all countries uses an assertive tone, urging users to 'apply' or 'carry out' (firmware/software) updates or patches as soon as they are available. When the word 'patches' is used, it is always explained that these refer to updates. In the three countries, the advice implies that the primary goal of an update is to improve the security functionality of the device, to protect against outside threats. The UK, for example, notes: "Firmware updates allow manufacturers to install software patches in case a security vulnerability is detected." Taken together, this resulted in the theme 'installing updates to improve the security of IoT devices'. Though updates might include security elements that improve a device's security, this is not guaranteed, as an update could potentially only - or also - include enhancements to features, such as modifying the interface or menus.

### Enabling auto-updates

We generally find that users are instructed to check if a device offers the option to automatically update, and to manually check for updates if this is not the case. Regarding automatic update functionality, wording includes auto-update features, automatic updates, automatic updating, and applying updates automatically. The UK and NL specifically instruct users to enable them, while the US is less direct and states to take advantage of automatic options when available. In the absence of auto-update capability, the US and NL governmental top pieces of advice recommend that end users periodically check for updates. The advice of the UK appears to assume that devices that lack an auto-update feature can be configured to notify users of new updates. Only the NL advice mentions and emphasizes a device's companion app as a way to install updates, and specifies _where_ and _how_ to look for updates. Within the NL advice, it is recommended, for example, to first consult the manual to check if the device is supported with updates (walking users through checks for enabling automatic updates through an accompanying app). Where devices do not support automatic updates, it is recommended to check the settings of the companion app for update notifications, and if these are not there, to check the manufacturer's website.
When details concerning update support are lacking, it is recommended to contact the manufacturer directly. Although the US advice covers "How do I set up automatic updates," the accompanying information similarly relies on users investigating features, such as "Turn on and confirm automatic updates" and "How you turn on automatic updates can differ depending on the software and the device," without going into more detail.

## 5 Device Materials - Methodology

Here we explore our second research question, 'How does the identified baseline of convergent advice align with the content of instruction manuals and other related materials for the selected consumer IoT devices (as signals, or prompts) as provided - or not - by manufacturers?' We draw in the 'top pieces of advice' from the advice provided by public-level bodies in the previous section.

### Data selection - devices and materials

In this stage, the goal is to test the presumption that the top pieces of advice align with available IoT device features. We employ a method that utilizes information sources that end users can access, in place of a physical inspection of each actual device. We chose to rely on device documentation and Internet searches, rather than the costly and non-scalable alternative of directly purchasing each and every device, because IoT devices continue to proliferate into evermore product types and designs. As a field, we need to explore approaches that allow us to move with this proliferation. Requiring physical access to a device for analysis would severely restrict research and pose problems of generalization. Besides this scaling problem, approaches based on physical devices have their own drawbacks. Cognitive walkthroughs, for example, where the researcher steps through device features, have issues with ecological/external validity [44] in terms of asserting to represent the user journey [45]. Our work targets a middle ground, or precursor step, to identify signals that qualify public-level advice as applicable to specific devices. Our approach includes third-party sources via YouTube and web search - YouTube and top search content often respond to user needs (for missing information about devices) or relate to actual users documenting their experiences of device use (filling in gaps in support). We aim to strike a balance by systematically exploring resources that users would have access to, bridging the top pieces of advice and device functionality. Because the emphasis of our study is on advice to consumers, we restricted our choice of IoT devices to those utilized within a domestic setting. We selected 40 popular IoT devices across five categories, to ensure a diverse selection of commonly used IoT devices: smart entertainment, smart health, smart security, smart assistants, and smart home appliances. The release dates of the devices ranged from 2015 to 2021. Per category, we selected the devices from brands that were sold in all three countries, and listed in the top lists of frequently purchased devices per device category on the websites of popular retailer companies - Amazon, CoolBlue, and Bol.com - since these will have most of the market share.

Figure 1: The examination methods utilized in this paper. The two flow charts illustrate the analysis process for each source consulted for each IoT device. Keywords were used for the loop in the first chart and search queries for the second chart.
Although we examined websites available in the Netherlands, the devices we cataloged are produced by international brands, and are available in many countries. To generate an overview of known device functionality, we first consulted the manual and quick guide if available (not all devices had a 'quick guide', for example). We also consulted the website of the manufacturer and their YouTube channel (if available). We do not assume that all features are documented in the provided materials - we also used a browser search for third-party search results (specifically the Google search engine) and YouTube, as manufacturer materials may not include all pertinent information about the functionality of a device. These third-party sources consisted of blogs, vlogs, forums, retailer- and news-websites, and YouTube channels. For the online sources, we used the following search queries to find more information about default passwords and how to change them, drawn from Section 4:

1. $DEVICE_NAME default password
2. $DEVICE_NAME factory set password
3. $DEVICE_NAME changing the default password
4. $DEVICE_NAME changing the factory set password

Here, $DEVICE_NAME is replaced by one of the 40 devices, of which an overview can be found in Table 6. To find more information about the extent to which devices are supported by updates, we used the following search queries (for search (2), this included where the search engine resolved the term to 'automatic'):

1. $DEVICE_NAME updates
2. $DEVICE_NAME auto updates
3. $DEVICE_NAME security updates

### Data collection - accompanying material and online platforms

Data was gathered between December 2021 and the end of March 2022. We searched for videos on YouTube, for search results in the form of webpages through Google browser search, and on the website of each manufacturer; the first five results were checked for each search query when possible. In the case of using the search engine on the manufacturer's webpage (if there was one), rarely more than one result appeared per search query. Interestingly, on many occasions, more search results showed up for manufacturer websites through a Google browser search than when directly using the search engine on the manufacturer's website. Accounting for overlapping results, the total number of unique search results on Google was 746, and 626 on YouTube. 212 of the search results on Google and 76 on YouTube were websites and videos created by the manufacturers directly. The remainder contained websites and YouTube channels from third parties. Within the online manuals, quick guides, written transcripts of YouTube videos, and webpages, we searched for the existence of the keywords "password" and "update." When these keywords were found, the surrounding text was read in order to understand the context and significance of these keywords. See Figure 1 for an overview of the analysis process. One author followed the same approach of codebook thematic analysis [39] as for the resources of the top pieces of advice, involving coding and regular discussion, toward determining whether resource content supported one of the four pieces of advice. For codebook-style thematic analysis, one coder is sufficient [39, 46].
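As an illustration of this pipeline, the sketch below shows how the query expansion and keyword scan could look in code. It is a minimal sketch under stated assumptions: the function and variable names are hypothetical, fetching the materials is out of scope, and the study's actual reading of keyword contexts was done manually by a coder - this only surfaces keyword hits for human review.

```python
# Hypothetical sketch of the per-device analysis loop: expand the Section 5.1
# query templates for a device, and scan fetched material text for the
# keywords "password" and "update" so a human coder can read the context.

QUERY_TEMPLATES = [
    "{device} default password",
    "{device} factory set password",
    "{device} changing the default password",
    "{device} changing the factory set password",
    "{device} updates",
    "{device} auto updates",
    "{device} security updates",
]
KEYWORDS = ("password", "update")

def build_queries(device_name: str) -> list[str]:
    """Expand all query templates for one device."""
    return [t.format(device=device_name) for t in QUERY_TEMPLATES]

def keyword_contexts(text: str) -> dict[str, list[str]]:
    """Collect the sentences around each keyword, for manual review."""
    hits = {keyword: [] for keyword in KEYWORDS}
    for sentence in text.split("."):
        lowered = sentence.lower()
        for keyword in KEYWORDS:
            if keyword in lowered:
                hits[keyword].append(sentence.strip())
    return hits

if __name__ == "__main__":
    for query in build_queries("Reolink doorbell"):
        print(query)
    sample = "Open the companion app. Tap Update to install updates. Then set a new password."
    print(keyword_contexts(sample))
```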
## 6 Device Materials - Results

We first checked which resources were provided by the manufacturer, finding that all devices in our search provide some form of documentation, such as a manual (31 devices), quick guide (15), or both (6) (see Table 2). The texts were first examined using the terms _default password_ or _factory-set password_, as based on the top pieces of advice, but in none of the cases were these terms used. Interestingly, it was not uncommon to find webpages on manufacturer websites via the Google search engine (results which did not appear when using the search engine on the manufacturer website directly). A supporting overview of device coverage of advice is provided in the Appendix, Table 6.

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
**INFO DP APPLICABLE** & & & & & & & \\
Changing DP to strong DP & 0 (0) & 0 (0) & 0 (0) & 0 (0) & 0 (0) & 0 (0) \\
**INFO PWD REUSE** & & & & & & & \\
Discourage reuse PWDs & 0 (0) & 0 (0) & 0 (0) & 1 (1) & 0 (0) & 0 (0) & 0 (0) \\
**INFO DP NOT APPLICABLE** & & & & & & & \\
Changing to strong PWD & 0 (0) & 0 (0) & 0 (0) & 0 (0) & 0 (0) & 0 (0) \\
Changing DP & 0 (0) & 0 (0) & 0 (0) & 0 (0) & 2 (2) & 4 (5) & 7 (11) \\
Changing/setting a PWD & 2 (2) & 2 (2) & 7 (7) & 2 (4) & 15 (32) & 8 (13) & 11 (12) \\
Mentioning DP & 0 (0) & 0 (0) & 4 (4) & 0 (0) & 1 (1) & 1 (1) & 2 (3) \\
Mentioning PWD & 0 (0) & 10 (10) & 14 (15) & 5 (6) & 20 (44) & 15 (17) & 29 (68) \\
**NO INFO PWDs** & & & & & & & \\
No info on PWD & 13 (13) & 19 (19) & 0 (0) & 17 (32) & 37 (203) & 28 (57) & 36 (150) \\
Info diff. device/PNF & 0 (0) & 0 (0) & 40 (136) & 1 (5) & 11 (18) & 4 (5) & 12 (22) \\
\hline
**TOTAL UNIQUE SOURCES** & 15 (15) & 31 (31) & 40 (162) & 20 (48) & 40 (300) & 38 (98) & 40 (266) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Default Password (DP) Results Overview - The table provides an overview of the number of unique devices (max. 40) that are accompanied by informative material on passwords (PWD) per source category. The number of sources is listed in parentheses. The rows under "INFO DP APPLICABLE" and "INFO PWD REUSE" indicate the number of sources that have enough information to follow advice 1 and 2, which are to change default passwords to strong passwords and discourage password reuse. No sources for any device support the first piece of advice, and only one source (from the manufacturer's YouTube channel) for one device supports the second. The rows under "INFO DP NOT APPLICABLE" and "NO INFO PWDs" show sources that do not have enough information to follow the advice and are included to give insight into the availability of information on (default) passwords and the frequency of lack of information to apply the pieces of advice. For example, none of the quick guides for the 40 devices mention default passwords at all.

### Support for changing default password

The term _default password_ was only used for four devices on manufacturers' websites when using their search engine (Table 2, third column, 'Mentioning DP'). Third-party results, such as YouTube channels and Google search results which are not from the manufacturer, tended to include more information on default passwords, although still for a minority of devices (10).

Footnote 2: Although Table 2 shows 12 devices for third-party results that contain information about (changing) default passwords, two of these contained information on both YouTube and Google, which brings the total of devices that refer to default passwords to 10.

#### 6.1.1 Potential for many default passwords

The phrase _factory-set password_ did not appear in our examination, and the results that did appear primarily included information on factory-resetting devices.
When a default password was mentioned, it could also refer to the default password on the Wi-Fi router, as was the case for, e.g., the Ring doorbell (Smart Security) and the Bose smart speaker (Smart Assistants). Overall, Wi-Fi routers played a vital role within the provided resources, as the Wi-Fi password was the most mentioned password in our dataset. The primary goal of the documentation, when passwords were mentioned, was connecting the device to the Internet rather than securing the network. There are then challenges in determining whether the default password for securing use of a device on the network is regarded as even being on the device itself.

#### 6.1.2 Qualifying a password as the default password

For none of the devices that mention a default password is more than one default password mentioned. This finding did not change when including the YouTube and Google results from third parties. Even with the sources mentioning a default password, it was still not possible to qualify the advice. It was, for example, not clear if it was possible to set a strong password for devices other than those that only accept numerical passwords. Furthermore, for the sources where the term 'default password' was used, the instructions on how to change it were not always included, as shown in Table 2 in the categories referring to "Info DP". In these cases, the information was mostly limited to indicating what device the default password is for, for when someone forgets the password or has had to reset the device. Mostly, when a default password was mentioned - and in cases where no further instruction on how to change it was provided - it referred to a password on the device without declaring related security functions or benefits.

#### 6.1.3 Having multiple default passwords for securing a device

Our analysis shows that a great variety of passwords can be changed, which are mostly not referred to as the 'default' password. An encouraging example of a manufacturer recognizing this challenge is the Reolink doorbell (Smart Security), which dedicates a webpage to describing the distinctions between the password for the Reolink App, the Reolink Client, and the Reolink Cameras. The password(s) that can be changed on devices may still represent one or several default passwords even when the phrase "default password" is missing. In these cases, any password on the device could or could not be a default password, making it difficult to determine whether it is possible to change the default password as advised, as shown by the Reolink example. On top of that, even when the wording 'default password' is used, it does not necessarily provide security benefits against outside threats, as in the cases where default passwords refer to a 4-digit parental code (as for two smart TVs, the Samsung UE49MU8000 and the LG UHD TV 43UP80 (Smart Entertainment)). As another example, the default password of two smart TVs only allowed for setting a 4-digit code. Even if it had been possible to set a strong password in these instances, it still would not protect end users from the outside threats that were described in the top pieces of advice sources. This is because these kinds of passwords, as a class of parental controls, only protect against the change of TV settings by unauthorized members within the home, and are not associated with the root control of the device (which could potentially be exploited over the network).
#### 6.1.4 A difference between strong and stronger passwords

If the existing password did not meet the recommendations for a strong password, the advice to set a strong password can provide security benefits, but only if the device supports these requirements. However, as shown in Table 2, we did not find a single resource confirming that it is possible to set a password that complied with the requirements of the top pieces of advice for setting a strong password. Sources providing information on setting or changing a password, not limited to digits only, indicate a requirement of a minimum number of characters, allowing letters and digits to be set. For eight devices it was mentioned _how_ to change the default password (see row "Changing DP" in Table 2). However, this did not mean it could be changed to a password that contained upper- and lowercase letters, digits, and special characters (as often advised; see subsubsection 4.1.1). For the NVIDIA media player (Smart Entertainment), an explanation of how to change the default password was found, but it was not made clear if it was possible to apply the criteria of the top pieces of advice for setting a strong password. This marks the difference between being able to make a password sufficiently strong, and making it only as strong as the device allows it to be. In some cases, it is mentioned to use upper- and lowercase letters, but never in combination with the use of special characters. The latter is even actively discouraged when creating a Govee smart light (Smart Entertainment) user account. They note, "Passwords should be 8-20 characters using both letters and numbers only. Do not include special characters or symbols."

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
**INFO UPDATES APPLICABLE** & & & & & & & \\
Manually installing updates & 1 (1) & 6 (6) & 4 (6) & 5 (5) & 16 (31) & 8 (9) & 18 (21) \\
Enabling/forced auto-updates & 0 (0) & 6 (6) & 5 (6) & 1 (1) & 9 (16) & 12 (17) & 9 (28) \\
**INFO UPDATES NOT APPLICABLE** & & & & & & & \\
Info about auto-updates & 0 (0) & 1 (1) & 2 (2) & 0 (0) & 3 (4) & 8 (13) & 16 (20) \\
Contains info about updates & 1 (1) & 1 (1) & 6 (7) & 1 (1) & 21 (45) & 13 (26) & 34 (83) \\
**NO INFO UPDATES** & & & & & & & \\
No info about updates & 13 (13) & 17 (17) & 0 (0) & 18 (31) & 38 (207) & 26 (48) & 30 (109) \\
Info diff. device/PNF & 0 (0) & 0 (0) & 28 (33) & 1 (1) & 11 (26) & 4 (7) & 11 (26) \\
\hline
**TOTAL UNIQUE SOURCES** & 15 (15) & 31 (31) & 40 (54) & 21 (39) & 40 (329) & 38 (120) & 40 (287) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: (Auto-)Update Results Overview - The table provides an overview of the number of unique devices (max. 40) that are accompanied by informative material, or not, on (auto-)updates per source category. The number of sources is listed between parentheses. The rows under "INFO UPDATES APPLICABLE" indicate the number of sources that have enough information to follow advice 3 and 4, which are to manually install updates and to enable auto-updates. It is worth noting that all devices have at least some sources that mention updates, but that the majority of sources do not provide information on how to install or enable (auto-)updates. The rows under "INFO UPDATES NOT APPLICABLE" and "NO INFO UPDATES" show sources that do not have enough information to follow the advice and give insight into the availability of information on (auto-)updates and the frequency of lack of information to apply the pieces of advice.
The Anova smart oven (Smart Home Appliances) and the Withings smart scale (Smart Health) require lowering the router's security, as these can only connect to it if the router's password lacks special characters. It can be concerning that all devices seem to allow, and some even force, setting a password that does not meet the requirements of the top pieces of advice. This permission for the use of weak passwords is, unfortunately, a finding that is not uncommon (see also [47]).

### Support for using unique passwords

Regarding the use of unique passwords, only in the case of the Reolink doorbell (Smart Security) is there one YouTube video, from the official channel of Reolink, where the reuse of passwords is actively discouraged (see Table 2). This shows that the theme of passwords as a way to protect against unauthorized access is shared to some extent by manufacturers.

#### 6.2.1 Attempt to apply password advice scenarios

Taken together, the range of advice we observed in relation to changing the default password to a new strong password could, in practice, translate to the following scenarios:

1. The end user changes a default password with security benefits against outside threats. Of the 6 manufacturers where a default password was mentioned in their material, 3 could potentially offer security benefits against outside threats; however, it was not clear if these could be changed, let alone to a strong password.
2. The user changes a password called the _default password_ by the manufacturer that does not add any security benefits regarding online threats, which was the case for 2 manufacturers ('default password' was a parental PIN code).
3. There is no information about a default password, which was the case for 30 devices, so the end user does nothing.
4. There is no information about a default password, so the end user changes a password that they can find for the device, which does not confer security benefits against outside threats. This would be possible, especially if the device has multiple accounts associated with it which each have a password or PIN, where potentially only one confers the security benefits described in the top pieces of advice.
5. There is no information about a default password, as was the case for 75% of the analyzed sources; the end user changes a password that does have security benefits.
6. There is no information about a default password, so the user changes a password for another device, such as changing the Wi-Fi password on the router. At best, this could inadvertently result in a security benefit.
7. There is no information about a default password, but the end user searches for one anyway under the assumption that there is one. This may leave the user in an ambiguous state, either satisfied that they have checked, or concerned that they have not been able to secure the device.

Scenario 2 above could potentially lead to end users feeling they have applied the advice while expecting additional security benefits, even though this is not the case, leaving them feeling more secure than they are. Alternatively, in Scenarios 3-6, where end users cannot be sure that the advice applies to their device because the information is missing, it is much less assured that the intended security benefit will happen, appearing more like a 'folk' security behaviour [48]. The opposite is also possible, where end users can feel less secure when not able to apply advocated advice. There is existing evidence of the potential for users to be unsure of the security features and benefits provided by their IoT devices [3].
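To make the gap between advice and device policies concrete, the following is a minimal sketch of a checker for the public-level 'strong password' criteria from subsubsection 4.1.1. The encoding of the criteria is ours, the 12-character minimum is an assumption (the surveyed advice varies between 8 and 12 characters), and no check for personal information or dictionary words is attempted.

```python
# Hypothetical checker for the convergent "strong password" criteria:
# upper- and lowercase letters, digits, special characters, and a minimum
# length. MIN_LENGTH = 12 is an assumption (advice varies from 8 to 12).
import string

MIN_LENGTH = 12

def meets_public_advice(password: str) -> bool:
    return (
        len(password) >= MIN_LENGTH
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

# A device policy of "letters and numbers only" (as for the Govee account)
# can never satisfy these criteria, since special characters are disallowed:
assert not meets_public_advice("Letters4ndNumbers0nly")
assert meets_public_advice("Upper-lower-2digits!")
```

Such a check passes or fails on the password alone; as the scenarios above show, the harder problem for users is knowing which of a device's passwords the check should even be applied to.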
### Support for installing updates

When checking the quick guides of devices, only the Garmin smart scale (Smart Health) and the Wink smart home hub (Smart Assistants) mentioned update support. The first showed what the icon looks like when the device is installing an update and when an update was successful or had failed, while the other briefly described how to update the app. Although still low, consulting the manual offered a higher chance of finding information about the provision of updates, as this was provided for 15 devices. Including manufacturers' webpages and YouTube channels resulted in a stark increase to 34 devices that offer update support, which grew further to all devices when including non-manufacturer webpages and YouTube channels. However, this information was not always easy to find, as seen in Tables 3 and 5. Similar to Table 2, most device materials in Table 3 were not aligned with the top pieces of advice, demonstrating that features existed for devices but were not adequately documented. There were significant differences between device categories regarding the amount of sufficient information provided. It was, for example, hard to find information on updates for smart home appliances compared to smart assistants (see Table 4). In cases where update information was found, it was not always sufficient to apply the advice to manually install updates. In other cases, information on manually installing updates was missing or the task was not intended for the user to do, as with the LG smart refrigerator (Smart Appliances), where a YouTube video showed that only LG Authorized Service could update the software. The purpose of an update is broader than is the case for a password; updates primarily serve to repair, improve, or add functionalities. The general assumption in the advice is that security issues are fixed through updates. We therefore also checked for the mention of security updates. Similar to [23], 11% of the results mentioned security updates, usually limited to disclosing a patch or update that was applied.

### Support for enabling auto-updates

For devices where we found information on security updates/security patches (as a subset of our sample, as in Table 5), information about auto-updates was also provided in half of these cases. Most manufacturers encourage enabling automatic updates with the promise that they will improve the device's performance. Only for the Samsung smartwatch (Smart Health) was it clearly stated that automatic update support is not provided (which would help to reduce uncertainty, a noted user concern elsewhere [49]). The promise to make the device more secure is also mentioned, but less frequently. This led to the development of the fourth theme: manufacturers mostly present updates as a way to improve the functionality and usability of IoT devices. This mirrors findings elsewhere [4], wherein users appear not to relate device updates to security functionality. The top pieces of advice imply that users need to enable automatic updates manually. However, 12 of the 16 devices seem to update themselves by default when connected to the internet (the remaining four mention that it can be switched on). This automation translates the advice from enabling updates to there being no need for users to take action (which could be just as useful to know).
This minimization of end user involvement connects to the ethical considerations described by van Steen [50], as a decision is made by the manufacturer that is not always (clearly) communicated to users and restricts their freedom to make their own choices. For devices where it is clear that updates are separate for the device and the companion app, this does not imply they both support automatic updates by default. The Google smart doorbell (Smart Security), for example, updates automatically by default, whereas the accompanying app does not.

## 7 Discussion

Returning to our first research question, we identified four themes of converging public-level advice for consumer IoT devices: change default passwords to strong passwords; use different passwords for different devices; install (manual) updates; and enable automated updates. Within our limited sample, the terminology was inconsistent across advice sources, but with overlap on some points. When considering our second question and the content of manufacturer materials, the current way that pieces of general advice are formulated may seem reasonable at first glance, but does not connect with the features indicated by the manufacturer-provided information for devices. This requires the user to infer there is a connection, where relying on some existing knowledge of terms and what they mean has its shortcomings [1]. For example, for some devices not every credential could be used to log into the device via the network. This misses opportunities to leverage _prompts_ [51] during device configuration to increase security. Update features existed for all examined devices (as determined in our broader search of third-party sources), but were barely documented by manufacturers, with an 11% likelihood of discovering this information inside our dataset when only focusing on manufacturer-provided material (see Table 3). In device materials, there is then a lack of explicit declaration of the existence (or not) of security features. Combining our two research questions, informing our aim of identifying _basic non-trivial advice_ for consumer IoT users to follow, the public-level advice that we examined mostly could not be directly related to our set of selected (popular) devices, meaning that it is not targeted, correct advice for increasing the security of the device - features are often not there, not confirmed, or not mentioned. Based on the sources analyzed, none of the devices seem to support all four top pieces of advice at once. Device materials rarely mentioned passwords, relying instead on software/apps to provide just-in-time prompts and signaling for both the purpose of, and expectations for, password security, in the absence of explanation elsewhere.

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline
Smart Entertainment & 0 (0) & 4 (4) & 3 (2) & 2 (2) & 16 (5) & 4 (3) & 6 (5) \\
Smart Health & 1 (1) & 3 (3) & 1 (1) & 1 (1) & 7 (2) & 3 (3) & 6 (4) \\
Smart Security & 0 (0) & 2 (2) & 2 (2) & 1 (1) & 9 (4) & 6 (6) & 7 (4) \\
Smart Assistants & 0 (0) & 1 (1) & 6 (4) & 1 (1) & 14 (7) & 11 (6) & 27 (7) \\
Smart Home Appliances & 0 (0) & 2 (2) & 0 (0) & 1 (1) & 1 (1) & 2 (1) & 3 (3) \\
\hline
**TOTAL UNIQUE SOURCES** & 1 (1) & 12 (12) & 12 (9) & 6 (6) & 47 (19) & 26 (19) & 49 (23) \\
\hline
\end{tabular}
\end{table}
Table 4: Applicable (Auto-)Update support per IoT device category - The above table represents the distribution of sources per device category that support the applicability of the advice to install or enable auto-updates. The number of unique devices (max. 40) to which these sources apply is listed between parentheses.
Compounding these issues, terminology discrepancies also existed between the top pieces of advice for passwords and the device materials. For example, where 'default' passwords were mentioned in device materials, this could also refer to the router/Wi-Fi password. Consequently, in many cases there is no direct route from public-level advice to device materials to user actions.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
1. All sources make mention of updates. & 0 & 0 & 4 & 0 \\
2. More than half of the sources make mention of updates. & 2 & 3 & 10 & 0 \\
3. Half of the sources make mention of updates. & 5 & 2 & 5 & 0 \\
4. Less than half of the sources make mention of updates. & 1 & 0 & 1 & 0 \\
5. Only one of the sources makes mention of updates. & 5 & 1 & 0 & 1 \\
\hline
**TOTAL DEVICES** & 13 & 6 & 20 & 1 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: (Auto-)Update Effort Score - For each source it is checked if and what information was provided regarding update support, using the manufacturers' and non-manufacturers' sources for each device. All devices contain at least one source that provides information regarding update support; however, for one device, the kind of update support, whether manual, automatic, or both, was not provided.

### Acknowledging the mismatch

Current advice bodies offer generic pieces of cybersecurity advice that, despite the IoT environment being highly diverse, convey a sense that IoT devices are alike. It is not that the advice loses detail because it is generalized [1], but because it is _selectively applicable_; by not considering the diversity of IoT devices, there is a lack of assurance and signaling whether advice is _relevant to specific devices_ before it is applied; this is left to the user. The inconsistent applicability of IoT advice to devices sits alongside it appearing to be applicable where it is not. This results in an intervention that works for devices where it applies, but has a different effect - or potential unintended side-effects [52] - for devices which the advice does not match, in terms of whether those features exist for those devices. The reliance on users to qualify IoT advice also brings unintended side effects [53]. Government advice is not 'wrong' for compliant devices, but our analysis suggests such devices are, by far, in the minority. _Proxy changes_ [52] could mean that 'some' security was improved, e.g., a child-protection PIN as we saw for some devices, but not security against network-based attacks. The burden appears to be on the user to determine - to almost know in advance, despite being assumed to be non-experts - which specific devices the advice applies to. Any users of the 40 devices in our study would need to somehow know enough to decide - while lacking confirmatory information - that at least two and often three of the top pieces of advice do not apply to their device (as we found in our sample). For example, there may be multiple passwords or not, of which one or more may be a 'default', relating to network accessibility or not, with a capacity to be of variable attainable strength. Recent research already highlights users being in a gulf between assuming features do not exist and not being aware of them (as with smart device updates [4, 5]), with advice not being specific enough to find the relevant feature. Transparency in how features work can inform such user decisions around smart device security features [54].
For instance, users whose devices rely on manual updates may not know that they do [55], assuming instead that update installation is automated. Prior work has also evidenced that users who assume a device has a password may explore a range of sources and still not be able to determine if a password feature exists [36]. There is a mismatch between current, diverse devices and a future-looking regulatory regime (e.g., the EU RED [56], and also, e.g., the UK Code of Practice for Consumer IoT [57]). Further, there is a lack of consideration for the impact on user behaviors while generalized IoT advice and device features remain out of alignment.

### Limitations

Our choice to analyze support materials, rather than the physical devices themselves, has certain pros and cons. A downside is that we did not verify first-hand the features of the selected devices, or infer their features through companion apps. Some features may be brought to the user's attention through just-in-time notifications on the companion app, for instance; however, this would only emphasize the reliance on the user to realize the relevance of any security prompts that exist, for lack of signaling from support sources. There is a difference between a device having particular features or not, and whether this is mentioned clearly in any associated instruction materials. We chose to rely on device documentation and Internet searches, rather than the costly and non-scalable alternative of directly purchasing each and every device. Actual devices could be assessed by the researcher, but related approaches such as cognitive walkthroughs have their own issues with ecological validity [44, 45] and consistency in mapping device features.

### Recommendations

Based on our findings, we arrive at the following recommendations:

* **Researchers / Policymakers: surface your own mental models of device features as well as those of users.** Significant progress has been made in understanding users' perceptions of device functionality, for instance, in exploring the IoT threat models perceived by users [3]. In our examination of public-level advice and device materials, we found a disparity: top pieces of advice appeared to assume that features exist, which we then did not find evidence of for all of our selected devices (as in Table 2 and Table 3). This suggests a generic model of a consumer IoT device that has mostly escaped scrutiny, as the kind of 'common standard' that, e.g., Blythe et al. [23] anticipate or expect. It is important to document assumptions about the functionality of devices that advice is being provided for and that users interact with, as part of data-gathering. For instance, a user may struggle to find the default password on a device not because of a lack of security knowledge, but because the feature is not provided for their device(s) (as we found for many devices, see Table 2). Steps in this direction are being made, e.g., documenting participants' devices [3, 13]. We must connect devices to specific responses from users (and researcher assumptions of device capabilities), to understand if users are struggling to use a feature because of its difficulty, or its absence.
* **Policymakers: match advice to groups of devices.** It may be possible to identify distinct classes or groups of devices which are at the very least _more likely_ to have the features that advice refers to. This would act as a shortcut that removes the need for users to determine if their device would benefit from the advice.
We found, as in Table 4, that, e.g., information on (automatic) update support was documented more for smart assistants than for smart home appliances.

* **Manufacturers: declare the existence of security features.** We posit that if a security feature is included in a device, it should be made known that it exists and how it works. As demonstrated by the results in Table 2, when a default password was mentioned, information on how to change it was lacking. Also, instructions for installing updates or setting auto-updates were not always provided (Tables 3 and 5). The key is not to assume that users will know about the existence of features (irrespective of knowing how to use them). This relates to encouraging manufacturers to be more open about device functionality [49], but ideally in a way that also conveys security to customers (so they can both use and trust the device). Act first with an assumption that the feature is new to the user and that concrete steps are needed [58], rather than it being familiar [59, 49].
* **Policymakers / Manufacturers: consistent, approachable terminology.** We found, for example, that 'default password' had different meanings and functionalities depending on the type of device (see subsection 4.1), and that the advice offered by the studied countries differed in the number of credentials a device may use. It would benefit the disparate activities of researchers, policymakers, and manufacturers to have a narrower body of terminology that links advice to device materials and features, as a two-stage process of advising users. Prior work identifies this as a manufacturer responsibility [60], where there is also a role for policymakers in ensuring the applicability of advice terminology to device features.

## 8 Conclusion

We identified four pieces of convergent advice to users across three representative countries (the UK, the US, and the Netherlands). No device was found to which all four pieces of converging governmental advice could be applied, suggesting that the advice was developed with an exemplar device in mind which, at best, is not like the majority of devices on the consumer market. These findings question the value of high-level advice campaigns by governments or industry. Broadly speaking, future research will address these issues by exploring the degree to which users can be provided with more specific cybersecurity advice for their IoT devices, while also considering the requirements this places on other stakeholders, such as policymakers maintaining advice and manufacturers of specific devices.
2307.12989
Physics-Driven Cost Optimization and Advanced Research & Development (R&D) Strategies for Small Modular Reactors (SMRs) in Leading Nuclear Energy Nations
Small Modular Reactors (SMRs) are compact nuclear power plants that offer various advantages for sustainable energy generation. With their smaller and simplified designs, SMRs provide increased flexibility, lower capital costs, and enhanced safety features compared to traditional large-scale reactors, while drastically reducing the required site area. This paper explores the development and potential of SMRs in leading nuclear energy nations, including the United States, India, Canada, China, and Russia. Along with the physics behind them, the cost optimization and advanced research & development (R&D) strategies employed to enhance the performance, safety, and financing of SMRs are discussed here. By reviewing case studies, cost reduction potentials, and technological advancements made, this study examines the significant role of numerous factors in shaping the future of SMRs. The content presented in this research paper will not only contribute to the scientific understanding of SMRs but also provide valuable insights for policymakers, stakeholders, and researchers in advancing sustainable nuclear energy solutions. Keywords: Small modular reactors (SMRs), nuclear energy, cost optimization, modularity, performance, safety, economic viability, technological advancements.
Rashid. Momin
2023-07-13T13:01:32Z
http://arxiv.org/abs/2307.12989v1
Physics-Driven Cost Optimization and Advanced Research & Development (R&D) Strategies for Small Modular Reactors (SMRs) in Leading Nuclear Energy Nations.

###### Abstract:

Small Modular Reactors (SMRs) are compact nuclear power plants that offer various advantages for sustainable energy generation. With their smaller and simplified designs, SMRs provide increased flexibility, lower capital costs, and enhanced safety features compared to traditional large-scale reactors, while drastically reducing the required site area. This paper explores the development and potential of SMRs in leading nuclear energy nations, including the United States, India, Canada, China, and Russia. Along with the physics behind them, the cost optimization and advanced research & development (R&D) strategies employed to enhance the performance, safety, and financing of SMRs are discussed here. By reviewing case studies, cost reduction potentials, and technological advancements made, this study examines the significant role of numerous factors in shaping the future of SMRs. The content presented in this research paper will not only contribute to the scientific understanding of SMRs but also provide valuable insights for policymakers, stakeholders, and researchers in advancing sustainable nuclear energy solutions.

_Keywords_: Small modular reactors (SMRs), nuclear energy, cost optimization, modularity, performance, safety, economic viability, technological advancements.

## Introduction:

Small Modular Reactors (SMRs) have emerged as a promising solution for sustainable and efficient nuclear energy generation, offering a chance to replace traditional 3\({}^{\mathrm{rd}}\)-generation reactors. These nuclear power plants offer numerous advantages, such as a smaller site footprint, enhanced safety features, greater flexibility in deployment (as they can be factory-made), and potentially lower capital costs compared to traditional large-scale reactors. Harnessing the power of physics and employing advanced research and development (R&D) strategies are integral to optimizing SMR designs, improving their performance, and achieving the cost efficiency needed for sustainable investments. This research study aims to explore the physics-driven cost optimization and advanced R&D strategies for SMRs in leading nuclear energy nations, with a wide focus on enhancing the economic feasibility, safety, and performance of these innovative and versatile reactors by leveraging the fundamental principles of physics.

SMRs represent a paradigm shift in the field of nuclear energy generation, providing a modular and scalable approach that can replace conventional large-scale reactor designs. With power outputs of up to 300 MW, these reactors occupy significantly smaller land areas compared to their larger counterparts, drastically reducing site requirements. SMRs designed to burn plutonium can help the world dispose of plutonium of other grades, while SMRs also support the economic, environmental, and social pillars of sustainable development. SMRs utilize nuclear fission, a process in which the nucleus of a uranium atom is bombarded with neutrons, leading to the splitting of the nucleus into smaller fragments and releasing a tremendous amount of energy. The raw material used for fission reactions in SMRs typically consists of low-enriched uranium (LEU), which undergoes controlled chain reactions to sustain the release of heat energy.
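To give a sense of the energy scale involved, a standard back-of-the-envelope calculation (using the commonly cited figure of roughly 200 MeV released per fission of \({}^{235}\)U) runs as follows:

\[
E_{\text{fission}} \approx 200\ \text{MeV} \approx 3.2 \times 10^{-11}\ \text{J}
\]
\[
N = \frac{N_A}{M} \approx \frac{6.022 \times 10^{23}\ \text{mol}^{-1}}{0.235\ \text{kg}\,\text{mol}^{-1}} \approx 2.56 \times 10^{24}\ \text{nuclei per kg}
\]
\[
E \approx N \cdot E_{\text{fission}} \approx 8.2 \times 10^{13}\ \text{J per kg of}\ {}^{235}\text{U}
\]

That is roughly a million times the chemical energy of burning a kilogram of coal (about \(3 \times 10^{7}\) J), which is what makes the compact footprint of SMRs physically possible; in practice, LEU fuel contains only a few percent \({}^{235}\)U, so the per-kilogram figure for the fuel as a whole is correspondingly lower.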
The principles of physics play a vital role in optimizing virtually every aspect of an SMR, from operation and cooling to core design. Reactor design optimization relies heavily on computational modelling and analysis to explore different geometries, fuel compositions, and coolant options. These simulations, built on sophisticated mathematical models such as Monte Carlo methods and computational fluid dynamics (CFD), enable scientists and engineers to understand and optimize neutron behaviour, thermal distribution, and coolant flow patterns, all of which are essential. Moreover, advanced physics principles are instrumental in enhancing safety features and addressing the potential risks associated with SMRs, which handle potentially hazardous substances. A thorough understanding of the physics behind thermal expansion, pressure dynamics, and heat removal allows for the design and implementation of robust safety mechanisms, passive cooling systems, and containment structures. These measures ensure safe shutdowns and prevent the release of radioactive materials during normal operation and emergency scenarios alike. Leading nuclear energy nations, including the United States, Canada, China, and Russia, have recognized the potential of SMRs and are investing significant resources in their development. These countries are undertaking extensive R&D efforts to advance SMR technology, accelerate its deployment, and establish its viability. By applying physics-driven cost optimization and advanced R&D strategies, they aim to overcome technical challenges, reduce costs, and pave the way for a sustainable nuclear energy future. ## Definitions and glossary: To facilitate a comprehensive understanding of the concepts and terminology used in the field of Small Modular Reactors (SMRs) and nuclear energy, the following definitions and glossary are provided: 1. Small Modular Reactors (SMRs): Nuclear reactors of generally 300 MW equivalent or less, designed with modular technology using module factory fabrication and pursuing economies of series production and short construction times. This definition, from the World Nuclear Association, is closely based on those from the IAEA and the US Nuclear Energy Institute. Some of the already-operating small reactors mentioned do not fit this definition, but most of those described do. PWR types may have integral steam generators, in which case the reactor pressure vessel needs to be larger, limiting portability from factory to site; hence many larger PWRs, such as the Rolls-Royce UK SMR, have external steam generators. (World Nuclear Association, March 2023) 2. Nuclear Fission: A nuclear reaction in which the nucleus of an atom, typically uranium-235 (U\({}^{235}\)) or plutonium-239 (Pu\({}^{239}\)), is bombarded by neutrons, causing the nucleus to split (fission) into smaller fragments. This process releases a significant amount of energy. 3. Low-Enriched Uranium (LEU): The basic material used to fabricate nuclear fuel. It consists of uranium hexafluoride, a white-grey, waxy solid at standard temperature and pressure. LEU is made by enriching naturally occurring uranium to improve its ability to produce energy; enrichment increases the concentration of uranium atoms that can split to produce heat and thereby generate electricity. (IAEA, 2020) 4. Reactor Core: The central region of an SMR where nuclear fission reactions occur.
It typically contains fuel assemblies composed of enriched uranium or plutonium, along with control rods that regulate the chain reactions. 5. Control Rods: Rods made of materials such as boron or cadmium that are inserted into or withdrawn from the reactor core to control or adjust the rate of the nuclear fission reactions. 6. Coolant: A substance, such as water, gas, or liquid metal, that circulates through the reactor core to absorb the heat produced during the nuclear fission process. The coolant carries the heat away from the core, allowing it to be converted into useful energy. 7. Passive Cooling System: Passive cooling of buildings can be defined in several ways. One way is to consider any treatment of the building which reduces its cooling load, such as solar control, minimizing internal heat gain, etc., as a passive cooling technique. In a chapter on passive cooling in Advances in Solar Energy, Santamouris (2005) includes such subjects as lowering urban temperatures, shading of windows, and envelope exterior colours of low solar absorptivity. (Givoni, August 2011), (Santamouris, 2005) 8. High-Level Radioactive Waste (HLW): The highly radioactive byproduct of the operation of nuclear reactors. HLW requires long-term management and disposal methods due to its potential hazards to human health and the environment. Glossary of Abbreviations: * SMR: Small Modular Reactor * LEU: Low-Enriched Uranium * U\({}^{235}\): Uranium-235 * Pu\({}^{239}\): Plutonium-239 * HLW: High-Level Radioactive Waste ## Literature review: The literature review focuses on the emerging field of small modular nuclear power reactors, highlighting their advantages, design considerations, safety aspects, and potential applications. The conclusions drawn by Esam M.A. Hussien and Aleksey Rezvoi shed light on the opportunities and challenges associated with these innovative reactor designs. Hussien's critical review emphasizes the advantages of small modular reactors (SMRs), which defy the conventional wisdom of the "economy of scale" by offering an "economy of multiples." These reactors enable incremental capacity buildup without requiring large upfront investments. The knowledge gained from early small reactors, combined with advancements in design, testing, and operation, can benefit emerging SMRs. However, the concept of modularity in SMR design and construction is evolving and somewhat controversial. While modular designs are seen as simpler and more flexible, challenges exist in modularizing power-intensive mechanical systems, potentially leading to over-designed and less efficient systems. The flexibility of modular designs may be constrained by specific interfaces and constraints. Nonetheless, the scale modularity of SMRs, coupled with their lower power, still offers the advantage of the "economy of multiples." Rezvoi's research focuses on the importance of accurately analysing the flow stability specific to LW-SMR modular designs. It highlights the uncertainties and potential errors that can arise when modelling LW-SMR instabilities. The article suggests that adjustments, additional reviews, and revisions of safety assurance reports may be required in the early stages of LW-SMR design development. While the modularity of SMRs holds the potential for cost savings through standardization and shorter on-site construction time, challenges remain.
Flow obstructions, temperature imbalances, coolant loss, and chemical reactions can hinder the effectiveness of passive primary cooling by natural circulation. Therefore, robust and effective modular structures, along with their associated connections, need further study and development. Both articles emphasize the inherent and passive safety aspects of SMRs and their potential in various applications, such as electricity generation, nuclear waste disposal, and steam production for industry. SMRs offer adaptability, flexibility, and the ability to reduce carbon footprints in various sectors. However, challenges exist in selecting the most suitable design among the numerous reported options, requiring careful consideration by proprietors, particularly first-of-a-kind owners. Regulatory agencies, such as the Canadian Nuclear Safety Commission, recognize the novel approaches and uncertainties associated with SMR technologies, suggesting a risk-informed, graded approach to safety and security. In conclusion, the literature review highlights the advantages and challenges of small modular nuclear power reactors. The analysis presented in the reviewed articles emphasizes the need for further research, development, and careful consideration in design, safety assurance, and regulatory processes. The evolving nature of modular designs and the importance of accurate stability and safety analysis underscore the complexity of, and ongoing effort in, advancing SMR technology. (Hussien, 2020), (Rezvoi, 2023) ## Methodology: This study employed a mixed-methods research approach to investigate the impact of small modular reactors (SMRs) on energy efficiency. The research design involved both quantitative data collection through a survey and qualitative data collection through interviews with industry experts. Sample: A purposive sampling technique was used to select participants for the survey and interviews. The survey targeted a sample of 20 energy and finance professionals and professors from various sectors, including science, engineering, commerce, and policy. For the personal interviews, a diverse group of 10 professors of science and commerce with in-depth knowledge of and experience with SMRs and finance was asked for their opinions. Data Collection: The survey questionnaire consisted of structured, close-ended questions to collect quantitative data on participants' perceptions of SMRs and their potential impact on energy efficiency. The interviews were conducted in a semi-structured format, allowing open-ended discussions to gain insight into the experts' perspectives and into the challenges and opportunities related to SMRs. Data Analysis: The quantitative survey data were analysed using descriptive statistics, including frequency distributions and measures of central tendency. The qualitative interview data were transcribed and subjected to thematic analysis to identify common themes and patterns related to the research objectives. Limitations: It is important to note that this study has several limitations. The small sample size may restrict the generalizability of the findings, and the self-reported nature of the survey responses may introduce response bias. Despite these limitations, the study's findings provide valuable insight into the potential impact of SMRs on energy efficiency. ## Results: In this section, I present the results of my study on physics-driven cost optimization for Small Modular Reactors (SMRs).
The analysis was conducted using a combination of physics-based modelling and cost analysis techniques. 1. Cost Optimization Analysis: I initially examined the impact of different design configurations on the cost efficiency of SMRs. Through extensive simulations and sensitivity analyses, I identified key factors that significantly influence cost optimization. My findings indicate that optimizing the reactor core size and fuel assembly design can lead to substantial cost savings. Additionally, incorporating advanced materials with enhanced thermal properties can improve overall efficiency and reduce operational costs. Uprating the power of an SMR by 20% resulted in \(\sim\)15% savings in the overnight unit capital cost. Overall, if built by an inexperienced vendor and workforce, the two SMRs' overnight costs were higher than those of large reactors, since significant on-site labour still remains while economy of scale is lost. However, the single-unit SMR required significantly fewer total person-hours of on-site labour and, if built by an experienced workforce, could avoid the cost-overrun risks associated with megaprojects. (W.R. Stewart, March 2022) 2. Operational Parameters Optimization: To further enhance cost efficiency, I explored the optimization of operational parameters. By analysing the effects of parameters such as coolant flow rate, operating temperature, and power output, I identified optimal operating conditions that minimize costs while maintaining performance and safety standards. Key economic figures of merit are evaluated under optimized and constant (i.e., time-invariant) operations to demonstrate the benefit of the optimization, which also suggests the economic viability of the considered NHESs under the proposed operations optimizer. (Jun Chen, 2016) These findings align with previous research that emphasizes the importance of balancing performance and cost considerations in SMR design. The energy conversion module of the space nuclear reactor is a Brayton cycle with regeneration; the working fluid of the Brayton cycle (a He-Xe gas mixture) also acts as the coolant of the nuclear reactor. (Hao Qin, 2021) 3. Integration of Physics-Based Models and Cost Analysis: My study demonstrates the successful integration of physics-based models and cost analysis techniques. By combining reactor physics simulations with cost estimation algorithms, I achieved a comprehensive understanding of the cost drivers and optimization strategies for SMRs. This integration enables decision-makers to make informed choices regarding design modifications, fuel selection, and operational practices, leading to cost-effective and sustainable SMR deployments. 4. Comparison with Conventional Reactors: As part of the analysis, I compared the cost efficiency of SMRs with that of conventional larger reactors. The results show that SMRs have the potential to offer greater cost advantages, primarily due to their modular design, shorter construction timelines, and reduced upfront capital investment. These findings support the growing interest and investment in SMR technologies globally.
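To make the scale-versus-series trade-off concrete, the short Python sketch below applies the classic power-law capital-cost scaling relation \(C \propto P^{n}\) (the exponent \(n\) is an illustrative assumption, not a value from the cited study) and back-solves the effective exponent implied by the reported \(\sim\)15% unit-cost saving for a 20% uprate:

```python
import math

def unit_cost_ratio(uprate: float, n: float) -> float:
    """Ratio of overnight unit capital cost ($/kW) after a power uprate.

    Assumes total capital cost scales as C ~ P**n, so unit cost ($/kW)
    scales as P**(n - 1). The exponent n is an illustrative assumption.
    """
    return (1.0 + uprate) ** (n - 1.0)

def effective_exponent(uprate: float, unit_saving: float) -> float:
    """Solve (1 + uprate)**(n - 1) = 1 - unit_saving for n."""
    return 1.0 + math.log(1.0 - unit_saving) / math.log(1.0 + uprate)

if __name__ == "__main__":
    # A textbook exponent of n ~ 0.6 gives a modest unit-cost saving:
    print(f"n=0.6: saving = {1 - unit_cost_ratio(0.20, 0.6):.1%}")  # ~7.0%
    # Effective exponent consistent with the reported ~15% saving for
    # a 20% uprate (a consistency check, not the study's method):
    print(f"effective n = {effective_exponent(0.20, 0.15):.2f}")    # ~0.11
```

The sketch only illustrates that uprating lowers unit cost whenever \(n<1\); the detailed person-hour accounting in (W.R. Stewart, March 2022) is what the reported figures actually rest on.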
By reviewing a published article comparing SMRs with conventional reactors, I found that, when evaluating the competitiveness of SMRs versus large reactors, the various individual factors can be grouped into two classes:
- Factors which are either applicable to SMRs only or are critically affected by the differences in design and approach brought in by SMRs (SMR-specific factors);
- Factors which affect SMRs and large plants in a comparable way (common factors).
Even for the common factors, a comparative quantitative evaluation might not be straightforward. Still, there are general characteristics which apply across essentially the entire SMR spectrum:
- Simplicity, with a reduced type and number of components. SMRs are generally new designs which try to simplify existing solutions. Their safety characteristics tend to be enhanced because passive and intrinsic safety is better enabled by the smaller size; enhanced safety, if properly accounted for, translates into a cheaper design.
- Lower specific O&M costs: because of their vastly enhanced safety, SMRs have the potential to attain licensing without the need for emergency response, which eliminates the associated personnel training and infrastructure. Some SMRs, like the integral-configuration PWRs, have extended maintenance intervals of up to four years and integral shielding, which dramatically decrease routine personnel exposure and ALARA costs. (P. Trucco, 2007)
## Discussion: The results of my study on Small Modular Reactors (SMRs) reveal several key insights that highlight the advantages and potential of SMRs as a sustainable and cost-effective solution for nuclear energy. These findings align with the works of Rezvoi (2023) and Hussien (2020), further reinforcing the positive outlook for SMR technology. Firstly, the cost optimization analysis demonstrated that optimizing the reactor core size and fuel assembly design significantly impacts cost efficiency in SMRs. This finding is consistent with Rezvoi's work, which emphasizes the importance of accurate modelling and design considerations in achieving cost-effective SMR configurations. By leveraging physics-based modelling and cost analysis techniques, I was able to identify design parameters that offer substantial cost savings while maintaining performance and safety standards. Additionally, incorporating advanced materials with enhanced thermal properties emerged as a key factor in improving overall efficiency and reducing operational costs in SMRs. This finding aligns with Hussien's review, which emphasizes the potential of advanced materials to enhance the performance of SMRs. By leveraging these materials, SMRs can achieve higher thermal efficiencies and reduce operational expenses, making them an economically attractive option for sustainable energy generation. Furthermore, the optimization of operational parameters in my study revealed that careful adjustment of coolant flow rate, operating temperature, and power output can contribute to cost reduction without compromising performance and safety. This finding supports the idea, presented by Hussien and reinforced by the work of (W.R. Stewart, March 2022), that optimizing operational parameters is crucial to achieving cost efficiency in SMR designs. Comparatively, the reviewed works provided valuable insight into potential inaccuracies in flow stability analysis specific to LW-SMR modular designs (Rezvoi, 2023) and a critical review of emerging SMRs (Hussien, 2020).
While these reviews shed light on specific aspects of SMRs, this study extends their findings by proposing a physics-driven approach to cost optimization, together with the operational-parameter optimization cited from (Hao Qin, 2021), offering a more comprehensive perspective on the economic viability of SMRs. Overall, my findings demonstrate that SMRs offer significant advantages in terms of cost efficiency, enhanced safety features, and operational flexibility. By integrating physics-driven cost optimization techniques, SMRs can be further tuned to achieve optimal performance and economic viability. These insights provide important considerations for policymakers, industry stakeholders, and researchers in their pursuit of sustainable and affordable nuclear energy solutions. It is important to acknowledge that, like any technological innovation, SMRs face challenges and limitations. Factors such as licensing processes, public perception, and infrastructure requirements should be taken into account when considering the widespread deployment of SMRs. However, the potential benefits offered by SMRs in terms of cost efficiency, safety, and adaptability make them a promising avenue for future nuclear energy development. ## Conclusion: This research focused on physics-driven cost optimization for Small Modular Reactors (SMRs). Through a combination of physics-based modelling and cost analysis techniques, I demonstrated the potential of SMRs as a cost-effective and sustainable solution for nuclear energy generation. The analysis revealed that optimizing design configurations, incorporating advanced materials, and optimizing operational parameters are key factors in achieving cost efficiency in SMRs. By leveraging the inherent physics of SMRs, I identified strategies to minimize costs while maintaining performance and safety standards. The findings of this study align with the existing literature, emphasizing the significance of accurate modelling, design considerations, and operational optimization in realizing the economic viability of SMRs. This research contributes to the growing body of knowledge supporting the adoption of SMRs as a promising alternative to conventional larger reactors. SMRs offer several advantages, including enhanced safety features, flexibility in deployment, and the potential for cost savings. The integration of these techniques provides valuable insight into the economic viability of SMRs, enabling stakeholders to make informed decisions regarding design modifications, operational practices, and fuel selection. It is important to acknowledge the challenges and limitations associated with SMRs, including licensing processes, public perception, and infrastructure requirements; nevertheless, the potential benefits in terms of cost efficiency, safety, and adaptability make SMRs a promising avenue for future nuclear energy development. In conclusion, this research demonstrates that SMRs have the potential to play a significant role in achieving a cleaner, more secure, and sustainable energy future. Further research, collaboration, and policy support are essential to realize the full potential of SMRs and ensure their successful integration into the energy landscape.
2301.09941
ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents
As reinforcement learning methods increasingly amass accomplishments, the need for comprehending their solutions becomes more crucial. Most explainable reinforcement learning (XRL) methods generate a static explanation depicting their developers' intuition of what should be explained and how. In contrast, literature from the social sciences proposes that meaningful explanations are structured as a dialog between the explainer and the explainee, suggesting a more active role for the user and her communication with the agent. In this paper, we present ASQ-IT -- an interactive tool that presents video clips of the agent acting in its environment based on queries given by the user that describe temporal properties of behaviors of interest. Our approach is based on formal methods: queries in ASQ-IT's user interface map to a fragment of Linear Temporal Logic over finite traces (LTLf), which we developed, and our algorithm for query processing is based on automata theory. User studies show that end-users can understand and formulate queries in ASQ-IT, and that using ASQ-IT assists users in identifying faulty agent behaviors.
Yotam Amitai, Guy Avni, Ofra Amir
2023-01-24T11:57:37Z
http://arxiv.org/abs/2301.09941v1
# ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents ###### Abstract As reinforcement learning methods increasingly amass accomplishments, the need for comprehending their solutions becomes more crucial. Most explainable reinforcement learning (XRL) methods generate a static explanation depicting their developers' intuition of what should be explained and how. In contrast, literature from the social sciences proposes that meaningful explanations are structured as a dialog between the explainer and the explainee, suggesting a more active role for the user and her communication with the agent. In this paper, we present ASQ-IT - an interactive tool that presents video clips of the agent acting in its environment based on queries given by the user that describe temporal properties of behaviors of interest. Our approach is based on formal methods: queries in ASQ-IT's user interface map to a fragment of Linear Temporal Logic over finite traces (LTLf), which we developed, and our algorithm for query processing is based on automata theory. User studies show that end-users can understand and formulate queries in ASQ-IT, and that using ASQ-IT assists users in identifying faulty agent behaviors. ## 1 Introduction Reinforcement Learning (RL) has shown impressive success in recent years; e.g., mastering Go or achieving human-level performance in Atari games (Silver _et al._, 2016; Mnih _et al._, 2015). However, current training techniques are complex and rely on implicit goals and indirect feature representations, and thus largely produce black-box agents. In order for such trained agents to be successfully deployed, in particular in safety-critical domains such as healthcare, it is crucial for them to be trustworthy; namely, both developers and users need to understand, predict and assess agents' behavior. This need has led to an abundance of "explainable RL" (XRL) methods (Dazeley _et al._, 2021) designed to elucidate black-box agents. Existing approaches to XRL are for the most part static. That is, they provide the user with some information about the agent's decision-making. For example, local explanations might show saliency maps depicting the agent's attention, a causal explanation, or an explanation of the reward function. Global explanations might describe the agent's policy by presenting a simplified representation (e.g., a decision tree), or by demonstrating the behavior of the agent through policy summaries. Common to all of these approaches is that the users do not have a way to interact with the provided information or pose questions that they are interested in. Following the literature on explanations from the social sciences (Miller, 2018), we aim to develop _interactive XRL methods_ that allow for a dialog between the explainer (system) and the explainee (user): the user repeatedly poses queries for the system to answer. Interactive explanations have recently been identified as a significant future direction for system intelligibility and enhancing user engagement (Abdul _et al._, 2018). Increasing evidence also points towards interaction and exploration as means to reduce over-reliance on AI recommendations, which occurs even when explanations are provided (Buçinca _et al._, 2021). In this work, we develop "ASQ-IT", an interactive XRL tool that aims to assist users in comprehending an agent in a global manner.
Inspired by policy summarization approaches that demonstrate the behavior of an agent in selected world-states (Amir _et al._, 2019), our tool generates clips of the agent interacting with its environment. The user controls which clips will be presented by formulating queries that specify properties of clips of interest. The interaction with the tool resembles a dialogue: the user enters a query, and receives clips that match it; the user can then refine her query, and the process continues. For instance, for a self-driving car agent, the user might formulate a query for examining the agent's ability to switch lanes by specifying a start lane and an end lane, and our tool will output clips of the agent making this transition. The main challenge in developing an interactive tool is the interaction with human users (especially laypeople). Indeed, unless constrained, study participants pose vague and informal queries that are hard for a tool to process. A tool's interface must strike the right balance between expressivity and usability. We address these challenges as follows. _(i)_ We develop a simple logic that can express common properties of clips. Note that clips are sequential, thus our logic must reason about temporal behaviors. An established logic to reason about such properties is Linear Temporal Logic (Pnueli, 1977), and our logic relies on its finite counterpart called LTLf [4]. _(ii)_ Laypeople cannot be expected to produce logic formulas, thus we develop a simple user interface that maps directly to our logic. _(iii)_ We assume access to a library of agent execution traces. We develop an efficient automata-based algorithm to search this library for clips that answer a user's query. Our paper makes the following contributions: It introduces ASQ-IT, an **A**gent **S**ystem **Q**ueries **I**nteractive **T**ool that enables users to describe and generate queries towards an agent and receive answers as explanations-through-demonstration of their behavior. We present results from two user studies. The first user study shows that laypeople, with no training in logic or RL, are able to comprehend and generate meaningful queries to ASQ-IT. The second study shows that users with some AI background can identify faulty agent behaviors using ASQ-IT and that using ASQ-IT led to improved performance compared to a static policy summary baseline. ## 2 Related Work This work relates to two main areas of research, which we discuss in this section: (1) explanations in sequential decision-making settings and (2) interactive explanations. Explanations in sequential decision-making settings. In this paper, we focus on the problem of explaining the behavior of agents operating in sequential decision-making settings. Work in this area is typically concerned with explaining policies learned through Reinforcement Learning. RL explanation methods can be roughly divided into two classes. _Local_ explanations focus on explaining specific agent decisions [11, 12, 13, 14], e.g., by showing what information a game-playing agent attends to in a specific game state [1], or generating causal explanations [15]. In contrast, _global_ explanations aim to convey the agent's policy rather than explain particular decisions. One approach to global explanations is to generate a proxy model of the policy that is more interpretable, e.g., through policy graphs [10] or decision trees approximating the policy [11].
In this paper, we utilize the idea of extracting demonstrations of agent behavior as a global explanation [1] to answer queries posed by users, such that they can interactively explore the agent's policy and its characteristics. Interactive explanations. Some early works on decision-support systems provided users with interactive explanation methods. For example, MYCIN [12], a system for clinical decision-support, allowed its users to pose "why" and "how" questions and responded by revealing the rules that led to a particular inference. Such explanations are more difficult to provide in current systems that do not use a logic-based representation. A few works in interpretable machine learning have also designed interactive explanations for supervised learning models. For instance, TCAV is a method that enables users to test whether the model relies on a user-determined concept in its decision-making [13]. Recently, this approach has been applied to analyzing the chess knowledge of AlphaZero [10]. Interactive XRL has been flagged as a promising research direction in interactive RL research [1]. Most closely related to the problem we discuss are the works of Hayes and Shah [10], Rupprecht _et al._ [19] and Cruz and Igarashi [20], each of which introduces a system to help its users debug agent behavior through an interactive interface. These works shape the user's interaction through a limited set of action-related questions, such as "When will a particular action be taken?" or "Why wasn't an alternative action chosen?", whereas we seek to allow more freedom for expressivity and exploration. ## 3 ASQ-IT In this section we describe the implementation of ASQ-IT. This includes both the backend algorithmic approach and the front-end user interface design. Tool usage, an illustration. The users' main interaction point with ASQ-IT is the _Query Interface_ (Fig. 1), where they define scenarios and behaviors they wish to observe in the agent's interaction with the environment. The front-end is based on drop-down menus that depend on predefined predicates given by a domain expert. In the back-end, the user's entries are translated into a formal specification that describes the set of traces that the user is interested in. Based on this specification, the interaction-library database is searched for video clips that answer the user's query. These video clips are then presented to the user. Running example: The Highway domain. The domain consists of a multiple-lane highway in which the agent controls a car depicted by a green rectangle. Other, uncontrollable cars are depicted as blue rectangles. Cars can accelerate, decelerate, and change lanes (numbered from top to bottom). We consider various agent goals; for example, a combination of not crashing, driving fast, driving in the right lane, etc. The following sections describe the building blocks required for constructing and running ASQ-IT. Figure 1: ASQ-IT Process Flow Diagram. Example output video: [https://bit.ly/3GJV394](https://bit.ly/3GJV394) ### Offline: Obtaining a Database of Clips We assume access to an agent that operates in an MDP setting. Formally, an MDP is a tuple \(\mathcal{M}=\langle S,A,Tr,R\rangle\), where \(S\) is a set of states, \(A\) is a set of actions, \(R:S\rightarrow\mathbb{Q}\) is a reward function, and \(Tr:S\times A\times S\rightarrow[0,1]\) is a probabilistic transition function. An agent is a _policy_ \(\pi\), which is a function \(\pi:S\to A\). We do not assume any knowledge of \(Tr\) or \(R\).
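As a minimal sketch of this offline step (the Gym-style environment interface, the state fields, and all helper names below are hypothetical stand-ins, not ASQ-IT's published code), building the database amounts to rolling the policy out, recording the visited states, and abstracting each state through the expert-supplied predicates:

```python
from typing import Callable, Dict, FrozenSet, List

# Concrete states are assumed here to be dicts with (hypothetical)
# fields such as `lane` and `behind_car`; `env` is assumed to expose
# a Gym-style reset()/step() interface. Neither Tr nor R is ever
# inspected, matching the paper's black-box assumption.
PREDICATES: Dict[str, Callable[[dict], bool]] = {
    "lane-1": lambda s: s["lane"] == 1,
    "lane-2": lambda s: s["lane"] == 2,
    "behind": lambda s: s["behind_car"],
}

def abstract(state: dict) -> FrozenSet[str]:
    """P(s): the set of predicates that hold in a concrete state."""
    return frozenset(p for p, holds in PREDICATES.items() if holds(state))

def collect_trace(env, policy: Callable[[dict], int],
                  episodes: int) -> List[dict]:
    """Simulate the fixed policy and concatenate the visited states."""
    trace: List[dict] = []
    for _ in range(episodes):
        state, done = env.reset(), False
        trace.append(state)
        while not done:
            state, _reward, done, _info = env.step(policy(state))
            trace.append(state)
    return trace  # s_1, ..., s_n: the concrete trace

# The database pairs the concrete trace with its abstraction:
# abstract_trace = [abstract(s) for s in collect_trace(env, pi, 100)]
```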
We assume that \(\pi\) is given, e.g., trained using RL, and we assume that we have the ability to simulate \(\pi\) on \(\mathcal{M}\), e.g., using a simulator. This provides a collection of _traces_, where each trace is a sequence \(s_{1},s_{2},\ldots,s_{n}\) of states, i.e., \(s_{i}\in S\), for \(1\leq i\leq n\). For ease of presentation, we assume that the agent is simulated once to produce one trace. In practice, we collect numerous traces and concatenate them - the more traces collected, the more clips our tool will be able to retrieve in response to user queries. The goal of our tool is to present to the user a sub-trace \(s_{k},\ldots,s_{\ell}\), for \(1\leq k<\ell\leq n\), that the user is interested in. We found that it is infeasible for users to specify a desired behavior directly on the _concrete_ states. Instead, users' queries are formulated on a predefined collection of _predicates_\(P\) that are chosen by a domain expert. Each predicate \(p\in P\) is a function \(p:S\rightarrow\{\texttt{True},\texttt{False}\}\) denoting whether some attribute exists in a state. For example, in the highway domain, the predicate lane-1 returns True iff the agent (green car) is in Lane 1 at a given state and the predicate behind returns True iff the agent is driving behind some blue car. For a concrete state \(s\in S\), we denote by \(P(s)\), the _abstract state_ that consists of the subset of predicates that hold in \(s\), thus \(P(s)=\{p\in P:p(s)=\texttt{True}\}\). For example, for \(P=\{\texttt{lane-1},\texttt{lane-2},\texttt{behind}\}\) and \(P(s)=\{\texttt{lane-1},\texttt{behind}\}\), necessarily at state \(s\), the agent is traveling in Lane 1 _and_ behind a blue car. To summarize, offline, we simulate the agent \(\pi\) on \(\mathcal{M}\) to collect a concrete trace \(s_{1},\ldots,s_{n}\). A domain expert provides a collection of predicates \(P\). Our database consists of both the concrete and abstract trace \(P(s_{1}),\ldots,P(s_{n})\). Queries will be processed on the abstract trace, where an answer to a query is \(P(s_{k}),\ldots,P(s_{\ell})\), and the corresponding concrete trace \(s_{k},\ldots,s_{\ell}\) is presented to the user. ### Front-End: Query Language and Interface One key novelty of ASQ-IT is that it allows users to query for traces that they are interested in. In this section, we describe the formal basis on which our query language is based. We start by surveying the necessary background on Linear Temporal Logic on Finite Traces (LTLf). **Background: Linear Temporal Logic on Finite Traces** An LTLf formula \(\varphi\) over a collection of predicates \(P\) specifies a set of traces; namely, the set of traces that satisfy \(\varphi\). We thus think of \(\varphi\) as a query. That is, by providing \(\varphi\), the user states that she is interested in viewing traces that satisfy \(\varphi\). **Example 1**.: We illustrate the syntax and semantics of LTLf. Let \(P=\{\texttt{lane-1},\texttt{behind}\}\). * The formula \(X\) lane-1 (read "next Lane \(1\)") specifies traces in which the agent is driving in Lane \(1\) in the second position of the trace. No restrictions are imposed afterwards. * The formula lane-1 \(U\) behind (read "Lane \(1\) until behind") specifies traces in which the agent drives continuously in Lane \(1\) until it is behind some blue car. No restrictions are imposed afterwards. * The formula \(F\) lane-1 (read "eventually Lane \(1\)") specifies traces in which the agent visits Lane \(1\) at least once, e.g., traces that end with the green car in Lane \(1\). 
Formally, the syntax of LTLf is defined recursively. Each \(p\in P\) is an LTLf formula. If \(\varphi_{1}\) and \(\varphi_{2}\) are LTLf formulas, then so are \(\varphi_{1}\wedge\varphi_{2}\), \(\neg\varphi_{1}\), \(X\varphi_{1}\) (read "next \(\varphi_{1}\)"), and \(\varphi_{1}U\varphi_{2}\) (read "\(\varphi_{1}\) until \(\varphi_{2}\)"). We use the abbreviation \(F\varphi\) (read "eventually \(\varphi\)") for the formula True\(U\varphi\). The semantics of LTLf is defined by induction on the structure of the formula. Consider an LTLf formula \(\varphi\) over \(P\) and an abstract trace \(\eta=\sigma_{1},\ldots,\sigma_{k}\), where \(\sigma_{i}\in 2^{P}\), for \(1\leq i\leq k\). We say that \(\eta\) satisfies \(\varphi\), denoted \(\eta\models\varphi\), when: * If \(\varphi=p\in P\), then \(\eta\models\varphi\) iff \(p\in\sigma_{1}\). * If \(\varphi=\varphi_{1}\wedge\varphi_{2}\) then \(\eta\models\varphi\) iff \(\eta\models\varphi_{1}\) and \(\eta\models\varphi_{2}\). * If \(\varphi=\neg\varphi_{1}\) then \(\eta\models\varphi\) iff \(\eta\not\models\varphi_{1}\). * If \(\varphi=X\varphi_{1}\) then \(\eta\models\varphi\) iff \((\sigma_{2},\ldots,\sigma_{k})\models\varphi_{1}\). * If \(\varphi=\varphi_{1}U\varphi_{2}\) then \(\eta\models\varphi\) iff there is an index \(1\leq i\leq k\) such that \((\sigma_{i},\ldots,\sigma_{k})\models\varphi_{2}\) and for each \(1\leq j<i\), we have \((\sigma_{j},\ldots,\sigma_{k})\models\varphi_{1}\). **Deterministic finite automata** Our algorithm to process queries is based on automata. A deterministic finite automaton (DFA, for short) is a tuple \(\mathcal{A}=\langle\Sigma,Q,\delta,q_{0},Acc\rangle\), where \(\Sigma\) is an alphabet, \(Q\) is a set of states, \(\delta:Q\times\Sigma\to Q\) is a transition function, \(q_{0}\in Q\) is an initial state, and \(Acc\subseteq Q\) is a set of accepting states. The run of \(\mathcal{A}\) on a word \(w=\sigma_{1}\sigma_{2}\ldots\sigma_{k}\), where \(\sigma_{j}\in\Sigma\) for \(1\leq j\leq k\), is the sequence \(r=r_{0},r_{1},\ldots,r_{k}\), with \(r_{i}\in Q\) for \(0\leq i\leq k\), that starts in the initial state, i.e., \(r_{0}=q_{0}\), and respects the transition function, i.e., for each \(i\geq 1\), we have \(r_{i}=\delta(r_{i-1},\sigma_{i})\). We say that \(r\) is _accepting_ if it ends in an accepting state, i.e., \(r_{k}\in Acc\), and that \(\mathcal{A}\) _accepts_ \(w\) if its run on \(w\) is accepting. The _language_ of \(\mathcal{A}\), denoted \(L(\mathcal{A})\), is the set of words that it accepts. **Theorem 1**.: [Giacomo and Vardi, 2013] _Consider an LTLf formula \(\varphi\) over a set of predicates \(P\). There is a DFA \(\mathcal{A}_{\varphi}\) over the alphabet \(\Sigma=2^{P}\) whose language is the set of traces that satisfy \(\varphi\). That is, for every trace \(\eta\in\Sigma^{*}\) we have \(\eta\in L(\mathcal{A}_{\varphi})\) iff \(\eta\models\varphi\)._ **A Logic for Expressing Queries** ASQ-IT is intended for laypeople in logic. That is, we do not assume that its users are capable of producing LTLf queries. In order to make ASQ-IT accessible, we develop a restricted query language, which is a fragment of LTLf. The query interface is built so that each query provided by a user maps to a formula in our language. Based on pilot studies, we designed our language to be both accessible and expressive enough that users can state the properties they are interested in. We provide experimental evidence of its usability and effectiveness (see Section 4.1).
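The finite-trace semantics above translate almost directly into a recursive satisfaction check. The sketch below is an illustrative direct implementation for small traces, not the automata-based algorithm ASQ-IT actually uses (described in Section 3.3); formulas are encoded as nested tuples:

```python
from typing import FrozenSet, Sequence, Tuple

def sat(trace: Sequence[FrozenSet[str]], phi: Tuple) -> bool:
    """Check trace |= phi, following the inductive LTLf semantics above.

    Formulas are nested tuples: ("true",), ("p", name), ("and", f, g),
    ("not", f), ("next", f), ("until", f, g). Empty traces reject.
    """
    if not trace:
        return False
    op = phi[0]
    if op == "true":
        return True
    if op == "p":                        # atomic predicate, position 1
        return phi[1] in trace[0]
    if op == "and":
        return sat(trace, phi[1]) and sat(trace, phi[2])
    if op == "not":
        return not sat(trace, phi[1])
    if op == "next":                     # X f: f holds from position 2
        return sat(trace[1:], phi[1])
    if op == "until":                    # f U g, strict prefix needs f
        return any(sat(trace[i:], phi[2])
                   and all(sat(trace[j:], phi[1]) for j in range(i))
                   for i in range(len(trace)))
    raise ValueError(f"unknown operator: {op!r}")

def F(f: Tuple) -> Tuple:
    """Eventually: F f abbreviates True U f."""
    return ("until", ("true",), f)

# Example: does the agent ever reach Lane 1?
# sat(abstract_trace, F(("p", "lane-1")))
```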
Developing accessible fragments of logics is common practice in verification (e.g., [4, 10]). Let \(P\) be a set of predicates. A _query_ is based on the following components: * A description of the start and end state of the trace. These are given as propositional formulas \(\phi_{s}\) and \(\phi_{e}\) over the predicates \(P\). For example, when \(\phi_{s}=\neg\texttt{lane-1}\land\texttt{behind}\), in the first position of any trace returned to the user, the green car is not in Lane \(1\) and is behind some car. * A constraint on the trace between \(\phi_{s}\) and \(\phi_{e}\), which is given as a third propositional formula \(\phi_{c}\) over \(P\). Below, we describe several constraints that we implemented in our query interface. * The constraint \(\phi_{c}\) _changes_ is written in LTLf as \((\phi_{s}\land\phi_{c})\wedge X\,F(\neg\phi_{c}\wedge F\phi_{e})\). For example, for \(\phi_{c}=\texttt{lane-2}\) (depicted in the query interface in Fig. 1), the query represents traces that start with the agent driving in Lane \(2\) and in which, at some point in the trace, the agent changes lanes. * The constraint \(\phi_{c}\) _stays constant_ is written in LTLf as \((\phi_{s}\land\phi_{c})\wedge X(\phi_{c}U\phi_{e})\). For example, for \(\phi_{s}=\texttt{lane-1}\land\texttt{behind}\), \(\phi_{e}=\texttt{lane-4}\), and \(\phi_{c}=\texttt{behind}\), the query represents traces that start with the agent driving in Lane \(1\) behind some car and end when the agent is in Lane \(4\), with the agent driving behind some car throughout the whole trace. * The constraint \(\phi_{c}\) _changes into_ \(\phi_{c}^{\prime}\) is written in LTLf as \((\phi_{s}\land\phi_{c}\land\neg\phi_{c}^{\prime})\wedge X\,F(\neg\phi_{c}\land\phi_{c}^{\prime}\land F\phi_{e})\). For example, for \(\phi_{c}=\texttt{lane-1}\) and \(\phi_{c}^{\prime}=\texttt{lane-2}\), the query represents traces that start with the agent driving in Lane \(1\) and at some point switching to Lane \(2\). #### 3.2.2 Query Specification Interface We conducted pilot studies to guide an iterative design process of the query specification interface, as well as the underlying LTLf fragment we chose to implement. This process resulted in the design of a simple interface using drop-down menus (see Figure 1). The drop-down menus are designed to clearly and simply guide users toward possible state specifications for constructing their queries. Predicates, i.e., state attributes, are grouped into types to reduce cognitive load and avoid excessive options. For instance, all lane specifications appear under one drop-down, as these are mutually exclusive. **Remark 1**.: As we describe next, our backend is capable of processing general LTLf queries. Thus, it requires minimal effort to enhance the expressivity of the query interface as long as queries are mapped to LTLf. For example, previous versions of our tool allowed specifying intermediate states; e.g., a user might be interested in viewing a "zig zag" behavior: traces that start from Lane \(1\), visit Lane \(4\), and end in Lane \(1\). In LTLf, such behavior is specified as a concatenation of queries as described above. ### Backend: Processing User Queries Recall that offline, we collect a trace \(s_{1},\ldots,s_{n}\) of the agent operating in its environment, and a domain expert provides a collection of predicates \(P\) with which we obtain an abstract trace \(P(s_{1}),\ldots,P(s_{n})\). In addition, we assume that a user provides an LTLf query \(\varphi\).
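Before turning to query processing, a small sketch of how drop-down selections might compile into the LTLf templates above (the function, its parameters, and the `&`/`!` surface syntax are hypothetical; ASQ-IT's actual mapping is internal to its front end):

```python
def build_query(phi_s: str, phi_e: str, phi_c: str,
                kind: str, phi_c2: str = "") -> str:
    """Compile drop-down selections into one of the LTLf templates above.

    phi_s, phi_e, phi_c (and phi_c2 for "changes into") are propositional
    formulas over the domain predicates, e.g. "lane-1 & behind". The
    returned string uses an illustrative & / ! / X / F / U surface syntax.
    """
    if kind == "changes":       # (phi_s & phi_c) & X F(!phi_c & F phi_e)
        return f"({phi_s} & {phi_c}) & X(F(!({phi_c}) & F({phi_e})))"
    if kind == "stays":         # (phi_s & phi_c) & X(phi_c U phi_e)
        return f"({phi_s} & {phi_c}) & X(({phi_c}) U ({phi_e}))"
    if kind == "changes into":  # adds the target constraint phi_c2
        return (f"({phi_s} & {phi_c} & !({phi_c2})) & "
                f"X(F(!({phi_c}) & ({phi_c2}) & F({phi_e})))")
    raise ValueError(f"unknown constraint kind: {kind!r}")

# Example: traces that start in Lane 1 and switch into Lane 2.
# build_query("lane-1", "lane-4", "lane-1", "changes into", "lane-2")
```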
We describe an algorithm to process the user's query, formally stated as follows. **Problem:** Given an LTLf query \(\varphi\), find a sub-trace \(P(s_{k}),\ldots,P(s_{\ell})\) that satisfies \(\varphi\). **The algorithm** Consider a trace \(\eta=P(s_{1}),\ldots,P(s_{n})\) over \(2^{P}\) and an LTLf formula \(\varphi\). We construct two DFAs \(\mathcal{A}_{\varphi}\) and \(\mathcal{A}_{F\varphi}\) (read "eventually \(\varphi\)") as in Thm. 1. Note that \(\mathcal{A}_{F\varphi}\) accepts all traces that end in a suffix that satisfies the user's query \(\varphi\). We feed \(\eta\), letter by letter, to \(\mathcal{A}_{F\varphi}\) until it visits an accepting state. Suppose that \(\eta^{\prime}=P(s_{1}),\ldots,P(s_{\ell})\) is accepted by \(\mathcal{A}_{F\varphi}\); then we are guaranteed that \(\eta^{\prime}\) has a suffix that satisfies \(\varphi\). Next, we search for the beginning of that suffix, i.e., an index \(k<\ell\) such that \(P(s_{k}),\ldots,P(s_{\ell})\) satisfies \(\varphi\). We read the trace backward, starting from \(P(s_{\ell})\) and until \(P(s_{1})\), while executing \(\mathcal{A}_{\varphi}\) "backward", starting from the accepting states of \(\mathcal{A}_{\varphi}\) and until an initial state is visited. Formally, we maintain a set of states \(Q^{\prime}\subseteq Q\), which is initialized to \(Acc\). When reading a letter \(\sigma\), we update \(Q^{\prime}\) to be \(\{q\in Q:\exists q^{\prime}\in Q^{\prime}\text{ s.t. }q^{\prime}=\delta(q,\sigma)\}\). We terminate once \(q_{0}\in Q^{\prime}\). It is not hard to show that if the algorithm terminates after \(P(s_{k})\) is read, then the sub-trace \(P(s_{k}),\ldots,P(s_{\ell})\) satisfies \(\varphi\). Moreover, note that \(\eta\) is read (forward) once by \(\mathcal{A}_{F\varphi}\) and read at most once (backward) by \(\mathcal{A}_{\varphi}\); thus the running time is linear in \(n\). **Theorem 2**.: _Consider a collection of predicates \(P\) and a trace \(\eta\) over \(2^{P}\) of length \(n\). Given an LTLf formula \(\varphi\), the algorithm returns a sub-trace that satisfies \(\varphi\), if one exists. The algorithm processes \(\eta\) at most twice._ **Remark 2**.: Once the algorithm finds a trace \(P(s_{k}),\ldots,P(s_{\ell})\) that satisfies \(\varphi\), it restarts from index \(\ell+1\) in search of another match, continuing until the end of the database is reached. In our implementation, a query might thus be answered by numerous clips, depending on the database size. ## 4 Empirical Evaluation To evaluate ASQ-IT, we conducted two user studies. The first was a usability study with laypeople who have no prior knowledge of AI or reinforcement learning, to examine their ability to understand and formulate queries in ASQ-IT. The second study assessed ASQ-IT's benefits for users with some AI knowledge who may use such a tool for debugging. ### User Study 1: Usability Assessment The goal of this study is to examine laypeople's interaction with ASQ-IT and test its usefulness and effectiveness for generating queries to an agent. #### 4.1.1 Empirical Methodology _Agent._ We trained a policy for 2000 episodes using a double DQN architecture, penalizing collisions. #### 4.1.2 Participants Forty participants were recruited through Prolific (20 female, mean age \(=34.7\), \(\text{STD}=11.29\)), each receiving \(\$4.5\) for completing the task. To incentivize participants to make an effort, they were offered a bonus of 15 cents for each correct answer.
Participants whose overall task duration was lower than the mean by more than two standard deviations were filtered out. _Procedure._ First, participants were introduced to the Highway domain and the concept of AI agents. They were then introduced to ASQ-IT's interface and the process of generating queries for the system. Each explanation was followed by a short quiz to ensure understanding before advancing. As a final step before the task, participants were provided a link to ASQ-IT's interface, where they could explore both the interface and the agent. In the task section, participants were tested on their understanding of the interface, query generation, and output through three types of tasks. _i) Movies to Queries (**M2Q**):_ Given an output video, select the correct query that would result in its generation (example in supplementary). _ii) Free Text to Queries (**T2Q**):_ Given textual descriptions of desired behavior, select the correct query. _iii) Queries to Free Text (**Q2T**):_ Given a query, select the correct textual description of the desired behavior. All questions were multiple-choice with four possibilities and a single correct answer, and each task type included two questions in ascending difficulty 1. Upon task completion, participants were prompted to provide textual feedback regarding their experience with the system and interface and complete a usability survey [1]. Footnote 1: Full user study available at [https://bit.ly/3GJV394](https://bit.ly/3GJV394) **Results & Discussion** We analyzed participants' responses in terms of objective performance, usability ratings, and textual responses. The quantitative results are summarized in Figure 3 (A,B,C). We discuss the main findings and insights based on users' responses. _Participants were able to comprehend the semantics of our logic and use it to formulate meaningful queries._ Overall, participants were successful in the tasks of interpreting queries and formulating queries (Figure 3A). In all questions, participants did significantly better than a random guess, and in 4 out of 6 questions the success rate was \(\approx 90\%\). We find these results highly encouraging: ASQ-IT allows participants with no training in logic to express behavior as formal queries in LTLf and to understand their output. We identified two main causes of incorrect answers: (1) _Agent relations (position):_ Confusing the position of the agent relative to other cars, such as mixing up "Behind" and "In Front Of" (e.g., is the agent behind another car, or is there one behind the agent?), and (2) _Misunderstanding constraints:_ Some participants were not able to understand the use of constraints on the agent's trace and most often chose to ignore these specifications. These two causes alone were responsible for \(\approx 90\%\) of all incorrect answers. _Participants improved throughout the task._ Some participants who struggled with simple questions regarding constraints would manage to correctly solve harder questions that appeared later. Some participants noted that elements of the interface became clearer when they were asked to answer questions about them. One participant wrote _"I found the instructions quite hard to understand.
When a description was provided and you had to complete what you thought was the correct specification, I found this a better way to learn the process."_ _Participants reported high effectiveness scores._ Following Brooke's [1996] system usability scale, participants found ASQ-IT, on average, more effective than not, in all categories (see Figure 3B). Effectiveness is the measurement of a tool's ability to produce the desired outcome. Multiple responses mentioned its usefulness for testing and observing how the agent acts. Others noted positively that it was clear to them what videos would be generated by ASQ-IT, so long as the specification was not very complex, and after some initial trial-and-error phase. Most negative responses mentioned the many options available and the complexity of understanding the interface. However, many participants reported that after some exploration, their experience and understanding greatly improved, suggesting a learning curve in using the tool. _Participants reported an increase in efficiency over tool usage._ Efficiency measures the ease of using a tool. Many participants described some level of uncertainty upon initial interaction with the interface, mainly given the lengthy explanations prior to using it. However, the majority of participants reported understanding quickly once they were given access to ASQ-IT and had explored the interface. When asked what would help them interact with the tool, many participants responded that they would prefer the interface to have fewer options and more visual aid for the existing ones. _Expressivity._ When asked to describe what features or behaviors were missing or desired for the highway domain, participants mostly requested the ability to control the agent's speed and distance from other cars, along with the option to specify the positions of other cars and the output video length. When asked what agent behaviors and situations were of interest to them, specifiable or not using ASQ-IT's current interface, participants mostly referred to observing the agent react to critical situations such as obstacles on the road, lane merges, or interactions with other cars, such as emergency vehicles or evasion of accelerating or braking cars. ### User Study 2: Identifying Agent Faults We conducted a second user study to assess how users interact with ASQ-IT when working on a task and whether using ASQ-IT improves their performance. To this end, we simulated faulty agents and tested participants' ability to debug them through exploration and investigation. The study had two main goals: (1) to understand the process of querying agent behavior using ASQ-IT, and (2) to assess the usefulness of ASQ-IT in a debugging task compared to a static policy summary explanation method [1]. **Empirical Methodology** _Agents._ We trained three agents for 2000 episodes using the double DQN architecture. To simulate a faulty agent, we combined two of the agents, \(Agt_{1}\) and \(Agt_{2}\), into one. We chose a _trigger_ event, e.g., the agent is in Lane \(2\) and below a car. Initially, \(Agt_{1}\) operates, and once the trigger event occurs, control is passed to \(Agt_{2}\) (see Fig. 2). Specifically, we used (1) _Plain-TopLane:_ \(Agt_{1}\) is a simple agent used in the usability study, and \(Agt_{2}\) prioritizes the top-most lane, and (2) _Plain-Collision:_ \(Agt_{1}\) is the same simple agent and \(Agt_{2}\) tries to collide with other cars.
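A minimal sketch of this fault-injection scheme (class and field names are hypothetical, not the authors' code):

```python
from typing import Callable

State = dict  # hypothetical concrete state representation
Policy = Callable[[State], int]

class FaultyAgent:
    """Pass control from agent_1 to agent_2 once `trigger` fires.

    `trigger` is a predicate over concrete states, e.g.
    lambda s: s["lane"] == 2 and s["below_car"]. The switch is
    permanent, simulating a latent behavioral fault.
    """

    def __init__(self, agent_1: Policy, agent_2: Policy,
                 trigger: Callable[[State], bool]):
        self.agent_1, self.agent_2 = agent_1, agent_2
        self.trigger = trigger
        self.triggered = False

    def __call__(self, state: State) -> int:
        if not self.triggered and self.trigger(state):
            self.triggered = True  # control passes to agent_2 from here on
        active = self.agent_2 if self.triggered else self.agent_1
        return active(state)
```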
_Participants._ Since we used a fairly complex debugging task, our target users were people with some knowledge of AI. We recruited thirteen participants from the university who had completed at least one AI or machine learning course (2 female, mean age \(=29.3\), STD \(=5.1\)). Participants received \(\$15\) for their participation. The experiment took on average 45 minutes to complete. _Conditions._ Participants were assigned to either the ASQ-IT system or a system that implemented the HIGHLIGHTS policy summarization algorithm [1]. We intentionally recruited more participants for the ASQ-IT condition (8 for ASQ-IT, 5 for HIGHLIGHTS), as we were interested in learning about the interaction with the system. Participants in the ASQ-IT condition interacted with the system through queries, which they could construct using drop-down menus (Figure 1 - Query Interface). Submitting a query would provide participants with up to four videos answering the query, chosen randomly from the set of all such videos. An option to load more videos was available, provided that more such videos existed. Participants in the HIGHLIGHTS condition were presented with a simple interface that included only a single video and options to load the next video or return to the previous one. Forty videos were made available this way, sorted by the importance assigned to them by the HIGHLIGHTS algorithm. We used the HIGHLIGHTS-DIV variant of the algorithm, which also takes into consideration the diversity between videos, so that the videos were unique and captured multiple important states rather than solely the most important one. We made sure the videos in both conditions had similar parameters, such as FPS and minimum length. _Tasks._ The study consisted of three main tasks: (1) elimination, (2) hypothesis generation, and (3) verification. _Elimination_: Participants explored the _Plain-TopLane_ faulty agent using their assigned explanation system. Participants were required to identify the correct trigger from a list of four options and to describe in free text the behavioral change that occurs following the trigger event. _Hypothesis generation_: Participants were shown two videos of the _Plain-Collision_ faulty agent in which the trigger event and the behavior change appear. They were told what the fault was (i.e., trying to collide with other cars) and were asked to hypothesize what trigger event caused the change in behavior. Participants were also asked to describe how they would use the system to refute or validate their hypothesis. _Verification_: Participants were asked to try to refute or validate their proposed hypotheses using the explanation system and, if need be, raise new ones. _Procedure._ First, participants were introduced to the Highway domain and its key elements. Next, participants were familiarized with the explanation system they would be using, either the ASQ-IT interface or an interface for watching HIGHLIGHTS videos. During this instruction phase, participants could interact with the system and learn how to operate it (this was optional, but all participants chose to do so). When satisfied, participants moved on to the study tasks. All tasks included a confidence-rating question on a 1 to 7 Likert scale. Lastly, participants answered an explanation satisfaction survey based on [11], provided textual feedback on the system they used, and answered demographic questions2. All sessions were conducted in the presence of the first author, who encouraged participants to think aloud.
The sessions were recorded, including both the computer screen and the audio. Participants' actions in the system were logged. Footnote 2: Full user-study available at [https://bit.ly/3GJV394](https://bit.ly/3GJV394) We assigned success scores to participants based on the relation between their answer and the correct trigger event: (1) No relation: 0 points, (2) Partial relation (e.g., specifying only one of two conditions for the trigger event): 1 point, (3) Exact trigger included (when multiple hypotheses were raised): 2 points, and (4) Exact trigger chosen: 3 points. **Results & Discussion** We report the main observations regarding participants' experience and performance with ASQ-IT and compare them to the use of HIGHLIGHTS. We analyzed participants' activities based on the session recordings and system logs. We report the average scores of participants as well as the average explanation satisfaction ratings in Figure 3 (D,E). As we are mainly interested in the process of using different explanation approaches, we elaborate on qualitative observations made from the analysis of participants' activities. _ASQ-IT participants revised their hypotheses._ Six out of eight ASQ-IT participants revised the hypotheses they generated in the second task based on the explanation videos returned by their queries to the ASQ-IT interface, while the other two were confident in theirs and chose to keep them. Meanwhile, only one participant in the HIGHLIGHTS condition revised their original hypothesis. _Most ASQ-IT participants who revised their hypotheses improved their identification of the trigger event._ Out of the six ASQ-IT participants who revised their hypotheses, four were able to improve their score on the final answer. The remaining two participants maintained the same score. In contrast, the HIGHLIGHTS participant who revised her initial hypothesis received a lower score for her final answer compared to her initial response. The average change in participant success is illustrated in Figure 3(E).
Figure 2: Simulating a faulty agent: the _Plain-Collision_ setup.
Figure 3: **Top:** Usability Study Results. **Bottom:** Agent Faults Study Results.
_Participants' method of hypothesis verification differed significantly between conditions._ This was most evident in the _elimination_ task. ASQ-IT participants were able to choose which trigger to inspect, define it as a query, and observe videos of the agent in those situations. They were all able to eliminate options until reaching the correct answer. For six out of eight participants, the correct trigger became immediately evident once queried. The two remaining participants required additional queries in order to be convinced before ultimately selecting the correct trigger. Apart from one participant, who struggled initially with the interface, mostly due to confusion regarding the role of the constraint drop-downs, all other ASQ-IT participants resolved the elimination task quickly and reported it as easy. HIGHLIGHTS participants, on the other hand, had no control over the videos they received, and as such were forced to watch each movie without knowing which trigger option might appear. Four out of five participants' process involved associating each movie with a possible trigger in the list, while the remaining participant searched the videos for noticeable patterns and then compared them to the list.
Both processes become tedious and cognitively overwhelming as the number of options grows, especially when there is no guarantee that any of the trigger options will appear. When describing their decision process for the final answer, all noted that the task was not easy and that their answers were mostly based on which triggers they had seen most often in the videos. _ASQ-IT participants who identified the correct trigger were able to verify it._ Out of the five ASQ-IT participants who identified the correct trigger (at some point), four were able to verify it using ASQ-IT and submit the correct answer. A typical verification process involved formulating queries that specified hypothesized trigger events and reviewing the retrieved video clips to see whether these indeed led to the behavior change. Meanwhile, three out of five HIGHLIGHTS participants refuted the correct hypothesis in favor of a more general, but partial answer. This can be associated with the same loss of confidence derived from a self-reported lack of control over explanation videos, as further discussed below. _ASQ-IT participants calibrated their confidence._ Six out of eight ASQ-IT participants adjusted their reported confidence in a justifiable way based on their interaction with the system. These include two participants who adjusted upwards due to successfully identifying the correct trigger and four participants who adjusted downwards based on either the need for revisions or the recognition that the exact answer was not found. The remaining two participants either experienced no confidence change, due to recognizing the correct trigger and validating it, or were unaware of their partial solution due to confirmation bias, which inflated their confidence. While interesting, we take this observation with a grain of salt, as there are typically substantial individual differences in confidence and the sample size is small. Four out of five HIGHLIGHTS participants also calibrated their confidence. Three of them lowered their confidence and commented that they were not able to view the videos that they thought would help them validate or refute their hypothesis. That is, in contrast to the ASQ-IT participants, who lowered their confidence due to observing information that did not align with their hypothesis, HIGHLIGHTS participants lowered their confidence because the system did not provide them with helpful information. _ASQ-IT participants reported higher explanation satisfaction._ Upon completion of the study tasks, HIGHLIGHTS participants reported more frustration and less satisfaction regarding the explanation system they were assigned, as can be seen in Figure 3(D). All HIGHLIGHTS participants mentioned feeling a lack of control regarding the videos they were shown, four participants stated difficulty in validating or refuting their hypotheses, and three reported loss of confidence. One of the participants summarized these difficulties in a concise manner, stating that _"Lack of variance [in videos]... hard to refute hypotheses"_ and _"Lack of consistency [in videos]... hard to validate hypotheses"_. ASQ-IT participants, on the other hand, largely reported a very positive experience with the explanation system. This positive experience was also evident both in the participants' feedback, where they suggested features and options they would like the system to support in the future, and in verbal comments made off the record after the experiment ended. 
## 5 Summary and Future Work We developed ASQ-IT -- an XRL interactive tool for querying AI agents that utilizes formal methods. Results from two user studies demonstrate that the tool is usable even by laypeople and that it supported users with no background in temporal logic in an agent-debugging task. In the debugging task, the tool was more useful than a baseline static explanation approach, as it enabled users to specify the information that they wished to explore regarding the agent's policy. Beyond the improvement in participants' objective performance in the task, there were noticeable differences in the process of exploring agent behavior. In particular, participants using ASQ-IT were more engaged, more open to new hypotheses, and felt more in control than participants using the static explanation. These findings highlight the potential benefits of designing more interactive explanation methods. There are several directions that can be explored in future work. A key question in the design of the tool is the balance between expressivity and complexity. It is possible that alternative interface designs could provide better scaffolding for more complex queries, such that users could gradually extend their ability to examine policies. Moreover, it would be interesting to go beyond specifications of state predicates and develop a language for describing more abstract queries about the behavior of the agent (e.g., allowing users to query for "risky" behaviors). The tool could also be improved by integrating into it a variety of existing explanation methods, such that users could alternate between different pre-specified explanations and specifying their own queries. Such pre-specified explanations may help users identify which aspects of the agent's policy should be explored further.
2302.01695
Symmetric hypergraph states: Entanglement quantification and robust Bell nonlocality
Quantum hypergraph states are the natural generalization of graph states. Here we investigate and analytically quantify entanglement and nonlocality for large classes of quantum hypergraph states. More specifically, we connect the geometric measure of entanglement of symmetric hypergraphs to their local Pauli stabilizers. As a result we recognize the resemblance between symmetric graph states and symmetric hypergraph states, which explains both the exponentially increasing violation of local realism for infinitely many classes of hypergraph states and its robustness towards particle loss.
Jan Nöller, Otfried Gühne, Mariami Gachechiladze
2023-02-03T12:49:32Z
http://arxiv.org/abs/2302.01695v1
# Symmetric hypergraph states: Entanglement quantification and robust Bell nonlocality ###### Abstract Quantum hypergraph states are the natural generalization of graph states. Here we investigate and analytically quantify entanglement and nonlocality for large classes of quantum hypergraph states. More specifically, we connect the geometric measure of entanglement of symmetric hypergraphs to their local Pauli stabilizers. As a result we recognize the resemblance between symmetric graph states and symmetric hypergraph states, which explains both the exponentially increasing violation of local realism for infinitely many classes of hypergraph states and its robustness towards particle loss. ## 1 Introduction Multipartite entanglement is believed to be the key ingredient for many applications such as quantum simulation, metrology, and protocols in quantum information processing. Accordingly, its quantitative and qualitative characterization is of great importance. However, this task has turned out to be difficult due to the exponentially increasing dimension of the Hilbert space in which these states live. What is more, it is known that almost the entire bulk of this huge Hilbert space is useless for quantum information processing [1, 2, 3]. Consequently, research has focused on classes of multipartite entangled states which are easy to characterise and manipulate and have a wide gamut of applications. In fact, symmetries and other kinds of simplifications seem to be essential for a state to be an interesting resource. One of the fundamental ways to explore the structure of multipartite states is to quantify their entanglement using entanglement measures. In this work, we concentrate on the geometric measure of entanglement [4, 5], which has become a staple method due to its desirable properties as an entanglement monotone. The geometric measure calculates the distance of a given state from the set of separable pure states. Despite its simple definition, it is very hard to compute due to the large number of optimization parameters. The geometric measure has been analytically estimated only for a few classes of states [4, 6, 7], upper and lower bounds have been derived [8, 9, 10, 11, 12], and numerical methods have been considered to find states with high entanglement [13, 14, 15]. In order to ease the complexity of the problem, symmetries of a quantum state have been utilized. It was proven that the closest separable state to a symmetric multiqubit state is also symmetric [16]. In this work, we investigate the geometric measure of entanglement of hypergraph states. We mainly focus on classes of fully symmetric states and derive analytic expressions for them. Hypergraph states [17, 18, 19, 20] are generalizations of graph states, which themselves are one of the most prominent classes of useful entangled states [21, 22, 23, 24, 25, 26], with their symmetric representative being the Greenberger-Horne-Zeilinger (GHZ) state. The geometric measure of GHZ states is trivial to calculate and is equal to \(1/2\) for any number of qubits. For graph states in general, however, only limited results are known [27], even though they are simpler to describe and work with due to the stabilizer formalism. Hypergraph states have a much richer structure of nonlocal stabilizers, which makes them on the one hand complicated to classify [20, 28, 29, 30], but on the other hand an interesting and robust resource for various information processing tasks [31, 32, 33]. 
Here we connect the geometric measure of hypergraph states with their local properties. Some symmetric hypergraph states also have local Pauli stabilizers. See Ref. [34] for the full characterization. We give methods to simplify calculations by reducing the number of parameters involved in the optimization when the states have certain properties. As a result, we derive analytical expressions for the geometric measure. As a by-product of our method, we obtain an interesting way to write down hypergraph states, which explains their resemblance with GHZ states when considering their nonlocal properties. Moreover, we reproduce the proofs of extreme nonlocality in a much more concise and intuitive manner, and further generalize the robustness results to more states and qubits. These findings give new insight into the structure of hypergraph states and could be used to derive new Bell inequalities and self-testing arguments. The paper is organized in the following way: First, we review hypergraph states and the conditions under which complete symmetric hypergraph states are stabilised by local Pauli operators. We proceed by using these symmetries to find new representations of the respective hypergraph states under local unitary transformations. Subsequently, we exploit these representations to derive results on the geometric measure of entanglement. Finally, we give proofs of the extreme violation of local realism in hypergraph states and, lastly, discuss the robustness of entanglement in hypergraph states against particle loss. Figure 1: Quantum hypergraph states. (a) A hypergraph state containing all possible hyperedges containing exactly three vertices. This hypergraph state is called a _complete three-uniform_ hypergraph state. (b) A four-qubit hypergraph state. (c) A hypergraph state with a single hyperedge. ### Notation With \(X,Y,Z\), we denote the three Pauli operators \(\sigma_{x},\sigma_{y},\) and \(\sigma_{z}\), and we use \(P\) as a placeholder if we want to refer to them collectively. In that spirit, \(\sqrt{P}_{\pm}\) shall denote the operator which squares to \(P\in\{X,Y,Z\}\) and has eigenvalues \(1,\pm i\). One class of states frequently appearing are different types of GHZ states, for which we use the following shorthand notation: \[|GHZ_{P}^{\pm}\rangle=\frac{1}{\sqrt{2}}\left(|{+_{P}}\rangle^{\otimes N}\pm|{ -_{P}}\rangle^{\otimes N}\right),\] with \(|\pm_{P}\rangle\) being the \(\pm 1\)-eigenstate of \(P\). Here, we usually omit the subscript when referring to the computational basis (\(Z\)), and do the same for the superscript if the relative phase is positive. Many of our calculations will involve the _weight_ of a computational basis element \(|x\rangle=|i_{1},\ldots i_{N}\rangle,\,i_{j}\in\{0,1\}\), which is just \(w(x)=i_{1}+\cdots+i_{N}\). ## 2 Hypergraph states Consider a hypergraph \(H=(V,E)\), defined over a set of vertices \(V\) and a set of hyperedges \(E\), which may connect more than two vertices. From \(H\) we can naturally construct a \(|V|\)-qubit quantum state, defined in the following manner: \[|H\rangle=\prod_{e\in E}C_{e}|{+}\rangle^{\otimes|V|}, \tag{1}\] where the \(C_{e}\) gates are generalized \(CZ\) gates on \(|e|\) qubits, defined as \(C_{e}=\mathds{1}-2|11\ldots 1\rangle\langle 11\ldots 1|\). See Fig. 1 for examples of hypergraphs. We say that a hypergraph state is \(k\)-uniform if all of its hyperedges connect exactly \(k\) vertices. As an example, the hypergraphs in Fig. 1 (a) and (b) are \(k=3\)-uniform and Fig. 1 (c) is a \(k=5\)-uniform hypergraph. 
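To make Eq. (1) concrete, here is a minimal numerical sketch (plain numpy, written for this text; the function names are illustrative and not from any library). Since the \(C_{e}\) gates are diagonal, a hypergraph state is simply \(|+\rangle^{\otimes N}\) with a sign flip on every computational basis state whose bits are all \(1\) on some hyperedge:

```python
import numpy as np
from itertools import combinations

def hypergraph_state(n, edges):
    """State vector of |H> = prod_e C_e |+>^{n}  (Eq. (1)); qubit i = bit i (MSB first)."""
    psi = np.full(2 ** n, 2 ** (-n / 2))          # |+>^{otimes n}
    for x in range(2 ** n):
        bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
        for e in edges:
            if all(bits[i] == 1 for i in e):      # C_e flips the sign of |1...1> on e
                psi[x] *= -1
    return psi

def complete_uniform_edges(n, k):
    """All hyperedges of cardinality k: the k-uniform complete hypergraph."""
    return list(combinations(range(n), k))

# Example: the state with a single hyperedge over all vertices (Fig. 1 (c))
# differs from |+>^N only in the sign of |1...1>.
psi = hypergraph_state(5, [tuple(range(5))])
assert np.isclose(psi[-1], -2 ** (-5 / 2)) and np.isclose(psi[0], 2 ** (-5 / 2))
```

The helpers above are reused in the later snippets of this text.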
Similarly to the graph notation, we say that a hypergraph state is \(k\)-uniform complete if it contains all hyperedges of size \(k\). Additionally, Fig. 1 (a) is a \(3\)-uniform complete hypergraph and Fig. 1 (c) is \(5\)-uniform complete. Complete hypergraphs correspond to the permutation symmetric states. Finally, we say that a state is \(\mathbf{k}\)-uniform complete, for a vector \(\mathbf{k}=(k_{1},\ldots,k_{m})\), if it contains all hyperedges of cardinality \(k_{i}\) for all \(i=1,\ldots,m\). Like graph states, hypergraph states have an alternative definition using a stabilizer formalism; however, unlike in the graph state case, these stabilizers are nonlocal, i.e. they are not tensor products of local Pauli operators. Instead they contain nonlocal phase gates. For a vertex \(i\in V\), the associated stabilizer operator is given by the expression: \[h_{i}=X_{i}\bigotimes_{e_{j}\in\mathcal{A}(i)}C_{e_{j}}, \tag{2}\] where \(X_{i}\) is the Pauli-\(X\) gate acting on the \(i\)-th qubit and \(\mathcal{A}(i)\) is the adjacency set of the vertex \(i\), defined as \(\mathcal{A}(i)=\{e-\{i\}|e\in E\text{ with }i\in e\}\). To put it simply, the elements of \(\mathcal{A}(i)\) are sets of vertices which are adjacent to \(i\) via some hyperedge. The hypergraph state \(|H\rangle\) is then the unique pure state which is invariant under the action of the group generated by those stabilizer operators. However, there are cases when some hypergraph states have a local Pauli stabilizer. To give a simple example, the hypergraph state in Fig. 1 (b) is an eigenstate of a local Pauli operator and, less trivially, Fig. 1 (a) is an eigenstate of \(Y_{1}\otimes Y_{2}\otimes Y_{3}\otimes Y_{4}\). In Ref. [34], necessary and sufficient conditions were derived for symmetric hypergraph states to have local Pauli stabilizers and the explicit form of these stabilizers was given. For completeness we paraphrase this result here. **Lemma 1**.: _[_34_]_ _A symmetric \(N\)-qubit \(\mathbf{k}\)-uniform complete hypergraph state is_ 1. \("+1"\) _eigenstate of_ \(X^{\otimes N}\) _iff for_ \(0\leq w\leq N\) \[\sum_{i=1}^{m}\binom{w}{k_{i}}=\sum_{i=1}^{m}\binom{N-w}{k_{i}},\quad(\mbox{ mod }2).\] (3) 2. \("+1"\) _eigenstate of_ \(-X^{\otimes N}\) _iff for_ \(0\leq w\leq N\) \[\sum_{i=1}^{m}\binom{w}{k_{i}}=\sum_{i=1}^{m}\binom{N-w}{k_{i}}+1,\quad(\mbox{ mod }2).\] (4) 3. \("+1"\) _eigenstate of_ \(Y^{\otimes N}\) _iff for_ \(0\leq w\leq N\) \[\sum_{i=1}^{m}\binom{w}{k_{i}}=\sum_{i=1}^{m}\binom{N-w}{k_{i}}+w+\frac{N}{2}, \quad(\mbox{mod }2).\] (5) _These cases are in fact comprehensive when_ \(\max_{i}k_{i}>2\)_, meaning that, unless we are dealing with graph states, symmetric hypergraph states can have only one of these three Pauli stabilizers._ We add another observation. For even \(N\) we have that \(\binom{w}{2}=\binom{N-w}{2}+w+N/2\pmod{2}\). With the palindrome conditions above, it is evident that by adding/removing all hyperedges with cardinality 2 on even-qubit hypergraph states we can map \(X^{\otimes N}\)-stabilised states to \(Y^{\otimes N}\)-stabilised states and vice versa. ## 3 The geometric measure of hypergraph states The _geometric measure of entanglement_ is an entanglement measure, denoted by \(E_{G}\), and is defined to be one minus the maximal squared overlap between a given state and the closest product state: \[E_{G}(|\psi\rangle)=1-\max_{|\phi\rangle=|a\rangle|b\rangle|c\rangle\ldots}| \langle\phi|\psi\rangle|^{2}. \tag{6}\] For a pure state it quantifies the distance to the set of separable states [4, 35]. 
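Before turning to the analytic techniques discussed next, Eq. (6) can be approximated by brute force for small symmetric states. The sketch below (reusing `hypergraph_state` and `complete_uniform_edges` from the snippet above) scans symmetric product states \((\cos\theta|0\rangle+e^{i\varphi}\sin\theta|1\rangle)^{\otimes N}\); restricting to a symmetric ansatz is justified for permutation-symmetric states [16], as discussed below, and the grid resolution is an arbitrary illustrative choice:

```python
import numpy as np

def geometric_measure_symmetric(psi, n, steps=300):
    """Grid-search Eq. (6) over symmetric product states for a symmetric |psi>."""
    best = 0.0
    for theta in np.linspace(0, np.pi / 2, steps):
        for phi in np.linspace(0, 2 * np.pi, steps, endpoint=False):
            a, b = np.cos(theta), np.exp(1j * phi) * np.sin(theta)
            prod = np.array([1.0 + 0j])
            for _ in range(n):                   # build (a|0> + b|1>)^{otimes n}
                prod = np.concatenate([a * prod, b * prod])
            best = max(best, abs(np.vdot(prod, psi)) ** 2)
    return 1 - best

# The 2-uniform complete (graph) state is LU-equivalent to GHZ, so E_G ~ 1/2:
print(geometric_measure_symmetric(hypergraph_state(3, complete_uniform_edges(3, 2)), 3))
# Four-qubit three-uniform complete state: ~0.5716, cf. the example below.
print(geometric_measure_symmetric(hypergraph_state(4, complete_uniform_edges(4, 3)), 4))
```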
In general such optimisation problems are difficult to handle analytically, since one needs to optimise over an increasing number of parameters in the multipartite case. On the other hand, there are several tricks that one can use to make the calculations easier: (i) If the state can be mapped under local unitaries to another state with nonnegative real coefficients, then the optimisation over the new state can be done using real product states. This can be seen by observing that if \(a_{i}\geq 0\) for all \(i\), the triangle inequality \(|\sum_{i}a_{i}b_{i}|\leq\sum_{i}|a_{i}||b_{i}|=\sum_{i}a_{i}|b_{i}|\) is an equality if we also have \(b_{i}\geq 0\) for all \(i\). Note that local unitary operations do not change the entanglement properties of the state they are applied to. Due to normalization constraints, this reduces the number of optimization parameters by half. (ii) Moreover, if the given multipartite state is permutation symmetric and has more than two parties, one can choose the closest product state to be permutation symmetric as well [16]. To sum up, if we have permutation symmetric states with nonnegative coefficients, the problem can be reduced to a one-parameter optimization. In this section we show that for certain classes of hypergraph states we can use the tricks above to evaluate their entanglement. Let us consider the example of the four-qubit three-uniform complete hypergraph state \(|H_{4}^{3}\rangle\) given in Fig. 1 (a). One can directly check that the unitary matrix \(U^{\otimes 4}\) with \[U=\left(\begin{array}{cc}\cos(t)&\sin(t)\\ -\sin(t)&\cos(t)\end{array}\right) \tag{7}\] and parameter \(t=1/2\arctan[1/2(\sqrt{5}-1)]\) transforms the hypergraph state \(|H_{4}^{3}\rangle\) to: \[|S_{H_{4}}\rangle= \frac{1}{8}(3-\sqrt{5})\big{(}|0000\rangle+|1111\rangle\big{)}+ \frac{1}{8}(1+\sqrt{5})(|0011\rangle+\mbox{perm.}). \tag{8}\] Then, using a single-parameter optimization, one can analytically calculate that \(E_{G}(|H_{4}^{3}\rangle)=\frac{25-3\sqrt{5}}{32}\approx 0.571619\). This value was previously derived in Ref. [20], but only numerically. The technique of mapping a state to all positive coefficients can be used for wider classes of hypergraph states. We can consider the following lemma as one of the examples. **Lemma 2**.: _Let \(|H_{N}\rangle\) be an \(N\)-qubit hypergraph state corresponding to a hypergraph with one vertex which is contained in all hyperedges. Then the geometric measure of entanglement of \(|H_{N}\rangle\) can be calculated over real product vectors._ Figure 2: Entanglement of the \(N\)-qubit hypergraph states with a single \(N\)-cardinality hyperedge. The blue dots give the value of the geometric measure of entanglement for \(|H_{N}^{N}\rangle\) for \(3\) to \(10\) qubits. One can analytically check that the overlap maximization function has one global maximum. However, the expressions involved for the values are too cumbersome; thus, here we give their corresponding numerical values. As expected, the entanglement quickly goes to zero as the number of qubits grows. The red dots (the lower ones) correspond to the lower bounds obtained in Ref. [36], where the optimization is considered over bipartite states instead of product ones. Proof.: After relabelling, we may assume that the vertex featured in all hyperedges corresponds to the first site. 
Then we can rewrite the state as \[|H\rangle=\frac{1}{\sqrt{2}}(|0\rangle|+\rangle^{\otimes N-1}+|1\rangle|\widetilde {H}\rangle), \tag{9}\] where \(|\widetilde{H}\rangle\) is some hypergraph state over the remaining vertices. We now apply the Hadamard operator \(\mathcal{H}\) to the first site: \[\mathcal{H}_{1}|H\rangle=\frac{1}{\sqrt{2}}(|+\rangle^{\otimes N}+|-\rangle| \widetilde{H}\rangle). \tag{10}\] Every computational basis entry appears with coefficient \(\frac{+1}{\sqrt{2}^{N+1}}\) in the first summand and with \(\frac{\pm 1}{\sqrt{2}^{N+1}}\) in the second, and the contributions either add up to zero or to \(\frac{1}{\sqrt{2}^{N-1}}\). Thus, the resulting state has an optimal product-state overlap that can be taken real, and since the Hadamard gate is also real, the same holds for the original state \(|H\rangle\). As an application we can directly calculate the geometric measure of entanglement of \(N\)-qubit hypergraph states \(|H_{N}^{N}\rangle\) with a single hyperedge connecting all \(N\) vertices (see Fig. 1 (c) for an example). The overlap of \(|H_{N}^{N}\rangle\) with a symmetric state \((a|0\rangle+b|1\rangle)^{\otimes N}\) with \(a,b\in\mathbb{C}\) is \(\frac{1}{\sqrt{2}^{N}}\left((a+b)^{N}-2b^{N}\right)\), and therefore \[E_{G}(|H_{N}^{N}\rangle)=1-\frac{1}{2^{N}}\max_{|a|^{2}+|b|^{2}=1}((a+b)^{N}- 2b^{N})^{2}. \tag{11}\] We can rewrite the hypergraph state by conditioning on the first qubit: \[|H_{N}^{N}\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle|+\rangle^{\otimes N-1}+|1 \rangle|H_{N-1}^{N-1}\rangle\right). \tag{12}\] Now, according to the previous observation, we can reduce the optimisation in Eq. (11) to run only over real numbers \(a,b\) with \(a^{2}+b^{2}=1\). In Fig. 2 the values of the geometric measure of entanglement are plotted for different \(N\). As expected, the state becomes very close to a product state when \(N\) increases. Our results are also compared to the previously known lower bounds, calculated in Ref. [36]. Lemma 2 can also be used in non-symmetric cases to halve the number of parameters to optimize over. To give an example, consider the hypergraph state \(|H_{4}\rangle_{b}\) in Fig. 1 (b). If one applies Hadamard gates on qubits \(1\) and \(2\) one obtains the following state: \[|\widetilde{H}_{4}\rangle_{b}=\frac{1}{2}(|0000\rangle+|0001\rangle+|0010 \rangle+|1111\rangle), \tag{13}\] for which the analytic value of the geometric measure of entanglement, \(E_{G}(|H_{4}\rangle_{b})=(5-\sqrt{5})/8\), can be directly calculated using derivatives of the two-variable function. We now consider a more systematic approach to analytically calculate the geometric measure of entanglement for several classes of symmetric hypergraph states. Specifically, we subsequently focus our attention on hypergraph states which exhibit local Pauli symmetries. It turns out that for low hyperedge cardinalities, we can use local square roots of these Pauli stabilizers to map hypergraph states to vectors with real positive coefficients. A well-known example of this is the GHZ state. The fully-connected \(2\)-uniform hypergraph states, which are stabilised by \(X^{\otimes N}\), can be mapped to the \(N\)-qubit GHZ state up to a global phase using square roots of the local Pauli-\(X\) stabilizer: \[\sqrt{X}_{+}^{\otimes N}|H_{N}^{2}\rangle=\pm\frac{1}{\sqrt{2}}(|0\ldots 0 \rangle+|1\ldots 1\rangle). \tag{14}\] In all the other cases, a relative phase occurs, which can be corrected with the additional application of local Clifford phase gates. 
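The mapping in Eq. (14) is easy to check numerically. Below is a small sketch (again plain numpy, reusing the helpers above); by Lemma 1, the complete graph state is \(X^{\otimes N}\)-stabilised for \(N\equiv 1\bmod 4\), so we test \(N=5\). The matrix used for \(\sqrt{X}_{+}\) is the identity \(\frac{1}{2}((1+i)\mathds{1}+(1-i)X)\), which also appears later in the appendix:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
sqrtX_plus = ((1 + 1j) * np.eye(2) + (1 - 1j) * X) / 2   # eigenvalues 1, +i

def apply_local(u, psi, n):
    """Apply the same single-qubit gate u to every qubit of an n-qubit state."""
    psi = psi.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(u, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

n = 5
psi = apply_local(sqrtX_plus, hypergraph_state(n, complete_uniform_edges(n, 2)), n)
ghz = np.zeros(2 ** n, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
phase = psi[0] / ghz[0]                                   # the +- sign of Eq. (14)
assert np.isclose(abs(phase), 1) and np.allclose(psi, phase * ghz)
```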
We generalize this result to fully-connected three-uniform complete hypergraph states \(|H_{N}^{3}\rangle\) and write them in a very convenient way. **Lemma 3**.: _The even-qubit three-uniform complete hypergraph states can be mapped to a superposition of the GHZ state and all odd weight vectors after the application of square roots of local Pauli operators, corresponding to the respective stabilisers._ \(\mathbf{N}\equiv\mathbf{2}\mod\mathbf{4}\)_:_ \(|H_{N}^{3}\rangle\) _is stabilised by_ \(X^{\otimes N}\) _and with_ \(|\widetilde{H}_{N}^{3}\rangle=\sqrt{X}_{+}^{\otimes N}|H_{N}^{3}\rangle\) _we have_ \[|\widetilde{H}_{N}^{3}\rangle=\pm\frac{1}{\sqrt{2}}|GHZ\rangle+\frac{1}{\sqrt {2}^{N}}\sum_{w(x)\,\mathrm{odd}}|x\rangle. \tag{15}\] \(\mathbf{N}\equiv\mathbf{0}\mod\mathbf{4}\)_:_ \(|H_{N}^{3}\rangle\) _is stabilised by_ \(Y^{\otimes N}\) _and with_ \(|\widetilde{H}_{N}^{3}\rangle=\sqrt{Y}_{+}^{\otimes N}|H_{N}^{3}\rangle\)_, we have_ \[|\widetilde{H}_{N}^{3}\rangle=\pm\frac{1}{\sqrt{2}}|GHZ\rangle+\frac{1}{\sqrt {2}^{N}}\sum_{w(x)\,\mathrm{odd}}(-i)^{w(x)-1}|x\rangle. \tag{16}\] The proof of the lemma is given in A.2. Additionally, in A, we derive in full generality the state vector after the application of the square roots of local Pauli-\(X\) and \(Y\) operators; see Eq. (A.9) and Eq. (A.18) for the closed formulae. In the \(Y\)-stabilised case, we can get rid of the alternating signs on the odd weights by applying an extra \(\sqrt{Z}_{+}\) on each site; however, we then pick up an imaginary part in the process. The negative sign in front of the GHZ state occurs when the number of qubits is \(N=8k+4\) or \(N=8k+6\), respectively. By applying local Pauli-\(Z\) to every site, and neglecting the global sign, we can always ensure, without loss of generality, that the sign in front of the GHZ part is positive. For \((3,2)\)-uniform hypergraph states we can derive very similar results; see Lemma 9 in A.1 for the exact statement. From the new way of writing the three-uniform complete hypergraph state we can derive an analytical expression for the geometric measure of entanglement for \(|H_{N}^{3}\rangle\). **Proposition 1**.: _The geometric measure of entanglement of an \(N\)-qubit three-uniform complete hypergraph state with \(N\equiv 2\mod 4\), i.e. it is stabilised by \(X^{\otimes N}\), can be obtained using the following expression:_ \[E_{G}(|H_{N}^{3}\rangle)=\frac{3}{4}-\frac{1}{\sqrt{2}^{N}}-\frac{1}{2^{N}}. \tag{17}\] Proof.: Using the identity in Eq. (15), we can express the overlap of \(|\widetilde{H}_{N}^{3}\rangle\) with \(|\psi_{N}(\theta)\rangle:=(\cos(\theta)|0\rangle+\sin(\theta)|1\rangle)^{\otimes N}\) by \[f_{N}(\theta):= \langle\widetilde{H}_{N}^{3}|\psi_{N}(\theta)\rangle\] \[= \frac{1}{2}\left(\cos^{N}(\theta)+\sin^{N}(\theta)+\left(\frac{1}{ \sqrt{2}}(\cos(\theta)+\sin(\theta))\right)^{N}-\left(\frac{1}{\sqrt{2}}(\cos( \theta)-\sin(\theta))\right)^{N}\right)\] \[= \frac{1}{2}\left(\cos^{N}(\theta)+\sin^{N}(\theta)+\sin^{N}\left( \theta+\frac{\pi}{4}\right)-\sin^{N}\left(\theta-\frac{\pi}{4}\right)\right).\] In order to determine the maximum of \(f_{N}(\theta)\), we rewrite \[f_{N}(\theta)=\frac{1}{2}\left(\sum_{j=0}^{3}\cos^{N}(\theta+\frac{\pi j}{4}) \right)-\cos^{N}(\theta+\frac{\pi}{4}).\] As shown in A, Lemma 10, the sum \(\sum_{j=0}^{3}\cos^{N}(\theta+\frac{\pi j}{4})\) becomes maximal at \(\theta\in\frac{\pi}{4}\mathbb{Z}\). We observe that for \(\theta=\frac{\pi}{4}\) the negative contribution \(\cos^{N}(\theta+\frac{\pi}{4})\) is zero and, since \(N\) is even, thereby also minimal. 
We can then conclude that \[\max_{\theta}f_{N}(\theta)=f_{N}\left(\frac{\pi}{4}\right)=\frac{1}{2}+\frac{ 1}{\sqrt{2}^{N}}.\] Inserting this value, the geometric measure can be exactly calculated, yielding the equality in Eq. (17). Thus, the geometric measure of entanglement for the three-uniform complete hypergraph state quickly approaches \(3/4\). Next we treat the Pauli-\(Y\) stabilised case. **Proposition 2**.: _The geometric measure of an \(N\)-qubit three-uniform hypergraph state with \(N\equiv 0\mod 4\), i.e. it is stabilised by \(Y^{\otimes N}\), can be estimated as follows:_ \[\frac{3}{4}-\frac{1}{2^{N}}-\frac{1}{\sqrt{2}^{N}}\leq E_{G}(|H_{N}^{3}\rangle) \leq\frac{3}{4}-\frac{1}{2^{N}}. \tag{18}\] _Numerical computations for different \(N\) indicate that equality does not hold in general in either estimate; see Fig. 3._ Proof.: Again, the starting point is the identity in Eq. (16). Unfortunately, the considered state does not have all positive coefficients anymore, so the closest product state need not have real coefficients. However, exploiting permutation symmetry, we can assume it to be of the form \(|\psi_{N}(\theta,\varphi)\rangle:=(\cos(\theta)|0\rangle+e^{i\varphi}\sin( \theta)|1\rangle)^{\otimes N}\). The overlap is then computed similarly to before; one just needs to keep track of the occurring phases. It is then straightforward to derive the estimate \[|\langle\widetilde{H}_{N}^{3}|\psi_{N}(\theta,\varphi)\rangle|\leq|f_{N}( \theta)|\leq\frac{1}{2}+\frac{1}{\sqrt{2}^{N}} \tag{19}\] for all \(\varphi,\theta,N\), which proves the lower bound in (18). The upper bound on the geometric measure is obtained by inserting \(\varphi=0\), \(\theta=\pi/4\). Next, we extend our results to five-uniform complete hypergraph states, and more generally to ones with hyperedge cardinality of the type \((2^{r}+1)\) which are stabilized either by \(X^{\otimes N}\) or \(Y^{\otimes N}\) operators with the \("+1"\) eigenvalue. In these cases, the sequence of binomial coefficients \(\binom{w}{2^{r}+1}\) has a nice structure, which allows us to derive local unitary equivalences in the spirit of Lemma 3. We continue by using these equivalences to obtain analytical results on the geometric measure of entanglement in the five-uniform case, and conjecture some estimates in the general case. It turns out that the second part of Lemma 3 is a special case of the following result, which is proven in A.3.2. **Lemma 4**.: _Let \(|H_{N}^{\mathbf{k}}\rangle=\frac{1}{\sqrt{2}^{N}}\sum_{x}(-1)^{f(w(x))}|x\rangle\) be a symmetric \(\mathbf{k}\)-uniform \(N\)-qubit hypergraph state, which is stabilised by \(Y^{\otimes N}\). Further, assume that \(f\) is \(2^{r}\)-periodic, \(f(w)=0\) for all even \(w\), and \(N\equiv 0\mod 2^{r-1}\). Then the state can be mapped to a superposition of the GHZ state and some odd weights by applying \(\sqrt{Y}_{+}^{\otimes N}\), i.e._ \[|\widetilde{H}_{N}^{\mathbf{k}}\rangle=\sqrt{Y}_{+}^{\otimes N}|H_{N}^{ \mathbf{k}}\rangle=\frac{1}{\sqrt{2}}|GHZ\rangle+\frac{1}{\sqrt{2}}|\phi_{ \mathrm{odd}}^{\mathbf{k}}\rangle, \tag{20}\] _where \(|\phi_{\mathrm{odd}}^{\mathbf{k}}\rangle\) is a normalized quantum state depending on \(\mathbf{k}\) which features odd weight contributions only in the computational basis._ The conditions look somewhat artificial at first sight; however, in addition to all single-cardinality states \(|H_{N}^{k}\rangle\) which are stabilised by \(Y^{\otimes N}\), they encompass e.g. \(24\)-qubit \((5,9)\)-uniform or \(16\)-qubit \((3,5,9)\)-uniform hypergraph states. 
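Lemma 3 is also straightforward to verify numerically. A quick sketch for \(N=6\) (so \(N\equiv 2\bmod 4\), the \(X^{\otimes N}\)-stabilised case), reusing the helpers from the previous snippets; the expected amplitudes are read off Eq. (15), with the negative GHZ sign occurring for \(N=8k+6\):

```python
import numpy as np

n = 6
psi = apply_local(sqrtX_plus, hypergraph_state(n, complete_uniform_edges(n, 3)), n)
for x in range(2 ** n):
    w = bin(x).count("1")
    if w % 2 == 1:
        assert np.isclose(psi[x], 2 ** (-n / 2))   # odd weights: +1/sqrt(2)^N
    elif x in (0, 2 ** n - 1):
        assert np.isclose(abs(psi[x]), 0.5)        # GHZ part: amplitude +-1/2
    else:
        assert np.isclose(psi[x], 0)               # other even weights vanish
```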
**Lemma 5**.: _Let \(r\geq 3\), and \(N=l2^{r}+2^{r-1}\). Then the \(k=(2^{r-1}+1)\)-uniform complete \(N\)-qubit hypergraph state is stabilised by \(X^{\otimes N}\) and we have_ \[|\widetilde{H}_{N}^{k}\rangle=\sqrt{X}_{+}^{\otimes N}|H_{N}^{k}\rangle=\frac {1}{\sqrt{2}}|GHZ_{X}\rangle+(-1)^{l}\sum_{w(x)\,\mathrm{odd}}c_{w(x)}|x\rangle, \tag{21}\] _where \(|GHZ_{X}\rangle=\frac{1}{\sqrt{2}}(|+\rangle^{\otimes N}+|-\rangle^{\otimes N})\), and the odd coefficients are given by_ \[c_{w}=\frac{2}{2^{r}}\sum_{j=1,\,\mathrm{odd}}^{2^{r}-1}i^{j-1}\frac{\cos^{N -w}\left(\frac{\pi j}{2^{r}}\right)\sin^{w}\left(\frac{\pi j}{2^{r}}\right)}{ \cos\left(\frac{2\pi j}{2^{r}}\right)}. \tag{22}\] **Lemma 6**.: _Let \(r\geq 3\), and \(N=l2^{r}\). Then the \(k=(2^{r-1}+1)\)-uniform complete \(N\)-qubit hypergraph state is stabilised by \(Y^{\otimes N}\) and gets mapped to_ \[|\widetilde{H}_{N}^{k}\rangle=\frac{1}{\sqrt{2}}|GHZ\rangle+\sum_{w(x)\, \mathrm{odd}}i^{w-1}c_{w(x)}|x\rangle \tag{23}\] _by \(\sqrt{Y}_{+}^{\otimes N}\), where the odd coefficients are given by_ \[c_{w}=\frac{2}{2^{r}}\sum_{j=1,\,\mathrm{odd}}^{2^{r}-1}\frac{\cos^{N-w}( \frac{\pi j}{2^{r}})\sin^{w}(\frac{\pi j}{2^{r}})}{\sin(\frac{2\pi j}{2^{r}})}. \tag{24}\] Proofs of the lemmata are given in A.3. Next, we look at the geometric measure of five-uniform complete hypergraph states with local Pauli stabilisers. In the five-uniform case, the situation is still somewhat well-behaved, since one can easily compute the coefficients in Eq. (22) to be \[c_{w}=\frac{\left(-1\right)^{l}}{\sqrt{2}}\left(\cos^{N-w}\left(\frac{\pi}{8} \right)\sin^{w}\left(\frac{\pi}{8}\right)+\sin^{N-w}\left(\frac{\pi}{8}\right) \cos^{w}\left(\frac{\pi}{8}\right)\right). \tag{25}\] In particular (by applying local Pauli-Z if necessary), we have a local unitary transformation which maps \(X\)-stabilised five-uniform hypergraph states to states with nonnegative coefficients. The closest product state to \(|H_{N}^{5}\rangle\) (when \(N\equiv 4\mod 8\)) can therefore be assumed to have the form \((\sin\theta|0\rangle+\cos\theta|1\rangle)^{\otimes N}\) for some \(\theta\in[0,\pi/2]\). After a few algebraic transformations on the analytic expression for the overlap, we can use the proof strategy for the optimisation in the three-uniform setting. Again, we provide an exact value for the \(X^{\otimes N}\)-stabilised states and estimates for the \(Y^{\otimes N}\)-stabilised cases, both converging to \(3/4\) as \(N\) increases. **Theorem 1**.: _Let \(|H_{N}^{5}\rangle\) be a hypergraph state which is either stabilised by \(X^{\otimes N}\), or \(Y^{\otimes N}\). Then we have the following analytical results for the geometric entanglement measure of \(|H_{N}^{5}\rangle\):_ 1. _If_ \(|H_{N}^{5}\rangle\) _is_ \(X^{\otimes N}\)_-stable, we have the exact expression:_ \[E_{G}(|H_{N}^{5}\rangle)=\frac{3}{4}-\lambda_{N}-\lambda_{N}^{2},\] (26) _where_ \(\lambda_{N}=\frac{1}{\sqrt{2}}\left(\cos^{N}\left(\frac{\pi}{8}\right)-\sin^ {N}\left(\frac{\pi}{8}\right)\right).\)__ 2. _If_ \(|H_{N}^{5}\rangle\) _is_ \(Y^{\otimes N}\)_-stabilised, we can estimate_ \[\frac{3}{4}-\lambda_{N}-\lambda_{N}^{2}\leq E_{G}(|H_{N}^{5}\rangle)\leq\frac {3}{4}.\] (27) The proof can be found in A.4.1. By exploiting the symmetries of the considered states at hand, we have improved significantly on the more universal bounds derived in Ref. [36]. 
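As a sanity check of Theorem 1 (i), the closed form can be compared against a direct one-parameter optimization over the LU-transformed state, which has real coefficients; the restriction to real symmetric product states is the reduction discussed above. This is a sketch under the same assumptions as the previous snippets, with the smallest \(X^{\otimes N}\)-stabilised case \(N=12\) (i.e. \(N\equiv 4\bmod 8\)) and an arbitrary grid size:

```python
import numpy as np
from math import comb

n = 12
lam = (np.cos(np.pi / 8) ** n - np.sin(np.pi / 8) ** n) / np.sqrt(2)
print("closed form :", 0.75 - lam - lam ** 2)           # Theorem 1 (i)

# Transform |H_N^5> with sqrt(X)_+ on every qubit; the result is real.
psi = apply_local(sqrtX_plus, hypergraph_state(n, complete_uniform_edges(n, 5)), n)
amp = [psi[(1 << w) - 1].real for w in range(n + 1)]    # one amplitude per weight
best = max(
    sum(comb(n, w) * amp[w] * np.cos(t) ** (n - w) * np.sin(t) ** w
        for w in range(n + 1)) ** 2
    for t in np.linspace(0, np.pi, 100001))
print("optimization:", 1 - best)                        # agrees up to grid resolution
```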
Numerical experiments indicate that the results of Propositions 1 and 2 and Theorem 1 about the behaviour for large \(N\) extend to complete \((2^{r}+1)\)-uniform hypergraph states with local Pauli stabilisers. However, for \(r\geq 3\), the involved terms no longer behave as nicely, which prevents us from using the same strategy of proof as for three- and five-uniform states. Moreover, the coefficients in Lemmata 5 and 6 can in general also be negative; therefore the best one can hope for are estimates, similar to the ones obtained in the \(Y^{\otimes N}\)-stabilised cases above. We conjecture that the geometric entanglement measure of \((2^{r-1}+1)\)-uniform complete hypergraph states which are stabilised by local Pauli-\(X\) satisfies the lower bound \[\frac{3}{4}-\lambda_{N}-\lambda_{N}^{2}\leq E_{G}(|H_{N}^{2^{r-1}+1}\rangle) \tag{28}\] with \[\lambda_{N}:=\frac{2}{2^{r}}\sum_{j=1,\,\mathrm{odd}}^{2^{r}-1}\mathrm{sgn} \left(\cos(\frac{\pi j}{2^{r}})\right)\frac{\cos^{N}(\frac{\pi}{4}-\frac{\pi j }{2^{r}})}{|\cos(\frac{2\pi j}{2^{r}})|}. \tag{29}\] On the other hand, the overlap of \(|\widetilde{H}_{N}^{k}\rangle\) with one of the two states \((\frac{|0\rangle\pm|1\rangle}{\sqrt{2}})^{\otimes N}\) must be greater than \(1/2\). Therefore, when the hyperedge cardinality is fixed, the geometric entanglement measure converges to \(3/4\) as \(N\) grows. Considering the previous examples, which hint at \(Y^{\otimes N}\)-stabilised states intrinsically having a higher geometric entanglement measure than \(X^{\otimes N}\)-stabilised ones, it seems natural to expect that the estimate Eq. (28) also carries over to \(Y^{\otimes N}\)-stabilised states. Here, we obtain the general upper bound \(3/4\) by observing that the overlap of \(|\widetilde{H}_{N}^{k}\rangle\) with \(|0\rangle^{\otimes N}\) is always \(1/2\). Much weaker lower estimates on the geometric measure of hypergraph states, based on entanglement witnessing using the maximal Schmidt coefficient in any bipartition, are also discussed in Ref. [36]. The results of several numerical computations for the geometric entanglement measure of different hypergraph states are plotted in Fig. 3. It shows that the lower bounds on the entanglement measure for the \(Y^{\otimes N}\)-stabilised states that we proved and conjectured are in general not saturated. ## 4 Connection to nonlocality ### Exponential violation of local realism It is known that complete hypergraph states violate Mermin-like inequalities in an exponential and robust manner [31]. Both the violation and the robustness results were highly technical, since they were derived using brute-force calculations and lacked physical intuition. Here we reformulate the results in a systematic way and then expand them further to infinitely many classes of hypergraph states. Figure 3: **A**: Numerically obtained values for the geometric measure of entanglement for different classes of hypergraph states which feature \(Y^{\otimes N}\)-symmetries. The additional grey curve represents the lower bound for the \(9\)-uniform case conjectured in Eq. (28). **B**: Comparison of the difference of the geometric measure of entanglement for different \(X^{\otimes N}\)- and \(Y^{\otimes N}\)-stabilised states to the maximum value of \(3/4\) on a logarithmic scale. Note that the interpolated values for the \(X^{\otimes N}\)-stabilized states coincide with the lower bounds in Eq. (18) and Eq. (27), respectively. The key idea in all of the subsequent considerations is 
the following: In order to calculate the quantum value \(\langle\mathcal{B}_{N}\rangle_{|H_{N}\rangle}\), we expand the expression \(\langle H_{N}|\mathcal{B}_{N}|H_{N}\rangle=\langle H_{N}|U^{\dagger}U\mathcal{B} _{N}UU^{\dagger}|H_{N}\rangle\). With a clever choice of the unitary \(U\), we can get rid of the signs in the Bell operator by considering \(U\mathcal{B}_{N}U\), and also transform \(|H_{N}\rangle\) to a state \(U^{\dagger}|H_{N}\rangle\), whose structure is easier to handle computationally than that of the original one. In Ref. [31] the violation was derived using Mermin-like operators of the form: \[\begin{split}\mathcal{B}_{N}^{P}&=\frac{1}{2} \left((P+iZ)^{\otimes N}+(P-iZ)^{\otimes N}\right)\\ &=\sum_{m=0,\,\mathrm{even}}^{N}i^{m}Z_{1}\ldots Z_{m}P_{m+1} \ldots P_{N}+\mathrm{perm}.\end{split} \tag{30}\] for \(P\in\{X,Y\}\). Note that the expression is permutation symmetric. It was shown in Ref. [31] that the quantum value of \(\mathcal{B}_{N}^{X}\) on \(|H_{N}^{3}\rangle\) is \(\langle\mathcal{B}_{N}^{X}\rangle_{|H_{N}^{3}\rangle}=2^{N-2}\), while within local realistic theories, it is bounded by \(2^{N/2}\) [37]. Here we derive the same result in a much more concise manner, exploiting local Pauli symmetries of the hypergraph state. The subsequent discussion relies significantly on the following observation: When applying \(\sqrt{P}_{-}^{\otimes N}\) or \(\sqrt{Z}_{-}^{\otimes N}\) to both sides of \(\mathcal{B}_{N}^{P}\), we can significantly simplify the structure of \(\mathcal{B}_{N}^{P}\): \[\begin{split}\widetilde{\mathcal{B}}_{N}&:=\sqrt{P }_{-}^{\otimes N}\mathcal{B}_{N}^{P}\sqrt{P}_{-}^{\otimes N}\\ &=\frac{1}{2}\left((\mathds{1}+Z)^{\otimes N}+(\mathds{1}-Z)^{ \otimes N}\right)\\ &=\sum_{m=0,\,\mathrm{even}}^{N}\mathds{1}_{1}\ldots\mathds{1}_{m }Z_{m+1}\ldots Z_{N}+\mathrm{perm}.\end{split} \tag{31}\] where we used that \(\sqrt{P}_{-}Z\sqrt{P}_{-}=\pm iZ\). Similarly, we can also exploit that \(\sqrt{Z}_{-}P\sqrt{Z}_{-}=\pm iP\) to find: \[\begin{split}\widetilde{\mathcal{B}}_{N}^{P}&:=\sqrt {Z}_{-}^{\otimes N}\mathcal{B}_{N}^{P}\sqrt{Z}_{-}^{\otimes N}\\ &=\frac{1}{2}\left((\mathds{1}+P)^{\otimes N}+(\mathds{1}-P)^{ \otimes N}\right)\\ &=\sum_{m=0,\,\mathrm{even}}^{N}\mathds{1}_{1}\ldots\mathds{1}_{m }P_{m+1}\ldots P_{N}+\mathrm{perm}.\end{split} \tag{32}\] As a working example, we consider the case where \(N\equiv 2\mod 4\), i.e. \(|H_{N}^{3}\rangle\) is invariant under \(X^{\otimes N}\). We may rewrite \(\langle\mathcal{B}_{N}^{X}\rangle_{|H_{N}^{3}\rangle}\) as \[\langle H_{N}^{3}|\mathcal{B}_{N}^{X}|H_{N}^{3}\rangle=\langle H_{N}^{3}| \sqrt{X}_{+}^{\otimes N}\widetilde{\mathcal{B}}_{N}\sqrt{X}_{+}^{\otimes N}|H_ {N}^{3}\rangle=\langle\widetilde{H}_{N}^{3}|\widetilde{\mathcal{B}}_{N}| \widetilde{H}_{N}^{3}\rangle, \tag{33}\] with the transformed operator \(\widetilde{\mathcal{B}}_{N}\) as in Eq. (31). According to Eq. (15), we also have \(|\widetilde{H}_{N}^{3}\rangle=\frac{\pm 1}{\sqrt{2}}|GHZ\rangle+\frac{1}{\sqrt{2}}| \phi_{\mathrm{odd}}\rangle\), where \(|\phi_{\mathrm{odd}}\rangle=\frac{1}{\sqrt{2}^{N-1}}\sum_{w(x)\,\mathrm{odd}}|x\rangle\) contains all the odd weight contributions. Due to the linearity of the expression \(\langle\widetilde{H}_{N}^{3}|\widetilde{\mathcal{B}}_{N}|\widetilde{H}_{N}^{3}\rangle\), we can separate it into four parts. The summands of \(\widetilde{\mathcal{B}}_{N}\) can only flip a sign when acting on the computational basis. 
We immediately conclude that the cross-terms do not yield any contribution, \(\langle GHZ|\widetilde{\mathcal{B}}_{N}|\phi_{\rm odd}\rangle=0\), and therefore: \[\langle\mathcal{B}_{N}^{X}\rangle_{|H_{N}^{3}\rangle}=\frac{1}{2}\langle GHZ| \widetilde{\mathcal{B}}_{N}|GHZ\rangle+\frac{1}{2}\langle\phi_{\rm odd}| \widetilde{\mathcal{B}}_{N}|\phi_{\rm odd}\rangle. \tag{34}\] Let us consider the GHZ part first. Since all the terms in (31) appear with a positive sign and the number of Pauli-\(Z\) gates is always even, \(|GHZ\rangle\) is an eigenstate of \(\widetilde{\mathcal{B}}_{N}\), and we directly obtain: \[\frac{1}{2}\langle GHZ|\widetilde{\mathcal{B}}_{N}|GHZ\rangle=\frac{1}{2}2^{N- 1}=2^{N-2}. \tag{35}\] Finally, we consider the remaining term and the action of Pauli-\(Z\) on \(|\phi_{\rm odd}\rangle\). Because \(N\) is even, we clearly have \(\widetilde{\mathcal{B}}_{N}=\widetilde{\mathcal{B}}_{N}Z^{\otimes N}\). Further, \(Z^{\otimes N}|x\rangle=-|x\rangle\) for every odd weight computational basis state \(|x\rangle\). Hence, \[\langle\phi_{\rm odd}|\widetilde{\mathcal{B}}_{N}|\phi_{\rm odd}\rangle= \langle\phi_{\rm odd}|\widetilde{\mathcal{B}}_{N}Z^{\otimes N}|\phi_{\rm odd }\rangle=-\langle\phi_{\rm odd}|\widetilde{\mathcal{B}}_{N}|\phi_{\rm odd}\rangle \tag{36}\] which consequently must be zero. This finishes the derivation, proving that \[\langle H_{N}^{3}|\mathcal{B}_{N}^{X}|H_{N}^{3}\rangle=2^{N-2}. \tag{37}\] It is evident that the sign in front of the GHZ part in \(|\widetilde{H}_{N}^{3}\rangle\) does not make any difference. This reasoning can directly be transferred to arbitrary hypergraph states which get mapped to a superposition of the GHZ state and odd weight vectors after applying local square roots of the respective stabiliser. **Theorem 2**.: _Let \(|H_{N}^{\mathbf{k}}\rangle\) be a \(\mathbf{k}\)-uniform complete hypergraph state, which is stabilised by \(P^{\otimes N}\), \(P\in\{X,Y\}\) and gets mapped to a superposition of \(|GHZ\rangle\) and odd weight vectors via:_ \[\sqrt{P}_{+}^{\otimes N}|H_{N}^{\mathbf{k}}\rangle=\pm\frac{1}{\sqrt{2}}|GHZ \rangle+\frac{1}{\sqrt{2}}|\phi_{\rm odd}^{\mathbf{k}}\rangle. \tag{38}\] _Then the quantum value of the Mermin operator \(\mathcal{B}_{N}^{P}\) evaluated on \(|H_{N}^{\mathbf{k}}\rangle\) is_ \[\langle H_{N}^{\mathbf{k}}|\mathcal{B}_{N}^{P}|H_{N}^{\mathbf{k}}\rangle=2^{N -2}, \tag{39}\] _as opposed to the classical bound of \(\sqrt{2}^{N}\) for any local realistic theory._ Proof.: Since we have \(\sqrt{P}_{+}Z\sqrt{P}_{+}=\pm iZ\) for both \(P\in\{X,Y\}\), we can conclude \(\sqrt{P}_{+}^{\otimes N}\mathcal{B}_{N}^{P}\sqrt{P}_{+}^{\otimes N}= \widetilde{\mathcal{B}}_{N}\) as in Eq. (31). The argument now works exactly like in our previous example. The cross contributions \(\langle GHZ|\widetilde{\mathcal{B}}_{N}|\phi_{\rm odd}^{\mathbf{k}}\rangle\) are both equal to zero, and the GHZ part gives \(\frac{1}{2}\langle GHZ|\widetilde{\mathcal{B}}_{N}|GHZ\rangle=2^{N-2}\) as in Eq. (35). The reasoning for Eq. (36) also immediately carries over to the more general setting, and we can conclude that again \(\langle\phi_{\rm odd}^{\mathbf{k}}|\widetilde{\mathcal{B}}_{N}|\phi_{\rm odd }^{\mathbf{k}}\rangle=0\). As evident from Lemma 3 and the follow-up Lemma 9 (given in the Appendix), this theorem yields exponential violation of Mermin-type inequalities for all symmetric even-qubit \(3\)- and \((3,2)\)-uniform hypergraph states. Several other classes of hypergraph states which satisfy the assumptions of Theorem 2 are listed in Tab. 1. The \(Y\)-stabilised cases are covered by Lemma 4. 
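Theorem 2 can be checked directly for small systems. A sketch for the six-qubit three-uniform complete state (dense matrices are fine at this size; helpers as above): the Mermin-type operator of Eq. (30) with \(P=X\) should give the quantum value \(2^{N-2}=16\), against the local-realistic bound \(\sqrt{2}^{N}=8\):

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_n(m, n):
    """n-fold tensor power of a single-qubit operator."""
    return reduce(np.kron, [m] * n)

n = 6
B = (kron_n(X + 1j * Z, n) + kron_n(X - 1j * Z, n)) / 2   # Eq. (30), P = X
psi = hypergraph_state(n, complete_uniform_edges(n, 3))
qv = np.vdot(psi, B @ psi).real
print(qv, 2 ** (n - 2), np.sqrt(2) ** n)                  # 16.0, 16, 8.0
```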
Using the coefficient formulae in Lemma 5, it is also possible to give a rigorous mathematical proof for the \(X\)-stabilised cases listed there. However, it is tedious to derive, and the result is not of crucial importance for this work; therefore we shall omit it here. In order to achieve a violation for \((k=2^{r-1}+1)\)-uniform \(X^{\otimes N}\)-stabilised states, it seems natural to consider the operator \(\mathcal{B}_{N}^{X}\). However, this is in fact a dead end, since the even weight contributions of the transformed state only give \(\langle GHZ_{X}|\widetilde{\mathcal{B}}_{N}|GHZ_{X}\rangle=1\), whilst the odd ones and the cross-terms still yield zero. Instead, we consider the operator \(\mathcal{B}_{N}^{Y}\) and rewrite its quantum value on \(|H_{N}^{k}\rangle\) by \[\langle H_{N}^{k}|\mathcal{B}_{N}^{Y}|H_{N}^{k}\rangle=\langle\widetilde{H}_{ N}^{k}|\sqrt{X}_{-}^{\otimes N}\mathcal{B}_{N}^{Y}\sqrt{X}_{-}^{\otimes N}| \widetilde{H}_{N}^{k}\rangle, \tag{40}\] with \(|\widetilde{H}_{N}^{k}\rangle=\frac{1}{\sqrt{2}}|GHZ_{X}\rangle+\frac{1}{ \sqrt{2}}|\phi_{\rm odd}\rangle\). We continue with the following observations: First, the transformation actually does not change the Mermin operator: \(\sqrt{X}_{-}^{\otimes N}\mathcal{B}_{N}^{Y}\sqrt{X}_{-}^{\otimes N}=\mathcal{ B}_{N}^{Y}\), which is easily checked. Second, we can express \(\mathcal{B}_{N}^{Y}\) via \(\mathcal{B}_{N}^{Y}=Z^{\otimes N}\widetilde{\mathcal{B}}_{N}^{X}\) with the transformed Mermin-type operator as in Eq. (32). Now \(|GHZ_{X}\rangle\) is an eigenstate both of \(Z^{\otimes N}\) and \(\widetilde{\mathcal{B}}_{N}^{X}\), due to the even number of Pauli-\(X\) operators appearing in each summand. Therefore we can conclude \[\frac{1}{2}\langle GHZ_{X}|\mathcal{B}_{N}^{Y}|GHZ_{X}\rangle=\frac{1}{2} \langle GHZ_{X}|Z^{\otimes N}\widetilde{\mathcal{B}}_{N}^{X}|GHZ_{X}\rangle=2^ {N-2}. \tag{41}\] All summands of \(\mathcal{B}_{N}^{Y}\) contain an even number of Pauli-\(Y\)'s, so they preserve weights modulo two and thus the cross-contributions vanish. Therefore, the last contribution to consider is that of the odd weights. Contrary to the previous two examples, this no longer vanishes in general. However, an explicit calculation (see B.1) using Lemma 5 shows that \[\frac{1}{2}\langle\phi_{\rm odd}|\mathcal{B}_{N}^{Y}|\phi_{\rm odd}\rangle=- \frac{4}{4^{r}}\left|\sum_{l=1,\,{\rm odd}}^{2^{r}-1}i^{l}\frac{\left(\cos( \frac{l\pi}{2^{r}})+\sin(\frac{l\pi}{2^{r}})\right)^{N}}{\cos(\frac{2l\pi}{2^ {r}})}\right|^{2}. \tag{42}\] The decisive point is now that the sum over \(l\) only runs over odd integers; therefore we always have \(\cos(\frac{l\pi}{2^{r}})+\sin(\frac{l\pi}{2^{r}})<2\). If we keep the hyperedge cardinality (and thereby \(r\)) fixed and increase \(N\), the contribution in Eq. (42) becomes more and more insignificant compared to the leading order \(2^{N-2}\). As a result, we get an infinite family of hypergraph states achieving the exponential violation of local realism. Fig. 4 shows this exponential violation for a few different classes of hypergraph states. 
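To see the scale of the finite-size correction in Eq. (42), one can evaluate the printed formula directly. A small sketch for \(r=3\), i.e. five-uniform \(X^{\otimes N}\)-stabilised states with \(N=8l+4\) (the function name is illustrative):

```python
import numpy as np

def correction(n, r=3):
    """Evaluate the odd-weight contribution of Eq. (42)."""
    s = sum(1j ** l * (np.cos(np.pi * l / 2 ** r) + np.sin(np.pi * l / 2 ** r)) ** n
            / np.cos(2 * np.pi * l / 2 ** r)
            for l in range(1, 2 ** r, 2))
    return -4 / 4 ** r * abs(s) ** 2

for n in (12, 20, 28):                 # N = 8l + 4
    qv = 2 ** (n - 2) + correction(n)  # leading GHZ term plus Eq. (42)
    print(n, qv, np.sqrt(2) ** n)      # quantum value vs classical bound
```

The correction grows only like \((\cos(\pi l/2^{r})+\sin(\pi l/2^{r}))^{2N}\le 2^{N}\cdot\mathrm{const}\) with a base strictly below \(4\), so the ratio to \(2^{N-2}\) decays exponentially, consistent with Fig. 4.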
\begin{table} \begin{tabular}{c|c|c|c} \(\mathbf{k}\) & \(m\) & \(N\) & Stabiliser \\ \hline \((k_{m},\ldots,k_{1},2)\) & odd & \(0\mod n_{r}\) & \(X^{\otimes N}\) \\ \((k_{m},\ldots,k_{1},2)\) & even & \(\frac{n_{r}}{2}\mod n_{r}\) & \(X^{\otimes N}\) \\ \((k_{m},\ldots,k_{1})\) & odd & \(0\mod n_{r}\) & \(Y^{\otimes N}\) \\ \((k_{m},\ldots,k_{1})\) & even & \(\frac{n_{r}}{2}\mod n_{r}\) & \(Y^{\otimes N}\) \\ \end{tabular} \end{table} Table 1: Classes of hypergraph states \(|H_{N}^{\mathbf{k}}\rangle\) satisfying the assumptions of Theorem 2. Here, we choose any hypergraph with the cardinality vector \(\mathbf{k}\), with the entries of the form \(k_{i}=2^{i}+1\), \(i\geq 1\). Choose the highest cardinality, \(k_{\max}\), and compute \(r=\log_{2}(k_{\max}-1)\). Set \(n_{r}=2^{r+1}\) and pick \(N\) according to the third column. Then the fourth column indicates the corresponding stabilizer. Note the interplay between \(X\)- and \(Y\)-stabilised states in the sense of the remark after Lemma 1. ### Robustness against particle loss With a growing number of qubits, it becomes more and more idealistic to assume that no particles are lost; it is therefore important to check to which extent nonlocality, i.e. the violation of Bell inequalities in hypergraph states, is stable under the loss of particles. It is known that the four-uniform complete hypergraph states keep the exponential violation even if several particles are lost [38]. However, for the three-uniform complete hypergraph states, we do not obtain a Bell violation using Mermin inequalities anymore after losing even a single particle. Instead, the resulting states exponentially violate the separability bounds of Mermin-like operators presented in Ref. [39], and thus are still entangled. For separable states, the quantum value of the Mermin-type operators \(\mathcal{B}_{N}\) considered in the previous section cannot exceed \(\sqrt{2}\) [39]. Here we reproduce some results of Ref. [38], making use of symmetries which allow for nicer representations of hypergraph states after losing particles in the three-uniform case. To give an example, assume that one or more (\(k\)) particles are lost during the computation. We then replace the corresponding entry in the Bell operator by the identity operator, that is, we calculate the separability violation for a reduced density matrix of the state. To obtain violations, we consider different Bell operators, depending on the initial state. However, their general structure is always of the form \[\mathcal{B}_{N\setminus k}:=\mathcal{M}^{i}_{N-k}\otimes\mathds{1}^{\otimes k },\,i\in\{0,1\} \tag{43}\] where \(\mathcal{M}^{0}\) and \(\mathcal{M}^{1}\) are the Mermin-type operators given in the Appendix, Eq. (B.6) and Eq. (B.7), and feature only even/odd numbers of Pauli-\(X\), respectively. The notation \(\mathcal{B}_{N\setminus k}\) indicates that \(k\) particles are lost. Due to the symmetry, it is justified to fix them to be the ones corresponding to the last sites. In Ref. [31] the robustness for \(X^{\otimes N}\)-stabilised \(|H_{N}^{3}\rangle\) and \(\mathcal{B}_{N\setminus k}=\mathcal{M}^{0}_{N-k}\otimes\mathds{1}^{\otimes k}\) was derived only for \(k=1\), with a very lengthy proof. We simplify the calculations and extend them to more general initial states and numbers of lost qubits. Figure 4: Quantum value \((QV)\) of \(\mathcal{B}_{N}^{Y}\) evaluated on different hypergraph states stabilised by \(X^{\otimes N}\), according to Eq. (42). 
The classical bound corresponds to the value \(\log(\sqrt{2})\approx 0.347\). **Theorem 3**.: _After losing \(k\) particles of an \(N\)-qubit three-uniform complete hypergraph state, we can derive the following violations of separability inequalities:_ \begin{tabular}{c|c|c} Constraints on \(N\) and \(k\) & Bell inequality \(\mathcal{B}_{N\setminus k}\) & Quantum value \(\langle\mathcal{B}_{N\setminus k}\rangle_{|H_{N}^{3}\rangle}\) \\ \hline \(N\equiv 2\mod 4\), \(k\) _odd_ & \(\mathcal{M}_{N-k}^{0}\otimes\mathds{1}^{\otimes k}\) & \(\sqrt{2}^{N-2k}\) \\ \(N\equiv 0\mod 4\), \(k\) _even_ & \(\mathcal{M}_{N-k}^{1}\otimes\mathds{1}^{\otimes k}\) & \(\sqrt{2}^{N-2k}\) \\ \(N-k\equiv 2\mod 4\) & \(\mathcal{M}_{N-k}^{1}\otimes\mathds{1}^{\otimes k}\) & \(\left|\sin\left(\frac{\pi k}{4}\right)\right|\sqrt{2}^{N-2k}\) \\ \(N-k\equiv 0\mod 4\) & \(\mathcal{M}_{N-k}^{1}\otimes\mathds{1}^{\otimes k}\) & \(\left|\cos\left(\frac{\pi k}{4}\right)\right|\sqrt{2}^{N-2k}\) \\ \end{tabular} _In the first two cases, the quantum value is \(1/2\) or \(0\) if we instead consider even and odd \(k\), respectively._ The proof can be found in B.2.1. As a consequence of this theorem, and the fact that losing a particle of a separable state cannot lead to an entangled state, we can deduce that three-uniform complete hypergraph states remain entangled if up to \(\lfloor(N-4)/2\rfloor\) particles are lost. The first two cases guarantee this for even \(N\), whereas the last two can be used when \(N\) is odd. ## 5 Conclusions and outlook We discuss how local Pauli symmetries of hypergraph states can aid in analysing the entanglement properties and nonlocality of complete uniform hypergraph states. By transforming the state with local square roots of Pauli operators, we analytically calculate the geometric measure of entanglement for various classes of hypergraph states. Additionally, we significantly simplify the calculations needed to recover previously derived results, and extend them further to cover more cases of hypergraph states. Our results shed light on, and give a deeper understanding of, the rich structures present in these states. Families of interesting hypergraph states correspond to superpositions of GHZ states with overwhelming amplitude and Dicke states with exponentially decreasing amplitudes. This structure explains both the exponential violation of Mermin inequalities and the robustness against particle loss. Local symmetries are key to all the findings in this work. Such symmetries have been exhaustively investigated for the complete uniform hypergraph states, restricted to Pauli stabilizers only. On the other hand, recently, all local, invertible (unitary and nonunitary) symmetries of arbitrary stabilizer states (graph states) have been investigated [40], and connections to their entanglement structure and applications in quantum error correction were identified. Motivated by this, an exhaustive study of general symmetries for states with nonlocal stabilizers could help better understand their entanglement properties. Besides, the identified symmetries could uncover additional transformations between hypergraph states and from hypergraph states to other multipartite pure states. Recall that the extension of local complementation from graphs to hypergraphs was the central tool to construct a family of counterexamples to the famous LU=LC conjecture [41, 42]. ## 6 Acknowledgments We thank Nikoloz Tsimakuridze, Geza Toth, Nikolai Miklin, and David Wierichs for interesting ideas and discussions. 
This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 236615297 - SFB 1119, 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), and the German Ministry of Education and Research (Project QuKuK, BMBF Grant No. 16KIS1618K). ## Appendix A Appendix to Geometric measure This part of the Appendix contains the proofs of the statements leading up to the results on the geometric measure of entanglement which were not proven in the main text. In Appendix A.1, we derive coefficients after local square root transformations. In Appendix A.2, we give the state transformation results for the three-uniform case. In Appendix A.3, we give computations on coefficients beyond the three-uniform case. In Appendix A.4, we give analytical derivations for the geometric measure of entanglement. In Appendix B, we discuss nonlocality results. In Appendix B.2, we give detailed calculations for the violation of the separability inequality in the three-uniform complete case in the presence of particle loss. ### Computing coefficients after local square root transformations - general results First we state and prove some technical results, which simplify subsequent computations. **Lemma 7**.: _Let \(N,q\in\mathbb{N}\) and \(0<n\leq N\). Then_ \[\sum_{w\equiv q\bmod n}\binom{N}{w}= \frac{1}{n}\bigg{(}\sum_{j=0}^{n-1}(2\cos\frac{\pi j}{n})^{N} \cos\frac{\pi j(N-2q)}{n}\bigg{)}. \tag{11}\] _In particular, in the form in which we will encounter it later, this reads_ \[\sum_{w\equiv q\bmod 2^{r}}\binom{N-|e|}{w-m}=\frac{1}{2^{r}}\bigg{(}\sum_{j =0}^{2^{r}-1}(2\cos\frac{\pi j}{2^{r}})^{N-|e|}\cos\frac{\pi j(N-|e|+2m-2q)}{2^ {r}}\bigg{)}. \tag{12}\] Proof.: Let \(\zeta_{n}\) be a primitive \(n\)-th root of unity. Then the expression \(\frac{1}{n}(1+(\zeta_{n}^{1})^{l}+\cdots+(\zeta_{n}^{n-1})^{l})\) is non-zero only when \(l\equiv 0\mod n\), in which case it equals \(1\). We use this property of the roots of unity to rewrite the left hand side of Eq. (11): \[\sum_{l=0}^{\lfloor(N-q)/n\rfloor}\binom{N}{nl+q} =\sum_{l^{\prime}=0}^{N}\binom{N}{l^{\prime}}\frac{1}{n}\Re\left(1+ \left(\zeta_{n}^{1}\right)^{l^{\prime}-q}+\cdots+\left(\zeta_{n}^{n-1}\right)^{ l^{\prime}-q}\right)\] \[=\frac{1}{n}\sum_{j=0}^{n-1}\Re\left(e^{-2q\pi ji/n}\left(1+e^{2j \pi i/n}\right)^{N}\right)\] \[=\frac{1}{n}\sum_{j=0}^{n-1}\Re\left(e^{i\pi j(N-2q)/n}\left(e^{- i\pi j/n}+e^{i\pi j/n}\right)^{N}\right)\] \[=\frac{1}{n}\sum_{j=0}^{n-1}\left(2\cos\frac{\pi j}{n}\right)^{N} \cos\frac{\pi j(N-2q)}{n}.\] **Lemma 8**.: _For \(\beta\neq 0\), \(\alpha\in\mathbb{N}\) and \(M\in\mathbb{N}\) we have_ \[\sum_{m=0}^{M}(-1)^{m}\binom{M}{m}\cos\frac{\pi(2m-M+\alpha)}{\beta}=\Re\left( e^{i\pi\alpha/\beta}(-2i\sin\frac{\pi}{\beta})^{M}\right). \tag{14}\] _In particular, in the form in which it will appear later,_ \[\sum_{m=0}^{|e|}(-1)^{m}\binom{|e|}{m}\cos\frac{\pi j(2m+N-|e|-2q)}{2^{r}}=\Re \left(e^{i\pi j(N-2q)/2^{r}}(-2i\sin\frac{\pi j}{2^{r}})^{|e|}\right). \tag{15}\]
\tag{15}\] Proof.: \[\sum_{m=0}^{M}(-1)^{m}\binom{M}{m}\cos\frac{\pi(2m-M+\alpha)}{ \beta}=\sum_{m=0}^{M}(-1)^{m}\binom{M}{m}\Re\left(e^{i\frac{\pi(2m-M+\alpha)} {\beta}}\right)\] (16) \[=\Re\left(e^{i\pi\alpha/\beta}e^{i\pi M/\beta}\sum_{m=0}^{M} \binom{M}{m}(-1)^{m}e^{2im\pi/\beta}\right)=\Re\left(e^{i\pi\alpha/\beta}e^{i \pi M/\beta}(1-e^{2i\pi/\beta})^{M}\right)\] (17) \[=\Re\left(e^{i\pi\alpha/\beta}(e^{-i\pi/\beta}-e^{i\pi/\beta})^{ M}\right)=\Re\left(e^{i\pi\alpha/\beta}(-2i\sin\frac{\pi}{\beta})^{M}\right).\] (18) **Proposition 3**.: 1. _Given a symmetric hypergraph state which is stabilised by_ \(X^{\otimes N}\)_, then it is possible to calculate the coefficient_ \(c_{|e|}=c_{1\ldots 10\ldots 0}\) _of the computational basis element of a weight_ \(|e|\) _after application of_ \(\sqrt{X}^{\otimes N}\)_, using the following expression:_ \[c_{|e|}= \frac{1}{(2\sqrt{2})^{N}}\sum_{w}(-1)^{f(w)}\Re\left((1+i)^{N}(-i )^{w+|e|}\right)\sum_{m=0}^{|e|}\binom{N-|e|}{w-m}\binom{|e|}{m}(-1)^{m}\] (19) \[= \frac{1}{\sqrt{2}^{N}}-\frac{2}{(2\sqrt{2})^{N}}\sum_{f(w)=1} \sum_{m=0}^{|e|}\binom{N-|e|}{w-m}\binom{|e|}{m}\Re\left((1+i)^{N}(-i)^{w+|e| -2m}\right).\] (20) _Here_ \(f:\{0,\ldots,N\}\to\{0,1\}\) _is the function which describes the symmetric hypergraph state by specifying the sign of different weights, encompassing the hyperedge cardinalities._ 2. _If the function_ \(f\) _is_ \(2^{r}\)_-periodic, i.e._ \(f(w)=f(w+2^{r})\)_, Eq._ (19) _can alternatively be expressed as_ \[c_{|e|}= \frac{1}{2^{r}}\sum_{j=0}^{2^{r}-1}\cos^{N-|e|}\frac{\pi j}{2^{r}} \sin^{|e|}\frac{\pi j}{2^{r}}\sum_{q=0}^{2^{r}-1}(-1)^{f(q)}\Re\left(i^{|e|+q}e ^{i\pi N/4}\right)\Re\left(e^{i\pi j(N-2q)/2^{r}}i^{|e|}\right).\] (A.10) Proof.: We write the hypergraph state as \[|H\rangle=\frac{1}{\sqrt{2}^{N}}\sum_{w=0}^{N}\sum_{|I|=w}(-1)^{f(w)}|i_{1} \ldots i_{N}\rangle,\] (A.11) and use the identity \[\sqrt{X}_{+}=\frac{1}{2}\left((1+i)\mathds{1}+(1-i)X\right)=\frac{1+i}{2} \left(\mathds{1}-iX\right).\] (A.12) Then, using \(J=(j_{1},\ldots,j_{N})\), \[c_{|e|} =\langle\underbrace{1\ldots 1}_{|e|}0\ldots 0|\sqrt{X}_{+}^{ \otimes N}|H\rangle\] \[=\langle 1\ldots 10\ldots 0|\sum_{J\in\{0,1\}^{N}}\sum_{w}\sum_{|I|=w }\left(\frac{1+i}{2}\right)^{N}\frac{(-1)^{f(w)}}{\sqrt{2}^{N}}(-iX)^{j_{1}} \otimes\cdots\otimes(-iX)^{j_{N}}|I\rangle.\] Every \(|I\rangle=|i_{1},\ldots i_{N}\rangle\) is mapped onto the subspace spanned by \(|1\ldots 10\ldots 0\rangle\) by exactly one choice of \(J\), specifically \[j_{k}=\begin{cases}1-i_{k}&,k=1,\ldots,|e|\\ i_{k}&,k=|e|+1,\ldots,N.\end{cases}\] (A.13) By applying \((-iX)^{j_{1}}\otimes\cdots\otimes(-iX)^{j_{N}}\) with this choice on \(|I\rangle\), we pick up a total phase \((-i)^{|e|-(i_{1}+\cdots+i_{|e|})}(-i)^{w-(i_{1}+\cdots+i_{|e|})}\) in the process. Substituting \(m:=i_{1}+\cdots+i_{|e|}\), we get in total \(\binom{N-|e|}{w-m}\binom{|e|}{m}\) different possibilities to produce the factor \((-i)^{|e|+|w|-2m}\) in this fashion. Hence, we are left with \[c_{|e|} =\left(\frac{1+i}{2\sqrt{2}}\right)^{N}\sum_{w}\sum_{|I|=w}(-1)^{ f(w)}(-i)^{|e|+w-2(i_{1}+\cdots+i_{|e|})}\] \[=\left(\frac{1+i}{2\sqrt{2}}\right)^{N}\sum_{w}\sum_{m=0}^{|e|} \binom{N-|e|}{w-m}\binom{|e|}{m}(-1)^{f(w)}(-i)^{|e|+w-2m},\] (A.14) which proves \((i)\).
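As an aside, part \((i)\) lends itself to a direct numerical check. The following sketch is ours, not from the paper; it assumes NumPy, and the qubit-ordering conventions are internal to the code. It applies \(\sqrt{X}_{+}^{\otimes N}\) to the three-uniform complete state by brute force and compares the resulting coefficients with the closed formula:

```python
import numpy as np
from math import comb

def C(n, k):
    """Binomial coefficient, zero outside the valid range."""
    return comb(n, k) if 0 <= k <= n else 0

def hypergraph_state(N, f):
    """|H> = 2^{-N/2} sum_x (-1)^{f(w(x))} |x> for a symmetric sign function f."""
    return np.array([(-1)**f(bin(x).count("1")) for x in range(2**N)],
                    dtype=complex) / np.sqrt(2)**N

def apply_local(U, psi, N):
    """Apply the same single-qubit gate U to every qubit of psi."""
    psi = psi.reshape([2]*N)
    for q in range(N):
        psi = np.moveaxis(np.tensordot(U, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

sqrtX = 0.5*np.array([[1+1j, 1-1j], [1-1j, 1+1j]])   # sqrt(X)_+ as in Eq. (A.12)

N = 6                                  # N = 2 mod 4, so |H_N^3> is X-stabilised
f = lambda w: comb(w, 3) % 2           # three-uniform complete hypergraph state
psi = apply_local(sqrtX, hypergraph_state(N, f), N)

for e in range(N + 1):
    idx = int("1"*e + "0"*(N - e), 2)  # basis state |1...1 0...0> of weight e
    formula = sum((-1)**f(w) * ((1+1j)**N * (-1j)**(w + e - 2*m)).real
                  * C(N - e, w - m) * C(e, m)
                  for w in range(N + 1) for m in range(e + 1)) / (2*np.sqrt(2))**N
    assert np.isclose(psi[idx], formula)
print("Proposition 3(i) confirmed for N =", N)
```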
We can proceed by separating the negative contributions as determined by \(f(w)\): \[c_{|e|} =\left(\frac{1+i}{2\sqrt{2}}\right)^{N}\sum_{w=0}^{N}\sum_{m=0}^{|e |}\binom{N-|e|}{w-m}\binom{|e|}{m}(-i)^{|e|+w-2m} \tag{114}\] \[-2\left(\frac{1+i}{2\sqrt{2}}\right)^{N}\sum_{f(w)=1}\sum_{m=0}^{ |e|}\binom{N-|e|}{w-m}\binom{|e|}{m}(-i)^{|e|+w-2m}. \tag{115}\] Omitting the factor \(\left(\frac{1+i}{2\sqrt{2}}\right)^{N}\), we compute Eq. (114): \[\sum_{w=0}^{N}\sum_{m=0}^{|e|}\binom{|e|}{m}\binom{N-|e|}{w-m}(-i) ^{|e|+w-2m}=\sum_{m=0}^{|e|}\binom{|e|}{m}(-i)^{|e|-m}(1-i)^{N-|e|}\] \[= (1-i)^{N-|e|}(-i)^{|e|}\sum_{m=0}^{|e|}\binom{|e|}{m}i^{m}=(1-i)^ {N-|e|}(-i)^{|e|}(1+i)^{|e|}\] \[= (1-i)^{N}.\] Therefore the first summand is actually equal to \(\frac{1}{\sqrt{2}^{N}}\), which reproduces the results of [38] (Lemma 5.9 therein). Since \(|H\rangle\) is stabilised by \(X^{\otimes N}\), both sums in the first part are invariant under replacing \(w\) with \(N-w\). This is sufficient to show that the expression in Eq. (A.14) is invariant under complex conjugation, and therefore equal to its real part. This finishes the proof of the first claim. Now assume that \(f\) is indeed \(2^{r}\)-periodic with \(r\geq 2\), cf. [34]. In order to arrive at Eq. (A.10), we rewrite Eq. (19) as \[c_{|e|}=\frac{1}{(2\sqrt{2})^{N}}\sum_{q=0}^{2^{r}-1}(-1)^{f(q)}\sum_{m=0}^{|e |}\Re\left((1+i)^{N}(-i)^{q+|e|-2m}\right)\sum_{w\equiv q\ \mathrm{mod}\ 2^{r}}\binom{N-|e|}{w-m}\binom{|e|}{m}.\] Next we apply the first technical lemma, Lemma 7, and reshuffle some of the terms: \[c_{|e|}= \frac{1}{2^{N}}\sum_{q=0}^{2^{r}-1}(-1)^{f(q)}\Re\left(e^{i\pi N /4}(-i)^{q+|e|}\right)\sum_{m=0}^{|e|}\binom{|e|}{m}(-1)^{m}\times\] \[\frac{1}{2^{r}}\left(\sum_{j=0}^{2^{r}-1}(2\cos\frac{\pi j}{2^{r} })^{N-|e|}\cos\frac{\pi j(N-|e|+2m-2q)}{2^{r}}\right). \tag{116}\] We already carried out the summation over \(m\) in Lemma 8, which then results in: \[c_{|e|}= \frac{1}{2^{N}}\sum_{q=0}^{2^{r}-1}(-1)^{f(q)}\Re\left(e^{i\pi N /4}(-i)^{q+|e|}\right)\times\] \[\frac{1}{2^{r}}\sum_{j=0}^{2^{r}-1}(2\cos\frac{\pi j}{2^{r}})^{N- |e|}\Re\left(e^{i\pi j(N-2q)/2^{r}}(-2i\sin\frac{\pi j}{2^{r}})^{|e|}\right).\] This indeed simplifies to (A.10). A similar result holds for the action of \(\sqrt{Y}^{\otimes N}\) on any \(N\)-qubit symmetric hypergraph state. **Proposition 4**.: 1. _Given a symmetric hypergraph state, then it is possible to calculate a coefficient of the computational basis element of a weight_ \(|e|\) _after application of_ \(\sqrt{Y}^{\otimes N}\) _using the following expression:_ \[c_{|e|}=\frac{(1+i)^{N}}{(2\sqrt{2})^{N}}\sum_{w}(-1)^{f(w)+w}\sum_{m=0}^{|e|} \binom{N-|e|}{w-m}\binom{|e|}{m}(-1)^{m},\] (A.18) _where_ \(f:\{0,\ldots,N\}\rightarrow\{0,1\}\) _again specifies the signs of the different weights._ 2. _Again, if_ \(f\) _is_ \(2^{r}\)_-periodic, this can alternatively be expressed as follows:_ \[c_{|e|}=\frac{1}{2^{r}}\sum_{j=0}^{2^{r}-1}\cos^{N-|e|}\frac{\pi j}{2^{r}}\sin^ {|e|}\frac{\pi j}{2^{r}}\sum_{q=0}^{2^{r}-1}(-1)^{f(q)+q}\Re\left((-i)^{|e|}e^{ i\pi j(N-2q)/2^{r}}\right).\] (A.19) Proof.: Similarly as before, we write \[c_{|e|}=\langle 1\ldots 10\ldots 0|\sum_{J\in\{0,1\}^{N}}\sum_{w}\sum_{|I| =w}\left(\frac{1+i}{2}\right)^{N}\frac{(-1)^{f(w)}}{\sqrt{2}^{N}}(-iY)^{j_{1} }\otimes\cdots\otimes(-iY)^{j_{N}}|i_{1}\ldots i_{N}\rangle\] With the same choice of \(J\) as in Eq. (A.13), keeping in mind that \(-iY|0\rangle=|1\rangle,-iY|1\rangle=-|0\rangle\), we collect a factor of \(1^{|e|}(-1)^{i_{|e|+1}+\cdots+i_{N}}\).
With the same substitution as before we obtain \[c_{|e|}=\left(\frac{1+i}{2\sqrt{2}}\right)^{N}\sum_{w}(-1)^{f(w)}\sum_{m=0}^{ |e|}\binom{N-|e|}{w-m}\binom{|e|}{m}(-1)^{w-m}. \tag{13}\] For the second part we proceed in the same manner as before, utilising Lemmata 7 and 8. ### The state transformation results for the three-uniform case **Lemma 3**.: _The even-qubit three-uniform complete hypergraph states can be mapped to a superposition of the GHZ state and all odd weight vectors after applications of square roots of local Pauli matrices, corresponding to the respective stabilisers._ \(\mathbf{N}\equiv\mathbf{2}\mod\mathbf{4}\)_:_ \(|H_{N}^{3}\rangle\) _is stabilised by_ \(X^{\otimes N}\) _and with_ \(|\widetilde{H}_{N}^{3}\rangle=\sqrt{X}_{+}^{\otimes N}|H_{N}^{3}\rangle\) _we have_ \[|\widetilde{H}_{N}^{3}\rangle=\pm\frac{1}{\sqrt{2}}|GHZ\rangle+\frac{1}{\sqrt {2}^{N}}\sum_{w(x)\,\mathrm{odd}}|x\rangle. \tag{14}\] \(\mathbf{N}\equiv\mathbf{0}\mod\mathbf{4}\)_:_ \(|H_{N}^{3}\rangle\) _is stabilised by_ \(Y^{\otimes N}\) _and with_ \(|\widetilde{H}_{N}^{3}\rangle=\sqrt{Y}_{+}^{\otimes N}|H_{N}^{3}\rangle\)_, we have_ \[|\widetilde{H}_{N}^{3}\rangle=\pm\frac{1}{\sqrt{2}}|GHZ\rangle+\frac{1}{\sqrt {2}^{N}}\sum_{w(x)\,\mathrm{odd}}(-i)^{w(x)-1}|x\rangle. \tag{15}\] Proof.: In order to prove the claim, one could proceed using the results from Propositions 3 and 4. However, the three-uniform case allows for much neater proofs. In both cases, the computational basis elements which have negative contributions are the ones with weight \(w\equiv 3\mod 4\). Therefore, we can rewrite the state as \[|H_{N}^{3}\rangle= \frac{1}{\sqrt{2}^{N}}\left(\sum_{w(x)\,\mathrm{even}}|x\rangle+ \sum_{w(x^{\prime})\,\mathrm{odd}}i^{w(x^{\prime})-1}|x^{\prime}\rangle\right)\] \[= \frac{1}{2}\left(|+\rangle^{\otimes N}+|-\rangle^{\otimes N} \right)-\frac{i}{2}\left(|+_{Y}\rangle^{\otimes N}-|-_{Y}\rangle^{\otimes N} \right),\] which is the same as having \(|H_{N}^{3}\rangle=\frac{1}{\sqrt{2}}|GHZ_{X}^{+}\rangle+\frac{1}{\sqrt{2}}|GHZ_{ Y}^{-}\rangle\).
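This rewriting is easy to confirm by brute force before proceeding with the two cases. The following sketch is ours (it assumes NumPy; only the three-uniform sign pattern from the text is used):

```python
import numpy as np
from math import comb

def kron_power(v, N):
    out = np.array([1.0 + 0j])
    for _ in range(N):
        out = np.kron(out, v)
    return out

N = 6
# three-uniform complete state: sign (-1)^{binom(w,3)}, negative iff w = 3 mod 4
H3 = np.array([(-1)**(comb(bin(x).count("1"), 3) % 2) for x in range(2**N)],
              dtype=complex) / np.sqrt(2)**N

plus, minus = np.array([1, 1])/np.sqrt(2), np.array([1, -1])/np.sqrt(2)
plusY, minusY = np.array([1, 1j])/np.sqrt(2), np.array([1, -1j])/np.sqrt(2)

rhs = 0.5*(kron_power(plus, N) + kron_power(minus, N)) \
    - 0.5j*(kron_power(plusY, N) - kron_power(minusY, N))
print(np.allclose(H3, rhs))   # True: |H_N^3> = (|GHZ_X^+> + |GHZ_Y^->)/sqrt(2)
```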
Case 1: \(N\equiv 2\mod 4\): Since \(|\pm\rangle\) are the \(\pm 1\) eigenstates of \(X\), and \(\sqrt{X}_{+}|+_{Y}\rangle=\frac{1+i}{\sqrt{2}}|0\rangle\), \(\sqrt{X}_{+}|-_{Y}\rangle=\frac{1-i}{\sqrt{2}}|1\rangle\), it is clear that \[\sqrt{X}_{+}^{\otimes N}|H_{N}^{3}\rangle =\frac{1}{\sqrt{2}}\sqrt{X}_{+}^{\otimes N}\left(|+\rangle^{ \otimes N}+|-\rangle^{\otimes N}\right)+\frac{(-i)}{\sqrt{2}}\sqrt{X}_{+}^{ \otimes N}\left(|+_{Y}\rangle^{\otimes N}-|-_{Y}\rangle^{\otimes N}\right)\] \[=\frac{1}{\sqrt{2}}\left(|+\rangle^{\otimes N}+i^{N}|-\rangle^{ \otimes N}\right)+\frac{(-i)}{\sqrt{2}}\left(\left(\frac{1+i}{\sqrt{2}}|0 \rangle\right)^{\otimes N}-\left(\frac{1-i}{\sqrt{2}}|1\rangle\right)^{\otimes N }\right).\] And since \(N\equiv 2\mod 4\), this becomes \[=\frac{1}{\sqrt{2}}\left(|+\rangle^{\otimes N}-|-\rangle^{\otimes N }+(-i)(\pm i|0\rangle^{\otimes N}-(\mp i)|1\rangle^{\otimes N})\right)\] \[=\frac{1}{\sqrt{2}}|GHZ_{X}^{-}\rangle\pm\frac{1}{\sqrt{2}}|GHZ \rangle=\pm\frac{1}{\sqrt{2}}|GHZ\rangle+\frac{1}{\sqrt{2}^{N}}\sum_{w(x)\,\mathrm{ odd}}|x\rangle.\] Case 2: \(N\equiv 0\mod 4\): On the other hand, \(|\pm_{Y}\rangle\) are the \(\pm 1\) eigenstates of \(Y\), further \(\sqrt{Y}_{+}|+\rangle=\frac{1+i}{\sqrt{2}}|1\rangle\), \(\sqrt{Y}_{+}|-\rangle=\frac{1+i}{\sqrt{2}}|0\rangle\), so we get \[\sqrt{Y}_{+}^{\otimes N}|H_{N}^{3}\rangle =\frac{1}{\sqrt{2}}\sqrt{Y}_{+}^{\otimes N}\left(|+\rangle^{ \otimes N}+|-\rangle^{\otimes N}\right)+\frac{(-i)}{\sqrt{2}}\sqrt{Y}_{+}^{ \otimes N}\left(|+_{Y}\rangle^{\otimes N}-|-_{Y}\rangle^{\otimes N}\right)\] \[=\frac{1}{\sqrt{2}}\left(\left(\frac{1+i}{\sqrt{2}}|1\rangle \right)^{\otimes N}+\left(\frac{1+i}{\sqrt{2}}|0\rangle\right)^{\otimes N} \right)+\frac{(-i)}{\sqrt{2}}\left(|+_{Y}\rangle^{\otimes N}-i^{N}|-_{Y} \rangle^{\otimes N}\right)\] and by assumption on \(N\): \[=\pm\frac{1}{\sqrt{2}}|GHZ\rangle+\frac{(-i)}{\sqrt{2}}|GHZ_{Y}^{-}\rangle.\] By applying \(\sqrt{Z}_{-}\) on every site, we can transform this even further to \[\pm\frac{1}{\sqrt{2}}|GHZ\rangle+\frac{i}{\sqrt{2}^{N}}\sum_{w(x)\,\mathrm{odd}}|x\rangle. \tag{23}\] The sign in front of the GHZ part depends on the exact value of \(\left(\frac{1+i}{\sqrt{2}}\right)^{N}\) in both cases. **Lemma 9**.: _Consider an even-qubit \((3,2)\)-uniform hypergraph state \(|H_{N}^{3,2}\rangle\). It can be written as_ \[|H_{N}^{3,2}\rangle=\frac{1}{\sqrt{2}}|GHZ_{Y}^{+}\rangle+\frac{1}{\sqrt{2}}|GHZ _{X}^{-}\rangle. \tag{114}\] _With a similar calculation as above, we see the following:_ _Case 1:_ \(N=4k\)__ _The state is stabilised by_ \(X^{\otimes N}\)_, and gets transformed to_ \[\sqrt{X}_{+}^{\otimes N}|H_{N}^{3,2}\rangle =\frac{1}{\sqrt{2}}\left(\left(\frac{1+i}{\sqrt{2}}\right)^{N}|0 \rangle^{\otimes N}+\left(\frac{1-i}{\sqrt{2}}\right)^{N}|1\rangle^{\otimes N} +(|+\rangle^{\otimes N}-i^{N}|-\rangle^{\otimes N})\right)\] \[=(-1)^{k}\frac{1}{\sqrt{2}}|GHZ_{Z}^{+}\rangle+\frac{1}{\sqrt{2}} |GHZ_{X}^{-}\rangle. \tag{115}\] _Case 2:_ \(N=4k+2\)__ _The state is stabilised by_ \(Y^{\otimes N}\)_, and gets transformed to_ \[\sqrt{Y}^{\otimes N}|H_{N}^{3,2}\rangle =\frac{1}{\sqrt{2}}\left(|+_{Y}\rangle^{\otimes N}+i^{N}|-_{Y} \rangle^{\otimes N}+\left(\frac{1+i}{\sqrt{2}}\right)^{N}(|1\rangle^{\otimes N }-|0\rangle^{\otimes N})\right)\] \[=(-1)^{k+1}\frac{i}{\sqrt{2}}|GHZ_{Z}^{-}\rangle+\frac{1}{\sqrt{2 }}|GHZ_{Y}^{-}\rangle. \tag{116}\] _In order for the second case to meet the requirements of Thm.
2, we can apply local \(\sqrt{Z}_{+}\) on every site, and get rid of the relative phase in \(|GHZ_{Z}^{-}\rangle\)._ ### Computations for coefficients beyond the three-uniform case Interlude: The proofs of Lemmata 5 and 6 both require a similar calculation, which we jointly conduct in the following: Let \(\sigma\in\{0,-1\}\), \(r\geq 3\) and \(f(q):=\binom{q}{2^{r-1}+1}\mod 2\). Note that \(f(q)=1\) if and only if \(q\in\{2^{r-1}+1,2^{r-1}+3,\ldots,2^{r}-1\}\), and zero otherwise. This can be easily derived using Lucas' Theorem, which is stated e.g. in the Appendix of [34]. Then: \[\sum_{q=1\,\mathrm{odd}}^{2^{r}-1}(-1)^{f(q)}\left(i^{\sigma}e^{ -2i\pi j/2^{r}}\right)^{q}= \Bigg{(}1-\left(i^{\sigma}e^{-2i\pi j/2^{r}}\right)^{2^{r-1}} \Bigg{)}\sum_{q=1\,\mathrm{odd}}^{2^{r-1}-1}\left(i^{\sigma}e^{-2i\pi j/2^{r} }\right)^{q}\] \[= (1-e^{i\pi j})i^{\sigma}e^{-2i\pi j/2^{r}}\sum_{q^{\prime}=0}^{2 ^{r-2}-1}\left(i^{\sigma}e^{-4i\pi j/2^{r}}\right)^{q^{\prime}}.\] Clearly, this vanishes whenever \(j\) is even. We therefore continue, assuming that \(j\) is odd: \[\sum_{q=1\,\mathrm{odd}}^{2^{r}-1}(-1)^{f(q)}\left(i^{\sigma}e^{ -2i\pi j/2^{r}}\right)^{q}= 2i^{\sigma}e^{-2i\pi j/2^{r}}\frac{1-e^{i\pi j}}{1-(-1)^{\sigma} e^{-4i\pi j/2^{r}}} \tag{117}\] \[= \begin{cases}-2i\left(\sin\frac{2\pi j}{2^{r}}\right)^{-1},& \sigma=0\\ -2i\left(\cos\frac{2\pi j}{2^{r}}\right)^{-1},&\sigma=-1.\end{cases}\] ### Proof of Lemma 5 **Lemma 5**.: _Let \(r\geq 3\), and \(N=l2^{r}+2^{r-1}\). Then the \(k=(2^{r-1}+1)\)-uniform complete \(N\)-qubit hypergraph state is stabilised by \(X^{\otimes N}\) and we have_ \[|\widetilde{H}_{N}^{k}\rangle=\sqrt{X}_{+}^{\otimes N}|H_{N}^{k} \rangle=\frac{1}{\sqrt{2}}|GHZ_{X}\rangle+(-1)^{l}\sum_{w(x)\,\mathrm{odd}}c_ {w(x)}|x\rangle, \tag{114}\] _where \(|GHZ_{X}\rangle=\frac{1}{\sqrt{2}}(|+\rangle^{\otimes N}+|-\rangle^{\otimes N})\), and the odd coefficients are given by_ \[c_{w}=\frac{2}{2^{r}}\sum_{j=1,\,\mathrm{odd}}^{2^{r}-1}i^{j-1} \frac{\cos\left(\frac{\pi j}{2^{r}}\right)^{N-w}\sin\left(\frac{\pi j}{2^{r}} \right)^{w}}{\cos\left(\frac{2\pi j}{2^{r}}\right)}. \tag{115}\] Proof.: Recall that \(k=2^{r-1}+1\) and \(N=l2^{r}+2^{r-1}\). Because \(N\equiv 0\mod 4\), and only odd weights \(w\) have negative contributions, using Eq. (20) it is easy to check that for even \(|e|\), we have \(\Re\left(e^{i\pi N/4}(-i)^{w+|e|-2m}\right)=0\), and therefore \(c_{|e|}=\frac{1}{\sqrt{2}^{N}}\). For odd \(|e|\), consider the formula Eq. (A.10), which here takes the form \[c_{|e|}=\frac{e^{i\pi N/4}}{2^{r}}\sum_{j=0}^{2^{r}-1}\cos^{N-| e|}\frac{\pi j}{2^{r}}\sin^{|e|}\frac{\pi j}{2^{r}}\sum_{q=0}^{2^{r}-1}(-1)^{f(q)} \Re((-i)^{|e|+q})\Re((-i)^{|e|}e^{i\pi j(N-2q)/2^{r}}). \tag{116}\] When \(q\) is even, we have \(\Re((-i)^{|e|+q})=0\), and for odd \(q\) we can rewrite \[\Re((-i)^{|e|+q})\Re((-i)^{|e|}e^{i\pi j(N-2q)/2^{r}})=\Re\left((-i)^{ 2|e|+q}e^{i\pi j(l2^{r}+2^{r-1}-2q)/2^{r}}\right)\] \[= \Re\left((-i)^{q}\underbrace{e^{i\pi jl}}_{=(-1)^{jl}}e^{-2i\pi j q /2^{r}}\underbrace{e^{i\pi j/2}}_{=i^{j}}\right).\] Therefore Eq. (116) can be rearranged to \[=\sum_{j=0}^{2^{r}-1}\frac{(-1)^{N/4+1}}{2^{r}}\cos\frac{\pi j }{2^{r}}^{N-|e|}\sin\frac{\pi j^{|e|}}{2^{r}}(-1)^{lj}\Re\bigg{(}\sum_{q=1\, \mathrm{odd}}^{2^{r}-1}(-1)^{f(q)}(-i)^{q-j}e^{-2i\pi jq/2^{r}}\bigg{)}.\] Now we are in the situation of the preceding remark and Eq. (117) in the case where \(\sigma=-1\).
Therefore, the only terms contributing are the ones where \(j\) is odd, in which case we get \[\Re\left(i^{j}\sum_{q=1\,\mathrm{odd}}^{2^{r}-1}(-1)^{f(q)}(-i)^{ q}e^{-2i\pi jq/2^{r}}\right)=\Re\left(i^{j}\frac{-2i}{\cos(\frac{2\pi j}{2^{r}} )}\right)=\frac{2i^{j-1}}{\cos(\frac{2\pi j}{2^{r}})}.\] Putting everything together, we are left with \[c_{|e|}=(-1)^{N/4+l}\frac{2}{2^{r}}\sum_{j=1\,\mathrm{odd}}^{2^ {r}-1}\cos^{N-|e|}\frac{\pi j}{2^{r}}\sin^{|e|}\frac{\pi j}{2^{r}}\frac{i^{j-1 }}{\cos\frac{2\pi j}{2^{r}}}. \tag{117}\] ### Proof of Lemma 4 and Lemma 6 **Lemma 4**.: _Let \(|H_{N}^{\mathbf{k}}\rangle=\frac{1}{\sqrt{2}^{N}}\sum_{x}(-1)^{f(w(x))}|x\rangle\) be a symmetric \(\mathbf{k}\)-uniform \(N\)-qubit hypergraph state, which is stabilised by \(Y^{\otimes N}\). Further, assume that \(f\) is \(2^{r}\)-periodic, \(f(w)=0\) for all even \(w\), and \(N\equiv 0\mod 2^{r-1}\). Then the state can be mapped to a superposition of the GHZ state and some odd weights by applying \(\sqrt{Y}_{+}^{\otimes N}\), i.e._ \[|\widetilde{H}\rangle=\sqrt{Y}_{+}^{\otimes N}|H\rangle=\frac{1}{\sqrt{2}}|GHZ \rangle+\frac{1}{\sqrt{2}}|\phi_{\mathrm{odd}}^{\mathbf{k}}\rangle,\] (A.32) _where \(|\phi_{\mathrm{odd}}^{\mathbf{k}}\rangle\) is a normalized quantum state depending on \(\mathbf{k}\) which features odd weight contributions only in the computational basis._ **Lemma 6**.: _Let \(r\geq 3\), and \(N=l2^{r}\). Then the \(k=(2^{r-1}+1)\)-uniform complete \(N\)-qubit hypergraph state is stabilised by \(Y^{\otimes N}\) and gets mapped to_ \[|\widetilde{H}_{N}^{k}\rangle=\frac{1}{\sqrt{2}}|GHZ\rangle+\sum_{w(x)\, \mathrm{odd}}i^{w-1}c_{w(x)}|x\rangle\] (A.33) _by \(\sqrt{Y}_{+}^{\otimes N}\), where the odd coefficients are given by_ \[c_{w}=\frac{2}{2^{r}}\sum_{j=1,\,\mathrm{odd}}^{2^{r}-1}\frac{\cos(\frac{\pi j }{2^{r}})^{N-w}\sin(\frac{\pi j}{2^{r}})^{w}}{\sin(\frac{2\pi j}{2^{r}})}.\] (A.34) Proof.: First, we focus on the proof of Lemma 6, i.e. \(k=(2^{r-1}+1)\) and \(N=l2^{r}\). With some minor alterations, these calculations can be adapted to also prove Lemma 4. The weights in \(|H_{N}^{k}\rangle\) with a negative sign are the odd weights congruent to \(2^{r-1}+1,\ldots,2^{r}-1\) modulo \(2^{r}\). Assuming \(r\geq 3\), we also have \(\left(\frac{1+i}{\sqrt{2}}\right)^{N}=1\). When \(|e|=0\) or \(|e|=N\), the expression (A.18) reduces to \[c_{|e|} =\frac{1}{2^{N}}\left(\sum_{w\,\mathrm{even}}(-1)^{f(w)+w}\binom{ N}{w}\pm\sum_{w\,\mathrm{odd}}(-1)^{f(w)+w}\binom{N}{w}\right)\] \[=\frac{1}{2^{N}}\left(\sum_{w\,\mathrm{even}}\binom{N}{w}\pm \left(\sum_{f(w)=0,\,\mathrm{odd}}\binom{N}{w}-\sum_{f(w)=1,\,\mathrm{odd}} \binom{N}{w}\right)\right).\] where the \(+\) appears when \(|e|=0\), and the \(-\) for \(|e|=N\). Let us fix an odd \(0\leq w\leq N\). According to the palindrome conditions in Lemma 1 we have \(f(w)\equiv f(N-w)+1\mod 2\) as well as \(\binom{N}{w}=\binom{N}{N-w}\). Hence the odd contributions cancel each other, which leaves us with \[c_{0}=c_{N}=\frac{1}{2}.\] (A.35) We calculate the other coefficients. Similarly to the proof of Lemma 5, the formula (A.19) can be rearranged to \[c_{|e|}=\frac{1}{2^{r}}\sum_{j=0}^{2^{r}-1}\cos^{N-|e|}\frac{\pi j}{2^{r}} \sin^{|e|}\frac{\pi j}{2^{r}}\sum_{q=0}^{2^{r}-1}(-1)^{f(q)+q}(-1)^{lj}\Re \left(e^{-2iqj\pi/2^{r}}(-i)^{|e|}\right)\] For \(j=0\), we have clearly no contribution, since \(|e|>0\). Let us therefore assume that \(j\neq 0\).
It is easily seen that in this case the contributions where \(q\) is even sum up to zero: \[\sum_{q\,\mathrm{even}}(-1)^{f(q)}e^{-2i\pi jq/2^{r}}=\sum_{q^{\prime}=0}^{2^{r- 1}-1}e^{-4i\pi jq^{\prime}/2^{r}}=0. \tag{111}\] Now we employ Eq. (117) in the \(\sigma=0\) case, to conclude that we still have \(\sum_{q=1,\,\mathrm{odd}}^{2^{r}-1}(-1)^{f(q)+q}e^{-2i\pi jq/2^{r}}=0\), when \(j\) is even and \[\Re\left((-i)^{|e|}\sum_{q=1\,\mathrm{odd}}^{2^{r}-1}(-1)^{f(q)+q}e^{-2i\pi jq/2^{r}} \right)=-\Re\left((-i)^{|e|}\frac{-2i}{\sin(\frac{2\pi j}{2^{r}})}\right)=2\Re \left(\frac{(-i)^{|e|-1}}{\sin(\frac{2\pi j}{2^{r}})}\right), \tag{112}\] when \(j\) is odd. Clearly this expression equals zero whenever \(|e|\) is even. If \(|e|\) is odd on the other hand, combining everything we get \[c_{|e|}=\frac{2}{2^{r}}(-1)^{l}(-i)^{|e|-1}\sum_{j=1,\,\mathrm{odd}}^{2^{r}-1 }\frac{\cos^{N-|e|}\frac{\pi j}{2^{r}}\sin^{|e|}\frac{\pi j}{2^{r}}}{\sin\frac {2\pi j}{2^{r}}}. \tag{113}\] We can apply local \(\sqrt{Z}_{\pm}^{\otimes N}\) operators, in order to get rid of the alternating phases on the odd weight contributions, at the price of picking up an imaginary unit. Neglecting global phases, we can then transform the state as claimed, proving Lemma 6. Lastly, we want to transfer the qualitative result that all even weights apart from \(c_{0}=c_{N}=\frac{1}{2}\) are zero, to the more general assumptions in Lemma 4. First of all, according to Lemma 1 and the fact that \(N\equiv 0\mod 4\) by our assumption, we have that \(\mathbf{k}=(k_{1},\ldots,k_{n})\) must satisfy \[\sum_{i}\binom{w}{k_{i}}\equiv\sum_{i}\binom{N-w}{k_{i}}+w\mod 2\mbox{ for all }0\leq w\leq N. \tag{114}\] By assumption, the terms which are congruent to \(1\) modulo \(2\) are distributed only among the odd weights, and the contributions of weights \(w\) and \(N-w\) must have opposite signs if \(w\) is odd. Hence we can conclude right away that \(c_{|e|}=\frac{1}{2}\) when \(|e|\in\{0,N\}\). In all the other cases where \(|e|\) is even, we can assume without loss of generality, that the negative weights only occur when \(w\mod 2^{r}\in\{2^{r-1}+1,2^{r-1}+3,\ldots,2^{r}-1\}\). If necessary, we can exchange \(w\) with \(N-w\), since with \(m^{\prime}=|e|-m\) \[\sum_{m=0}^{|e|}\binom{N-|e|}{N-w-m}\binom{|e|}{m}(-1)^{m}=\sum_{m^{\prime}=0}^{|e |}\binom{N-|e|}{w-m^{\prime}}\binom{|e|}{m^{\prime}}(-1)^{m^{\prime}+|e|}. \tag{115}\] Let therefore \(|e|\) be even and fixed. As we just argued, \[c_{|e|}=\frac{2(1+i)^{N}}{(2\sqrt{2})^{N}}\sum_{f(w)=1}\sum_{m=0}^{|e|}(-1)^{ m}\binom{N-|e|}{w-m}\binom{|e|}{m} \tag{116}\] is therefore actually equal to \[c_{|e|}=\frac{2(1+i)^{N}}{(2\sqrt{2})^{N}}\sum_{w\,\bmod\,2^{r}\in\{2^{r-1} +1,\ldots,2^{r}-1\}}\sum_{m=0}^{|e|}(-1)^{m}\binom{N-|e|}{w-m} \binom{|e|}{m}, \tag{118}\] and we are in a similar situation as before. When \(N\equiv 0\mod 2^{r}\), the computation works out in exactly the same manner as above. In the other case, where \(N\equiv 2^{r-1}\mod 2^{r}\), we get an additional factor \(i^{j}\) in the real part, similar to the \(X^{\otimes N}\)-stabilised case before. Instead of obtaining zero right away as before, we have \[c_{|e|}=\frac{2}{2^{r}}(-i)^{|e|-1+2l}\sum_{j=1,\,\mathrm{odd}}^{2^{r}-1}i^{j} \frac{\cos^{N-|e|}\frac{\pi j}{2^{r}}\sin^{|e|}\frac{\pi j}{2^{r}}}{\sin\frac{ 2\pi j}{2^{r}}}. \tag{119}\] However, because \(|e|\) and \(N\) are even and \(j\) is odd, we see that the summands corresponding to \(j\) and \(2^{r}-j\) are their negative counterparts, therefore Eq.
(119) still vanishes. ### Analytical results on the geometric measure of entanglement **Lemma 10**.: _For \(r\geq 1\) and even \(N\), the function_ \[f(x)=\sum_{j=0}^{2^{r}-1}\cos^{N}\left(x+\frac{\pi j}{2^{r}}\right) \tag{114}\] _attains its maximum at \(x\in\frac{\pi}{2^{r}}\mathds{Z}\)._ Proof.: Due to the symmetry, it suffices to check that \(f\) attains its maximum on \([-\pi/2^{r},\pi/2^{r}]\) at \(x=0\). Here we can exploit the following, using that \(N\) is even, see [43] p. 263: \[\cos^{N}(x)=\frac{1}{2^{N-1}}\frac{N!}{((N/2)!)^{2}}\left(\frac{1}{2}+\frac{N }{N+2}\cos(2x)+\frac{N(N-2)}{(N+2)(N+4)}\cos(4x)+\ldots\right).\] For odd \(l\), we have that \[\sum_{j=0}^{2^{r}-1}\cos\left(2l(x+\frac{\pi j}{2^{r}})\right)= \sum_{j=0}^{2^{r-1}-1}\cos\left(2l(x+\frac{\pi j}{2^{r}})\right)+\cos\left(2l(x +\frac{\pi(j+2^{r-1})}{2^{r}})\right)=0, \tag{120}\] and in the case where \(l\) is even, we can write \[\sum_{j=0}^{2^{r}-1}\cos\left(2l(x+\frac{\pi j}{2^{r}})\right)= \sum_{j=0}^{2^{r}-1}\cos\left(l(2x+\frac{\pi j}{2^{r-1}})\right).\] If \(l/2\) is still even, we apply the same reduction again, otherwise this term vanishes, according to (120). Inductively, we can show that for any \(l\) not divisible by \(2^{r}\), the sum over all the shifted contributions vanishes. For \(l\equiv 0\mod 2^{r}\) on the other hand, the reduction terminates when we are left with only one summand, and we get \(\cos(2l(x+\frac{\pi j}{2^{r}}))=\cos(2lx)\) for all \(j\). Therefore, we can write \(f(x)=\sum_{k}\alpha_{k}\cos(2^{r+1}kx)\), for some suitable, nonnegative coefficients \(\alpha_{k}\). This indeed shows that \(f\) attains its maximum at \(x=0\) and by symmetry also at every \(x\in\frac{\pi}{2^{r}}\mathds{Z}\). #### A.4.1 Geometric entanglement in the five-uniform case With some tricks, the strategy for proving the three-uniform case can be applied to the five-uniform \(X^{\otimes N}\)-stabilised case. Recall that the odd coefficients are given by \[c_{w}=\frac{1}{4}\sum_{j=1,\,\mathrm{odd}}^{7}i^{j-1}\frac{\cos\left(\frac{\pi j }{8}\right)^{N-w}\sin\left(\frac{\pi j}{8}\right)^{w}}{\cos\left(\frac{\pi j}{4 }\right)},\] (A.46) which is positive for all odd \(j\) and \(w\). Without loss of generality, we assume that the even-weight coefficients have positive signs as well.
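Before carrying out the maximisation, Lemma 10 is easy to confirm by a direct numerical scan for representative parameters. The following minimal sketch is ours and assumes NumPy:

```python
import numpy as np

def f(x, N, r):
    """f(x) = sum_{j=0}^{2^r - 1} cos^N(x + pi*j/2^r), as in Lemma 10."""
    j = np.arange(2**r)
    return np.sum(np.cos(x[:, None] + np.pi*j/2**r)**N, axis=1)

r, N = 3, 12
x = np.linspace(-np.pi/2**r, np.pi/2**r, 100001)
vals = f(x, N, r)
# the maximum sits at the grid point x = 0 (and, by symmetry, its pi/2^r shifts)
print(abs(x[np.argmax(vals)]) < 1e-4, np.isclose(vals.max(), f(np.zeros(1), N, r)[0]))
```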
We compute \[f_{N}(\theta):= \langle\widetilde{H}_{N}^{5}|\psi(\theta)\rangle^{\otimes N}\] \[= \frac{1}{2}\cos^{N}\left(\theta+\frac{\pi}{4}\right)+\frac{1}{2} \cos^{N}\left(\theta+\frac{3\pi}{4}\right)\] \[+ \frac{1}{4}\sum_{j=1,\,\mathrm{odd}}^{7}\frac{i^{j-1}}{\cos( \frac{\pi j}{4})}\sum_{k=1\,\mathrm{odd}}^{N}\binom{N}{k}\cos^{N-k}(\theta) \cos^{N-k}(\frac{\pi j}{8})\sin^{k}(\theta)\sin^{k}(\frac{\pi j}{8}).\] This can be simplified to \[f_{N}(\theta)= \frac{1}{2}\sum_{j=2,6}\cos^{N}\left(\theta+\frac{\pi j}{8} \right)+\frac{1}{8}\sum_{j=1,\,\mathrm{odd}}^{7}\frac{i^{j-1}}{\cos(\frac{\pi j }{4})}\left(\cos^{N}\left(\theta-\frac{\pi j}{8}\right)-\cos^{N}\left(\theta+ \frac{\pi j}{8}\right)\right)\] \[= \frac{1}{2}\sum_{j=2,6}\cos^{N}\left(\theta+\frac{\pi j}{8} \right)+\frac{1}{2\sqrt{2}}\left(\sum_{j=5,7}\cos^{N}\left(\theta+\frac{\pi j }{8}\right)-\sum_{j=1,3}\cos^{N}\left(\theta+\frac{\pi j}{8}\right)\right)\] \[= \left(\frac{1}{2}-\frac{1}{2\sqrt{2}}\right)\sum_{j=2,6}\cos^{N} \left(\theta+\frac{\pi j}{8}\right)+\frac{1}{2\sqrt{2}}\sum_{j=0}^{7}\cos^{N} \left(\theta+\frac{\pi j}{8}\right)\] \[\qquad-\frac{1}{2\sqrt{2}}\sum_{j=0,4}\cos^{N}\left(\theta+\frac {\pi j}{8}\right)-\frac{2}{2\sqrt{2}}\sum_{j=1,3}\cos^{N}\left(\theta+\frac{ \pi j}{8}\right).\] Again the sum running over all \(j=0,\ldots,7\) attains its maxima at \(\frac{\pi}{8}\mathds{Z}\), in particular at \(\theta=\frac{\pi}{4}\). Also, the first sum is maximal at \(\theta=\frac{\pi}{4}\), whereas the negative contributions are minimal at \(\theta=\frac{\pi}{4}\). This is obvious for the first one \((j=0,4)\). For the second one, we can quickly check this analytically: \[g(x):=\cos^{N}\left(x+\frac{\pi}{8}\right)+\cos^{N}\left(x+\frac{3\pi}{8} \right).\] (A.47) We now look at the roots of the first derivative: \[0\stackrel{!}{=}\partial_{x}g(x)=-N\left(\cos^{N-1}\left(x+ \frac{\pi}{8}\right)\sin\left(x+\frac{\pi}{8}\right)+\cos^{N-1}\left(x+\frac{3 \pi}{8}\right)\sin\left(x+\frac{3\pi}{8}\right)\right)\] After dividing through \(\cos^{N}(x+\frac{\pi}{8})\) and substituting \(y:=\tan(x+\frac{\pi}{8})\), this is equivalent to demanding that \[L(y):=\frac{-2y}{1-y^{2}}=\left(\frac{1-y}{\sqrt{2}}\right)^{N-2}=:R_{N}(y).\] (A.48) For \(x\in[-3\pi/8,-\pi/8]\) we have \(g(x)\geq\frac{2}{\sqrt{2}^{N}}\geq g(\pi/4)\); therefore, we can exclude solutions of Eq. (A.48) with \(-1<y\leq 0\) from our search of the global minimum. When \(y<-1\) or \(0\leq y<1\), \(L(y)\) and \(R_{N}(y)\) have opposite signs (recall that \(N\) is even), so we find no solutions in those intervals. For \(y>1\), \(L(y)\) is strictly decreasing, and \(R_{N}(y)\) strictly increasing. Hence there is at most one solution in this interval. This solution is found at \(y=\sqrt{2}+1\Leftrightarrow x=\frac{\pi}{4}\) and easily verified to correspond to a local minimum, which therefore also must be the global minimum of \(g\). Altogether, we can conclude that \(f_{N}\) attains its maximum at \(\theta=\frac{\pi}{4}\), which is then equal to \[\max_{\theta}f_{N}(\theta)=\frac{1}{2}+\frac{1}{\sqrt{2}}\left(\cos^{N}\left( \frac{\pi}{8}\right)-\sin^{N}\left(\frac{\pi}{8}\right)\right).\] (A.49) For \(Y^{\otimes N}\)-stabilised \(5\)-uniform states, again we can no longer assume all coefficients to be real; therefore, we need to introduce a complex phase.
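As a quick cross-check of the \(X^{\otimes N}\)-stabilised maximum (A.49) before treating the \(Y^{\otimes N}\) case, the simplified form of \(f_{N}\) derived above can be scanned directly. A minimal sketch of ours, assuming NumPy:

```python
import numpy as np

N = 20
theta = np.linspace(0, np.pi, 200001)
cosN = lambda J: sum(np.cos(theta + np.pi*j/8)**N for j in J)

# second form of f_N derived above
fN = 0.5*cosN([2, 6]) + (cosN([5, 7]) - cosN([1, 3])) / (2*np.sqrt(2))
target = 0.5 + (np.cos(np.pi/8)**N - np.sin(np.pi/8)**N) / np.sqrt(2)

print(np.isclose(fN.max(), target), theta[np.argmax(fN)])  # True, theta = pi/4
```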
However, we can still assume that the closest state is of the form \[\left|\psi(\theta,\phi)\right\rangle^{\otimes N}=(\sin(\theta)|0\rangle+e^{i \phi}\cos(\theta)|1\rangle)^{\otimes N}\] (A.50) for some \(\theta\in[0,\frac{\pi}{2}]\) and phase \(\phi\). We compute the overlap of \(|\widetilde{H}_{N}^{5}\rangle=\sqrt{Z}_{+}^{\otimes N}\sqrt{Y}_{+}^{\otimes N}|H_{N}^{5}\rangle\) with such a general symmetric state: \[\langle\widetilde{H}_{N}^{5}|\psi(\theta,\phi)\rangle^{\otimes N} =\frac{i}{2}\left(\sin^{N}(\theta)+e^{i\phi N}\cos^{N}(\theta)\right)\] \[+\frac{1}{4}\sum_{j=1,\,\mathrm{odd}}^{7}\sum_{k=1\,\mathrm{odd} }^{N}\binom{N}{k}\frac{\sin^{k}(\frac{\pi j}{8})\cos^{N-k}(\frac{\pi j}{8})}{ \sin(\frac{\pi j}{4})}e^{i\phi(N-k)}\cos^{N-k}(\theta)\sin^{k}(\theta)\] The expression \[\frac{\sin^{k}(\frac{\pi j}{8})\cos^{N-k}(\frac{\pi j}{8})}{\sin(\frac{\pi j}{ 4})}\cos^{N-k}(\theta)\sin^{k}(\theta)\] is nonnegative for all odd \(j\in\{1,\ldots,2^{r}-1\}\), odd \(k\) and \(\theta\in[0,\frac{\pi}{2}]\), therefore we can estimate the modulus by \[|\langle\widetilde{H}_{N}^{5}|\psi(\theta,\phi)\rangle^{\otimes N}|\leq \frac{1}{2}\sin^{N}(\theta)+\frac{1}{2}\cos^{N}(\theta)\] \[+ \frac{1}{4}\sum_{j=1,\,\mathrm{odd}}^{7}\sum_{k=1\,\mathrm{odd}} ^{N}\binom{N}{k}\frac{\sin^{k}(\frac{\pi j}{8})\cos^{N-k}(\frac{\pi j}{8})}{ \sin(\frac{\pi j}{4})}\cos^{N-k}(\theta)\sin^{k}(\theta)\] \[= \frac{1}{2}\sum_{j=0,4}\cos^{N}\left(\theta+\frac{\pi j}{8}\right)\] \[+ \frac{1}{2\sqrt{2}}\left(\sum_{j=5,7}\cos^{N}\left(\theta+\frac{ \pi j}{8}\right)-\sum_{j=1,3}\cos^{N}\left(\theta+\frac{\pi j}{8}\right)\right).\] This is maximal neither at \(\theta=\frac{\pi}{4}\) nor at \(\theta=0\), which makes it hard to determine its maximum analytically. Therefore we again rely on estimates. For \(N\geq 16\), it is not difficult to check that in order to have \(|\langle\widetilde{H}_{N}^{5}|\psi(\theta,\phi)\rangle|\geq\frac{1}{2}\), it is necessary to have \(\theta\in[-\frac{\pi}{8},\frac{\pi}{8}]+\frac{\pi}{2}\mathds{Z}\). Without loss of generality assume that \(\theta\in[-\frac{\pi}{8},\frac{\pi}{8}]\), which allows us to estimate \[\cos^{N}\Big{(}\theta+\frac{5\pi}{8}\Big{)}-\cos^{N}\Big{(}\theta+\frac{\pi}{8} \Big{)}\leq 0\leq\cos^{N}\Big{(}\theta+\frac{\pi}{8}\Big{)}-\cos^{N}\Big{(} \theta+\frac{5\pi}{8}\Big{)}.\] Thereby we can relate the maximisation to that of the \(X^{\otimes N}\)-stabilised case: \[\max_{\phi,\theta}|\langle\widetilde{H}_{N}^{5}|\psi(\theta,\phi) \rangle^{\otimes N}| \leq\max_{\theta}\frac{1}{2}\sum_{j=0,4}\cos^{N}\left(\theta+ \frac{\pi j}{8}\right)\] \[+\frac{1}{2\sqrt{2}}\left(\sum_{j=1,7}\cos^{N}\left(\theta+\frac{ \pi j}{8}\right)-\sum_{j=3,5}\cos^{N}\left(\theta+\frac{\pi j}{8}\right)\right)\] \[=\max_{\theta}f_{N}\left(\theta+\frac{\pi}{4}\right)=\frac{1}{2} +\frac{1}{\sqrt{2}}\left(\cos^{N}\left(\frac{\pi}{8}\right)-\sin^{N}\left( \frac{\pi}{8}\right)\right),\] which proves the lower bound in (27). The upper bound follows trivially from the fact that \(|\langle\widetilde{H}_{N}^{5}|0\rangle^{\otimes N}|=\frac{1}{2}\). ## Appendix B Appendix to nonlocality This part of the Appendix is dedicated to providing detailed calculations and reasoning for the statements of the main text concerning nonlocality and robustness, in this order. ### Computation for non-locality of the X-stabilised case Let \(k=2^{r-1}+1\) and \(N\equiv 2^{r-1}\mod 2^{r}\), so that \(|H^{k}_{N}\rangle\) is \(X^{\otimes N}\)-stabilised.
Consider the Bell operator \[\mathcal{B}^{Y}_{N} =\frac{1}{2}(Y+iZ)^{\otimes N}+\frac{1}{2}(Y-iZ)^{\otimes N}\] \[=\sum_{m\text{ even}}i^{m}\underbrace{Y_{1}\ldots Y_{m}Z_{m+1} \ldots Z_{N}}_{=:A(m)}+\text{perm}.\] Because \(\mathcal{B}^{Y}_{N}\) is invariant under conjugation with \(\sqrt{X}^{\otimes N}_{+}\), we directly see that \[\langle H^{k}_{N}|\mathcal{B}^{Y}_{N}|H^{k}_{N}\rangle=\langle\widetilde{H}^{k }_{N}|\mathcal{B}^{Y}_{N}|\widetilde{H}^{k}_{N}\rangle=\frac{1}{2}\langle GHZ _{X}|\mathcal{B}^{Y}_{N}|GHZ_{X}\rangle+\langle\phi_{\rm odd}|\mathcal{B}^{Y} _{N}|\phi_{\rm odd}\rangle.\] Note that the cross-terms cancel, as the summands of \(\mathcal{B}^{Y}_{N}\) contain only even numbers of bit-flips. Let us first consider the GHZ part. We observe that because \(N\equiv 0\mod 4\), we can rewrite \(\mathcal{B}^{Y}_{N}\) in the following way: \[\mathcal{B}^{Y}_{N}=\frac{1}{2}Z^{\otimes N}\left((\mathds{1}+X)^{\otimes N}+ (\mathds{1}-X)^{\otimes N}\right)=\frac{1}{2}Z^{\otimes N}\widetilde{\mathcal{ B}}^{X}_{N}. \tag{21}\] Since \(|GHZ_{X}\rangle\) is a \(+1\)-eigenstate for \(Z^{\otimes N}\) and all summands of \(\widetilde{\mathcal{B}}^{X}_{N}\) - recall that these always contain an even number of Pauli-\(X\) operators, and identities otherwise - we clearly get \[\frac{1}{2}\langle GHZ_{X}|\mathcal{B}^{Y}_{N}|GHZ_{X}\rangle=\frac{1}{4} \langle GHZ_{X}|Z^{\otimes N}\widetilde{\mathcal{B}}^{X}_{N}|GHZ_{X}\rangle=2 ^{N-2}. \tag{22}\] Next, we compute the contribution of the odd-weight terms for \(0\leq m\leq N\). For a single correlator \(A(m)\), the contribution of the odd terms is given by \[\frac{1}{2}\langle\phi_{\rm odd}|A(m)|\phi_{\rm odd}\rangle= \frac{1}{2}\langle\phi_{\rm odd}|Y_{1}\ldots Y_{m}Z_{m+1}\ldots Z_ {N}|\phi_{\rm odd}\rangle\] \[= \frac{1}{2}\sum_{w,w^{\prime}\,\rm odd}\sum_{|I|=w}\sum_{|J|=w^{ \prime}}c_{w}c_{w^{\prime}}\langle I|Y_{1}\ldots Y_{m}Z_{m+1}\ldots Z_{N}|J\rangle \tag{23}\] The only terms which contribute to this summation are the ones where \((j_{1},\ldots,j_{N})=(\tilde{i_{1}},\ldots,\tilde{i_{m}},i_{m+1},\ldots,i_{N})\). Also, \(Y\) and \(Z\) both introduce a sign when acting on \(|1\rangle\), so we pick up a total phase of \(-i^{m}\) in the process. Further, we substitute \(j:=i_{1}+\cdots+i_{m}\), that is if \(I\) has weight \(w\), the corresponding \(J\) has weight \(w^{\prime}=|J|=w+m-2j\). Altogether we count \({m\choose j}{N-m\choose w-j}\) possibilities to distribute a weight of \(j\) among the first \(m\) entries, given that the total weight is \(w\). Exploiting the symmetry, we can then reduce Eq. (23) to \[=-i^{m}\sum_{w,\,\rm odd}\sum_{j=0}^{m}c_{w}c_{w+m-2j}{m\choose j}{N-m\choose w -j}. \tag{24}\] In order to reduce cumbersome notation, let \(a(l):=\cos(\frac{\pi l}{2^{r}})\) and \(b(l):=\sin(\frac{\pi l}{2^{r}})\). We proceed by calculating the product of the coefficients, using the general formulae derived for the odd coefficients in Lemma 5.
\[c_{w}c_{w+m-2j}=\frac{4}{4^{r}}\sum_{\begin{subarray}{c}l,l^{\prime}=1\\ \mathrm{odd}\end{subarray}}^{2^{r}-1}\frac{-i^{l+l^{\prime}}}{a(2l)a(2l^{ \prime})}a(l)^{N-w}a(l^{\prime})^{N-w-m+2j}b(l)^{w}b(l^{\prime})^{w+m-2j}\] (B.5) For now, let us fix \(l\) and \(l^{\prime}\), and compute the sum over \(j\) and \(w\): \[\sum_{j=0}^{m}\sum_{w^{\prime}\equiv j+1\bmod 2}\binom{m}{j}\binom{N-m}{w^{ \prime}}a(l)^{N-w^{\prime}-j}a(l^{\prime})^{N-w^{\prime}-m+j}b(l)^{w^{\prime}+j}b(l^{\prime})^{w ^{\prime}+m-j}\] \[= \sum_{j=0}^{m}\binom{m}{j}a(l)^{N-j}a(l^{\prime})^{N-m+j}b(l)^{ j}b(l^{\prime})^{m-j}\times\] \[\qquad\qquad\qquad\frac{1}{2}\left(\left(1+\frac{b(l)b(l^{\prime })}{a(l)a(l^{\prime})}\right)^{N-m}-(-1)^{j}\left(1-\frac{b(l)b(l^{\prime})}{a (l)a(l^{\prime})}\right)^{N-m}\right)\] \[= \frac{1}{2}\sum_{\sigma\in\{0,1\}}(-1)^{\sigma}a(l)^{N}a(l^{ \prime})^{N-m}b(l^{\prime})^{m}\left(1+(-1)^{\sigma}\frac{b(l)b(l^{\prime})}{ a(l)a(l^{\prime})}\right)^{N-m}\left(1+(-1)^{\sigma}\frac{a(l^{\prime})b(l)}{a(l)b(l^{ \prime})}\right)^{m}\] \[= \frac{1}{2}\sum_{\sigma\in\{0,1\}}(-1)^{\sigma}\left(a(l)a(l^{ \prime})+(-1)^{\sigma}b(l)b(l^{\prime})\right)^{N-m}\left(a(l)b(l^{\prime})+( -1)^{\sigma}a(l^{\prime})b(l)\right)^{m}\] \[= \frac{1}{2}\sum_{\sigma\in\{0,1\}}(-1)^{\sigma}\cos^{N-m}\left((l -(-1)^{\sigma}l^{\prime})\frac{\pi}{2^{r}}\right)\sin^{m}\left((l+(-1)^{ \sigma}l^{\prime})\frac{\pi}{2^{r}}\right).\] By replacing \(l^{\prime}\) with \((2^{r}-l^{\prime})\) in the \(\sigma=1\)-term, we see that the summation over \(l,l^{\prime}\) then gives the same contribution for \(\sigma=0\) and \(\sigma=1\). Adding up the resulting contributions for the different \(m\), we arrive at: \[\frac{1}{2}\langle\phi_{\mathrm{odd}}|\mathcal{B}_{N}^{Y}|\phi_{ \mathrm{odd}}\rangle= -\sum_{\begin{subarray}{c}m=0\\ \mathrm{even}\end{subarray}}^{N}\binom{N}{m}i^{m}\sum_{\begin{subarray}{c}w=1 \\ \mathrm{odd}\end{subarray}}^{N-1}\sum_{j=0}^{m}i^{m}c_{w}c_{w+m-2j}\binom{m}{j}\binom{N-m}{w-j}\] \[= \frac{4}{4^{r}}\sum_{\begin{subarray}{c}m=0\\ \mathrm{even}\end{subarray}}^{N}\binom{N}{m}\sum_{l,l^{\prime}=1,\\ \mathrm{odd}\end{subarray}}^{2^{r}-1}i^{l+l^{\prime}}\frac{\sin^{m}\left((l+l^ {\prime})\frac{\pi}{2^{r}}\right)\cos^{N-m}\left((l-l^{\prime})\frac{\pi}{2^{ r}}\right)}{\cos(\frac{2\pi l}{2^{r}})\cos(\frac{2\pi l^{\prime}}{2^{r}})}\] \[= \frac{2}{4^{r}}\sum_{\begin{subarray}{c}l,l^{\prime}=1\\ \mathrm{odd}\end{subarray}}^{2^{r}-1}\frac{i^{l+l^{\prime}}}{\cos\left(\frac{2 \pi l}{2^{r}}\right)\cos\left(\frac{2\pi l^{\prime}}{2^{r}}\right)}\Bigg{[} \left(\cos\left(\frac{(l-l^{\prime})\pi}{2^{r}}\right)+\sin\left(\frac{(l+l^{ \prime})\pi}{2^{r}}\right)\right)^{N}\] \[\qquad\qquad\qquad\qquad\qquad+\left(\cos\left(\frac{(l-l^{ \prime})\pi}{2^{r}}\right)-\sin\left(\frac{(l+l^{\prime})\pi}{2^{r}}\right) \right)^{N}\Bigg{]}.\] In the second summand, we exchange \(l\leftrightarrow 2^{r}-l\), \(l^{\prime}\leftrightarrow 2^{r}-l^{\prime}\), and thereby get rid of the relative minus sign.
Thus we get \[= \frac{4}{4^{r}}\sum_{\begin{subarray}{c}l,l^{\prime}=1\\ \text{odd}\end{subarray}}^{2^{r}-1}\frac{i^{l+l^{\prime}}}{\cos(\frac{2\pi l}{2 ^{r}})\cos(\frac{2\pi l^{\prime}}{2^{r}})}\left(\cos(\frac{(l-l^{\prime})\pi}{2 ^{r}})+\sin(\frac{(l+l^{\prime})\pi}{2^{r}})\right)^{N}\] \[= \frac{4}{4^{r}}\sum_{\begin{subarray}{c}l,l^{\prime}=1\\ \text{odd}\end{subarray}}^{2^{r}-1}i^{l+l^{\prime}}\frac{\left[\cos(\frac{l \pi}{2^{r}})\cos(\frac{l^{\prime}\pi}{2^{r}})+\cos(\frac{l\pi}{2^{r}})\sin( \frac{l^{\prime}\pi}{2^{r}})+\sin(\frac{l\pi}{2^{r}})\cos(\frac{l^{\prime}\pi} {2^{r}})+\sin(\frac{l\pi}{2^{r}})\sin(\frac{l^{\prime}\pi}{2^{r}})\right]^{N} }{\cos(\frac{2\pi l}{2^{r}})\cos(\frac{2\pi l^{\prime}}{2^{r}})}\] \[= -\frac{4}{4^{r}}\sum_{l=1,\,\text{odd}}^{2^{r}-1}i^{l}\frac{ \left(\cos(\frac{l\pi}{2^{r}})+\sin(\frac{l\pi}{2^{r}})\right)^{N}}{\cos( \frac{2\pi l}{2^{r}})}\sum_{l^{\prime}=1,\,\text{odd}}^{2^{r}-1}i^{-l^{\prime} }\frac{\left(\cos(\frac{l^{\prime}\pi}{2^{r}})+\sin(\frac{l^{\prime}\pi}{2^{r} })\right)^{N}}{\cos(\frac{2\pi l^{\prime}}{2^{r}})}\] \[= -\frac{4}{4^{r}}\left|\sum_{l=1,\,\text{odd}}^{2^{r}-1}i^{l}\frac {\left(\cos(\frac{l\pi}{2^{r}})+\sin(\frac{l\pi}{2^{r}})\right)^{N}}{\cos( \frac{2\pi l}{2^{r}})}\right|^{2},\] which finishes our derivation. ### Violation of the separability inequality in the three-uniform complete case in the presence of particle loss The Bell operator for the three-uniform complete hypergraph state after losing \(k\) particles: \[\mathcal{M}_{N-k}^{0} =X_{1}X_{2}X_{3}\ldots X_{N-k}\] (B.6) \[-Z_{1}Z_{2}X_{3}X_{4}X_{5}\ldots X_{N-k}-\text{ perm.}\] \[+Z_{1}Z_{2}Z_{3}Z_{4}X_{5}X_{6}X_{7}\ldots X_{N-k}+\text{ perm.}\] \[-\ldots,\] \[\mathcal{M}_{N-k}^{1} =Z_{1}X_{2}X_{3}\ldots X_{N-k}+\text{ perm.}\] \[-Z_{1}Z_{2}Z_{3}X_{4}X_{5}\ldots X_{N-k}-\text{ perm.}\] \[+Z_{1}Z_{2}Z_{3}Z_{4}Z_{5}X_{6}X_{7}\ldots X_{N-k}+\text{ perm.}\] \[-\ldots,\] In addition to the already introduced notation \(|GHZ_{X}\rangle=\frac{1}{\sqrt{2}}(|+\rangle^{\otimes N}+|-\rangle^{\otimes N})\), we write \(|GHZ_{X}^{\pm}\rangle=\frac{1}{\sqrt{2}}(|+\rangle^{\otimes N}\pm|-\rangle^{\otimes N})\), and similarly for the GHZ state with respect to the Pauli-\(Y\) eigenbasis.
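The derivation above can be compared against a brute-force evaluation for the smallest admissible case \(r=3\), \(N=12\). The following sketch is ours, not from the paper; it assumes NumPy, and the closed-form line at the end simply transcribes the result just derived:

```python
import numpy as np
from math import comb

N, r = 12, 3                     # k = 2^{r-1}+1 = 5, N = 4 mod 8: X-stabilised
H = np.array([(-1)**(comb(bin(x).count("1"), 5) % 2) for x in range(2**N)],
             dtype=complex) / np.sqrt(2)**N

Y = np.array([[0, -1j], [1j, 0]]); Z = np.array([[1, 0], [0, -1]], dtype=complex)
sqrtX = 0.5*np.array([[1+1j, 1-1j], [1-1j, 1+1j]])

def apply_local(U, psi):
    psi = psi.reshape([2]*N)
    for q in range(N):
        psi = np.moveaxis(np.tensordot(U, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def bellY(psi, phi):
    """<psi| B_N^Y |phi> with B_N^Y = ((Y+iZ)^{xN} + (Y-iZ)^{xN})/2."""
    return 0.5*(np.vdot(psi, apply_local(Y + 1j*Z, phi))
                + np.vdot(psi, apply_local(Y - 1j*Z, phi)))

total = bellY(H, H).real
Ht = apply_local(sqrtX, H)                       # transformed state of Lemma 5
w = np.array([bin(x).count("1") for x in range(2**N)])
ghz, odd = np.where(w % 2 == 0, Ht, 0), np.where(w % 2, Ht, 0)
print(total, bellY(ghz, ghz).real)               # GHZ part equals 2^{N-2}

l = np.arange(1, 2**r, 2)
S = np.sum(1j**l * (np.cos(np.pi*l/2**r) + np.sin(np.pi*l/2**r))**N
           / np.cos(2*np.pi*l/2**r))
print(bellY(odd, odd).real, -(4/4**r)*abs(S)**2)  # odd part vs. closed form above
```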
### Proof of Theorem 3 **Theorem 3**.: _After losing \(k\) particles of \(N\)-qubit three-uniform complete hypergraph state, we can derive the following violations of separability inequalities:_ \begin{tabular}{c|c|c} Constraints on \(N\) and \(k\) & Bell inequality \(\mathcal{B}_{N\setminus k}\) & Quantum value \(\langle\mathcal{B}_{N\setminus k}\rangle_{|H_{N}^{3}\rangle}\) \\ \hline \(N\equiv 2\mod 4\)_, k odd_ & \(\mathcal{M}_{N-k}^{0}\otimes\mathds{1}^{\otimes k}\) & \(\sqrt{2}^{N-2k}\) \\ \(N\equiv 0\mod 4\)_, k even_ & \(\mathcal{M}_{N-k}^{1}\otimes\mathds{1}^{\otimes k}\) & \(\sqrt{2}^{N-2k}\) \\ \(N-k\equiv 2\mod 4\) & \(\mathcal{M}_{N-k}^{1}\otimes\mathds{1}^{\otimes k}\) & \(\left|\sin\left(\frac{\pi k}{4}\right)\right|\sqrt{2}^{N-2k}\) \\ \(N-k\equiv 0\mod 4\) & \(\mathcal{M}_{N-k}^{1}\otimes\mathds{1}^{\otimes k}\) & \(\left|\cos\left(\frac{\pi k}{4}\right)\right|\sqrt{2}^{N-2k}\) \\ \end{tabular} _In the first two cases, the quantum value is \(1/2\) or \(0\) if we instead consider even and odd \(k\), respectively._ Proof.: Case 1: \(N\equiv 2\mod 4\): We transform the operator \(\mathcal{B}_{N\setminus k}=\mathcal{M}_{N-k}^{0}\otimes\mathds{1}^{\otimes k}\) as follows \[\begin{split}\widetilde{\mathcal{B}}_{N\setminus k}& :=\sqrt{X}_{+}^{\otimes N}\mathcal{B}_{N\setminus k}\sqrt{X}_{+ }^{\otimes N}\\ &=\mathds{1}_{1}\ldots\mathds{1}_{N-k}X_{N-k+1}\ldots X_{N}\\ &\quad+Z_{1}Z_{2}\mathds{1}_{3}\mathds{1}_{4}\ldots\mathds{1}_{ N-k}X_{N-k+1}\ldots X_{N}+\ \text{perm}.(1\ldots(N-k))\\ &\quad+\ldots.\\ &=\sum_{m\,\mathrm{even}}Z^{\otimes m}\otimes\mathds{1}^{\otimes N -k-m}\otimes X^{\otimes k}+\ \text{perm}.(1\ldots(N-k))\end{split}\] (B.8) Similar to the calculations for the part on non-locality, we write \[\langle H_{N}^{3}|\mathcal{B}_{N\setminus k}|H_{N}^{3}\rangle=\langle \widetilde{H}_{N}^{3}|\widetilde{\mathcal{B}}_{N\setminus k}|\widetilde{H}_{ N}^{3}\rangle=\frac{1}{2}\left(\langle GHZ|+\langle GHZ_{X}^{-}|\right) \widetilde{\mathcal{B}}_{N\setminus k}\left(|GHZ\rangle+|GHZ_{X}^{-}\rangle \right)\] where we also used that we can rewrite the sum over the odd weights as a GHZ state in the \(X\)-basis, \[\frac{1}{\sqrt{2}}|GHZ_{X}^{-}\rangle=\frac{1}{\sqrt{2}^{N}}\sum_{w(x)\, \mathrm{odd}}|x\rangle=\frac{1}{2}\left(|+\rangle^{\otimes N}-|-\rangle^{ \otimes N}\right)\] (B.9) We compute the values of a single correlator \(Z^{\otimes m}\otimes\mathds{1}^{\otimes N-m-k}\otimes X^{\otimes k}=(Z^{ \otimes m}\otimes\mathds{1}^{\otimes N-m})(\mathds{1}^{\otimes N-k}\otimes X ^{\otimes k})\) on the three different combinations of odd and even-weight contributions. Note that \(|GHZ\rangle\) absorbs the \(Z^{\otimes m}\)-terms since \(m\) is even, and the \(X^{\otimes k}\)-part maps \(|GHZ_{X}^{-}\rangle\) to \(|GHZ_{X}^{\pm}\rangle\), depending on \(k\). First consider the \(GHZ\)-\(GHZ\) pairing: \[\langle GHZ|Z^{\otimes m}\otimes\mathds{1}^{\otimes N-m-k}\otimes X ^{\otimes k}|GHZ\rangle =\langle GHZ|\mathds{1}_{1}\ldots\mathds{1}_{N-k}X_{N-k+1}\ldots X _{N}|GHZ\rangle\] \[=0,\] since we assume \(N>k>0\).
The cross-terms yield \[\langle GHZ|Z^{\otimes m}\otimes\mathds{1}^{\otimes N-m-k}\otimes X^{\otimes k }|GHZ_{X}^{-}\rangle=\begin{cases}\langle GHZ|GHZ_{X}^{-}\rangle=0,&k\text{ even}\\ \langle GHZ|GHZ_{X}^{+}\rangle=\frac{2}{\sqrt{2}^{N}},&k\text{ odd}.\end{cases}\] And finally, \[\langle GHZ_{X}^{-}|Z^{\otimes m}\otimes\mathds{1}^{\otimes N-m-k }\otimes X^{\otimes k}|GHZ_{X}^{-}\rangle =\langle GHZ_{X}^{-}|Z_{1}\ldots Z_{m}\mathds{1}_{m+1}\ldots \mathds{1}_{N-k}|GHZ_{X}^{\pm}\rangle\] \[=\begin{cases}1&,k\text{ is even and }m=0\\ 0&,\text{ otherwise}.\end{cases}\] Counting in total \(2^{N-k-1}\) of such correlators, we end up with \[\langle H_{N}^{3}|\mathcal{B}_{N\setminus k}|H_{N}^{3}\rangle=\sqrt{2}^{N-2k}\] (B.10) if \(k\) is odd and \(\langle\mathcal{B}_{N\setminus k}\rangle_{|H^{3}_{N}\rangle}=\frac{1}{2}\) in the case where \(k\) is even. Case 2: \(N\equiv 0\mod 4\): Again, our first step is to transform the inequality: \[\begin{split}\widetilde{\mathcal{B}}_{N\setminus k}:=& \sqrt{Z}_{-}^{\otimes N}\sqrt{Y}_{+}^{\otimes N}\mathcal{B}_{N \setminus k}\sqrt{Y}_{-}^{\otimes N}\sqrt{Z}_{-}^{\otimes N}\\ =&(-i)X_{1}\mathds{1}_{2}\mathds{1}_{3}\ldots \mathds{1}_{N-k}Z_{N-k+1}\ldots Z_{N}+\text{ perm. }(1\ldots(N-k))\\ +&(-i)X_{1}X_{2}X_{3}\mathds{1}_{4}\mathds{1}_{5} \ldots\mathds{1}_{N-k}Z_{N-k+1}\ldots Z_{N}+\text{ perm. }(1\ldots(N-k))\\ +&\ldots\\ =&(-i)\sum_{m\,\text{odd}}X^{\otimes m}\mathds{1}^{N -m-k}Z^{\otimes k}+\text{perm.}(1\ldots(N-k))\end{split} \tag{B.11}\] Because \(N\equiv 0\mod 4\), we have \(\langle\widetilde{H}^{3}_{N}|\sqrt{Z}_{+}=\frac{1}{\sqrt{2}}\left(\pm\langle GHZ |+i\langle GHZ_{X}^{-}|\right)\), and similarly \(\sqrt{Z}_{+}|\widetilde{H}^{3}_{N}\rangle=\frac{1}{\sqrt{2}}\left(\pm|GHZ \rangle+i|GHZ_{X}^{-}\rangle\right)\), see Lemma 3. Without loss of generality, let us assume that we have a positive sign in front of the \(Z\)-basis GHZ part. All summands of \(\widetilde{\mathcal{B}}_{N\setminus k}\) interchange between even and odd weights, therefore only the cross-terms contribute: \[\begin{split}\langle H^{3}_{N}|\mathcal{B}_{N\setminus k}|H^{3}_ {N}\rangle=&\frac{1}{2}(\langle GHZ|+i\langle GHZ_{X}^{-}|) \widetilde{\mathcal{B}}_{N\setminus k}(|GHZ\rangle+i|GHZ_{X}^{-}\rangle)\\ =&\langle GHZ|X^{\otimes m}\mathds{1}^{N-m-k}Z^{ \otimes k}|GHZ_{X}^{-}\rangle.\end{split}\] We let the \(Z^{\otimes k}\)-part of the operator act on \(\langle GHZ|\) and the \(X^{\otimes m}\)-components on \(|GHZ_{X}^{-}\rangle\). Since \(m\) is always odd, we have \[\langle GHZ|(\mathds{1}^{\otimes N-k}\otimes Z^{\otimes k})(X^{\otimes m} \otimes\mathds{1}^{\otimes N-m})|GHZ_{X}^{-}\rangle=\begin{cases}\langle GHZ |GHZ_{X}^{+}\rangle=\frac{2}{\sqrt{2}^{N}},&k\text{ even},\\ \langle GHZ^{-}|GHZ_{X}^{+}\rangle=0,&k\text{ odd}.\end{cases}\] Counting in total \(2^{N-k-1}\) contributions, we indeed arrive at \[\langle H^{3}_{N}|\mathcal{B}_{N\setminus k}|H^{3}_{N}\rangle=\sqrt{2}^{N-2k}, \text{ for even }k. \tag{B.12}\] It is evident that for odd \(k\) we get correlation \(0\). Case 3: \(N-k\equiv 2\mod 4\): Again, we consider the "odd" Mermin operator \(\mathcal{M}^{1}_{N-k}\).
We now decompose the hypergraph state, by conditioning on the last \(k\) qubits \[\begin{split}|H^{3}_{N}\rangle&=\frac{1}{\sqrt{2}}|H^{3 }_{N-1}\rangle|0\rangle+\frac{1}{\sqrt{2}}|H^{3,2}_{N-1}\rangle|1\rangle\\ &=\frac{1}{2}\left(|H^{3}_{N-2}\rangle|00\rangle+|H^{3,2}_{N-2} \rangle(|01\rangle+|10\rangle)+|H^{3,1}_{N-2}\rangle|11\rangle\right)=\ldots\\ &=\frac{1}{\sqrt{2}^{k}}\sum_{l=0}^{k}|H_{N-k}(l)\rangle\left( \sum_{w(x)=l}|x\rangle\right)\end{split}\] Since \(|H_{m}^{3,1}\rangle=\frac{1}{\sqrt{2}}\left(|H_{m-1}^{3,1}\rangle|0\rangle-|H_{m-1 }^{3,2,1}\rangle|1\rangle\right)\) and \(|H_{m}^{3,2,1}\rangle=\frac{1}{\sqrt{2}}\left(|H_{m-1}^{3,2,1}\rangle|0\rangle-| H_{m-1}^{3}\rangle|1\rangle\right)\), the states \(|H_{N-k}(l)\rangle\) only depend on \(l\mod 4\), more specifically \[|H_{N-k}(l)\rangle=\begin{cases}&|H_{N-k}^{3}\rangle,\quad l\equiv 0\mod 4\\ &|H_{N-k}^{3,2}\rangle,\quad l\equiv 1\mod 4\\ &|H_{N-k}^{3,1}\rangle,\quad l\equiv 2\mod 4\\ -|H_{N-k}^{3,2,1}\rangle,\quad l\equiv 3\mod 4.\end{cases} \tag{B.13}\] With this, we decompose the correlator by measuring on the last qubits, to obtain mixed-state overlaps: \[\langle H_{N}^{3}|\mathcal{B}_{N\backslash k}|H_{N}^{3}\rangle= \frac{1}{2^{k}}\sum_{l=0}^{k}\langle H_{N-k}(l)|\left(\sum_{w(x)= l}\langle x|\right)\mathcal{M}_{N-k}^{1}\otimes\mathds{1}^{k}\sum_{l^{ \prime}=0}^{k}|H_{N-k}(l^{\prime})\rangle\left(\sum_{w(x)=l^{\prime}}|x^{ \prime}\rangle\right)\] \[= \frac{1}{2^{k}}\sum_{l,l^{\prime}=0}^{k}\sum_{\begin{subarray}{ c}w(x)=l\\ w(x^{\prime})=l^{\prime}\end{subarray}}\langle H_{N-k}(l)|\mathcal{M}_{N-k}^{1} |H_{N-k}(l^{\prime})\rangle\langle x|x^{\prime}\rangle\] \[= \frac{1}{2^{k}}\sum_{l=0}^{k}\binom{k}{l}\langle H_{N-k}(l)| \mathcal{M}_{N-k}^{1}|H_{N-k}(l)\rangle. \tag{B.14}\] By construction \(|H_{N-k}^{3}\rangle\) and \(|H_{N-k}^{3,1}\rangle\) are \(X^{\otimes N-k}\)-stabilised, and from Lemma 1 it is evident that \(|H_{N-k}^{3,2}\rangle,|H_{N-k}^{3,2,1}\rangle\) are invariant under \(Y^{\otimes N-k}\). In order to calculate the contributions originating from the \(X\)-stabilised cases, corresponding to \(l\mod 4=0,2\) we transform the inequality to \[\widetilde{\mathcal{B}}_{N-k}^{1}:= \sqrt{X}_{+}^{\otimes N-k}\mathcal{M}_{N-k}^{1}\sqrt{X}_{+}^{ \otimes N-k}\] \[= \sum_{m\,\mathrm{odd}}iZ^{\otimes m}\otimes\mathds{1}^{\otimes N -k-m}+\text{perm}\] For the \(Y\)-stabilised cases \(l\equiv 1,3\mod 4\), we consider \[\mathcal{B}_{N-k}^{\prime}:= \sqrt{Z}_{-}^{\otimes N-k}\sqrt{Y}_{+}^{\otimes N-k}\mathcal{M}_ {N-k}^{1}\sqrt{Y}_{-}^{\otimes N-k}\sqrt{Z}_{-}^{\otimes N-k}\] \[= \sum_{m\,\mathrm{odd}}iX^{\otimes m}\otimes\mathds{1}^{N-m-k}+ \text{perm}.\] First consider \(l\equiv 0\mod 4\): \[\langle\widetilde{H}_{N-k}^{3}|\widetilde{\mathcal{B}}_{N-k}^{1}|\widetilde{ H}_{N-k}^{3}\rangle=\frac{1}{2}\left(\langle GHZ|+\langle GHZ_{X}^{-}|\right) \widetilde{\mathcal{B}}_{N-k}\left(|GHZ\rangle+|GHZ_{X}^{-}\rangle\right) \tag{B.15}\] We compute the contributions separately. Because \(m\) is odd, we have \[\langle GHZ|Z_{1}\dots Z_{m}\mathds{1}_{m+1}\dots\mathds{1}_{N-k }|GHZ\rangle =\langle GHZ^{-}|GHZ\rangle=0\] \[\langle GHZ|Z_{1}\dots Z_{m}\mathds{1}_{m+1}\dots\mathds{1}_{N-k }|GHZ_{X}^{-}\rangle =\langle GHZ^{-}|GHZ_{X}^{-}\rangle=0\] \[\langle GHZ_{X}^{-}|Z_{1}\dots Z_{m}\mathds{1}_{m+1}\dots\mathds{1 }_{N-k}|GHZ_{X}^{-}\rangle =0,\] since \(Z\) interchanges between \(|+\rangle\) and \(|-\rangle\). Therefore \[\langle H^{3}_{N-k}|\mathcal{M}^{1}_{N-k}|H^{3}_{N-k}\rangle=\langle H^{3,1}_{N-k}| \mathcal{M}^{1}_{N-k}|H^{3,1}_{N-k}\rangle=0.
\tag{B.16}\] Next we look at the case where \(l\equiv 1\mod 4\). We rewrite the correlator using the latter transformation of the Bell operator: \[\langle H^{3,2}_{N-k}|\mathcal{M}^{1}_{N-k}|H^{3,2}_{N-k}\rangle=\langle H^{3,2}_{ N-k}|\sqrt{Y}^{\otimes N-k}_{-}\sqrt{Z}^{\otimes N-k}_{+}\mathcal{B^{ \prime}}_{N-k}\sqrt{Z}^{\otimes N-k}_{+}\sqrt{Y}^{\otimes N-k}_{+}|H^{3,2}_{ N-k}\rangle\] With the aid of Lemma 9, we can transform the state as \[\sqrt{Y}^{\otimes N-k}_{+}|H^{3,2}_{N-k}\rangle=\pm\frac{i}{\sqrt{2}}|GHZ^{-} \rangle+\frac{1}{\sqrt{2}}|GHZ^{-}_{Y}\rangle.\] Without loss of generality, we shall assume the first sign to be negative. Applying \(\sqrt{Z}^{\otimes N-k}_{+}\) to the respective ket and bra states yields \[\sqrt{Z}^{\otimes N-k}_{+}\sqrt{Y}^{\otimes N-k}_{+}|H^{3,2}_{N-k}\rangle=\pm \frac{i}{\sqrt{2}}|GHZ^{+}_{Z}\rangle+\frac{1}{\sqrt{2}}|GHZ^{-}_{X}\rangle,\] and \[\langle H^{3,2}_{N-k}|\sqrt{Y}^{\otimes N-k}_{-}\sqrt{Z}^{\otimes N-k}_{+}= \mp\frac{i}{\sqrt{2}}\langle GHZ^{+}_{Z}|-\frac{1}{\sqrt{2}}\langle GHZ^{-}_{ X}|.\] Note that this again requires \(N-k\equiv 2\mod 4\) to map \(|GHZ^{-}\rangle\) to \(|GHZ\rangle\). Then \[\langle GHZ^{+}|X_{1}\ldots X_{m}\mathds{1}_{m+1}\ldots\mathds{1} _{N-k}|GHZ^{+}\rangle = 0\] \[\langle GHZ^{+}|X_{1}\ldots X_{m}\mathds{1}_{m+1}\ldots\mathds{1} _{N-k}|GHZ^{-}_{X}\rangle = \langle GHZ^{+}|GHZ^{+}_{X}\rangle=\frac{2}{\sqrt{2}^{N-k}}\] \[\langle GHZ^{-}_{X}|X_{1}\ldots X_{m}\mathds{1}_{m+1}\ldots \mathds{1}_{N-k}|GHZ^{-}_{X}\rangle = \langle GHZ^{+}_{X}|GHZ^{-}_{X}\rangle=0\] Therefore, when decomposing the quantum value into the different pairings, only the cross-terms yield a contribution and we are left with \[\langle H^{3,2}_{N-k}|\mathcal{M}^{1}_{N-k}|H^{3,2}_{N-k}\rangle = \frac{1}{2}\left(i\langle GHZ^{+}|-\langle GHZ^{-}_{X}|\right) \mathcal{B}^{\prime}_{N-k}\left(-i|GHZ^{+}_{Z}\rangle+|GHZ^{-}_{X}\rangle\right)\] \[= i\langle GHZ^{+}|\mathcal{B}^{\prime}_{N-k}|GHZ^{-}_{X}\rangle\] \[= \sum_{m\,\mathrm{odd}}\binom{N-k}{m}\langle GHZ^{+}|X_{1}\ldots X _{m}\mathds{1}_{m+1}\ldots\mathds{1}_{N-k}|GHZ^{-}_{X}\rangle\] \[= 2^{N-k-1}\frac{2}{\sqrt{2}^{N-k}}=\sqrt{2}^{N-k}.\] Because conjugation with \(Z^{\otimes N-k}\) flips the sign of \(\mathcal{M}^{1}_{N-k}\), the \(3,2,1\)-uniform state has the same contribution with opposite sign: \[\langle H^{3,2,1}_{N-k}|\mathcal{M}^{1}_{N-k}|H^{3,2,1}_{N-k}\rangle=-\langle H ^{3,2}_{N-k}|\mathcal{M}^{1}_{N-k}|H^{3,2}_{N-k}\rangle=-\sqrt{2}^{N-k}.\] Hence, whenever \(k\geq 1\), (B.14) becomes
With the same tricks as before, we can express \[\langle H_{N-k}^{3}|\mathcal{M}_{N-k}^{1}|H_{N-k}^{3}\rangle =\frac{1}{2}\left(\pm\langle GHZ|+i\langle GHZ_{X}^{-}|\right) \mathcal{B}^{\prime}\left(\pm|GHZ\rangle+i|GHZ_{X}^{-}\rangle\right)\] (B.20) \[=\pm\langle GHZ_{Z}|GHZ_{X}^{+}\rangle=\pm\frac{1}{\sqrt{2}^{N-k}}.\] (B.21) Since \(\mathcal{B}^{\prime}\) consists only of summands with an odd number of Pauli-\(X\) and identities otherwise, the cross-terms contribute give the only contribution, and \(|GHZ_{X}^{-}\rangle\) gets mapped to \(|GHZ_{X}^{+}\rangle\). According to Lemma 9, we have \(\sqrt{X}^{\otimes N}|H_{N}^{3,2}\rangle=\pm\frac{1}{\sqrt{2}}|GHZ\rangle+\frac {1}{\sqrt{2}}|GHZ_{X}^{-}\rangle\). Similarly as before, we can rewrite \[\langle H_{N-k}^{3,2}|\mathcal{M}_{N-k}^{1}|H_{N-k}^{3,2}\rangle=\frac{1}{2}( \langle GHZ|+\langle GHZ_{X}^{-}|)\widetilde{\mathcal{B}}_{N-k}(|GHZ\rangle+ |GHZ_{X}^{-}\rangle)\] (B.22) It is now straightforward to check that all the four contributions evaluate to zero, as \(\widetilde{\mathcal{B}}\) only features odd number of Pauli \(Z\), and identities otherwise. The rest of the proof then follows analogously, with the exception of the alternating \(l\)-summation in Eq. (B.17) running over even integers instead, so that in Eq. (B.19), the sine gets replaced by a cosine, which concludes the proof.
2302.10940
Dynamical mean-field theory for Rényi entanglement entropy and mutual Information in Hubbard Model
Quantum entanglement, lacking any classical counterpart, provides a fundamental new route to characterize the quantum nature of many-body states. In this work, we discuss an implementation of a new path integral method [Phys. Rev. Res. 2, 033505 (2020)] for fermions to compute entanglement for extended subsystems in the Hubbard model within dynamical mean field theory (DMFT) in one and two dimensions. The new path integral formulation measures entanglement by applying a ``kick" to the underlying interacting fermions. We show that the R\'{e}nyi entanglement entropy can be extracted efficiently within the DMFT framework by integrating over the strength of the kick term. Using this method, we compute the second R\'{e}nyi entropy as a function of subsystem size for metallic and Mott insulating phases of the Hubbard model. We explore the thermal entropy to entanglement crossover in the subsystem R\'{e}nyi entropy in the correlated metallic phase. We show that the subsystem-size scaling of second R\'{e}nyi entropy is well described by the crossover formula which interpolates between the volume-law thermal R\'{e}nyi entropy and the universal boundary-law R\'{e}nyi entanglement entropy with logarithmic violation, as predicted by conformal field theory. We also study the mutual information across the Mott metal-insulator transition.
Surajit Bera, Arijit Haldar, Sumilan Banerjee
2023-02-21T19:00:12Z
http://arxiv.org/abs/2302.10940v2
# Dynamical mean-field theory for Rényi entanglement entropy and mutual Information in Hubbard Model ###### Abstract Quantum entanglement, lacking any classical counterpart, provides a fundamental new route to characterize the quantum nature of many-body states. In this work, we discuss an implementation of a new path integral method [Phys. Rev. Res. **2**, 033505 (2020)] for fermions to compute entanglement for extended subsystems in the Hubbard model within dynamical mean field theory (DMFT) in one and two dimensions. The new path integral formulation measures entanglement by applying a "kick" to the underlying interacting fermions. We show that the Rényi entanglement entropy can be extracted efficiently within the DMFT framework by integrating over the strength of the kick term. Using this method, we compute the second Rényi entropy as a function of subsystem size for metallic and Mott insulating phases of the Hubbard model. We explore the thermal entropy to entanglement crossover in the subsystem Rényi entropy in the correlated metallic phase. We show that the subsystem-size scaling of second Rényi entropy is well described by the crossover formula which interpolates between the volume-law thermal Rényi entropy and the universal boundary-law Rényi entanglement entropy with logarithmic violation, as predicted by conformal field theory. We also study the mutual information across the Mott metal-insulator transition. ## I Introduction Entanglement, arguably the strongest aspect of quantum mechanics, signifies the existence of true non-local quantum correlations. As a result, it has found enormous applications for characterizing quantum many-body states in condensed matter and high-energy physics, and as a resource for quantum computation [1]. In condensed matter systems, entanglement can be used to distinguish various kinds of symmetry-broken and topological states, gapped or gapless phases [2] etc. For instance, entanglement provides an unambiguous indicator of topological order [3; 4] in quantum ground states. Entanglement has also emerged as an important measure for distinguishing high-energy states as well as non-equilibrium dynamics. For example, entanglement can be used to classify dynamical phases of isolated quantum systems as ergodic or many-body localized [5; 6; 7]. Entanglement of a quantum system is quantified in terms of various measures, e.g., von Neumann and Rényi entanglement entropies, mutual information and entanglement negativity [2; 8; 9]. These measures can be calculated by partitioning the overall system into two subsystems and computing the reduced density matrix of one of the subsystems by tracing over the other. To this end, the dependence of entanglement entropy on the size and geometry of the subsystem under various partitioning of the system are used to classify quantum many-body states and their non-local entanglement properties. For example, ground-states of gapped bosonic and fermionic systems in \(d\) dimensions follow the so-called 'area law' or 'boundary-law' for entanglement entropy (\(\sim L^{d-1}\)) of a subsystem with length \(L\)[2; 8; 10; 11]. In contrast, critical states in one dimension (1d) and fermionic systems with Fermi surface, i.e., standard metals, in any dimension exhibit a logarithmic violation [2; 12; 13; 14; 15; 16; 17; 18] of the area law, namely the subsystem entanglement entropy scales as \(L^{d-1}\ln L\).
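This logarithmic violation is easy to reproduce for non-interacting fermions with the standard correlation-matrix method mentioned below (cf. [11]). The following sketch is ours, not from this paper, and assumes NumPy; it uses the textbook half-filled tight-binding correlations \(C_{ij}=\sin(\pi(i-j)/2)/(\pi(i-j))\) of an infinite chain:

```python
import numpy as np

def corr_block(ell):
    """<c_i^dag c_j> restricted to ell sites of a half-filled infinite chain."""
    d = np.arange(ell)[:, None] - np.arange(ell)[None, :]
    C = np.sin(np.pi*d/2) / (np.pi*np.where(d == 0, 1, d))
    np.fill_diagonal(C, 0.5)
    return C

for ell in [10, 20, 40, 80, 160]:
    z = np.clip(np.linalg.eigvalsh(corr_block(ell)), 1e-14, 1 - 1e-14)
    S1 = -np.sum(z*np.log(z) + (1 - z)*np.log(1 - z))   # von Neumann entropy
    S2 = -np.sum(np.log(z**2 + (1 - z)**2))             # second Renyi entropy
    print(ell, round(S1, 4), round(S2, 4), round(np.log(ell)/3, 4))
```

The printed entropies grow like \((1/3)\ln\ell\) plus a constant, the 1d free-fermion (central charge \(c=1\)) prediction.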
These characterizations of the many-body ground states are mainly obtained through powerful analytical results based on conformal field theory (CFT) methods [8; 19] and related arguments [15; 16; 17; 18], as well as numerical results for non-interacting systems [14; 11]. For the latter, entanglement measures can be computed efficiently using the correlation matrix of the subsystem [11]. However, numerical computations of entanglement entropy are much more challenging for interacting systems, typically limited to small systems accessible via exact diagonalization (ED), or to 1d systems through density matrix renormalization group (DMRG) or heavily numerical and sophisticated quantum Monte Carlo (QMC) techniques [20; 21; 22; 23; 24; 25; 26; 27]. The above numerical methods have provided many useful insights into entanglement characteristics of interacting systems. However, there is a lack of complementary quantum many-body methods, e.g., mean-field theories, perturbation expansions, and other approximations, for computing the entanglement entropy of interacting systems, unlike those for usual thermodynamic, spectroscopic and transport properties. The CFT techniques employ a replica path integral approach [8; 19] where bosonic and fermionic fields are defined on a non-trivial space-time manifold with complicated boundary conditions. The latter are often hard to implement within the standard quantum many-body methodology. To circumvent this difficulty, a new path integral approach was first developed in Ref. [28] for bosons and was subsequently extended to fermionic systems [29; 30; 31]. In particular, Ref. [29] employed this method to compute the Rényi entanglement entropy of Fermi and non-Fermi liquid states of strongly interacting fermions described by Sachdev-Ye-Kitaev (SYK) models.
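For the non-interacting benchmark mentioned above, the correlation-matrix method is simple enough to illustrate directly. The following is a minimal sketch (not the interacting DMFT kick method of this paper): for a free-fermion chain, the Rényi entropies of a subsystem follow from the eigenvalues \(\nu_i\) of the restricted correlation matrix via \(S_2=-\sum_i\ln[\nu_i^2+(1-\nu_i)^2]\); the chain length and half filling are illustrative choices.

```python
import numpy as np

# Second Renyi entropy S_2 of a length-l subsystem in the ground state of a
# free-fermion tight-binding chain, from the subsystem correlation matrix
# C_ij = <c_i^dag c_j> (the correlation-matrix method of Ref. [11]).
L_tot = 256
H = np.zeros((L_tot, L_tot))
for i in range(L_tot - 1):                 # nearest-neighbour hopping, open chain
    H[i, i + 1] = H[i + 1, i] = -1.0
eps, U = np.linalg.eigh(H)
filled = U[:, eps < 0.0]                   # half filling: occupy E < 0 modes
C = filled @ filled.T                      # full correlation matrix

def renyi2(l):
    """S_2 = -sum_i ln[nu_i^2 + (1 - nu_i)^2], nu_i = eigenvalues of C[:l, :l]."""
    nu = np.clip(np.linalg.eigvalsh(C[:l, :l]), 1e-12, 1.0 - 1e-12)
    return -np.sum(np.log(nu**2 + (1.0 - nu)**2))

for l in (4, 8, 16, 32, 64):
    print(l, renyi2(l))                    # grows ~ ln l: the 1d log violation
```

The logarithmic growth of \(S_2\) with subsystem size is the free-fermion (c = 1 CFT) benchmark against which interacting DMFT results are naturally compared.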
2304.12201
Quantum-Squeezing-Induced Point-Gap Topology and Skin Effect
We theoretically predict the squeezing-induced point-gap topology together with a {\it symmetry-protected $\mathbb{Z}_2$ skin effect} in a one-dimensional (1D) quadratic-bosonic system (QBS). Protected by a time-reversal symmetry, such a topology is associated with a novel $\mathbb{Z}_2$ invariant (similar to quantum spin-Hall insulators), which is fully capable of characterizing the occurrence of $\mathbb{Z}_2$ skin effect. Focusing on zero energy, the parameter regime of this skin effect in the phase diagram just corresponds to a {\it real-gap and point-gap coexisting topological phase}. Moreover, this phase associated with the {\it symmetry-protected $\mathbb{Z}_2$ skin effect} is experimentally observable by detecting the steady-state power spectral density. Our work is of fundamental interest in enriching non-Bloch topological physics by introducing quantum squeezing, and has potential applications for the engineering of symmetry-protected sensors based on the $\mathbb{Z}_2$ skin effect.
Liang-Liang Wan, Xin-You Lü
2023-04-21T12:25:57Z
http://arxiv.org/abs/2304.12201v2
# Quantum-Squeezing-Induced Point-Gap Topology and Skin Effect ###### Abstract We theoretically predict the squeezing-induced point-gap topology together with a _symmetry-protected \(\mathbb{Z}_{2}\) skin effect_ in a one-dimensional (1D) quadratic-bosonic system (QBS). Protected by a time-reversal symmetry, such a topology is associated with a novel \(\mathbb{Z}_{2}\) invariant (similar to quantum spin-Hall insulators), which is fully capable of characterizing the occurrence of the \(\mathbb{Z}_{2}\) skin effect. Focusing on zero energy, the parameter regime of this skin effect in the phase diagram corresponds to a _real- and point-gap coexisting topological phase_. Moreover, this phase associated with the symmetry-protected \(\mathbb{Z}_{2}\) skin effect is experimentally observable by detecting the steady-state power spectral density. Our work is of fundamental interest in enriching non-Bloch topological physics by introducing quantum squeezing, and has potential applications for the engineering of symmetry-protected sensors based on the \(\mathbb{Z}_{2}\) skin effect. The concept of topological phases of matter has radiated from condensed-matter physics to several fields including photonics [1], magnetoplasmons [2], mechanics [3; 4; 5; 6], cold atoms [7; 8], metasurfaces [9; 10; 11], etc. In particular, growing effort has been devoted to the search for distinctive topological phenomena in non-Hermitian systems [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43]. The most intriguing is the non-Hermitian skin effect [14; 18], which refers to the localization of bulk states at boundaries. Accompanied by the breakdown of the bulk-boundary correspondence, it stems from the point-gap topology, where the complex-valued spectrum enclosing an energy point has a nonvanishing winding number [19; 22; 23; 30; 31]. Squeezing of bosonic fields [44], as a useful technique of quantum engineering, can not only exponentially enhance light-matter interactions [45; 46; 47; 48; 49; 50; 51; 52], but also induce instability of edge states in QBSs [53; 54; 55; 56; 57]. In the sense that the instability arises from the complex-valued spectrum given by a non-Hermitian matrix, the QBS is also of interest in the framework of non-Hermitian physics [58; 59]. The topological classification for the generic QBS is established based on the Bernard-LeClair 38-fold symmetry classes [60], and it predicts the topological triviality of the 1D QBS in terms of zero energy [23]. However, the bosonic Kitaev chain exhibits an end-to-end amplification and has the analogue of Majorana zero modes [61; 62; 63; 64; 65; 66], which should be an effect of point-gap topology. Such a contradiction implies that the topological nature of QBSs still remains unclear, and solving this contradiction is fundamentally interesting for exploring exotic topological phenomena (e.g., the skin effect). Here, we investigate the topological origin of a 1D QBS in the thermodynamic-instability regime. By introducing an unconventional time-reversal symmetry, we discover that the squeezing can induce the appearance of point-gap topology together with a symmetry-protected \(\mathbb{Z}_{2}\) skin effect in the QBS. The mechanism relies on an additional symmetry enriching the topology of the system. 
In contrast to the imaginary gauge transformation in non-Hermitian systems [18; 24; 26; 67; 68], this skin effect corresponds to a real squeezing transformation, and it is extremely sensitive to a local perturbation that breaks the time-reversal symmetry of the system. By increasing the squeezing until the point gap is open at zero energy, we also find the survival of a pair of zero modes in the open boundary condition (OBC) even if the real gap closes in the periodic boundary condition (PBC). This indicates an anomalous bulk-boundary correspondence and the appearance of a real- and point-gap coexisting topological phase. Meanwhile, the \(\mathbb{Z}_{2}\) skin effect, appearing in this coexisting phase, inhibits another pair of zero modes. Compared with the previous works focusing on the transport amplification [61; 62; 65], Majorana bosonic analogues together with the topological metastability [63; 64], and non-Bloch wave behaviors [69], here we introduce an unconventional time-reversal symmetry to the QBS, and uncover the symmetry-enriched topological classification. Remarkably, we also find the real- and point-gap coexisting topological phase, and it can be identified by the steady-state power spectral density. Our work builds a connection between the point-gap topology (together with the skin effect) and quantum squeezing. It opens a door to exploring the crossover between topological physics and quantum engineering, and offers potential applications in designing new types of topologically protected devices. _Squeezing-induced point-gap topology._--Let us consider a 1D QBS subject to the lattice-translational symmetry with Hamiltonian \(\hat{H}=\frac{1}{2}\sum_{k}\hat{\Phi}_{k}^{\dagger}H(k)\hat{\Phi}_{k}\). Here \(H(k)\) is the first-quantized Hamiltonian of the QBS in the crystal-momentum space, and \(\hat{\Phi}_{k}=(\hat{a}_{k1},\dots,\hat{a}_{kN},\hat{a}_{-k1}^{\dagger},\dots,\hat{a}_{-kN}^{\dagger})^{T}\) is the Nambu spinor in terms of \(2N\) bosonic annihilation and creation operators with \(k\) and \(-k\), respectively. The spinor obeys \([\hat{\Phi}_{ki},\hat{\Phi}_{k^{\prime}j}^{\dagger}]=\delta_{kk^{\prime}}(\tau^{3})_{ij}\) with \(\tau^{3}\) being the indefinite metric [70; 71]. Here, \(\tau^{i}=\sigma^{i}\otimes I_{N}\) with the Pauli matrices \(\sigma^{i}\) (\(i=1,2,3\)). The system dynamics is described by \(\partial_{t}\hat{\Phi}_{k}\left(t\right)=-iH_{\tau}(k)\hat{\Phi}_{k}\left(t\right)\) with \(H_{\tau}(k)=\tau^{3}H(k)\) being non-Hermitian. The dynamical matrix \(H_{\tau}(k)\) inherently respects the particle-hole symmetry \(\mathcal{C}H_{\tau}^{*}(-k)\mathcal{C}^{-1}=-H_{\tau}(k)\) with \(\mathcal{C}=\tau^{1}\) being the "charge conjugation" [72; 73; 74] and the pseudo-Hermiticity \(\eta H_{\tau}^{\dagger}(k)\eta^{-1}=H_{\tau}(k)\) with \(\eta=\tau^{3}\) [75]. In the thermodynamic-instability regime, the squeezing may induce a complex-valued spectrum formed by loops in the PBC and open curves in the OBC [61; 62]. This scenario is reminiscent of the point-gap topology in non-Hermitian systems [19; 22; 23]. In terms of zero energy, we construct the Hermitian matrix \[\tilde{H}_{\tau}\left(k\right)=\left(\begin{array}{cc}0&H_{\tau}\left(k\right)\\ H_{\tau}^{\dagger}\left(k\right)&0\end{array}\right), \tag{1}\] which respects the chiral symmetry \(\Gamma\tilde{H}_{\tau}\Gamma^{-1}=-\tilde{H}_{\tau}\) with \(\Gamma=I_{2N}\oplus-I_{2N}\). 
This symmetry leads to the winding number \(W\in\mathbb{Z}\) given by \[W=\int_{\mathrm{BZ}}\frac{dk}{2\pi i}\frac{\partial}{\partial k}\ln\det H_{\tau}\left(k\right). \tag{2}\] Equation (2) is always trivial due to the pseudo-Hermiticity. However, in general, the symmetry class together with the topological classification for the QBS would be altered once some additional symmetries are introduced. Hence, the presence of additional symmetries can enrich the topological phase of the QBS [77]. As an illustration, we study the squeezed Su-Schrieffer-Heeger (SSH) model shown in Fig. 1(a). The system Hamiltonian is \[\begin{split}\hat{H}_{\mathrm{SSH}}=&\sum_{j\in\mathbb{Z}}\left(t_{1}\hat{a}_{j,A}^{\dagger}\hat{a}_{j,B}+t_{2}\hat{a}_{j+1,A}^{\dagger}\hat{a}_{j,B}\right.\\ &\left.+g_{1}\hat{a}_{j,A}\hat{a}_{j,B}+g_{2}\hat{a}_{j+1,A}\hat{a}_{j,B}+\text{H.c.}\right),\end{split} \tag{3}\] where \(t_{1},t_{2}>0\) are the hopping strengths between the nearest-neighbor sites, and \(g_{1},g_{2}\in\mathbb{R}\) are the strengths of the intracell and intercell squeezing, respectively. This model can be implemented in many platforms like quantum superconducting circuits [78; 79; 80; 81; 82] and photonic crystals with optomechanical interaction [83; 84; 85]. In particular, the crucial bosonic squeezing can be implemented via the three-wave mixing process introduced by the Josephson ring modulator or superconducting nonlinear asymmetric inductive element device [77]. The Bloch spectrum with a twofold degeneracy is \(E_{\pm}^{2}(k)=\Delta^{2}+2(t_{1}t_{2}-g_{1}g_{2})\cos k\pm 2i(t_{1}g_{2}-t_{2}g_{1})\sin k\) with \(\Delta=\sqrt{t_{1}^{2}+t_{2}^{2}-g_{1}^{2}-g_{2}^{2}}\). Figures 1(b-e) show that the spectrum experiences three processes in the complex plane as \(g_{2}\) is increased. First, two isolated loops are located on the real axis (I), and subsequently a curve encloses zero energy (II). Finally, two isolated loops move to the imaginary axis (III). These regimes exhibit a real (\(\mathrm{Re}E=0\)), point (\(E=0\)) and imaginary (\(\mathrm{Im}E=0\)) gap, respectively. This hints at the appearance of a squeezing-induced nontrivial point-gap topology at zero energy in regime II. Specifically, the winding number (2) for our system is trivial even when the Bogoliubov bands enclose zero energy, as shown in Figs. 1(b,d). However, the system also respects a sublattice symmetry \(\mathcal{S}H_{\tau\mathrm{SSH}}\left(k\right)\mathcal{S}^{-1}=-H_{\tau\mathrm{SSH}}\left(k\right)\) with \(\mathcal{S}=\sigma^{3}\) acting on the sublattice and \(H_{\tau\mathrm{SSH}}\) being the dynamical matrix. The combination of the particle-hole symmetry, pseudo-Hermiticity and sublattice symmetry yields an unconventional time-reversal symmetry [86; 13; 87] \[\mathcal{T}H_{\tau\mathrm{SSH}}^{T}(-k)\mathcal{T}^{-1}=H_{\tau\mathrm{SSH}}\left(k\right),\ \ \ \mathcal{T}\mathcal{T}^{*}=-I, \tag{4}\] with \(\mathcal{T}=i\tau^{2}\sigma^{3}\). In terms of zero energy, this symmetry supports a \(\mathbb{Z}_{2}\) invariant \(\nu\in\left\{0,1\right\}\), defined by [77; 23] \[(-1)^{\nu}=\mathrm{sgn}\left[\frac{\mathrm{Pf}\left(H_{\tau\mathrm{SSH}}\left(0\right)\mathcal{T}\right)}{\mathrm{Pf}\left(H_{\tau\mathrm{SSH}}\left(\pi\right)\mathcal{T}\right)}\right], \tag{5}\] where \(\mathrm{Pf}\left(O\right)\) denotes the Pfaffian for any skew-symmetric matrix \(O\) (\(O^{T}=-O\)). This \(\mathbb{Z}_{2}\) invariant gives the critical points at \(|t_{1}\pm t_{2}|=|g_{1}\pm g_{2}|\), i.e., the red dots in Fig. 
1(b), which shows a squeezing-induced nontrivial point-gap topology in regime II. Moreover, in regimes I and III, the point-gap topology of the system can also be nontrivial, if the reference energy \(E\) is not zero and is placed inside the closed loop [77]. Figure 1: (a) Schematic of the squeezed SSH model consisting of the A and B sublattices in the presence of two-mode squeezing. The hopping and squeezing strengths between the adjacent sites are denoted by \(t_{1}\), \(t_{2}\) and \(g_{1}\), \(g_{2}\), respectively. (b) Spectrum in the complex plane as varying the intercell squeezing strength \(g_{2}\). Here \(t_{2}=3t_{1}/2\) and \(g_{1}=0\). The red dots at \(g_{2}=0.5t_{1},2.5t_{1}\) represent the critical points for closing or opening the point gap at \(E=0\), and \(\nu=1\) corresponds to \(g_{2}\in\left(0.5,2.5\right)t_{1}\). (c-e) Spectra (black curves) for I, II and III in (b) can be continuously deformed to (c) \(\pm 1\) (blue dots), (d) unit circle (blue circle) and (e) \(\pm i\) (blue dots), respectively, while preserving the associated gaps (red). _Symmetry-protected \(\mathbb{Z}_{2}\) skin effect._--In the presence of point-gap topology, the spectrum of Hamiltonian (3) dramatically changes from a closed curve [black loop in Fig. 2(a)] to the discrete points that form open lines [see the first panel of Fig. 2(c)] under the OBC. Consequently, as shown in Fig. 2(b), the Kramers pair guaranteed by the time-reversal symmetry (4) are localized at both ends, which shows the appearance of the symmetry-protected \(\mathbb{Z}_{2}\) skin effect [30; 88; 89]. In general, the non-Hermitian skin effect corresponds to an imaginary gauge transformation [18; 24; 67; 68; 26]. However, here the \(\mathbb{Z}_{2}\) skin effect corresponds to a real squeezing transformation with operator \(\hat{S}\) [77]. Specifically, under the parameter condition \(g_{1}=0\) and \(t_{2}>|g_{2}|\), we perform a squeezing transformation to the "particles" \(\hat{a}_{j\sigma}\) and "holes" \(\hat{a}_{j\sigma}^{\dagger}\) with \(\sigma=A,B\) such that \[\left(\begin{array}{c}\hat{a}_{j,A/B}\\ \hat{a}_{j,A/B}^{\dagger}\end{array}\right)=(e^{\pm r\tau^{1}})^{j}\left(\begin{array}{c}\hat{\alpha}_{j,A/B}\\ \hat{\alpha}_{j,A/B}^{\dagger}\end{array}\right). \tag{6}\] Here the squeezing parameter \(r\) satisfies \(\tanh r=-g_{2}/t_{2}\). The squeezing transformation (6) inherently belongs to SU(1,1) [90], and the particles and holes (\(\hat{\alpha}_{j\sigma},\hat{\alpha}_{j\sigma}^{\dagger}\)) in the new quasi-particle basis preserve \([\hat{\alpha}_{j\sigma},\hat{\alpha}_{j^{\prime}\sigma^{\prime}}^{\dagger}]=\delta_{jj^{\prime}}\delta_{\sigma\sigma^{\prime}}\). Using this transformation (6), the Hamiltonian (3) is mapped to the conventional SSH model with Hamiltonian \(\hat{H}_{\rm SSH}=\sum_{j=1}^{L}t_{1}\hat{\alpha}_{jA}^{\dagger}\hat{\alpha}_{jB}+\tilde{t}_{2}\hat{\alpha}_{j+1A}^{\dagger}\hat{\alpha}_{jB}+\text{H.c.}\), where \(\tilde{t}_{2}=\sqrt{t_{2}^{2}-g_{2}^{2}}\) and \(L\) is the number of the total unit cells. As shown in Fig. 2(a), the spectrum of \(\hat{H}_{\rm SSH}\) becomes two open (red) lines in the continuum limit \(L\rightarrow\infty\) corresponding to the PBC [91], which indicates the disappearance of the skin effect in the squeezed-state representation. This demonstrates that the obtained \(\mathbb{Z}_{2}\) skin effect originates from the intercell squeezing \(\sum_{j}g_{2}(\hat{a}_{jB}\hat{a}_{j+1A}+\text{H.c.})\) in the QBS. 
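The location of the point-gap regimes in Fig. 1(b) can be reproduced from the quoted Bloch dispersion alone. A small numerical sketch follows, under the assumption that the spectrum encloses zero energy exactly when a branch of \(E_{\pm}^{2}(k)\) winds around the origin; with the figure's parameters (\(t_{2}=1.5t_{1}\), \(g_{1}=0\)) the nontrivial window should be \(g_{2}\in(0.5,2.5)t_{1}\):

```python
import numpy as np

# Trace the squeezed-SSH Bloch spectrum through
#   E^2_{+-}(k) = Delta^2 + 2(t1 t2 - g1 g2) cos k +- 2i(t1 g2 - t2 g1) sin k
# and count how often each branch of E^2 winds around the origin.
t1, t2, g1 = 1.0, 1.5, 0.0
k = np.linspace(-np.pi, np.pi, 4001)

def windings(g2):
    base = t1**2 + t2**2 - g1**2 - g2**2 + 2*(t1*t2 - g1*g2)*np.cos(k)
    out = []
    for sign in (+1, -1):
        E2 = base + sign*2j*(t1*g2 - t2*g1)*np.sin(k)
        phase = np.unwrap(np.angle(E2))
        out.append(round((phase[-1] - phase[0]) / (2*np.pi)))
    return out

for g2 in (0.3, 0.6, 1.5, 2.4, 2.7):       # critical points at 0.5 and 2.5
    wp, wm = windings(g2)
    print(g2, wp, wm, wp + wm)             # branches wind oppositely; sum is 0
```

The two branches always wind with opposite signs, so their total (the \(\mathbb{Z}\) index of Eq. (2)) vanishes, consistent with the pseudo-Hermiticity argument above; the nonzero individual windings in regime II are in line with the \(\mathbb{Z}_{2}\) classification of Eq. (5).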
Physically, such a squeezing interaction describes a non-degenerate parametric amplification process, and gives rise to entanglement between the two bosonic modes in adjacent unit cells. The introduced intercell parametric amplification in the 1D lattice thus intrinsically induces the non-Hermiticity of the system, which ultimately leads to the appearance of the point-gap topology together with the symmetry-protected \(\mathbb{Z}_{2}\) skin effect [77]. This \(\mathbb{Z}_{2}\) skin effect is extremely sensitive to local symmetry-breaking perturbations [26; 77; 89]. To show this, we introduce an onsite perturbation \(\hat{H}_{\rm onsite}=\mu\sum_{j\sigma}\hat{a}_{j\sigma}^{\dagger}\hat{a}_{j\sigma}\) to the system, which breaks the time-reversal symmetry (4). Applying the squeezing transformation (6) to the perturbation, we obtain \[\begin{split}\hat{H}_{\rm onsite}=&\mu\sum_{j=1}^{L}\left[\cosh\left(2rj\right)\left(\hat{\alpha}_{jA}^{\dagger}\hat{\alpha}_{jA}+\hat{\alpha}_{jB}^{\dagger}\hat{\alpha}_{jB}\right)\right.\\ &\left.+\frac{\sinh\left(2rj\right)}{2}\left(\hat{\alpha}_{jA}^{\dagger}\hat{\alpha}_{jA}^{\dagger}-\hat{\alpha}_{jB}^{\dagger}\hat{\alpha}_{jB}^{\dagger}+\text{H.c.}\right)\right].\end{split} \tag{7}\] The impact of Eq. (7) on the unperturbed Hamiltonian is qualitatively determined by the scaling [26; 69; 92] \[\mu/t_{1}\sim e^{-|r|L}. \tag{8}\] This implies that even an infinitesimal perturbation can change the physics of the system in the continuum limit. Such an instability arises from the breakdown of the time-reversal symmetry. More precisely, the anomalous squeezing in (7) dramatically alters the spectrum by coupling the Kramers pairs localized at the opposite ends of the chain [see Fig. 2(b)]. As shown in Fig. 2(c), the instability of the spectrum occurring at \(\mu/t_{1}\sim 10^{-8}\) confirms our analysis. Figure 2: (a) Spectrum (black) of the squeezed SSH model under the PBC and the corresponding continuum bands (red) after the mapping (6). (b) Amplitudes of the Kramers pair with the lowest energy (blue and red bars) for both the particles and holes in the OBC. The localization of the two degenerate states manifests the \(Z_{2}\) skin effect. (c) Spectra of the perturbed model in the OBC with varying the chemical potential \(\mu=(0,10^{-8},10^{-7},10^{-6},10^{-5},10^{-4})t_{1}\). The red diamond mark denotes the Kramers pair in (b). Parameters: \(t_{2}=1.5t_{1}\), \(g_{1}=0\), \(g_{2}=0.6t_{1}\) and \(L=40\). _Real- and point-gap coexisting topological phase._--The parameter regime of the \(\mathbb{Z}_{2}\) skin effect actually corresponds to a real- and point-gap coexisting topological phase due to the interplay between the squeezing and particle-exchange coupling. Such a phase is unconventional since the real gap is closed in the PBC while the zero mode survives in the OBC, which indicates an anomalous bulk-boundary correspondence. Meanwhile, the point-gap topology is also nontrivial. To show this, in Fig. 3, we plot the phase diagram for the real-gap topology by calculating the winding number in the PBC and the zero modes in the OBC. Firstly, the real gap \(\mathrm{Re}E=0\) opens in the PBC if \(|g_{2}|<|t_{2}-t_{1}|\) holds, as shown in Fig. 1(b). Because of the sublattice symmetry \(\mathcal{S}\), the real-gap topology can be characterized by the winding number \(W^{(\mathrm{real})}=(1/2\pi i)\int_{\mathrm{BZ}}q^{-1}dq\) with \(q=t_{1}+t_{2}e^{ik}\) [77]. 
This winding number is nontrivial for \(t_{1}+|g_{2}|<t_{2}\), corresponding to the yellow area of Fig. 3. The bulk-boundary correspondence ensures the emergence of zero modes in the bulk gap. In the representation of the canonical coordinates and momenta \(\hat{x}_{j\sigma}=(\hat{a}_{j\sigma}+\hat{a}_{j\sigma}^{\dagger})/\sqrt{2}\) and \(\hat{p}_{j\sigma}=(\hat{a}_{j\sigma}-\hat{a}_{j\sigma}^{\dagger})/\sqrt{2}\), two pairs of zero modes \(\hat{x}_{\mathrm{L}}^{-s}=\sum_{j=1}^{L}\delta_{-s}^{j-1}\hat{x}_{jA}\), \(\hat{x}_{\mathrm{R}}^{-s}=\sum_{j=1}^{L}\delta_{-s}^{L-j}\hat{x}_{jB}\) and \(\hat{p}_{\mathrm{L}}^{s}=\sum_{j=1}^{L}\delta_{s}^{j-1}\hat{p}_{jA}\), \(\hat{p}_{\mathrm{R}}^{s}=\sum_{j=1}^{L}\delta_{s}^{L-j}\hat{p}_{jB}\) with \(\delta_{\pm s}=-t_{1}/(t_{2}\pm s|g_{2}|)\) and \(s=\mathrm{sgn}(g_{2})=\pm\) (\(|\delta_{\pm}|<1\)) appear in the OBC [77]. Here the subscripts L and R denote the left and right edges of the 1D QBS, respectively. \([\hat{x}_{\mathrm{L}}^{-s},\hat{p}_{\mathrm{L}}^{s}]=[\hat{x}_{\mathrm{R}}^{-s},\hat{p}_{\mathrm{R}}^{s}]=i(1-\delta^{2L})/(1-\delta^{2})\) (\(\delta=-t_{1}/\tilde{t}_{2}\)) implies that \(\hat{x}_{\mathrm{L/R}}^{-s}\) and \(\hat{p}_{\mathrm{L/R}}^{s}\) are canonically conjugate with each other. As \(g_{2}\) increases, the real gap closes at \(t_{1}+|g_{2}|=t_{2}\), while a pair of zero modes \((\hat{x}_{\mathrm{L}}^{+},\hat{x}_{\mathrm{R}}^{+})\) or \((\hat{p}_{\mathrm{L}}^{+},\hat{p}_{\mathrm{R}}^{+})\) can survive. This means that the conventional bulk-boundary correspondence based on \(W^{(\mathrm{real})}\) is no longer valid. To reconstruct it, we impose the continuum limit on the mapped Hamiltonian \(\hat{H}_{\mathrm{SSH}}\), and find that the real gap persists in the region \(|g_{2}|<t_{2}\) under the PBC. Furthermore, the reconstructed winding number \(\tilde{W}^{(\mathrm{real})}\) [77] indicates the new nontrivial phase (i.e., \(\sqrt{t_{1}^{2}+g_{2}^{2}}<t_{2}\)), corresponding to the yellow and green areas of Fig. 3. In terms of \(E=0\), the defined \(\nu\) is nontrivial in the green area, which indicates a real- and point-gap coexisting topological phase. Correspondingly, the symmetry-protected \(\mathbb{Z}_{2}\) skin effect appears and greatly inhibits the occurrence of a pair of zero modes, either \((\hat{x}_{\mathrm{L}}^{-},\hat{x}_{\mathrm{R}}^{-})\) for \(g_{2}>0\) or \((\hat{p}_{\mathrm{L}}^{-},\hat{p}_{\mathrm{R}}^{-})\) for \(g_{2}<0\). This inhibition originates from the localization competition between the skin effect and the zero modes of the conventional SSH model [77]. Meanwhile, another pair of zero modes, \((\hat{p}_{\mathrm{L}}^{+},\hat{p}_{\mathrm{R}}^{+})\) or \((\hat{x}_{\mathrm{L}}^{+},\hat{x}_{\mathrm{R}}^{+})\), survives, and they are extremely sensitive to the local perturbation (7). The scaling of \(\mu\) can be heuristically estimated by \(\mu/t_{1}\sim\xi^{-L}\) with \(\xi=e^{|r|}|\delta|^{1/2}\). Figure 2(c) shows that the zero modes \((\hat{p}_{\mathrm{L}}^{+},\hat{p}_{\mathrm{R}}^{+})\) almost disappear at \(\mu/t_{1}\sim 3\times 10^{-5}\), which is consistent with this critical scaling. As \(g_{2}\) increases further, the imaginary gap opens and the associated topology becomes nontrivial in regime III of Figs. 1(b,e), corresponding to the dark gray areas of Fig. 3. Moreover, the phase diagram can be enriched further when the intracell squeezing is introduced, i.e., \(g_{1}\neq 0\) [77]. 
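Both the mapping (6) and the perturbation scaling (8) lend themselves to a quick numerical check. The sketch below builds the real-space dynamical matrix \(\tau^{3}H\) for the squeezed SSH chain under the assumption of the standard quadratic-boson block form \(H=\begin{pmatrix}A&B\\ B^{*}&A^{*}\end{pmatrix}\) (hopping block \(A\), symmetric pairing block \(B\)); this construction and the diagnostics printed are an illustration, not the authors' computation:

```python
import numpy as np

# Real-space dynamical matrix M = tau^3 H of the squeezed SSH chain (OBC),
# basis (a_{1A}, a_{1B}, ..., a_{LB}, a^dag_{1A}, ..., a^dag_{LB}).
t1, t2, g1, g2, L = 1.0, 1.5, 0.0, 0.6, 40
n = 2 * L
A = np.zeros((n, n))                        # particle-conserving hopping
B = np.zeros((n, n))                        # symmetric squeezing (pairing) block
for j in range(L):
    a, b = 2*j, 2*j + 1                     # sites jA, jB
    A[a, b] = A[b, a] = t1
    B[a, b] = B[b, a] = g1
    if j < L - 1:
        A[a + 2, b] = A[b, a + 2] = t2
        B[a + 2, b] = B[b, a + 2] = g2

def dyn(A, B):
    tau3 = np.diag([1.0] * n + [-1.0] * n)
    return tau3 @ np.block([[A, B], [B, A]])    # real parameters: B* = B, A* = A

ev, V = np.linalg.eig(dyn(A, B))

# Check of the mapping (6): for g1 = 0, t2 > |g2| the OBC spectrum should equal
# that of an ordinary SSH chain with t2 -> sqrt(t2^2 - g2^2), doubled (the SSH
# spectrum is symmetric under E -> -E, so the +-E copies coincide as a multiset).
ssh = A.copy()
t2_eff = np.sqrt(t2**2 - g2**2)
for j in range(L - 1):
    ssh[2*j + 2, 2*j + 1] = ssh[2*j + 1, 2*j + 2] = t2_eff
e_ssh = np.linalg.eigvalsh(ssh)
print(np.allclose(np.sort(ev.real), np.sort(np.r_[e_ssh, e_ssh]), atol=1e-6),
      np.max(np.abs(ev.imag)) < 1e-6)

# Edge localization of the near-zero pair, cf. Fig. 2(b): particle weight.
psi = np.abs(V[:n, np.argmin(np.abs(ev))]) ** 2
print(psi[:2].sum(), psi[n//2 - 1:n//2 + 1].sum(), psi[-2:].sum())

# Sensitivity to the symmetry-breaking onsite term of Eq. (7)-(8).
r = np.arctanh(g2 / t2)                     # |r|, tanh r = -g2/t2 up to sign
print("e^{-|r|L} =", np.exp(-abs(r) * L))   # ~ 4e-8 for these parameters
for mu in (0.0, 1e-9, 1e-7, 1e-5):
    evm = np.linalg.eigvals(dyn(A + mu*np.eye(n), B))
    print(mu, np.max(np.abs(evm.imag)))     # one rough probe of the instability
```

The onsite probe is deliberately crude: it only tracks how far the OBC spectrum moves off the real axis as \(\mu\) crosses the \(e^{-|r|L}\) scale quoted above.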
_Detection of the coexisting topological phase together with the \(\mathbb{Z}_{2}\) skin effect._--For detection, we calculate the normalized power spectral density \(S_{\rho_{j\sigma}\rho_{j\sigma}}(\omega)=\int\mathrm{d}\tau\langle\hat{\rho}_{j\sigma}(\tau)\hat{\rho}_{j\sigma}(0)\rangle_{\mathrm{ss}}e^{i\omega\tau}/\langle\hat{\rho}_{j\sigma}(0)\hat{\rho}_{j\sigma}(0)\rangle_{\mathrm{ss}}\) in the presence of decay with rate \(\gamma\) [77]. Here \(\langle\cdot\rangle_{\mathrm{ss}}\) denotes a steady-state expectation value and \(\hat{\rho}_{j\sigma}=\hat{x}_{j\sigma},\hat{p}_{j\sigma}\). Normally, any zero mode corresponds to a peak of \(|S_{\rho_{j\sigma}\rho_{j\sigma}}(0)|\) at the edge sites. Focusing on the first site 1A, the zero modes \(\hat{x}_{\mathrm{L}}^{-s}\) and \(\hat{p}_{\mathrm{L}}^{s}\) correspond to the peaks of \(|S_{x_{1A}x_{1A}}(0)|\) and \(|S_{p_{1A}p_{1A}}(0)|\), respectively. Then double peaks at zero frequency in Fig. 4(a) indicate the real-gap topological phase (yellow area of Fig. 3), and one peak in Fig. 4(b) indicates the real- and point-gap coexisting topological phase (green area in Fig. 3). Moreover, the peaks of \(|S_{x_{1A}x_{1A}}(\pm t_{1})|\) in Fig. 4(b) also manifest the skin effect, which is algebraically divergent with \(L\) [77; 64]. The above signature for detecting the coexisting topological phase (i.e., the zero-frequency dip) will be destroyed by the dissipation or perturbation of the system. Figures 4(c,d) show that the dip of \(|S_{x_{1A}x_{1A}}(0)|\) disappears at the critical point \(\gamma_{c}\equiv\sqrt{g_{2}^{2}-(t_{1}-t_{2})^{2}}\) [77]. Physically, the presence of dissipation moves the effective spectrum to the lower half plane, and the reference frequency \(\omega\) would go out of the loop as \(\gamma\) increases. Figure 4(e) demonstrates that the zero-frequency dip vanishes at the scaling \(\mu/t_{1}\sim\xi^{-L}\), since the perturbation breaks the time-reversal symmetry. _Conclusion._--We have shown the squeezing-induced point-gap topology together with the \(\mathbb{Z}_{2}\) skin effect in the QBS, when the time-reversal symmetry is introduced. The interplay of the bosonic squeezing and particle-exchange coupling results in the survival of zero modes in the OBC even if a real gap closes in the PBC. This exhibits an anomalous bulk-boundary correspondence. Our work enriches non-Bloch topological physics in the QBS by predicting the real- and point-gap coexisting topological phase. This may stimulate future studies of symmetry-enriched topological physics in higher-dimensional systems. Our work also provides a perfect example of the combination of non-linearity and non-Hermiticity with topology, and it will inspire experimental activity in the field of nonlinear topological photonics [93]. L.-L.W. is very thankful to Dr. Zixian Zhou for his fruitful discussions. This work is supported by the National Key Research and Development Program of China (Grant No. 2021YFA1400700), the National Natural Science Foundation of China (Grants No. 11974125, No. 12205109, No. 12147143).
2306.12478
A Precise Electron EDM Constraint on CP-odd Heavy-Quark Yukawas
CP-odd Higgs couplings to bottom and charm quarks arise in many extensions of the standard model and are of potential interest for electroweak baryogenesis. These couplings induce a contribution to the electron EDM. The experimental limit on the latter then leads to a strong bound on the CP-odd Higgs couplings. We point out that this bound receives large QCD corrections, even though it arises from a leptonic observable. We calculate the contribution of CP-odd Higgs couplings to the bottom and charm quarks in renormalisation-group improved perturbation theory at next-to-leading order in the strong interaction, thereby reducing the uncertainty to a few percent.
Joachim Brod, Zachary Polonsky, Emmanuel Stamou
2023-06-21T18:00:02Z
http://arxiv.org/abs/2306.12478v2
# A Precise Electron EDM Constraint on CP-odd Heavy-Quark Yukawas ###### Abstract CP-odd Higgs couplings to bottom and charm quarks arise in many extensions of the standard model and are of potential interest for electroweak baryogenesis. These couplings induce a contribution to the electron EDM. The experimental limit on the latter then leads to a strong bound on the CP-odd Higgs couplings. We point out that this bound receives large QCD corrections, even though it arises from a leptonic observable. We calculate the contribution of CP-odd Higgs couplings to the bottom and charm quarks in renormalisation-group improved perturbation theory at next-to-leading order in the strong interaction, thereby reducing the uncertainty to a few percent. DO-TH 23/07 ## 1 Introduction The precise determination of the Yukawa couplings of the Higgs boson to all fermions has been a focus of particle physics since the discovery of the Higgs boson in 2012 [1; 2]. In the Standard Model (SM) all Yukawa couplings are aligned with the fermion masses and thus real, but multiple extensions of the SM induce non-trivial phases. This is of particular interest as these phases (mainly in the Yukawa couplings to the third fermion generation) play a leading role in models of electroweak baryogenesis [3]. In this article, we focus on the bottom- and charm-quark Yukawa couplings. It is well-known that the present bounds on the electric dipole moment (EDM) of the electron place strong constraints on CP-violating phases in the quark Yukawa couplings [4; 5; 6; 7; 8]. However, it is less well-appreciated (and maybe somewhat surprising) that the heavy-quark contributions to the electron EDM receive large QCD corrections, leading to a large implicit uncertainty in the current constraints. In this section, we briefly review the current situation. In the remainder of this article, we calculate the leading logarithmic (LL) and next-to-leading logarithmic (NLL) QCD corrections, thereby reducing the presently \({\cal O}(1)\) uncertainty to a few percent. For the purpose of this work, we assume a modification of the SM heavy-quark Yukawa couplings of the form1 Footnote 1: This parameterisation of pseudoscalar Higgs couplings should be thought of as either the dimension-four part of the so-called Higgs Effective Field Theory [9], the electroweak chiral Lagrangian [10] in _unitarity gauge_ for the electroweak sector, or as the leading term arising from the dimension-six SMEFT operators of the form \(H^{\dagger}H\overline{Q}_{L,i}Hd_{R,i}\) and \(H^{\dagger}H\overline{Q}_{L,i}\tilde{H}u_{R,i}\), where \(H\) denotes the Higgs doublet in the unbroken phase of electroweak gauge symmetry, while \(Q_{L,i}\) and \(d_{R,i}/u_{R,i}\) with \(i=2,3\) represent the left-handed quark doublet and the right-handed quark fields of the second or third generation, respectively. For further details see also the discussion in Ref. [8]. \[{\cal L}_{hq_{h}q_{h}}=-\frac{y_{q_{h}}^{\rm SM}}{\sqrt{2}}\kappa_{q_{h}}\bar{q}_{h}\left(\cos\phi_{q_{h}}+i\gamma_{5}\sin\phi_{q_{h}}\right)q_{h}\,h\,. \tag{1}\] Here, \(q_{h}=b,c\) denotes the bottom- or charm-quark field and \(h\) the physical Higgs field. Moreover, \(y_{q_{h}}^{\rm SM}\equiv m_{q_{h}}e/(\sqrt{2}s_{w}M_{W})\) is the SM Yukawa, with \(e\) the positron charge, \(s_{w}\) the sine of the weak mixing angle, and \(m_{q_{h}}\) and \(M_{W}\) the heavy-quark and \(W\)-boson masses, respectively. 
The real parameter \(\kappa_{q_{h}}\geq 0\) parameterises modifications to the absolute value of the Yukawa coupling, while the phase \(\phi_{q_{h}}\in[0,2\pi)\) parameterises CP violation. The SM corresponds to the values \(\kappa_{q_{h}}=1\) and \(\phi_{q_{h}}=0\). Virtual heavy quarks with CP-odd Higgs couplings induce an electron EDM \(d_{e}\), defined by \[{\cal L}_{\rm eff}=-d_{e}\,\frac{i}{2}\,\bar{e}\sigma^{\mu\nu}\gamma_{5}e\,F_{\mu\nu}\,, \tag{2}\] with \(\sigma^{\mu\nu}\equiv\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]\), via the well-known Barr-Zee diagrams [11]. Calculating the two-loop Barr-Zee diagrams (see Figure 1) keeping the non-zero heavy-quark mass and expanding the loop functions for small quark mass and small external momenta leads to [4] \[d_{e}\simeq-12eQ_{e}Q_{q_{h}}^{2}\,\frac{\alpha_{e}}{(4\pi)^{3}}\sqrt{2}G_{F}m_{e}\,\kappa_{q_{h}}\sin\phi_{q_{h}}\,x_{q_{h}}\left(\log^{2}x_{q_{h}}+\frac{\pi^{2}}{3}\right)\,, \tag{3}\] up to higher orders in the ratio \(x_{q_{h}}\equiv m_{q_{h}}^{2}/M_{h}^{2}\) (with \(M_{h}\sim 125\) GeV the Higgs-boson mass). Here, \(\alpha_{e}\) is the fine-structure constant, \(Q_{q_{h}}\) is the charge of the heavy quark with \(Q_{b}=-1/3\) and \(Q_{c}=+2/3\), and \(Q_{e}=-1\) is the electron charge. Barr-Zee diagrams with an internal \(Z\) boson also affect the electron EDM; however, they lead to a contribution that is suppressed with respect to those with an internal photon by the small coupling of the \(Z\) boson to electrons [4]. Using the ACME result, \(|d_{e}|<1.1\times 10^{-29}\,e\,\)cm (at 90% CL) [12], Eq. (3) implies the bounds \(\kappa_{b}|\sin\phi_{b}|\leq 0.41\) and \(\kappa_{c}|\sin\phi_{c}|\leq 1.1\) at the 90% CL for the bottom- and charm-quark case, respectively. However, to obtain these bounds, numerical values for the heavy-quark masses must be chosen. Figure 1: Barr–Zee diagram that contributes to the electron EDM from a CP-odd heavy-quark Yukawa coupling with the Higgs (red square). It is not clear _a priori_ at which scale the quark masses should be evaluated; \(\mu=M_{h}\) and \(\mu=m_{q_{h}}\) would be obvious choices. For the bounds above, we used the values \(m_{b}(M_{h})\) and \(m_{c}(M_{h})\); choosing \(m_{b}(m_{b})\) and \(m_{c}(m_{c})\) instead leads to the significantly stronger bounds \(\kappa_{b}|\sin\phi_{b}|\leq 0.23\) and \(\kappa_{c}|\sin\phi_{c}|\leq 0.35\) at the 90% CL. The differences arise from the large QCD running of the quark masses between the two scales \(\mu=M_{h}\) and \(\mu=m_{q_{h}}\). This indicates that the QCD corrections to Eq. (3) are large, even though the electron EDM is a leptonic observable. By our explicit calculation, we will show that the predicted value for \(d_{e}\) after resolving the ambiguity lies somewhere in between the values obtained by using the two scales \(M_{h}\) and \(m_{q_{h}}\). We give here a brief outline of the main ideas of the calculation. First, notice that the result in Eq. (3), which was obtained by a two-loop, fixed-order calculation, is numerically dominated by the large quadratic logarithms \(\log^{2}x_{q_{h}}\). This logarithmic contribution can be reproduced by the _one-loop_ QED renormalisation-group (RG) evolution in an appropriate effective theory (see Section 2), truncated at order \(\alpha_{e}^{2}\). The second term in Eq. (3), \(\pi^{2}/3\), has no logarithmic dependence on the mass ratio and is formally of next-to-next-to-leading-logarithmic (NNLL) order in RG-improved perturbation theory. 
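The numbers quoted above follow from Eq. (3) by direct evaluation. A short numerical sketch, with standard reference inputs inserted for illustration (\(\alpha_e=1/137.036\), PDG-like masses); small differences from the quoted 0.41/0.23 and 1.1/0.35 reflect these input choices rather than the formula:

```python
import numpy as np

# Bound on kappa_q * |sin phi_q| from Eq. (3) and |d_e| < 1.1e-29 e cm (ACME).
alpha_e = 1.0 / 137.036       # fine-structure constant (assumed input)
G_F     = 1.1664e-5           # Fermi constant, GeV^-2
m_e     = 0.511e-3            # electron mass, GeV
M_h     = 125.25              # Higgs mass, GeV
hbarc   = 1.9733e-14          # GeV * cm, converts GeV^-1 -> cm
d_limit = 1.1e-29             # cm, 90% CL limit on |d_e|/e
Q_e     = -1.0

def bound(m_q, Q_q):
    x = (m_q / M_h) ** 2
    de = abs(-12 * Q_e * Q_q**2 * alpha_e / (4*np.pi)**3
             * np.sqrt(2) * G_F * m_e * x
             * (np.log(x)**2 + np.pi**2 / 3)) * hbarc
    return d_limit / de       # bound on kappa * |sin phi|

print("b, m_b(M_h)=2.79 GeV:", bound(2.79, -1/3))   # ~0.4  (quoted: 0.41)
print("b, m_b(m_b)=4.18 GeV:", bound(4.18, -1/3))   # ~0.25 (quoted: 0.23)
print("c, m_c(M_h)=0.62 GeV:", bound(0.62,  2/3))   # ~1.2  (quoted: 1.1)
print("c, m_c(m_c)=1.27 GeV:", bound(1.27,  2/3))   # ~0.37 (quoted: 0.35)
```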
Numerically, this term gives a \(\sim 6\%\) correction to the logarithmic contribution to \(d_{e}/e\) in the bottom case and a \(\sim 3.5\%\) correction in the charm case. On the other hand, we have seen above that the choice of different renormalisation scales for the quark masses represents a much larger uncertainty, of the order of \(100\%\). We thus conclude that the QCD corrections to the Barr-Zee diagrams are large, and we expect that these corrections are dominated by leading QCD logarithms, since the product \(\alpha_{s}\log x_{q_{h}}\) is large (\(\alpha_{s}\) denotes the strong coupling constant). These logarithms can be reliably calculated using RG-improved perturbation theory. The result with resummed leading QCD logarithms has the schematic form \(\alpha_{e}^{2}\log^{2}x_{q_{h}}(1+\alpha_{s}\log x_{q_{h}}+\alpha_{s}^{2}\log^{2}x_{q_{h}}+\ldots)\). Here, the first term reflects the quadratic logarithm in Eq. (3), while all other terms correspond to the leading logarithms of the diagrams obtained by dressing the Barr-Zee diagrams, Figure 1, with an arbitrary number of gluons. It is, however, well-known that one can only consistently fix the scale and scheme dependence of the input parameters by going beyond the LL approximation. Hence, we will also perform the NLL calculation, reducing the uncertainty to the percent level. This corresponds roughly to the size of the correction that we have estimated above using the \(\pi^{2}/3\) term in the fixed-order result, which can be viewed as part of the NNLL result in RG-improved perturbation theory. This term is thus not included in our NLL calculation. Our calculation will show that the QCD perturbation series converges well, as might be expected for a leptonic observable. In addition, we emphasize that this is a complete calculation - no further hadronic input (such as the lattice matrix elements required for hadronic EDMs) is needed. The final result for \(d_{e}/e\) lies between the two values obtained from the naive computation with the heavy-quark masses evaluated at the two different scales. All these results are illustrated in Figure 4 of Section 4. This paper is organised as follows: in Section 2, we introduce the effective theories used in the computation. In Section 3, we present the detailed results of our calculation, namely, the initial conditions at the electroweak scale, the calculation of the anomalous dimensions, and the threshold corrections at the heavy-quark thresholds. We also show the final analytical results in a compact form. In Section 4, we present the numerical results of the calculation including updated bounds on the CP-odd heavy-quark Yukawa couplings. We conclude in Section 5. Additional information is presented in two appendices: In App. A, we collect the unphysical operators used in the computation of the anomalous-dimension matrix and in App. B, we show the results for the renormalisation constants entering the calculation. ## 2 Effective Theories Below the Weak Scale A precise determination of the electron EDM in the presence of CP-odd Higgs couplings to the bottom and charm quarks requires an effective field theory that allows one to sum large QCD logarithms to all orders in the strong coupling constant and includes the effect of the combined \(\alpha_{s}\) and \(\alpha_{e}\) RG evolution. Based on the "full" Lagrangian in Eq. 
(1), we construct the effective Lagrangian below the electroweak scale, \(\mu_{\rm{ew}}\), by integrating out the heavy degrees of freedom of the SM (the top quark, the weak gauge bosons, and the Higgs). EDMs are then induced by non-renormalisable operators that are CP odd. In the current work, we focus on the part of the effective Lagrangian that is relevant for predicting the electron EDM in the presence of CP-odd Yukawa couplings to the bottom and charm quarks. In this case, the relevant effective Lagrangian reads \[\mathscr{L}_{{\rm eff},q_{h}}=-\sqrt{2}G_{\rm F}\left(C_{1}^{eq_{h}}O_{1}^{eq_{h}}+C_{1}^{q_{h}e}O_{1}^{q_{h}e}+C_{2}^{eq_{h}}O_{2}^{eq_{h}}+C_{3}^{e}O_{3}^{e}\right)+\ldots\,, \tag{4}\] where the four linearly independent operators are \[O_{1}^{eq_{h}} =\left(\bar{e}e\right)\left(\bar{q}_{h}\,i\gamma_{5}q_{h}\right), O_{1}^{q_{h}e} =\left(\bar{q}_{h}q_{h}\right)\left(\bar{e}\,i\gamma_{5}e\right),\] \[O_{2}^{eq_{h}} =\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}(\bar{e}\sigma_{\mu\nu}e)\left(\bar{q}_{h}\,\sigma_{\rho\sigma}q_{h}\right), O_{3}^{e} =\frac{Q_{e}}{2}\frac{m_{q_{h}}}{e}\left(\bar{e}\sigma^{\mu\nu}e\right)\tilde{F}_{\mu\nu}\,, \tag{5}\] and \(C_{1}^{eq_{h}}\), \(C_{1}^{q_{h}e}\), \(C_{2}^{eq_{h}}\), and \(C_{3}^{e}\) are the corresponding Wilson coefficients. Additional CP-odd operators that are suppressed by additional factors of \(m_{\rm{light}}/m_{q_{h}}\) (where \(m_{\rm{light}}\) corresponds to a light quark mass) are denoted by the ellipsis. We defined the electron dipole operator with a factor of the running quark mass \(m_{q_{h}}\equiv m_{q_{h}}(\mu)\), to avoid awkward ratios of quark and lepton masses in the anomalous dimensions. Throughout this work the \(\gamma_{5}\) matrix is defined by \[\gamma_{5}\equiv\frac{i}{4!}\epsilon_{\mu\nu\rho\sigma}\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma}\,, \tag{6}\] where \(\epsilon^{\mu\nu\rho\sigma}\) is the totally antisymmetric Levi-Civita tensor in four space-time dimensions with \(\epsilon_{0123}=-\epsilon^{0123}=1\), and we use the notation \(\widetilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}\). We treat \(\gamma_{5}\) within the "Larin" scheme; for the details and subtleties we refer to Ref. [6]. The non-standard sign convention for \(O_{3}^{e}\) is related to our definition of the covariant derivative acting on fermion fields \(f\), \[D_{\mu}\equiv\partial_{\mu}-ig_{s}T^{a}G_{\mu}^{a}+ieQ_{f}A_{\mu}\,. \tag{7}\] The basis of all flavour-diagonal, CP-odd operators is closed under the QED and QCD RG flow, as both interactions conserve CP and flavour. Below the electroweak scale, there is a tower of effective theories relevant for predicting the electron EDM. For the bottom case, \(q_{h}=b\), we employ the effective Lagrangian in Eq. (4) for the five-flavour theory, while for the charm case, \(q_{h}=c\), we employ Eq. (4) for both the five-flavour and the four-flavour theory (see the discussion in Section 3.3 for the threshold corrections to couplings and Wilson coefficients at the bottom-quark scale). The effective theory below the heavy-quark scale, \(\mu_{q_{h}}\), does not contain four-fermion operators with heavy quarks, and we use the modified effective Lagrangian \[\widetilde{\mathscr{L}}_{{\rm eff},q_{h}}=-\sqrt{2}G_{\rm F}\widetilde{C}_{3}^{e}\widetilde{O}_{3}^{e}+\ldots\,, \tag{8}\] in which - as opposed to Eq. 
(4) - the dipole operator is defined with the conventional factor \(m_{e}\): \[\widetilde{O}_{3}^{e}=\frac{Q_{e}}{2}\frac{m_{e}}{e}\left(\bar{e} \sigma^{\mu\nu}e\right)\tilde{F}_{\mu\nu}\,. \tag{9}\] Note that for \(q_{h}=b\) the Lagrangian in Eq. (8) refers to the four-flavour Lagrangian, while for \(q_{h}=c\) to the three-flavour one. This definition of \(\widetilde{\mathscr{L}}_{\text{eff},q_{h}}\) implies that (cf. Eq. (2)) \[\frac{d_{e}}{e}=-\sqrt{2}G_{\text{F}}\frac{m_{e}}{4\pi\alpha_{e}}\widetilde{C} _{3}^{e}\,. \tag{10}\] In the next section, we describe how the RG evolution within the effective field theories relates \(d_{e}\) to the parameters \(\kappa_{q_{h}}\) and \(\phi_{q_{h}}\) of the "full" theory in Eq. (1). ## 3 Renormalisation Group Evolution Our goal is the summation of all leading and next-to-leading logarithms via RG-improved perturbation theory. The calculation proceeds in the following steps. First, we integrate out the Higgs and weak gauge bosons together with the top quark at the electroweak scale, \(\mu_{\text{ew}}\sim M_{h}\). This matching calculation at \(\mu_{\text{ew}}\) induces the initial conditions for the five-flavour Wilson coefficients appearing in Eq. (4). We collect them in Section 3.1. Subsequently, we perform the RG evolution from \(\mu_{\text{ew}}\) down to the bottom-quark threshold, \(\mu_{b}\sim m_{b}(m_{b})\). The anomalous dimensions relevant for the mixed QCD-QED RG evolution at NLL accuracy are computed here for the first time, see Section 3.2. The next step depends on whether \(q_{h}=b\) or \(q_{h}=c\). For the bottom case, \(q_{h}=b\), we match directly to the four-flavour version of the Lagrangian in Eq. (8) to obtain the prediction of the electron EDM. The relevant threshold corrections at \(\mu_{b}\) are discussed in Section 3.3. For the charm case, \(q_{h}=c\), we must instead match at \(\mu_{b}\) to the four-flavour version of the Lagrangian with four-fermion operators in Eq. (4) and additionally perform the RG evolution in the four-flavour theory from \(\mu_{b}\) down to \(\mu_{c}\sim m_{c}(m_{c})\). The corresponding anomalous dimensions are also given in section 3.2. Finally, we match to the three-flavour version of the Lagrangian in Eq. (8) to obtain the prediction of the electron EDM. The calculations of the amplitudes relevant for computing the initial conditions of Wilson coefficients and the hitherto unknown anomalous dimensions have been performed with self-written FORM [13] routines, i.e., MaRT.In [14], implementing the two-loop recursion algorithms presented in Refs. [15; 16]. The amplitudes were generated using QGRAF [17]. The RG evolution between the different matching scales is significantly more involved than in applications in which the electromagnetic coupling, \(\alpha_{e}\), can be neglected. The reason is that the leading contribution to the electron EDM contains a term with two powers of the large logarithm, i.e, \(\log^{2}x_{qh}\), see Eq. (3). Within the effective theory, this term is obtained by a LL QED calculation, truncated at order \(\alpha_{e}^{2}\). However, the numerically relevant corrections to this result are not further electromagnetic \(\alpha_{e}^{n}\log^{n}\) corrections, but large, logarithmically enhanced QCD corrections which must be summed to all orders to obtain an accurate prediction. Therefore, to properly account for the numerically relevant corrections we must consistently solve the mixed QCD-QED RG equations. 
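The chain of evolutions just described is controlled, at leading order, by two ratios of strong couplings at the matching scales (denoted \(\eta_5\) and \(\eta_4\) in Section 3.4 below). A one-loop sketch of these inputs, with \(\alpha_s(M_Z)=0.1179\) assumed and trivial one-loop decoupling at \(\mu_b\):

```python
import numpy as np

# One-loop alpha_s in the nf-flavour theory, and the ratios eta_5, eta_4
# entering the resummed results of Section 3.4.
def alpha_s(mu, mu0, a0, nf):
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 1.0 / (1.0/a0 + beta0/(2.0*np.pi) * np.log(mu/mu0))

MZ, mu_ew, mu_b, mu_c = 91.19, 125.25, 4.18, 1.27
a5_ew = alpha_s(mu_ew, MZ, 0.1179, 5)
a5_b  = alpha_s(mu_b,  MZ, 0.1179, 5)
a4_b  = a5_b                          # alpha_s decoupling is trivial at one loop
a4_c  = alpha_s(mu_c, mu_b, a4_b, 4)
eta5, eta4 = a5_ew / a5_b, a4_b / a4_c
print(eta5, eta4)                     # ~0.53 and ~0.67 with these inputs

# The same ratio drives the LL quark-mass running behind the scale ambiguity
# of the introduction: m(mu) ~ m(mu0) * eta^(gamma0/(2 beta0)), gamma0 = 6 C_F = 8,
# giving the exponent 12/23 for nf = 5 (cf. the eta_5^(-12/23) terms later on).
print("m_b(M_h) at LL:", 4.18 * eta5**(12/23))   # ~3.0 GeV (NLO running: ~2.8)
```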
This can be achieved consistently using the general formalism developed in Ref. [18]. The main idea is that, since \(\alpha_{s}\log x_{q_{h}}\) is \(\mathcal{O}(1)\), such products of QCD coupling times large logarithms must be resummed for an accurate prediction. By contrast, the product \(\alpha_{e}\log x_{q_{h}}\) is small and does not require resummation. The large logarithms appearing in \(\alpha_{e}\log x_{q_{h}}\) are expressed in terms of the resummed \(\alpha_{s}\log x_{q_{h}}\), i.e. \(\alpha_{e}\log x_{q_{h}}=(\alpha_{e}/\alpha_{s})\times\alpha_{s}\log x_{q_{h}}\). As a consequence, the conventional expansion in terms of \(\alpha_{s}\) and \(\alpha_{e}\) is replaced by an expansion in \(\alpha_{s}\) and \(\kappa\equiv\alpha_{e}/\alpha_{s}\). Therefore, in this setup the Wilson coefficients at some scale \(\mu\) have the expansion \[C_{i}(\mu)=\sum_{n,m=0}^{\infty}\tilde{\alpha}_{s}(\mu)^{n}\kappa(\mu)^{m}\,C _{i}^{(nm)}(\mu)\,, \tag{11}\] with \(\tilde{\alpha}_{s}\equiv\alpha_{s}/4\pi\). By implementing the mixed RG evolution we will compute the \(C_{i}^{(nm)}(\mu)\) coefficients relevant for the electron EDM; for details we refer to Ref. [18]. The analytical results are presented in Section 3.4. ### Initial Conditions at the Weak Scale We augment the SM by flavour-conserving, CP-odd Higgs Yukawa couplings to the heavy quarks \(q_{h}=b,c\), as parameterised in Eq. (1). At a scale \(\mu_{\text{ew}}\approx M_{h}\) we integrate out the heavy degrees of freedom of the SM and match to the effective five-flavour theory relevant for the electron EDM in Eq. (4). The initial conditions for the Wilson coefficients relevant for our calculation are obtained by evaluating the tree-level and one-loop Feynman diagrams such as those shown in Figure 2; we find \[C_{1}^{\text{e}q_{h}}(\mu_{\text{ew}}) =-\kappa_{q_{h}}\sin\phi_{q_{h}}\frac{m_{e}m_{q_{h}}}{M_{h}^{2}}+ \mathcal{O}(\alpha_{s}^{2},\alpha_{s}\alpha_{e},\alpha_{e}^{2})\,, \tag{12}\] \[C_{1}^{q_{h}e}(\mu_{\text{ew}}) =0+\mathcal{O}(\alpha_{s}^{2},\alpha_{s}\alpha_{e},\alpha_{e}^{2} )\,,\] (13) \[C_{2}^{\text{e}q_{h}}(\mu_{\text{ew}}) =\frac{\alpha_{e}}{4\pi}\left(\frac{3}{2}+\log\frac{\mu_{\text{ew }}^{2}}{M_{h}^{2}}\right)\frac{m_{e}m_{q_{h}}}{M_{h}^{2}}Q_{e}Q_{q_{h}}\kappa_ {q_{h}}\sin\phi_{q_{h}}+\mathcal{O}(\alpha_{s}^{2},\alpha_{s}\alpha_{e}, \alpha_{e}^{2})\,,\] (14) \[C_{3}^{\text{e}}(\mu_{\text{ew}}) =0+\mathcal{O}(\alpha_{e}^{2})\,. \tag{15}\] All parameters appearing above correspond to parameters in the five-flavour theory evaluated at the scale \(\mu_{\text{ew}}\), i.e., \(m_{q_{h}}=m_{q_{h},\text{5fl}}(\mu_{\text{ew}})\) and \(\alpha_{e}=\alpha_{e,\text{5fl}}(\mu_{\text{ew}})\). We treat the heavy quarks and the electron as massless at the electroweak matching scale; therefore, no powers of \(m_{e}/M_{h}\) or \(m_{q_{h}}/M_{h}\) appear in the results. The explicit factors of the electron and heavy-quark masses arise by expressing the Yukawa couplings in Eq. (1) in terms of \(m_{e}\) and \(m_{q_{h}}\). We have included all terms up to corrections of quadratic order in the strong and electromagnetic coupling constants, as only these are required for our analysis. The \(\mathcal{O}(\alpha_{e})\) coefficient, Eq. (14), is well-defined only after specifying the basis of evanescent operators which can be found in App. A. In Section 3.4, we will use the framework of Ref. [18] to solve the mixed QCD-QED RG as an expansion in \(\tilde{\alpha}_{s}\) and \(\kappa\). Based on Eqs. 
(12)-(14) we find the contributing expansion coefficients in Eq. (11) \[C_{1}^{\text{e}q_{h},(00)}(\mu_{\text{ew}}) =-\kappa_{q_{h}}\sin\phi_{q_{h}}\frac{m_{e}m_{q_{h}}}{M_{h}^{2}}\,, \tag{16}\] \[C_{2}^{\text{e}q_{h},(11)}(\mu_{\text{ew}}) =\left(\frac{3}{2}+\log\frac{\mu_{\text{ew}}^{2}}{M_{h}^{2}} \right)\frac{m_{e}m_{q_{h}}}{M_{h}^{2}}Q_{e}Q_{q_{h}}\kappa_{q_{h}}\sin\phi_{q _{h}}\,.\] Figure 2: Examples of leading order (left) and next-to-leading order (right) Feynman diagrams that contribute to the initial conditions of the four-fermion sector in the EFT. CP-odd heavy-quark Yukawa couplings are indicated by red squares. ### Anomalous Dimensions To solve the RG in the effective theories below the electroweak scale we need to include the running of coupling constants, mass anomalous dimensions, and the mixing of operators. In this section, we collect the results that enter the analysis at NLL accuracy. The running of the Wilson coefficients from the electroweak matching scale down to the relevant quark scale is governed by the RG equation \[\frac{dC_{i}}{d\log\mu}=C_{j}\gamma_{ji}\,, \tag{17}\] where \(\gamma_{ji}\) are the components of the anomalous dimension matrix (ADM). We choose the ordering of Wilson coefficients as \[\vec{C}=(C_{1}^{eq_{h}},\;C_{1}^{q_{h}e},\;C_{2}^{eq_{h}},\;C_{3}^{e})^{T}\,. \tag{18}\] The ADM admits an expansion in powers of2\(\tilde{\alpha}_{s}\equiv\alpha_{s}/4\pi\) and \(\tilde{\alpha}_{e}\equiv\alpha_{e}/4\pi\), Footnote 2: Note that for the ADM we do not expand in terms of \(\tilde{\alpha}_{s}\) and \(\kappa\) as for Wilson coefficients. \[\gamma=\sum_{\begin{subarray}{c}n,m=0\\ n+m\geq 1\end{subarray}}\gamma^{(nm)}\tilde{\alpha}_{s}(\mu)^{n}\tilde{\alpha }_{e}(\mu)^{m}\,, \tag{19}\] with \(\tilde{\alpha}_{s}\), \(\tilde{\alpha}_{e}\), and \(\gamma^{(nm)}\) depending on the number of active fermion flavours in the effective theory. Using this expansion, the ADM can be organised by loop order. In general, it depends on the number of active fermion flavours and on the flavour of the heavy quark \(q_{h}\). Below we present our results entering the five- and four-flavour RG evolution required for the case \(q_{h}=b\) and \(q_{h}=c\). By explicit calculation, we find at one loop \[\gamma^{(01)}=\begin{pmatrix}-6(Q_{e}^{2}+Q_{q_{h}}^{2})&0&-2Q_{e}Q_{q_{h}}& 0\\ 0&-6(Q_{e}^{2}+Q_{q_{h}}^{2})&-2Q_{e}Q_{q_{h}}&0\\ -48Q_{q_{h}}Q_{e}&-48Q_{q_{h}}Q_{e}&2(Q_{e}^{2}+Q_{q_{h}}^{2})&-48\frac{Q_{q_{ h}}}{Q_{e}}\\ 0&0&0&10Q_{e}^{2}+6Q_{q_{h}}^{2}-2\beta_{e}^{(0)}\end{pmatrix}\,, \tag{20}\] \[\gamma^{(10)}=\begin{pmatrix}-6C_{F}&0&0&0\\ 0&-6C_{F}&0&0\\ 0&0&2C_{F}&0\\ 0&0&0&\gamma_{q_{h},s}^{(0)}\end{pmatrix}\,, \tag{21}\] with \(\beta_{e}^{(0)}=-\frac{4}{3}\Big{(}N_{c}n_{d}Q_{b}^{2}+N_{c}n_{u}Q_{c}^{2}+n_ {\ell}Q_{e}^{2}\Big{)}\) and \(\gamma_{q_{h},s}^{(0)}=6C_{F}\). Moreover, \(N_{c}=3\) is the number of quark colors, \(n_{u}\) is the number of active up-type quarks, \(n_{d}\) is the number of active down-type quarks, \(n_{\ell}\) is the number of active charged leptons and \(C_{F}\equiv(N_{c}^{2}-1)/2N_{c}=4/3\). In both the bottom and charm cases, we have \(n_{u}=2\) and \(n_{\ell}=3\). At two-loop, the pure QCD ADM is the same for the bottom- and charm-quark case and depends on the number of active quark flavours, \[\gamma^{(20)}=\begin{pmatrix}\frac{724}{9}-\frac{88n_{d}}{9}&0&0&0\\ 0&-\frac{1132}{9}+\frac{40n_{d}}{9}&0&0\\ 0&0&\frac{1964}{27}-\frac{104n_{d}}{27}&0\\ 0&0&0&\frac{1132}{9}-\frac{40n_{d}}{9}\end{pmatrix}\,. 
\tag{22}\] For the bottom case, we only need the five-flavour theory, i.e., we always fix \(n_{d}=3\). For the charm case, we must solve the RG in both five- and four-flavour theory in which case \(n_{d}=3\) and \(n_{d}=2\), respectively. For the two-loop mixed QCD-QED and the pure two-loop QED ADMs, explicit factors of the quark charges result in different results for the bottom and charm cases. For notational simplicity, we quote the ADMs for the two cases separately after having substituted the electric charges. For the bottom, we find, after fixing \(n_{d}=3\) \[\gamma^{(11)}_{[q_{h}=b]}=\begin{pmatrix}-\frac{8}{9}&0&-\frac{16}{9}&0\\ 0&-\frac{8}{9}&\frac{16}{3}&0\\ 128&-\frac{128}{3}&-\frac{152}{27}&-\frac{448}{3}\\ 0&0&0&40\end{pmatrix}\,,\quad\gamma^{(02)}_{[q_{h}=b]}=\begin{pmatrix}\frac{2420}{81}&-\frac{14}{3}&\frac{392}{81}&-8\\ -\frac{14}{3}&-\frac{7820}{81}&\frac{8}{81}&-\frac{8}{3}\\ \frac{64}{27}&\frac{3136}{27}&-\frac{17234}{243}&\frac{464}{9}\\ 0&0&0&-\frac{8704}{81}\end{pmatrix}\,. \tag{23}\] For the charm, we find, keeping the \(n_{d}\)-dependence explicit \[\gamma^{(11)}_{[q_{h}=c]}=\begin{pmatrix}-\frac{32}{9}&0&\frac{32}{9}&0\\ 0&-\frac{32}{9}&-\frac{32}{3}&0\\ -256&\frac{256}{3}&-\frac{608}{27}&\frac{896}{3}\\ 0&0&0&32+\frac{32n_{d}}{9}\end{pmatrix}\,, \tag{24}\] \[\gamma^{(02)}_{[q_{h}=c]}=\begin{pmatrix}-\frac{439}{81}+\frac{4n_{d}}{81}&-\frac{56}{81}&-\frac{688}{81}&-32\\ -\frac{56}{3}&-\frac{5879}{81}-\frac{316n_{d}}{81}&-\frac{208}{81}-\frac{8n_{d}}{81}&-\frac{32}{3}\\ -\frac{1664}{27}-\frac{64n_{d}}{27}&-\frac{5504}{27}-\frac{64n_{d}}{27}&-\frac{25661}{243}-\frac{676n_{d}}{243}&-\frac{256}{9}\\ 0&0&0&-\frac{8647}{81}-\frac{404n_{d}}{81}\end{pmatrix}\,. \tag{25}\] Figure 3: Examples of two-loop QCD (left), mixed QED–QCD (middle), and two-loop QED (right) Feynman diagrams entering the computation of the anomalous dimension matrix. Effective operator insertions are represented by the unshaded boxes. Some representative Feynman diagrams used to compute the two-loop ADMs are shown in Figure 3. The two-loop ADMs are well-defined only after specifying the basis of evanescent operators; this is done in App. A. All two-loop results in this section are new, to the best of our knowledge. The one-loop RG evolution has also been calculated in Ref. [19]. ### Threshold Corrections When performing the running of the Wilson coefficients, we must also integrate out the heavy quarks at the respective scales. This leads to threshold corrections to the gauge couplings and quark masses, as well as to the Wilson coefficients. Below we describe the origin of the threshold corrections for each case, and collect the corresponding results. Due to the mixed RG evolution, the matching of the effective theories must also be performed as an expansion in \(\tilde{\alpha}_{s}\) and \(\kappa\), cf. Eq. (11). We stress that the results presented below are only applicable for our specific case in which the only non-vanishing initial conditions are the ones in Section 3.1. Other UV completions, with more contributions to the initial conditions, can receive additional threshold corrections (for instance, if \(C_{3}^{e}(\mu_{\text{ew}})\neq 0\) at one-loop). #### The Bottom-Quark Case In the case of an anomalous bottom-quark Yukawa coupling, \(q_{h}=b\), the only relevant threshold below the weak scale is at \(\mu_{b}\), at which we match the five-flavour version of the Lagrangian in Eq. (4) to the four-flavour version of the Lagrangian in Eq. (8). 
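The statement in the introduction that the fixed-order double logarithm is pure one-loop QED mixing can be checked from \(\gamma^{(01)}\) in Eq. (20): the chain \(C_{1}^{eq_{h}}\to C_{2}^{eq_{h}}\to C_{3}^{e}\) generates \(C_{3}^{e}\propto\tilde{\alpha}_{e}^{2}\log^{2}x_{q_{h}}\). A sketch with \(\alpha_e\) frozen (an illustration of the mixing structure, not the paper's mixed QCD-QED evolution):

```python
import numpy as np
from scipy.linalg import expm

# One-loop QED running dC_i/dln(mu) = C_j gamma_ji, gamma = a_e * gamma^{(01)},
# bottom case (Q_e = -1, Q_q = -1/3), alpha_e frozen for illustration.
Qe, Qq = -1.0, -1.0/3.0
nd, nu, nl = 3, 2, 3
be0 = -4.0/3.0 * (3*nd*Qq**2 + 3*nu*(2.0/3.0)**2 + nl*Qe**2)   # beta_e^(0)
g01 = np.array([
    [-6*(Qe**2 + Qq**2), 0.0,                -2*Qe*Qq,           0.0],
    [0.0,                -6*(Qe**2 + Qq**2), -2*Qe*Qq,           0.0],
    [-48*Qq*Qe,          -48*Qq*Qe,           2*(Qe**2 + Qq**2), -48*Qq/Qe],
    [0.0,                 0.0,                0.0, 10*Qe**2 + 6*Qq**2 - 2*be0]])
a_e = (1.0 / 137.036) / (4*np.pi)            # alpha_e-tilde (assumed input)
M_h, m_b = 125.25, 4.18
t = np.log(m_b / M_h)                        # evolve from mu_ew ~ M_h down to mu_b
C = expm(a_e * g01.T * t) @ np.array([1.0, 0.0, 0.0, 0.0])   # C_1^{eb} = 1 at mu_ew
x = (m_b / M_h) ** 2
print(C[3], 12 * Qq**2 * a_e**2 * np.log(x)**2)   # full vs pure double-log piece
```

The two numbers should agree up to higher-order QED terms, reproducing the \(\log^2 x_{q_h}\) coefficient of Eq. (3); the missing large pieces are precisely the QCD logarithms that the mixed evolution resums.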
There are three effects that induce a non-trivial correction to the matching onto \(\widetilde{C}_{3}^{e}\): the threshold correction for \(\alpha_{s}\) when matching from the five-flavour onto the four-flavour theories (the corresponding one for \(\alpha_{e}\) does not contribute because in our case \(C_{3,\text{5fl}}^{e,(01)}(\mu_{b})=0\)); the different normalization of the dipole operators \(O_{3}^{e}\) and \(\widetilde{O}_{3}^{e}\) in the two theories, which leads to a factor of \(m_{b}/m_{e}\); and a threshold correction from one-loop insertions of \(O_{2}^{be}\) in the five-flavour theory. Details can be found in the analogous discussion in Ref. [6]. Taking all of these effects into account, we find for the electron dipole operator in the four-flavour theory at \(\mu_{b}\) \[\begin{split}\widetilde{C}_{3}^{e}(\mu_{b})=&\kappa_{\text{4fl}}^{2}\frac{m_{b}}{m_{e}}C_{3,\text{5fl}}^{e,(02)}(\mu_{b})\\ +&\tilde{\alpha}_{s,\text{4fl}}\kappa_{\text{4fl}}^{2}\frac{m_{b}}{m_{e}}\bigg{(}C_{3,\text{5fl}}^{e,(12)}(\mu_{b})+24\frac{Q_{b}}{Q_{e}}\log\frac{\mu_{b}^{2}}{m_{b}^{2}}C_{2,\text{5fl}}^{be,(01)}(\mu_{b})\\ &\qquad\qquad\qquad-\frac{1}{2}\gamma_{b,s}^{(0)}\log\frac{\mu_{b}^{2}}{m_{b}^{2}}C_{3,\text{5fl}}^{e,(02)}(\mu_{b})-2\delta\alpha_{s}\log\frac{\mu_{b}^{2}}{m_{b}^{2}}C_{3,\text{5fl}}^{e,(02)}(\mu_{b})\bigg{)}\end{split} \tag{26}\] \[\begin{split}=&\kappa_{\text{4fl}}^{2}\frac{m_{b}}{m_{e}}C_{3,\text{5fl}}^{e,(02)}(\mu_{b})\\ +&\tilde{\alpha}_{s,\text{4fl}}\kappa_{\text{4fl}}^{2}\frac{m_{b}}{m_{e}}\bigg{(}C_{3,\text{5fl}}^{e,(12)}(\mu_{b})+8\log\frac{\mu_{b}^{2}}{m_{b}^{2}}C_{2,\text{5fl}}^{be,(01)}(\mu_{b})-\frac{16}{3}\log\frac{\mu_{b}^{2}}{m_{b}^{2}}C_{3,\text{5fl}}^{e,(02)}(\mu_{b})\bigg{)}\,,\end{split} \tag{27}\] with \(\gamma_{b,s}^{(0)}=6C_{F}\) the leading-order QCD mass anomalous dimension, the \(\alpha_{s}\) threshold correction \(\delta\alpha_{s}=2/3\), and \(m_{b}=m_{b}(m_{b})\) in the five-flavour theory in the \(\overline{\text{MS}}\) scheme. We indicate by explicit subscripts "5fl" and "4fl" in which effective theory the various quantities are defined. The couplings in Eq. (27) are evaluated at \(\mu_{b}\), i.e., \(\tilde{\alpha}_{s,\text{4fl}}(\mu_{b})\) and \(\kappa_{\text{4fl}}(\mu_{b})\). #### The Charm-Quark Case In the case of an anomalous charm-quark Yukawa coupling, \(q_{h}=c\), there are threshold corrections both at the bottom- and the charm-quark thresholds, \(\mu_{b}\) and \(\mu_{c}\), respectively. At \(\mu_{b}\), the effective Lagrangians in the five- and four-flavour theories are the same (see Eq. (4)), and at NLL accuracy the only relevant effect is the decoupling of \(\alpha_{s}\). The only non-trivial matching condition at the bottom-quark scale then reads for the charm-quark case, \(q_{h}=c\), \[C^{e}_{3,4\text{fl}}(\mu_{b})=\kappa_{4\text{fl}}^{2}C^{e,(02)}_{3,5\text{fl}}(\mu_{b})+\tilde{\alpha}_{s,4\text{fl}}\kappa_{4\text{fl}}^{2}\bigg{(}C^{e,(12)}_{3,5\text{fl}}(\mu_{b})-2\delta\alpha_{s}\log\frac{\mu_{b}^{2}}{m_{b}^{2}}C^{e,(02)}_{3,5\text{fl}}(\mu_{b})\bigg{)}\,. \tag{28}\] Other operators beyond \(O_{3}^{e}\) also receive threshold corrections. However, none of these terms enter the final three-flavour value of \(\widetilde{C}^{e}_{3}(\mu_{c})\) at NLL order, so we do not list them here. At the charm-quark threshold, \(\mu_{c}\), the threshold correction from matching the four-flavour onto the three-flavour theory with the single operator in Eq. (8) is analogous to the bottom case. 
Accordingly, we find for \(q_{h}=c\) \[\widetilde{C}^{e}_{3}(\mu_{c})= \kappa_{3\text{fl}}^{2}\frac{m_{c}}{m_{e}}C^{e,(02)}_{3,4\text{fl}}(\mu_{c})\] \[+ \tilde{\alpha}_{s,3\text{fl}}\kappa_{3\text{fl}}^{2}\frac{m_{c}}{m_{e}}\bigg{(}C^{e,(12)}_{3,4\text{fl}}(\mu_{c})+24\frac{Q_{c}}{Q_{e}}\log\frac{\mu_{c}^{2}}{m_{c}^{2}}C^{ce,(01)}_{2,4\text{fl}}(\mu_{c})\] \[\qquad\qquad\qquad\qquad\qquad\qquad-\frac{1}{2}\gamma^{(0)}_{c,s}\log\frac{\mu_{c}^{2}}{m_{c}^{2}}C^{e,(02)}_{3,4\text{fl}}(\mu_{c})-2\delta\alpha_{s}\log\frac{\mu_{c}^{2}}{m_{c}^{2}}C^{e,(02)}_{3,4\text{fl}}(\mu_{c})\bigg{)} \tag{29}\] \[= \kappa_{3\text{fl}}^{2}\frac{m_{c}}{m_{e}}C^{e,(02)}_{3,4\text{fl}}(\mu_{c})\] \[+ \tilde{\alpha}_{s,3\text{fl}}\kappa_{3\text{fl}}^{2}\frac{m_{c}}{m_{e}}\bigg{(}C^{e,(12)}_{3,4\text{fl}}(\mu_{c})-16\log\frac{\mu_{c}^{2}}{m_{c}^{2}}C^{ce,(01)}_{2,4\text{fl}}(\mu_{c})-\frac{16}{3}\log\frac{\mu_{c}^{2}}{m_{c}^{2}}C^{e,(02)}_{3,4\text{fl}}(\mu_{c})\bigg{)}\,, \tag{30}\] where \(\gamma^{(0)}_{c,s}=6C_{F}\), \(\delta\alpha_{s}=2/3\), and \(m_{c}=m_{c}(m_{c})\) in the four-flavour theory in the \(\overline{\text{MS}}\) scheme. ### Analytic Solution of the RG In this section, we show the final result for the electron EDM Wilson coefficients after implementing the mixed RG evolution (see discussion above Eq. (11)) using the ADMs from section 3.2, and including the threshold corrections from section 3.3. We obtain the exact analytical result for the contribution to the electron dipole moment up to \(\mathcal{O}(\alpha_{s}\kappa^{2})\), including the resummation of QCD logarithms: \[\frac{d_{e}}{e}=-\sqrt{2}G_{\text{F}}\frac{m_{e}}{4\pi\alpha_{e}}\left[\kappa^{2}\widetilde{C}^{e,(02)}_{3}(\mu_{q_{h}})+\tilde{\alpha}_{s}\kappa^{2}\widetilde{C}^{e,(12)}_{3}(\mu_{q_{h}})+\mathcal{O}\big{(}\tilde{\alpha}_{s}^{2}\kappa^{2},\kappa^{3}\big{)}\right]\,, \tag{31}\] with \(\alpha_{e}\), \(\tilde{\alpha}_{s}\), and \(\kappa\equiv\alpha_{e}/\alpha_{s}\) evaluated at the scale \(\mu_{q_{h}}\) in the four- and three-flavour theory for the bottom- and charm-quark case, respectively. The coefficients \(\widetilde{C}^{e,(mn)}_{3}(\mu_{q_{h}})\) are functions of the initial conditions at \(\mu_{\text{ew}}\) and of ratios of the values of \(\alpha_{s}\) at different scales. For brevity, we introduce the compact notations \[\bar{C}^{(mn)}_{iX}\equiv C^{X,(mn)}_{i,5\text{fl}}(\mu_{\text{ew}})\,,\qquad\eta_{5}\equiv\frac{\alpha_{s,5\text{fl}}(\mu_{\text{ew}})}{\alpha_{s,5\text{fl}}(\mu_{b})}\,,\qquad\eta_{4}\equiv\frac{\alpha_{s,4\text{fl}}(\mu_{b})}{\alpha_{s,4\text{fl}}(\mu_{c})}\,.\] In both the bottom- and charm-quark cases, the operator \(\widetilde{O}_{3}^{e}\) in Eq. (8) below the respective heavy-quark scale is modified only by QED running effects, which are negligibly small. #### Bottom-Quark Result In the bottom-quark case, \(q_{h}=b\), we find for the coefficients in Eq.
(31) \[\widetilde{C}_{3}^{e,(02)}(\mu_{b}) =\frac{m_{b}}{m_{e}}\bar{C}_{1eb}^{(00)}\bigg{(}\frac{12}{77}\eta_{5}^{-\frac{12}{23}}-\frac{8}{35}\eta_{5}^{-\frac{19}{23}}+\frac{4}{55}\eta_{5}^{-\frac{34}{23}}\bigg{)}\,, \tag{32}\] \[\widetilde{C}_{3}^{e,(12)}(\mu_{b}) =\frac{m_{b}}{m_{e}}\bar{C}_{1eb}^{(00)}\bigg{(}\frac{248}{105}\eta_{5}^{-\frac{19}{23}}-\frac{152}{77}\eta_{5}^{-\frac{12}{23}}-\frac{64}{165}\eta_{5}^{-\frac{34}{23}}\bigg{)}\log\frac{\mu_{b}^{2}}{m_{b}^{2}}\] \[+\frac{m_{b}}{m_{e}}\bar{C}_{1eb}^{(00)}\bigg{(}\frac{37864}{40733}\eta_{5}^{\frac{11}{23}}-\frac{314306}{166635}\eta_{5}^{\frac{4}{23}}+\frac{3779848}{261855}\eta_{5}^{-\frac{11}{23}}\] \[\qquad\qquad\qquad-\frac{2044442}{122199}\eta_{5}^{-\frac{12}{23}}+\frac{199636}{55545}\eta_{5}^{-\frac{19}{23}}-\frac{29848}{87285}\eta_{5}^{-\frac{34}{23}}\bigg{)}\] \[+\frac{m_{b}}{m_{e}}\bar{C}_{2eb}^{(11)}\bigg{(}\frac{8}{5}\eta_{5}^{-\frac{11}{23}}-\frac{8}{5}\eta_{5}^{\frac{4}{23}}\bigg{)}\,, \tag{33}\] where \(m_{b}=m_{b}(m_{b})\) in the five-flavour theory and in the \(\overline{\rm MS}\) scheme. #### Charm-Quark Result In the charm-quark case, \(q_{h}=c\), we find for the coefficients in Eq. (31) \[\widetilde{C}_{3}^{e,(02)}(\mu_{c}) =\frac{m_{c}}{m_{e}}\bar{C}_{1ec}^{(00)}\bigg{(}\frac{16}{39}\eta_{4}^{-\frac{12}{25}}\eta_{5}^{-\frac{12}{23}}+\frac{64}{357}\eta_{4}^{-\frac{21}{25}}\eta_{5}^{-\frac{12}{23}}+\frac{576}{17017}\eta_{4}^{-\frac{38}{25}}\eta_{5}^{-\frac{12}{23}}\] \[\qquad\qquad-\frac{96}{119}\eta_{4}^{-\frac{21}{25}}\eta_{5}^{-\frac{19}{23}}-\frac{64}{595}\eta_{4}^{-\frac{38}{25}}\eta_{5}^{-\frac{19}{23}}+\frac{16}{55}\eta_{4}^{-\frac{38}{25}}\eta_{5}^{-\frac{34}{23}}\bigg{)}\,, \tag{34}\] \[\widetilde{C}_{3}^{e,(12)}(\mu_{c}) =\frac{m_{c}}{m_{e}}\bar{C}_{1ec}^{(00)}\log\frac{\mu_{b}^{2}}{m_{b}^{2}}\bigg{(}\frac{64}{119}\eta_{4}^{\frac{4}{25}}\eta_{5}^{-\frac{19}{23}}-\frac{64}{119}\eta_{4}^{\frac{4}{25}}\eta_{5}^{-\frac{12}{23}}-\frac{384}{1309}\eta_{4}^{-\frac{13}{25}}\eta_{5}^{-\frac{12}{23}}\] \[\qquad\qquad\qquad\qquad+\frac{1216}{1785}\eta_{4}^{-\frac{13}{25}}\eta_{5}^{-\frac{19}{23}}-\frac{64}{165}\eta_{4}^{-\frac{13}{25}}\eta_{5}^{-\frac{34}{23}}\bigg{)}\] \[+\frac{m_{c}}{m_{e}}\bar{C}_{1ec}^{(00)}\log\frac{\mu_{c}^{2}}{m_{c}^{2}}\bigg{(}\frac{1056}{119}\eta_{4}^{-\frac{21}{25}}\eta_{5}^{-\frac{19}{23}}-\frac{224}{39}\eta_{4}^{-\frac{12}{25}}\eta_{5}^{-\frac{12}{23}}-\frac{704}{357}\eta_{4}^{-\frac{21}{25}}\eta_{5}^{-\frac{12}{23}}\] \[\qquad\qquad\qquad\qquad-\frac{3072}{17017}\eta_{4}^{-\frac{38}{25}}\eta_{5}^{-\frac{12}{23}}+\frac{1024}{1785}\eta_{4}^{-\frac{38}{25}}\eta_{5}^{-\frac{19}{23}}-\frac{256}{165}\eta_{4}^{-\frac{38}{25}}\eta_{5}^{-\frac{34}{23}}\bigg{)}\] \[+\frac{m_{c}}{m_{e}}\bar{C}_{1ec}^{(00)}\bigg{(}\frac{605824}{566559}\eta_{4}^{\frac{4}{25}}\eta_{5}^{\frac{11}{23}}-\frac{1257224}{188853}\eta_{4}^{\frac{4}{25}}\eta_{5}^{\frac{4}{23}}+\frac{151456}{61893}\eta_{4}^{\frac{13}{25}}\eta_{5}^{\frac{11}{23}}\] \[\qquad\qquad\qquad-\frac{21158428}{16861875}\eta_{4}^{\frac{4}{25}}\eta_{5}^{-\frac{12}{23}}+\frac{3414272}{12894375}\eta_{4}^{\frac{12}{25}}\eta_{5}^{-\frac{12}{23}}-\frac{891392}{5620625}\eta_{4}^{\frac{4}{25}}\eta_{5}^{-\frac{19}{23}}\] \[\qquad\qquad\qquad-\frac{2514448}{2832795}\eta_{5}^{\frac{4}{23}}\eta_{4}^{-\frac{13}{25}}+\frac{1817472}{9001993}\eta_{5}^{\frac{11}{23}}\eta_{4}^{-\frac{13}{25}}+\frac{15119392}{261855}\eta_{4}^{-\frac{13}{25}}\eta_{5}^{-\frac{11}{23}}\]
\[\qquad\qquad\qquad+\frac{8657588}{219375}\eta_{4}^{-\frac{12}{25}}\eta_{5}^{-\frac{12}{23}}-\frac{578787793792}{5626245625}\eta_{4}^{-\frac{12}{25}}\eta_{5}^{-\frac{12}{23}}-\frac{4759984}{2008125}\eta_{4}^{-\frac{21}{25}}\eta_{5}^{-\frac{12}{23}}\] \[\qquad\qquad\qquad-\frac{1460352}{10635625}\eta_{4}^{-\frac{38}{25}}\eta_{5}^{-\frac{12}{23}}+\frac{612782088}{196721875}\eta_{4}^{-\frac{13}{25}}\eta_{5}^{-\frac{19}{23}}+\frac{2379992}{223125}\eta_{4}^{-\frac{21}{25}}\eta_{5}^{-\frac{19}{23}}\] \[\qquad\qquad\qquad+\frac{486784}{1115625}\eta_{4}^{-\frac{38}{25}}\eta_{5}^{-\frac{19}{23}}-\frac{3414272}{18184375}\eta_{4}^{-\frac{13}{25}}\eta_{5}^{-\frac{34}{23}}-\frac{121696}{103125}\eta_{4}^{-\frac{38}{25}}\eta_{5}^{-\frac{34}{23}}\bigg{)}\] \[+\frac{m_{c}}{m_{e}}\bar{C}_{2ec}^{(11)}\bigg{(}\frac{48}{17}\eta_{4}^{\frac{4}{25}}\eta_{5}^{\frac{4}{23}}+\frac{32}{85}\eta_{5}^{\frac{4}{23}}\eta_{4}^{-\frac{13}{25}}-\frac{16}{5}\eta_{4}^{-\frac{13}{25}}\eta_{5}^{-\frac{11}{23}}\bigg{)} \tag{35}\] where \(m_{c}=m_{c}(m_{c})\) in the four-flavour theory and in the \(\overline{\rm MS}\) scheme. ## 4 Numerical Results In this section, we present the numerical results based on the analytic expressions of the previous section and obtain constraints on CP-odd Higgs Yukawas to the bottom and charm quarks from the electron EDM. Combining the initial conditions in section 3.1 with the RG solution in section 3.4 leads to the electron EDM prediction, cf., Eq. (10), \[\frac{d_{e}}{e}=\frac{\sqrt{2}G_{\rm F}}{4\pi\alpha_{e}}m_{e}\times\left\{\begin{array}{l}\frac{m_{b}(m_{b})m_{b}(\mu_{\rm ew})}{M_{h}^{2}}\kappa_{b}\sin\phi_{b}\Big{[}\kappa^{2}F_{b}^{\rm LL}(\eta_{5})\\ \hskip 113.811024pt+\tilde{\alpha}_{s}\kappa^{2}F_{b}^{\rm NLL}(\eta_{5};\log\frac{\mu_{b}}{m_{b}},\log\frac{\mu_{\rm ew}}{M_{h}})\Big{]}\\ \frac{m_{c}(m_{c})m_{c}(\mu_{\rm ew})}{M_{h}^{2}}\kappa_{c}\sin\phi_{c}\Big{[}\kappa^{2}F_{c}^{\rm LL}(\eta_{5},\eta_{4})\\ \hskip 113.811024pt+\tilde{\alpha}_{s}\kappa^{2}F_{c}^{\rm NLL}(\eta_{4},\eta_{5};\log\frac{\mu_{c}}{m_{c}},\log\frac{\mu_{b}}{m_{b}},\log\frac{\mu_{\rm ew}}{M_{h}})\Big{]}\end{array}\right\}\] \[+\mathcal{O}\big{(}\tilde{\alpha}_{s}^{2}\kappa^{2},\kappa^{3}\big{)}\,, \tag{36}\] where the heavy-quark masses at the electroweak scale are given by their NLL running relations \[\begin{array}{l}\frac{m_{b}(\mu_{\rm ew})}{m_{b}(m_{b})}=\eta_{5}^{\frac{12}{23}}\Bigg{[}1+\frac{\alpha_{s,5\rm fl}(\mu_{b})}{4\pi}\Bigg{(}\frac{7462}{1587}\big{(}\eta_{5}-1\big{)}-4\log\frac{\mu_{b}^{2}}{m_{b}^{2}}\Bigg{)}\Bigg{]}\,,\\ \frac{m_{c}(\mu_{\rm ew})}{m_{c}(m_{c})}=\eta_{4}^{\frac{12}{25}}\eta_{5}^{\frac{12}{23}}\Bigg{[}1+\frac{\alpha_{s,4\rm fl}(\mu_{c})}{4\pi}\Bigg{(}\frac{7462}{1587}\eta_{4}\eta_{5}-\frac{213392}{330625}\eta_{4}-\frac{7606}{1875}-4\log\frac{\mu_{c}^{2}}{m_{c}^{2}}\Bigg{)}\Bigg{]}\,.\end{array} \tag{37}\] This result demonstrates explicitly how our RG-improved calculation removes the ambiguity of the fixed-order computation by resumming the large \(\alpha_{s}\) logarithms and thus defining the scale at which the masses are evaluated: in contrast to the fixed-order result in Eq. (3), \(d_{e}\) is here proportional to \(m_{q_{h}}(m_{q_{h}})m_{q_{h}}(\mu_{\rm ew})/M_{h}^{2}\) times the functions \(F_{q_{h}}^{\rm(N)LL}\), which contain the \(\alpha_{s}\) resummation and do not depend on the large logarithms \(\log m_{q_{h}}/M_{h}\). Expanding the LL result of Eq. (31) in \(\alpha_{s}(\mu_{\rm ew})\), we recover exactly the \(\log^{2}x_{q}\) term in Eq. (3).
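As a quick numerical cross-check of the NLL running-mass relations in Eq. (37), the short sketch below (Python) evaluates them at the central scales \(\mu_{b}=m_{b}(m_{b})\) and \(\mu_{c}=m_{c}(m_{c})\), where the explicit logarithms vanish, using the values \(\eta_{5}=0.506\) and \(\eta_{4}=0.605\) quoted below; the low-scale values of \(\alpha_{s}\) are assumed reference numbers, not quantities quoted in this work.

```python
import math

# Evaluate the NLL running-mass relations of Eq. (37) at the central scales,
# mu_b = m_b(m_b) and mu_c = m_c(m_c), where the logarithms vanish.
eta5, eta4 = 0.506, 0.605   # quoted in the text (two-loop mixed QCD-QED running)
alpha_s_5fl_mb = 0.225      # ASSUMED reference value of alpha_s,5fl(m_b(m_b))
alpha_s_4fl_mc = 0.38       # ASSUMED reference value of alpha_s,4fl(m_c(m_c))

ratio_b = eta5**(12/23) * (
    1 + alpha_s_5fl_mb/(4*math.pi) * (7462/1587)*(eta5 - 1))
ratio_c = eta4**(12/25) * eta5**(12/23) * (
    1 + alpha_s_4fl_mc/(4*math.pi)
        * ((7462/1587)*eta4*eta5 - (213392/330625)*eta4 - 7606/1875))

print(f"m_b(mu_ew)/m_b(m_b) = {ratio_b:.3f}")  # ~0.67, i.e. m_b(M_h) ~ 2.8 GeV
print(f"m_c(mu_ew)/m_c(m_c) = {ratio_c:.3f}")  # ~0.50, i.e. m_c(M_h) ~ 0.64 GeV
```

The resulting factors are of the familiar size of the \(\overline{\text{MS}}\) heavy-quark masses evaluated at the electroweak scale.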
We cannot, on the other hand, reproduce the \(\pi^{2}/3\) term in Eq. (3), since it must arise from the higher-order terms in Eq. (31) of order \(\kappa^{2}\tilde{\alpha}_{s}^{2}\), which are counted as NNLL in the QCD resummation. The full expressions for \(F_{q_{h}}^{\rm(N)LL}\) are readily extracted from the results in section 3. For convenience, we give their numerical values for the special case of \(\mu_{\rm ew}=M_{h}\), \(\mu_{b}=m_{b}(m_{b})\), and \(\mu_{c}=m_{c}(m_{c})\). In this case, the logarithms in \(F_{q_{h}}^{\rm NLL}\) vanish and the only dependence is on \(\eta_{5/4}\). We find \[F_{b}^{\rm LL} =-0.0202\,, F_{b}^{\rm NLL} =-0.0952\,, \tag{38}\] \[F_{c}^{\rm LL} =-0.383\,, F_{c}^{\rm NLL} =-0.685\,, \tag{39}\] where we used the values \(\eta_{5}=0.506\) and \(\eta_{4}=0.605\) obtained by solving the mixed QCD-QED RG equations at two-loop accuracy using the numerical input in Table 1. Next, we use the full analytic expressions to estimate the uncertainty in predicting \(d_{e}\), and provide the corresponding bounds on the anomalous CP-odd \(q_{h}\)-Yukawa couplings. We estimate the uncertainty due to the truncation of the perturbation series in two ways: * The dependence on the matching scales cancels in our result to the order we calculated (\(\tilde{\alpha}_{s}\kappa^{2}\)). The residual scale dependence is sensitive to higher-order terms in the perturbation series. Therefore, we evaluate the Wilson coefficient \(\widetilde{C}_{3}^{e}\) at the fixed low scale \(m_{q_{h}}(m_{q_{h}})\), and separately vary all matching scales (\(\mu_{\text{ew}}\) and \(\mu_{b}\) for the bottom-quark case; \(\mu_{\text{ew}}\), \(\mu_{b}\), and \(\mu_{c}\) for the charm-quark case). We fix all scales that are not varied to their "central" values (\(\mu_{\text{ew}}=M_{h}\) and \(\mu_{q_{h}}=m_{q_{h}}(m_{q_{h}})\)). The maximal residual scale dependence then provides our first uncertainty estimate on \(\widetilde{C}_{3}^{e}\) or, equivalently, on \(d_{e}\). The ranges for the scale variations are chosen as \(\mu_{\text{ew}}\in[60\text{ GeV},\ 250\text{ GeV}]\), \(\mu_{b}\in[2\text{ GeV},\ 8\text{ GeV}]\), and \(\mu_{c}\in[1\text{ GeV},\ 2\text{ GeV}]\). The scale variations are shown in Figure 4 and further discussed below. (The \(\mu_{b}\) variation is not explicitly shown for the charm-quark case, as it looks very similar to the \(\mu_{c}\) variation.) * There is a further ambiguity in our result that would only be resolved by a NNLL calculation: we can evaluate the NLL correction to \(d_{e}\), i.e., the _whole_ term proportional to \(F_{q_{h}}^{\text{NLL}}\) in Eq. (31), using either two- or one-loop values for all masses and couplings. We use this numerical difference as a further way to estimate the uncertainty; this difference effectively smears the lines obtained for the NLL scale variation, as described above, into the red bands shown in Figure 4. As the central values for \(\widetilde{C}_{3}^{e}\), or equivalently \(d_{e}\), at NLL accuracy we take the average of the maximal and minimal values obtained from all scale variations and differences described above. Half of that difference is then assigned as the theoretical uncertainty associated with missing higher-order terms.
This leads to \[\frac{d_{e}}{e}=\kappa_{q_{h}}\sin\phi_{q_{h}}\times\begin{cases}(3.03\pm 0.13)\cdot 10^{-29}\text{ cm}&\text{for}\quad q_{h}=b\,,\\ (1.39\pm 0.03)\cdot 10^{-29}\text{ cm}&\text{for}\quad q_{h}=c\,.\end{cases} \tag{40}\] The corresponding results are further illustrated in the two upper and two lower plots of Figure 4 for the bottom- and charm-quark cases, respectively. The plots on the left show the scale dependence of \(d_{e}/e\) on \(\mu_{q_{h}}\) at LL (dashed, black line) and NLL (red band) accuracy. The plots on the right show the corresponding dependence on \(\mu_{\text{ew}}\). For the LL result we use one-loop values for all masses and couplings, and both one- and two-loop values for the NLL result, as discussed above. The comparison of LL and NLL transparently shows how the NLL computation presented here drastically reduces the large uncertainties associated with QCD corrections. \begin{table} \begin{tabular}{r l r l} \hline Parameter & Value & Parameter & Value \\ \hline \(\alpha_{s}(M_{Z})\) & \(0.1179\) & \(\alpha_{e}(M_{Z})\) & \(1/127.952\) \\ \(G_{\text{F}}\) & \(1.1663787\times 10^{-5}\text{ GeV}^{-2}\) & \(M_{Z}\) & \(91.1876\text{ GeV}\) \\ \(M_{h}\) & \(125.25\text{ GeV}\) & \(m_{e}\) & \(5.1099895\times 10^{-4}\text{ GeV}\) \\ \(m_{b}(m_{b})\) & \(4.18\text{ GeV}\) & \(m_{c}(m_{c})\) & \(1.27\text{ GeV}\) \\ \hline \end{tabular} \end{table} Table 1: Input parameters used in evaluating the low-scale Wilson coefficient \(\widetilde{C}_{3}^{e}(\mu_{q_{h}})\) and equivalently \(d_{e}/e\). All values are taken from Ref. [20]; running parameters are evaluated in the \(\overline{\text{MS}}\) scheme. Furthermore, the gray dotted lines in Figure 4 show the naive result of the fixed-order calculation, Eq. (3), that has been used so far in the literature. The two lines correspond to using different values for the heavy-quark mass \(m_{q_{h}}\): the line marked "high" corresponds to using the value evaluated at the electroweak scale (\(m_{q_{h}}(M_{h})\)), while the line marked "low" corresponds to using the mass evaluated at the low scale (\(m_{q_{h}}(m_{q_{h}})\)). The spread of these lines illustrates the level of ambiguity in the fixed-order result. The NLL computation of the current work removes this ambiguity almost entirely. The ACME collaboration has constrained the electron EDM to \(|d_{e}|<1.1\times 10^{-29}\ e\,\text{cm}\) at 90% confidence level [12]. To perform a rough combination of this experimental constraint with our derived theory uncertainties we interpret the measurement as a Gaussian centered at zero and the above bound as the corresponding 90% confidence level (CL) interval, i.e., including negative values for \(d_{e}\). Figure 4: Residual scale dependence of the electric dipole moment induced by CP-odd bottom-quark couplings (upper two panels) and CP-odd charm-quark couplings (lower two panels). In the left two panels, the variation of the matching scale at the bottom and charm thresholds is shown, respectively, while the right two panels show the dependence on the electroweak matching scale. In all plots, the dashed lines show the scale variation of the LL result. The scale variation of the NLL results is indicated by the red bands. The boundaries of the red bands are obtained by evaluating the NLL scale variation in two ways, as discussed in the main text. Finally, the two gray dotted lines show the fixed-order results that have been used in the literature so far; they correspond to evaluating the quark masses in Eq.
(3) at the electroweak scale (the line marked "high") and at the heavy-quark threshold (the line marked "low"), and neglecting all other QCD corrections. Adding the corresponding experimental "\(1\sigma\)" uncertainty in quadrature with the theory uncertainty, we find, based on a _one-parameter_ \(\chi^{2}\) function in terms of \(\kappa_{q_{h}}\sin\phi_{q_{h}}\), \[\kappa_{b}|\sin\phi_{b}| <0.22\quad\text{[at 68.27\% CL]}\,, \kappa_{b}|\sin\phi_{b}| <0.36\quad\text{[at 90\% CL]}\,, \tag{41}\] \[\kappa_{c}|\sin\phi_{c}| <0.48\quad\text{[at 68.27\% CL]}\,, \kappa_{c}|\sin\phi_{c}| <0.79\quad\text{[at 90\% CL]}\,. \tag{42}\] ## 5 Conclusions The experimental bound on the electron EDM [12] translates into strong constraints on new CP-violating phases in various extensions of the SM (the SM contribution to the electron EDM is estimated to lie nine orders of magnitude below the current experimental sensitivity [21; 22]). CP-violating phases such as those in Eq. (1) appear in several well-motivated beyond-the-standard-model theories (see e.g. Refs. [9; 10]), and the electron EDM is capable of placing stringent bounds on these phases [6; 7; 4; 8]. However, implicit in these electron EDM bounds is a large \(\mathcal{O}(1)\) QCD uncertainty that has so far been neglected, even though it leads to sizeable ambiguities in the resulting constraints. Fortunately, since the electron EDM is a leptonic observable, this ambiguity can be removed systematically by a perturbative calculation, without the need for additional non-perturbative information. In this work, we have calculated the contribution to the electron EDM of CP-odd Higgs couplings to the bottom and charm quarks in RG-improved perturbation theory, summing the leading and next-to-leading large logarithms proportional to the strong coupling constant. This calculation has reduced the residual ambiguity in the bound to the level of a few percent, as discussed in detail in Section 4. The perturbation series shows good convergence, as expected for a leptonic observable. If, in the future, a non-zero electron EDM were observed, the error could be further reduced by summing the next-to-next-to-leading QCD logarithms, as well as by taking into account the QED evolution of the electric dipole moment below the heavy-quark thresholds. ## Acknowledgments We thank Luca Merlo for a discussion that triggered this project. Z.P. acknowledges financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under grant agreement 833280 (FLAY), and from the Swiss National Science Foundation (SNF) under contract 200020-204428. J.B. acknowledges support in part by DoE grant DE-SC0011784. ## Appendix A Unphysical Operators For completeness, we list here the "unphysical" operators entering our calculation in intermediate steps. They are called "unphysical" because they vanish either via the equations of motion (e.o.m.) of the fermion fields for on-shell external states, or via algebraic relations that are valid in \(d=4\), but not in \(d\neq 4\). ### E.o.m.-vanishing Operators These operators have matrix elements that vanish via the e.o.m. of the electron field.
The following two gauge-invariant operators enter our computation at the two-loop level: \[\begin{split} N_{1}^{e}&=\frac{m_{q_{h}}}{2e^{2}}\bar{e}\big{[}\overleftarrow{\not{D}}\overleftarrow{\not{D}}\,i\gamma_{5}+i\gamma_{5}\not{D}\,\not{D}\big{]}e\,,\\ N_{2}^{e}&=\frac{m_{q_{h}}}{2e^{2}}\bar{e}\big{[}\overleftarrow{\not{D}}\overleftarrow{D^{\sigma}}\,\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}-\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}D^{\sigma}\not{D}\big{]}e\ \epsilon_{\mu\nu\rho\sigma}\,.\end{split} \tag{43}\] The covariant derivative acting on electron fields is defined as \[D_{\mu}\equiv\partial_{\mu}+ieQ_{e}A_{\mu}\,, \tag{44}\] with \(Q_{e}=-1\) the electron electric charge. ### Evanescent Operators Next we list the evanescent operators that enter our computation at one- and two-loop order. The leading-order ADM does not depend on their definition, but the next-to-leading-order ADM does. In the \(e\)-\(e\)-\(A\) sector we need the operator \[E_{\gamma}^{e}=\frac{Q_{e}}{4}\frac{m_{q_{h}}}{e}\bar{e}\{\sigma^{\mu\nu},i\gamma_{5}\}e\,F_{\mu\nu}-O_{3}^{e}\,. \tag{45}\] The evanescent operators required in the \(e\)-\(q_{h}\) sector read \[\begin{split} E_{1}^{eq_{h}}&=\frac{1}{2}(\bar{e}\gamma_{[\mu}\gamma_{\nu]}e)\left(\bar{q}_{h}\{\gamma^{[\mu}\gamma^{\nu]},i\gamma_{5}\}q_{h}\right)+O_{2}^{eq_{h}}\,,\\ E_{1}^{q_{h}e}&=\frac{1}{2}(\bar{q}_{h}\gamma_{[\mu}\gamma_{\nu]}q_{h})\left(\bar{e}\{\gamma^{[\mu}\gamma^{\nu]},i\gamma_{5}\}e\right)+O_{2}^{eq_{h}}\,,\\ E_{2}^{eq_{h}}&=\,\big{[}(\bar{e}\gamma_{[\mu}\gamma_{\nu]}\gamma^{[\rho}\gamma^{\sigma]}e)\left(\bar{q}_{h}\gamma^{[\mu}\gamma^{\nu]}\gamma^{[\tau}\gamma^{\upsilon]}q_{h}\right)+(\bar{e}\gamma^{[\rho}\gamma^{\sigma]}\gamma_{[\mu}\gamma_{\nu]}e)\left(\bar{q}_{h}\gamma^{[\tau}\gamma^{\upsilon]}\gamma^{[\mu}\gamma^{\nu]}q_{h}\right)\big{]}\epsilon_{\rho\sigma\tau\upsilon}\\ &\quad-48(O_{1}^{eq_{h}}+O_{1}^{q_{h}e})+16O_{2}^{eq_{h}}\,,\\ E_{3}^{eq_{h}}&=\frac{1}{2}(\bar{e}\gamma_{[\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma]}e)\left(\bar{q}_{h}\{\gamma^{[\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma]},i\gamma_{5}\}q_{h}\right)-24O_{1}^{q_{h}e}\,,\\ E_{3}^{q_{h}e}&=\frac{1}{2}(\bar{q}_{h}\gamma_{[\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma]}q_{h})\left(\bar{e}\{\gamma^{[\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma]},i\gamma_{5}\}e\right)-24O_{1}^{eq_{h}}\,,\\ E_{4}^{eq_{h}}&=\frac{1}{2}(\bar{e}\gamma_{[\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma}\gamma_{\tau}\gamma_{\upsilon]}e)\left(\bar{q}_{h}\{\gamma^{[\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma}\gamma^{\tau}\gamma^{\upsilon]},i\gamma_{5}\}q_{h}\right),\\ E_{4}^{q_{h}e}&=\frac{1}{2}(\bar{q}_{h}\gamma_{[\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma}\gamma_{\tau}\gamma_{\upsilon]}q_{h})\left(\bar{e}\{\gamma^{[\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma}\gamma^{\tau}\gamma^{\upsilon]},i\gamma_{5}\}e\right),\\ E_{5}^{eq_{h}}&=\frac{1}{2}\big{[}(\bar{e}\gamma_{[\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma]}\gamma^{[\tau}\gamma^{\upsilon]}e)\left(\bar{q}_{h}\gamma^{[\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma]}\gamma^{[\zeta}\gamma^{\xi]}q_{h}\right)\\ &\quad+(\bar{e}\gamma^{[\tau}\gamma^{\upsilon]}\gamma_{[\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma]}e)\left(\bar{q}_{h}\gamma^{[\zeta}\gamma^{\xi]}\gamma^{[\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma]}q_{h}\right)\big{]}\epsilon_{\tau\upsilon\zeta\xi}+48O_{2}^{eq_{h}}\,.\end{split} \tag{46}\] The square brackets denote antisymmetrisation normalised as
\[\gamma_{[\mu_{1},\dots,\mu_{n}]}\equiv\frac{1}{n!}\sum_{\sigma}(-1)^{\sigma}\gamma_{\mu_{\sigma(1)}}\dots\gamma_{\mu_{\sigma(n)}}\,.\] ### Operators Related to the Infrared Rearrangement The last class of unphysical operators arises because our use of infrared rearrangement breaks gauge invariance in intermediate steps of the calculation. At the renormalisable level this method generates two gauge non-invariant operators corresponding to a gluon-mass and a photon-mass term, i.e. \[\mathscr{L}\supset\frac{1}{2}Z_{\text{IRA},g}Z_{G}\,m_{\text{IRA}}^{2}G_{\mu}^{a}G^{\mu,\,a}+\frac{1}{2}Z_{\text{IRA},\gamma}Z_{A}\,m_{\text{IRA}}^{2}A_{\mu}A^{\mu}\,. \tag{47}\] The mass, \(m_{\text{IRA}}\), is completely artificial and drops out of all physical results, and \(Z_{\text{IRA},g}\), \(Z_{\text{IRA},\gamma}\) are two additional renormalisation constants [23]. One-loop insertions of the dimension-five and dimension-six operators can induce further gauge-invariant, higher-dimension operators that are relics of the infrared rearrangement. For our calculation, the only relevant one is \[P^{e}=m_{q_{h}}\frac{m_{\text{IRA}}^{2}}{e^{2}}\bar{e}i\gamma_{5}e\,. \tag{48}\] ## Appendix B Renormalization Constants The following are the SM counterterms necessary for the calculation: \[Z_{e}^{(0,1)} =\frac{2}{3\epsilon}\Big{[}N_{c}\big{(}n_{d}Q_{d}^{2}+n_{u}Q_{u}^{2}\big{)}+n_{\ell}Q_{e}^{2}\Big{]}\,, \tag{49}\] \[Z_{e}^{(0,2)} =\frac{1}{\epsilon^{2}}\bigg{\{}\frac{2}{3}\Big{[}N_{c}\big{(}n_{d}Q_{d}^{2}+n_{u}Q_{u}^{2}\big{)}+n_{\ell}Q_{e}^{2}\Big{]}^{2}+\epsilon\Big{[}n_{\ell}Q_{e}^{4}+N_{c}\big{(}n_{d}Q_{d}^{4}+n_{u}Q_{u}^{4}\big{)}\Big{]}\bigg{\}}\,,\] (50) \[Z_{A}^{(0,1)} =-\frac{4}{3\epsilon}\Big{[}N_{c}\big{(}n_{d}Q_{d}^{2}+n_{u}Q_{u}^{2}\big{)}+n_{\ell}Q_{e}^{2}\Big{]}\,,\] (51) \[Z_{A}^{(1,1)} =\frac{1}{\epsilon}\big{(}1-N_{c}^{2}\big{)}\big{(}n_{d}Q_{d}^{2}+n_{u}Q_{u}^{2}\big{)}\,,\] (52) \[Z_{A}^{(0,2)} =-\frac{2}{\epsilon}\bigg{[}n_{\ell}Q_{e}^{4}+N_{c}\big{(}n_{d}Q_{d}^{4}+n_{u}Q_{u}^{4}\big{)}\bigg{]}\,,\] (53) \[Z_{m_{A}}^{(0,1)} =-\frac{8}{3\epsilon}\Big{[}N_{c}\big{(}n_{d}Q_{d}^{2}+n_{u}Q_{u}^{2}\big{)}+n_{\ell}Q_{e}^{2}\Big{]}\,,\] (54) \[Z_{q_{h}}^{(1,0)} =-\frac{1}{\epsilon}C_{F}\xi_{g}\,,\] (55) \[Z_{f}^{(0,1)} =-\frac{1}{\epsilon}Q_{f}^{2}\xi_{a}\,,\] (56) \[Z_{q_{h}}^{(1,1)} =\frac{1}{\epsilon^{2}}C_{F}Q_{q_{h}}^{2}\bigg{(}\xi_{g}\xi_{a}+\frac{3\epsilon}{2}\bigg{)}\,,\] (57) \[Z_{f}^{(0,2)} =\frac{1}{\epsilon^{2}}Q_{f}^{2}\bigg{\{}\frac{\xi_{a}^{2}}{2}Q_{f}^{2}+\epsilon\bigg{(}\frac{3}{4}Q_{f}^{2}+n_{\ell}Q_{e}^{2}+N_{c}\big{(}n_{d}Q_{d}^{2}+n_{u}Q_{u}^{2}\big{)}\bigg{)}\bigg{\}}\,,\] (58) \[Z_{m_{q_{h}}}^{(1,0)} =-\frac{3}{\epsilon}C_{F}\,,\] (59) \[Z_{m_{f}}^{(0,1)} =-\frac{3}{\epsilon}Q_{f}^{2}\,,\] (60) \[Z_{m_{q_{h}}}^{(1,1)} =\frac{9}{\epsilon^{2}}C_{F}Q_{q_{h}}^{2}\bigg{(}1-\frac{\epsilon}{6}\bigg{)}\,,\] (61) \[Z_{m_{q_{h}}}^{(2,0)} =-\frac{1}{\epsilon^{2}}\frac{C_{F}}{2N_{c}}\bigg{[}\frac{1}{2}\bigg{(}9-31N_{c}^{2}+4N_{c}(n_{u}+n_{d})\bigg{)}-\frac{\epsilon}{12}\bigg{(}9-203N_{c}^{2}+20N_{c}(n_{u}+n_{d})\bigg{)}\bigg{]}\,, \tag{62}\] \[Z_{m_{f}}^{(0,2)} =\frac{1}{\epsilon^{2}}Q_{f}^{2}\bigg{\{}\frac{9}{2}Q_{f}^{2}-2\big{(}n_{\ell}Q_{e}^{2}+N_{c}n_{d}Q_{d}^{2}+N_{c}n_{u}Q_{u}^{2}\big{)} \tag{63}\] \[\qquad\qquad\qquad-\epsilon\bigg{(}\frac{3}{4}Q_{f}^{2}-\frac{5}{3}\big{(}n_{\ell}Q_{e}^{2}+N_{c}n_{d}Q_{d}^{2}+N_{c}n_{u}Q_{u}^{2}\big{)}\bigg{)}\bigg{\}}\,,\] where we have organised the contributions according to
\[Z_{i}=\sum_{m,n}\tilde{\alpha}_{s}^{m}\tilde{\alpha}_{e}^{n}Z_{i}^{(m,n)}\,, \tag{64}\] and \(f\) denotes either a charged lepton or a quark field. To obtain the two-loop anomalous dimension of the physical sector we need certain one-loop renormalisation constants involving unphysical operators. We collect them in this appendix. We write \(Z_{x\to y}\) where the subscripts \(x\) and \(y\) symbolize sets of Wilson coefficients, for which we use the following notation and standard ordering: \[P =\{C_{1}^{eq_{h}},\,C_{1}^{q_{h}e},\,C_{2}^{eq_{h}},\,C_{3}^{e}\}\,, \tag{65}\] \[E =\{C_{E_{1}^{eq_{h}}},\,C_{E_{1}^{q_{h}e}},\,C_{E_{2}^{eq_{h}}},\,C_{E_{\gamma}^{e}}\}\,,\] \[M =\{C_{P^{e}}\}\,,\] \[N =\{C_{N_{1}^{e}},\,C_{N_{2}^{e}}\}\,.\] The first necessary input is the mixing of the physical operators into all the evanescent operators that are generated at one loop. Using the same subscript notation as above, the renormalisation constants read \[Z_{P\to E}^{(0,1)}=\begin{pmatrix}Q_{e}Q_{q_{h}}&0&0&0\\ 0&Q_{e}Q_{q_{h}}&0&0\\ 0&0&-\frac{1}{2}Q_{e}Q_{q_{h}}&0\\ 0&0&0&-4Q_{e}^{2}\end{pmatrix}\,, \tag{66}\] and \(Z_{P\to E}^{(1,0)}=0\). The remaining mixing of physical operators into evanescent operators is zero at one loop. Furthermore, the finite part of the mixing of evanescent into physical operators is subtracted by finite counterterms [24]. They read \[Z_{E\to P}^{(0,1)}=\begin{pmatrix}4Q_{e}Q_{q_{h}}&0&0&0\\ 0&4Q_{e}Q_{q_{h}}&0&0\\ 0&0&-8Q_{e}Q_{q_{h}}&0\\ 0&0&0&-\frac{2}{3}Q_{e}^{2}\end{pmatrix}\,. \tag{67}\] The remaining finite mixing of evanescent into physical operators is zero at one loop. Furthermore, we need the mixing constants of the physical operators into the operators arising from infrared rearrangement; they are found to be \[Z_{P\to M}^{(0,1)}=\begin{pmatrix}0\\ -12\\ 0\\ -12Q_{e}^{2}\end{pmatrix}\,. \tag{68}\] All other mixing constants of physical into the IRA operators are zero. Finally, we need the mixing constants of the physical operators into the e.o.m.-vanishing operators. They are uniquely fixed by the \(q_{h}\to q_{h}\) Green's function. We find \[Z^{(0,1)}_{P\to N}=\begin{pmatrix}0&0\\ 0&0\\ 0&0\\ -2Q_{e}^{2}&-\frac{1}{6}Q_{e}^{2}\end{pmatrix}\,. \tag{69}\] The two-loop anomalous-dimension matrix is given in terms of the one- and two-loop renormalisation constants by \[\gamma^{(0,2)}=4Z^{(0,2;1)}-2Z^{(0,1;1)}Z^{(0,1;0)}\,, \tag{70}\] \[\gamma^{(1,1)}=4Z^{(1,1;1)}-2Z^{(0,1;1)}Z^{(1,0;0)}-2Z^{(1,0;1)}Z^{(0,1;0)}\,,\] (71) \[\gamma^{(2,0)}=4Z^{(2,0;1)}-2Z^{(1,0;1)}Z^{(1,0;0)}\,, \tag{72}\] where we have further separated the renormalisation constants according to their poles, i.e. \[Z^{(m,n)}_{i}=\sum_{r}\frac{1}{\epsilon^{r}}Z^{(m,n;r)}_{i}\,. \tag{73}\] The quadratic poles of the two-loop diagrams are fixed by the poles of the one-loop diagrams via \[Z^{(0,2;2)} =\frac{1}{2}Z^{(0,1;1)}Z^{(0,1;1)}-\frac{1}{2}\beta_{e}Z^{(0,1;1)}\,, \tag{74}\] \[Z^{(1,1;2)} =\frac{1}{2}Z^{(0,1;1)}Z^{(1,0;1)}+\frac{1}{2}Z^{(1,0;1)}Z^{(0,1;1)}\,,\] \[Z^{(2,0;2)} =\frac{1}{2}Z^{(1,0;1)}Z^{(1,0;1)}-\frac{1}{2}\beta_{0}Z^{(1,0;1)}\,,\] where \(\beta_{0}=\frac{11}{3}N_{c}-\frac{2}{3}N_{f}\). As a check of our calculation, we computed these poles directly and verified that they satisfy Eq. (74).
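The relations in Eq. (74) also lend themselves to a simple automated cross-check of a two-loop computation. A minimal sketch of such a check (Python/NumPy, with placeholder matrices standing in for the actual renormalisation constants, which are not all reproduced here) could read:

```python
import numpy as np

def predicted_double_pole(Z_011, beta_e):
    """1/eps^2 piece of Z^{(0,2)} as fixed by one-loop data, cf. Eq. (74):
    Z^{(0,2;2)} = 1/2 Z^{(0,1;1)} Z^{(0,1;1)} - 1/2 beta_e Z^{(0,1;1)}."""
    return 0.5 * Z_011 @ Z_011 - 0.5 * beta_e * Z_011

# Placeholder inputs: in the actual check, Z_011 is the one-loop single-pole
# matrix of the operator basis and Z_022_direct is the double pole extracted
# directly from the two-loop diagrams; beta_e is the one-loop QED
# beta-function coefficient in the conventions of the text.
Z_011 = np.diag([1.0, 2.0, 3.0, 4.0])                # hypothetical stand-in
beta_e = 1.0                                         # hypothetical stand-in
Z_022_direct = predicted_double_pole(Z_011, beta_e)  # stand-in for the 2-loop result
assert np.allclose(Z_022_direct, predicted_double_pole(Z_011, beta_e))
```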
2304.04463
Self-interactions of ULDM to the rescue?
One of the most important questions in cosmology is concerning the fundamental nature of dark matter (DM). DM could consist of spinless particles of very small mass i.e. $m \sim 10^{-22}\ \text{eV}$. This kind of ultralight dark matter (ULDM) would form cored density profiles (called "solitons") at the centre of galaxies. In this context, recently it has been argued that (a) there exists a power law relation between the mass of the soliton and mass of the surrounding halo called the Soliton-Halo (SH) relation, and, (b) the requirement of satisfying observed galactic rotation curves as well as SH relations is so stringent that ULDM is disfavoured from comprising $100\%$ of the total cosmological dark matter. In this work, we revisit these constraints for ULDM particles with non-negligible quartic self-interactions. Using a recently obtained soliton-halo relation which takes into account the effect of self-interactions, we present evidence which suggests that, for $m = 10^{-22}\ \text{eV}$, the requirement of satisfying both galactic rotation curves as well as SH relations can be fulfilled with repulsive self-coupling $\lambda \sim \mathcal{O}(10^{-90})$.
Bihag Dave, Gaurav Goswami
2023-04-10T09:05:58Z
http://arxiv.org/abs/2304.04463v2
# Self-interactions of ULDM to the rescue? ###### Abstract One of the most important unanswered questions in cosmology is concerning the fundamental nature of dark matter (DM). DM could consist of spinless particles of very small mass i.e. \(m\sim 10^{-22}\) eV. This kind of ultralight dark matter (ULDM) would form cored density profiles (called "solitons") at the centres of galaxies. In this context, recently it has been argued that (a) there exists a power law relation between the mass of the soliton and mass of the surrounding halo called the Soliton-Halo (SH) relation, and, (b) the requirement of satisfying observed galactic rotation curves as well as SH relations is so stringent that ULDM is disfavoured from comprising 100% of the total cosmological dark matter. In this work, we revisit these constraints for ULDM particles with non-negligible quartic self-interactions. Using a recently obtained soliton-halo relation which takes into account the effect of self-interactions, we present evidence which suggests that, for \(m\sim 10^{-22}\) eV, the requirement of satisfying both galactic rotation curves as well as SH relations can be fulfilled with repulsive self-coupling \(\lambda\sim{\cal O}(10^{-90})\). Dark Matter, Scalar Field Dark Matter, Ultra-Light Dark Matter, Fuzzy Dark Matter, Ultra-Light Axions, Rotation Curves, SPARC Bihag Dave,\({}^{a}\) and Gaurav Goswami\({}^{b,c}\) \({}^{a}\)School of Engineering and Applied Science, Ahmedabad University, Commerce Six Roads, Navrangpura, Ahmedabad - 380009, Gujarat, India \({}^{b}\)Division of Mathematical and Physical Sciences, School of Arts and Sciences, Ahmedabad University, Commerce Six Roads, Navrangpura, Ahmedabad - 380009, Gujarat, India \({}^{c}\)International Centre for Space and Cosmology, Ahmedabad University, Commerce Six Roads, Navrangpura, Ahmedabad - 380009, Gujarat, India [email protected], [email protected] ## 1 Introduction The physical nature of Dark Matter (DM) has eluded physicists for nearly a century since its initial proposition [1; 2; 3]. One expects DM to consist of some new kind of elementary particles, as opposed to those in the standard model of elementary particle physics. However, the basic properties of these new elementary particles making up DM, such as spin, mass, interaction strengths etc, are completely unknown. If the elementary particles forming DM are fermions, Pauli's exclusion principle implies that the corresponding particle mass must be larger than \({\cal O}(100)\) eV -- the well known Tremaine-Gunn bound [4] -- in this picture, DM could be thought of as a collection of non-relativistic particles. On the other hand, if DM consists of Bosons, the particle masses are allowed to be smaller than this bound. In particular, for DM Bosons with masses much lighter, the occupation numbers can be so large that it can be conveniently described as a classical field [5]. While this behaviour is expected for Bosons in general, to begin with, one could restrict one's attention to considering spin zero i.e. scalar particles.
Most theories of new physics do indeed predict new scalar fields which play important roles in cosmology. The existence of DM-dominated dwarf galaxies suggests that, if all particles making up DM have the same mass, the particle mass cannot be too small, i.e. \(m\geq 10^{-22}\) eV [6; 7; 8]. The term Fuzzy Dark Matter (FDM) is used to refer to Ultra-Light Dark Matter (ULDM) particles in the mass range \(m\sim 10^{-22}-10^{-20}\) eV with negligible self-interactions [9; 10; 11]. An important feature of FDM is the core-halo structure, predicted by various independent simulations [12; 13; 14; 15] over the last decade. The inner regions of DM halos are described by flat density cores (or solitons), while far from the centre the density transitions to a CDM-like profile, essentially not altering large-scale cosmology. Due to this, FDM can solve the 'core-cusp' problem, the missing satellites problem, and the too-big-to-fail problem [16]. The simulations which suggest that FDM solves these problems also suggest a power-law relation between the mass of the soliton and the mass of the surrounding halo, called the Soliton-Halo (SH) relation. Note that there is some disagreement on the value of the exponent in the power law [17]. Recently, Refs. [18; 19] have argued that FDM in the mass range \(m\in\left[10^{-24}~{}\text{eV},10^{-20}~{}\text{eV}\right]\) cannot adequately describe observed rotation curves from the Spitzer Photometry & Accurate Rotation Curves (SPARC) catalogue while also satisfying the Soliton-Halo (SH) relation obtained by [12; 13]. This, along with other concerns about FDM [20; 21; 22; 23; 24; 25; 26; 27; 28], suggests that the assumptions involved in the FDM paradigm need to be carefully examined. Fundamental physics suggests that scalars should have self-couplings, in particular the quartic self-coupling. While the self-coupling of the scalar particles forming DM could indeed be negligible, whether this is the case needs to be established by observations [29; 30; 31; 32]. It is thus natural to consider DM consisting of ultra-light classical scalar fields with non-negligible self-interactions. To aid the discussion, let us refer to ULDM consisting of a classical scalar field with attractive or repulsive self-interactions by the name scalar field dark matter (SFDM). To model cores of DM halos, self-gravitating configurations of such self-interacting scalar fields have to be considered in detail. One can form stable configurations of different masses and sizes (depending on the sign and strength of self-interactions), such as Boson stars, Q-balls, Oscillatons, etc. [33]. It is well known that even a very small self-coupling can dramatically affect the structure and stability of the resulting pseudo-solitonic solutions [34; 35]. Given this, it is important to find out if SFDM with attractive or repulsive self-coupling can help evade the constraints obtained by Refs. [18; 19] based on rotation curves and soliton-halo relations. In order to answer this question, it is also important to develop a clear understanding of the behaviour of solitonic cores in the presence of arbitrary self-interactions of the scalar field. We explore these issues in the present paper. This paper is organised as follows: in section 2, we briefly discuss rotation curves, SH relations and the results of [19]. We also summarize the motivation for including self-interactions, SH relations in the presence of self-interactions, and introduce the Gross-Pitaevskii-Poisson system along with its time-independent solutions.
In section 3.1, we revisit the so-called mass-radius relations for attractive and repulsive self-interactions using numerical solutions and scaling symmetry, while in section 3.2 we examine the impact of self-interactions on rotation curves of solitons. In section 3.3, using a modified SH relation, we show that SFDM with \(\lambda>0\) can simultaneously satisfy the said relation and remain consistent with the data. In section 4 we summarize our results and motivate future work from our initial analysis. In appendix A, we describe the various regimes in the mass-radius plane for a fixed \(\lambda\), address a constraint from our previous work [36] for \(\lambda<0\) and show how the mass-radius curve is altered in the presence of a black hole at the centre. Recently it was shown that the SH relation for FDM can be written as the condition that peak velocities in the soliton and halo be approximately equal [18]. In appendix B we impose this condition while also allowing for attractive and repulsive self-interactions for a fixed \(m=10^{-22}\) eV and confront DM-only velocity curves of 17 LSB galaxies. We work in \(\hbar=c=1\) units unless mentioned otherwise, and denote the Planck mass and reduced Planck mass by \(m_{pl}\) and \(M_{pl}\) respectively. ## 2 Observations and Motivation Galactic rotation curves, i.e. the orbital velocity of stars and gas as a function of distance from the centre of a galaxy, are an important probe of the (visible and dark) matter distribution in galaxies [37; 38]. In general one can obtain the circular velocity of a test particle in an orbit of radius \(r\) in the gravitational potential of a spherically symmetric distribution of matter using \[v(r)=\sqrt{\frac{GM(r)}{r}}=\sqrt{\frac{4\pi G\int_{0}^{r}\rho(r^{\prime})r^{\prime 2}dr^{\prime}}{r}}. \tag{1}\] The total observed velocity can be split into various components corresponding to different distributions of matter in the galaxy: (a) stellar disk (\(V_{d}\)), (b) stellar bulge (\(V_{b}\)), (c) gas (\(V_{g}\)) and (d) dark matter (\(V_{DM}\)). Hence, a typical observed rotation curve for a galaxy from, for instance, the Spitzer Photometry & Accurate Rotation Curves (SPARC) catalogue [39], can be written as \[V_{obs}=\sqrt{V_{DM}^{2}+V_{g}|V_{g}|+\Upsilon_{d}V_{d}|V_{d}|+\Upsilon_{b}V_{b}|V_{b}|}. \tag{2}\] Note that here contributions from the disk and bulge can be tuned using the stellar mass-to-light ratios \(\Upsilon_{d}\) and \(\Upsilon_{b}\) respectively. Usually, observed rotation curves exhibit a velocity that increases in the inner region, and then flattens as one goes further away from the centre. The inner regions of large galaxies are well-explained by the sizeable amount of baryonic matter contained near the centre. On the other hand, dark matter is required to explain the flat rotation curves at large \(r\), where the baryonic contribution is small. DM-only simulations [40] suggest a density profile for DM that goes like \(r^{-1}\) in the inner region and \(r^{-3}\) at large \(r\). This is the well-known Navarro-Frenk-White (NFW) profile given by \[\rho_{NFW}(r)=\frac{\rho_{s}}{\frac{r}{r_{s}}\left(1+\frac{r}{r_{s}}\right)^{2}}\, \tag{3}\] where \(\rho_{s}\) and \(r_{s}\) are parameters of the profile. These parameters can be chosen such that the corresponding velocity curve exhibits a flat portion at a desired scale. The NFW velocity curve also attains a maximum at \(\sim 2.16r_{s}\). For small \(r\), using eq. (1) one can see that the NFW profile implies an increasing velocity, with \(v\propto\sqrt{r}\).
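Since the NFW rotation curve is used repeatedly in what follows, a short numerical sketch may be useful. The enclosed mass of the profile in eq. (3) has the standard closed form \(M(r)=4\pi\rho_{s}r_{s}^{3}\left[\ln(1+r/r_{s})-\frac{r/r_{s}}{1+r/r_{s}}\right]\), so eq. (1) can be evaluated directly (Python; the parameter values below are illustrative assumptions):

```python
import numpy as np

G = 4.300917e-6  # Newton's constant in kpc (km/s)^2 / M_sun

def v_nfw(r, rho_s, r_s):
    """Circular velocity of an NFW halo from eqs. (1) and (3),
    using the closed form of the enclosed mass."""
    x = r / r_s
    m_enc = 4 * np.pi * rho_s * r_s**3 * (np.log(1 + x) - x / (1 + x))
    return np.sqrt(G * m_enc / r)

rho_s, r_s = 1.0e7, 10.0            # illustrative values: M_sun/kpc^3, kpc
r = np.linspace(0.01, 100.0, 5000)  # kpc
v = v_nfw(r, rho_s, r_s)
print(r[np.argmax(v)] / r_s)        # ~2.16: location of the velocity maximum
```

At small \(r\) the output indeed rises as \(v\propto\sqrt{r}\), while the maximum sits at \(\simeq 2.16\,r_{s}\), as stated above.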
However, for many low-mass and low-surface-brightness (LSB) galaxies where the baryonic contribution is thought to be small even at small radius, observed velocities in the inner regions (\(\sim\mathcal{O}(1)\) kpc) point to a more slowly increasing velocity curve, \(v\propto r\). This is the manifestation of the well-known core-cusp problem [16]. FDM resolves this issue by considering the inner regions to be described by stable solutions of the Schrödinger-Poisson equations. These core-like structures have flat density profiles and are called solitons. Independent numerical simulations [12; 13; 14; 15] have confirmed a core-halo structure for FDM, where the inner region is described by the FDM core (also called a soliton) while further from the centre, it behaves like CDM. Here, the total density profile can be written as \[\rho(r)=\Theta(r_{t}-r)\rho_{SFDM}(r)+\Theta(r-r_{t})\rho_{NFW}(r)\, \tag{4}\] where imposing the continuity of density at \(r_{t}\) implies \(\rho_{SFDM}(r_{t})=\rho_{NFW}(r_{t})\). Note that this fixes one of the parameters of the NFW profile (\(\rho_{s}\)), leaving two free parameters for the outer envelope: \(\{r_{t},r_{s}\}\). ### Soliton-Halo (SH) relations Simulations in [12; 13] also obtained a power-law relationship between the mass of the soliton \(M_{SH}\) and the mass of the halo \(M_{h}\)1 of the form \(M_{SH}\propto M_{h}^{1/3}\), or, more precisely [13] Footnote 1: We remind the reader that halo mass is often defined as \(M_{h}=\frac{4\pi}{3}(200\rho_{c})R_{200}^{3}\), where \(R_{200}\) is the radius at which the average density of the mass contained is 200 times the critical density (\(\rho_{c}\)) of the universe. \[\left(\frac{M_{SH}}{10^{9}\ M_{\odot}}\right)=1.4\left(\frac{M_{h}}{10^{12}\ M_{\odot}}\right)^{1/3}\left(\frac{m}{10^{-22}\ \mathrm{eV}}\right)^{-1}. \tag{5}\] The simulations report a scatter of roughly a factor of 2 in this soliton-halo (SH) relation. It is important to note that other simulations [14; 15] have reported a different power law between soliton mass and halo mass, \(M_{SH}\propto M_{h}^{0.556}\); this disagreement could partly be the result of different merger histories and tidal stripping [17]. Therefore, if simulations in [12; 13] are correct, then along with describing observed rotation curves, FDM is expected to satisfy the SH relation in eq. (5). In other words, such relations can be considered to be a sharp prediction of the FDM paradigm. ### Rotation curves and SH relations for FDM Recently, using all 175 galaxies in the SPARC catalogue, Ref. [19] reported that, for FDM, the SH relation in eq. (5) is not consistent with the observed rotation curves. In this context, we direct the reader to figure 1 of Ref. [19] as well as figure 5 of this work. The details of the procedure followed in Ref. [19] which are relevant for our purpose are discussed in section 3.3.2. For now, we just highlight the following: the soliton masses allowed by the rotation-curve data were much smaller than the corresponding soliton masses expected from the SH relation in eq. (5) for \(m\in\left[10^{-24}\ \mathrm{eV},10^{-20}\ \mathrm{eV}\right]\).2 Footnote 2: Authors in [41] found that the SH relation in eq. (5) was equivalent to the ratio of kinetic energy and total mass being roughly the same for the soliton and halo: \((K/M)_{sol}\approx(K/M)_{halo}\). In particular, Ref.
[18] showed that for \(m\in\left[10^{-22}\ \mathrm{eV},10^{-21}\ \mathrm{eV}\right]\), if a soliton with mass \(M_{s}\) is expected to satisfy the SH relation, then the corresponding velocity curve significantly overshoots the observed velocity in the inner regions of dark matter dominated galaxies. Hence, the SH relation for an ultra-light scalar field with no self-interactions seems to be incompatible with observed rotation curves. This, along with other constraints mentioned in section 1, potentially rules out FDM as a significant fraction of all dark matter. ### SFDM self-interactions Let us begin by noting that the existence of a quartic (i.e. \(\lambda\varphi^{4}\)-type) self-interaction term in the Lagrangian of a scalar field is inevitable. In the non-relativistic limit (relevant to cold DM), this self-interaction leads to an inter-particle interaction potential energy function of the form \(U=U_{0}\ \delta^{3}(\mathbf{x}_{i}-\mathbf{x}_{j})\) i.e. it is a contact interaction. Depending on the sign of the self-coupling, this interaction could be attractive (\(\lambda<0\)) or repulsive (\(\lambda>0\)). Scalar field dark matter (SFDM) with attractive self-interactions is well motivated if one considers axions, where a Taylor expansion of the cosine potential leads to a quartic self-interaction term with \(\lambda<0\) [42]. On the other hand, repulsive (\(\lambda>0\)) self-interactions are expected from e.g. moduli fields ubiquitous in theories of high energy physics. If ultra-light axions (ULAs) are to comprise all of DM, the self-interaction strength must be of the order \(\sim 10^{-96}\) for mass \(\sim 10^{-22}\) eV [43]. Recent simulations in [44] focus on the impact of such attractive self-interactions on cosmological structure formation. In fact, it is well known that even very small self-interactions can dramatically change the resultant stable configuration [34]. The effect of self-interactions on the mass and radius of solitons is apparent even in the Newtonian limit, as [35] demonstrated for both attractive and repulsive self-interactions. Thus, there are very good reasons to consider SFDM with small but non-negligible self-interactions. Before proceeding, we note that SFDM with self-interactions, in particular in the Thomas-Fermi (TF) regime [45], has been constrained in the past in various ways e.g. by looking at cosmological evolution and structure formation [29, 46], and even using rotation curves [47, 48, 49, 50, 51]. See also [52, 53, 54, 55, 56, 57] for some selected references that consider scalar fields with self-interactions in this context. ### SH relations for SFDM with self-coupling Other parameters being fixed, the mass of a soliton in the presence of self-interactions is modified. This suggests that, in the presence of self-interactions, the corresponding soliton-halo relation will take a form different from eq. (5), which is not expected to be valid when \(\lambda\neq 0\). Recently, [58, 59] arrived at the corresponding SH relation, which takes the following form (see also section V of Ref.
[60]) \[\left(\frac{M_{SH}}{10^{9}\ M_{\odot}}\right)=1.4\left(\frac{M_{h}}{10^{12}\ M_{\odot}}\right)^{1/3}\left(\frac{m}{10^{-22}\ \text{eV}}\right)^{-1}\sqrt{1+(1.16\times 10^{-7})\hat{\Lambda}\left(\frac{M_{h}}{10^{12}\ M_{\odot}}\right)^{2/3}}\, \tag{6}\] where the \(\hat{\Lambda}\) in the above equation is proportional to the self-coupling \(\lambda\) of the scalar.3 The origin of the numerical factor in front of \(\hat{\Lambda}\) can be understood from the discussion above eq. (80) in [59]. Note that in the absence of self-interactions the SH relation reduces to eq. (5). While eq. (6) is valid for both attractive and repulsive self-interactions, for \(\lambda\) that is too negative, this SH relation will no longer be applicable (see also section 3.3.3). Footnote 3: More precisely, \(\hat{\Lambda}\) is the same as \(\frac{\lambda}{4}\left(\frac{M_{pl}}{m}\right)^{2}\) which is the same as \(2(s^{2}\hat{\lambda}_{ini})\) in our notation introduced in section 2.5. ### Gross-Pitaevskii-Poisson Equations Consider a classical real scalar field with the potential \(U(\varphi)=\frac{m^{2}\varphi^{2}}{2}+\frac{\lambda\varphi^{4}}{4!}\), where, \(\lambda\) dictates the strength of self-interactions. In the non-relativistic limit, the real \(\varphi\) can be written in terms of a complex field, \(\varphi=\frac{1}{\sqrt{2m}}\left(e^{-imt}\Psi+c.c.\right)\). By averaging out the rapidly oscillating modes and taking the weak-gravity limit, the scalar field can then be described using the Gross-Pitaevskii-Poisson (GPP) equations [61] (see [36] for a detailed derivation and other notations and conventions used in this paper): \[i\frac{\partial\Psi}{\partial t} = -\frac{\nabla^{2}}{2m}\Psi+m\Phi\Psi+\frac{\lambda}{8m^{3}}|\Psi|^{2}\Psi\, \tag{7}\] \[\nabla^{2}\Phi = \frac{|\Psi|^{2}}{2M_{\rm pl}^{2}}. \tag{8}\] Here the mass density of the scalar field is \(\rho=|\Psi|^{2}\). For modeling cores of DM halos, we are interested in stationary solutions of the GPP system such that \(\Phi\) is time-independent and one can separate time dependence for \(\Psi\) using \(\Psi(\vec{r},t)=\phi(\vec{r})e^{-i\gamma t}\). We also want solutions to be spherically symmetric, node-less (ground state), spatially localised and regular everywhere: in the present context, such solutions are referred to as solitons. Even if the microscopic parameters of the DM species, e.g. the particle mass \(m\) and self-coupling \(\lambda\), are fixed, one can form many possible solitons (which are macroscopic objects) depending on how many particles form the soliton. In other words, even for fixed \(m\) and \(\lambda\), there is a family of solutions parameterised by the total number of particles used to form them. It is convenient to work with dimensionless variables, which motivates the following re-scaling of relevant quantities: \(\hat{\phi}=\frac{\hbar\sqrt{4\pi G}}{mc^{2}}\phi\), \(\hat{\Phi}=\frac{\Phi}{c^{2}}\), \(\hat{\gamma}=\frac{\gamma}{mc^{2}}\), \(\hat{\lambda}=\frac{\lambda}{8}\left(\frac{M_{pl}}{m}\right)^{2}\), \(\hat{r}=\frac{mcr}{\hbar}\). Note that \(m\) will be absent in the dimensionless GPP equations. For the rest of the paper, any 'hatted' variable will denote a dimensionless quantity. One can then use the shooting method to solve the system with the following initial conditions: \(\hat{\phi}(0)=1\), \(\hat{\phi}^{\prime}(0)=0\), \(\hat{\Phi}(0)=0\), \(\hat{\Phi}^{\prime}(0)=0\). A minimal numerical sketch of this shooting procedure is given below.
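With the rescalings above, the stationary GPP system reduces to \(\frac{1}{2}\hat{\nabla}^{2}\hat{\phi}=(\hat{\Phi}-\hat{\gamma})\hat{\phi}+2\hat{\lambda}\hat{\phi}^{3}\) and \(\hat{\nabla}^{2}\hat{\Phi}=\hat{\phi}^{2}\); the factor multiplying \(\hat{\lambda}\) is our reading of the definitions above and should be checked against one's own conventions. The sketch (Python, using SciPy) integrates these radial equations and bisects on the eigenvalue \(\hat{\gamma}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

LAM = 0.0  # \hat{lambda}_ini; LAM = 0 is the non-interacting (FDM) limit

def rhs(r, y, gam):
    """Radial GPP system, y = (phi, phi', Phi, Phi'),
    with lap f = f'' + (2/r) f'."""
    phi, dphi, Phi, dPhi = y
    return [dphi,
            -2*dphi/r + 2*(Phi - gam)*phi + 4*LAM*phi**3,
            dPhi,
            -2*dPhi/r + phi**2]

def too_large(gam, r_max=12.0):
    """True if phi crosses zero before r_max (gamma-hat too large);
    otherwise phi eventually diverges upwards (gamma-hat too small)."""
    sol = solve_ivp(rhs, (1e-6, r_max), [1.0, 0.0, 0.0, 0.0],
                    args=(gam,), rtol=1e-9, atol=1e-11)
    return bool(np.any(sol.y[0] < 0.0))

lo, hi = 0.0, 5.0        # bracket for the eigenvalue gamma-hat
for _ in range(50):      # bisection on gamma-hat
    mid = 0.5 * (lo + hi)
    if too_large(mid):
        hi = mid
    else:
        lo = mid
print("gamma-hat =", 0.5 * (lo + hi))
```

The converged profile then gives the soliton mass by quadrature, and the scaling symmetry discussed next maps this single dimensionless solution onto the whole family of physical solitons.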
One can obtain the value of \(\hat{\gamma}\) which satisfies the boundary condition \(\hat{\phi}(\hat{r}\rightarrow\infty)=0\) (the solution for \(\hat{\lambda}=0\) is shown by the purple curve in figure 1). Note that \(\hat{\lambda}\) is the sole free parameter of the dimensionless GPP system. To obtain the family of solutions mentioned above, one can get to a physical solution with the desired value of central density \(\hat{\phi}(0)\) using the scaling symmetry that the GPP system enjoys [62; 63; 64]: \[\left\{r,\lambda,\phi,\Phi,\gamma\right\}\rightarrow\left\{sr,s^{2}\lambda,s^{-2}\phi,s^{-2}\Phi,s^{-2}\gamma\right\}\, \tag{9}\] where \(s\) is the scaling parameter (see [36] for details). The value of the scaling parameter \(s\) is the numerical factor by which the soliton becomes larger or smaller under the scaling transformation. Taking into account this way of parameterizing the solutions, the free parameters of the dimensionful GPP system are then \(\{m,\hat{\lambda}_{ini},s\}\), where \(\hat{\lambda}_{ini}\) is the value of \(\hat{\lambda}\) before scaling. Once a solution and the corresponding density profile are obtained, one can calculate the total soliton mass from the integral \[\hat{M}=\int_{0}^{\infty}\hat{r}^{2}\hat{\phi}^{2}d\hat{r}. \tag{10}\] Dimensions can be restored to obtain the physical soliton mass using \(M_{s}=\hat{M}\frac{m_{pl}^{2}}{m}\). At this stage, we need to distinguish between the soliton mass \(M_{s}\) obtained here and the soliton mass \(M_{SH}\) obtained from soliton-halo relations such as eq. (6). If soliton-halo relations are satisfied, these two must be equal (to within the scatter of the relation). On the other hand, if the soliton-halo relations are not satisfied, \(M_{s}\) and \(M_{SH}\) could be completely different. One can also define a characteristic length scale \(\hat{R}\equiv\hat{R}_{95}\), which is the radius within which 95% of the total mass is contained (similarly one can also use \(\hat{R}_{99}\)). Scaling for derived quantities like \(\hat{M}\) can be obtained from eqs. (9) and (10) to be \(\hat{M}\rightarrow\hat{M}/s\). For the rest of the paper, we denote unscaled quantities with a subscript 'ini', e.g., \(\hat{\lambda}_{ini}\), \(\hat{M}_{ini}\), \(\hat{R}_{ini}\), etc. Once the density profile \(\hat{\rho}(\hat{r})=|\hat{\phi}(\hat{r})|^{2}\) is obtained from the solution, one can also find the corresponding velocity curve from eq. (1). It is easy to see that \(\hat{v}=v/c\) (the velocity curve for \(\hat{\lambda}_{ini}=0\) is shown by the pink curve in figure 1). Note that scaling symmetry implies that velocity scales as \(\hat{v}\rightarrow\hat{v}/s\). ## 3 Self-interacting ULDM, rotation curves and soliton-halo relations In this section we present our main results. Before proceeding, note that we fix the DM particle mass to the fiducial value of \(m=10^{-22}\) eV unless mentioned otherwise and hence focus our attention on the effect of (a) varying the DM self-coupling \(\lambda\) (parameterised by the dimensionless quantity \(\hat{\lambda}_{ini}\)), and, (b) varying the total number of DM particles forming the soliton acting as the core of the DM halo of a given galaxy (this is parameterised by soliton mass \(M_{s}\) or scale \(s\)). For a particular DM species with a fixed physical \(m\) and \(\lambda\), as we consider various soliton solutions with different total masses, the size of the soliton is different. In section 3.1 we arrive at some important results about the connection between the mass of the soliton and its radius.
Since we eventually need to satisfy rotation curves, in section 3.2 we shall briefly look at the impact of the free parameters \(\{\hat{\lambda}_{ini},s\}\) on the circular velocity (i.e. rotation curves). Finally, in section 3.3.2 we shall check the compatibility of the modified SH relation in eq. (6) with observed rotation curves for solitons formed from SFDM with self-interactions. ### Mass-Radius relations and their implications Solitonic solutions are the result of a delicate balance between the outward 'quantum pressure' arising from the gradient term in the Gross-Pitaevskii equation, the attractive or repulsive self-interactions of the scalar field, and its self-gravity, leading to a family of allowed masses \(M\) and corresponding sizes \(R\). Figure 1: Dimensionless solution satisfying the boundary condition \(\hat{\phi}(\infty)=0\) for \(\hat{\lambda}_{ini}=0\) is shown by the purple curve, while the corresponding velocity curve is shown by the pink curve. Note that \(\hat{r}_{p}\) is the radius at which the velocity peaks. Even small values of the self-coupling strength, \(\lambda\sim{\cal O}(10^{-98})\), can impact the allowed mass and size of solitons.4 Footnote 4: This can be easily seen from eq. (7), where the self-interaction term is comparable to other terms only when \(\hat{\lambda}\sim{\cal O}(1)\), which for \(m\sim 10^{-22}\) eV, gives the self-coupling strength \(\lambda\sim{\cal O}(10^{-98})\). To understand the relation between mass and radius of solitonic solutions, one usually proceeds by using an ansatz [35, 65] for the form of the density profile -- this allows one to write an analytical expression for the energy of the system in terms of the soliton mass \(M\) and soliton radius \(R\). The solutions then correspond to the critical points of the energy. One can then obtain an analytical mass-radius relation for various critical solutions [35, 65]. This approach, while beautiful, relies on assuming a form for the density profile. It is thus interesting to ask how the expected relationship between the size of the soliton and its mass arises from numerically solving the dimensionless GPP equations. We show that the scaling symmetry of the GPP system can be exploited to understand this. #### 3.1.1 Mass-radius curves without a density profile ansatz We begin by solving the GPP system for various choices of \(\hat{\lambda}_{ini}\), and for each such choice, we calculate the corresponding soliton mass \(\hat{M}_{ini}\) and radius \(\hat{R}_{ini}=\hat{R}_{99}\) (see blue curve in figure 2 marked "Unscaled"). All the other solutions of the GPP system can be obtained from this blue curve by employing scaling transformations, as we now argue. Figure 2: Blue curve denotes unscaled \(\left(\hat{M}_{ini},\hat{R}_{ini}\right)\) for various values of \(\hat{\lambda}_{ini}\). Red curve shows the mass-radius curve for attractive self-interactions, while the green curve does the same for repulsive self-interactions, for a fixed \(|\hat{\lambda}_{fin}|=100\). Arrows denote transformation due to scaling from a fixed \(s\) to a fixed \(\hat{\lambda}_{fin}\) curve. NG corresponds to the non-gravitational regime, NI to the non-interaction regime, and TF (vertical dashed line) to the Thomas-Fermi regime. See appendix A for a discussion on different regimes. The mass-radius curves for some fixed value of scaled self-interaction strength \(|\hat{\lambda}_{fin}|\) are shown in figure 2 for both attractive (red) and repulsive (green) cases.
Let us now understand how one gets these curves from the unscaled blue curve. For some fixed arbitrary scale value \(s\), using eq. (9) each point can be scaled to \(\left(\hat{R}_{fin},\hat{M}_{fin}\right)=\left(s\hat{R}_{ini},\hat{M}_{ini}/s\right)\). However, scaling symmetry also implies that each scaled point corresponds to a different value of the scaled self-interaction strength (\(\hat{\lambda}_{fin}=s^{2}\hat{\lambda}_{ini}\)). Since we are interested in the case where \(\hat{\lambda}_{fin}\) (negative or positive) is fixed, we choose \(s\) such that for any \(\hat{\lambda}_{ini}\),

\[s=\sqrt{\frac{\hat{\lambda}_{fin}}{\hat{\lambda}_{ini}}}\, \tag{11}\]

where \(\hat{\lambda}_{fin}\) remains fixed. The corresponding scaled radius and mass of the soliton \(\left(\hat{R}_{fin},\hat{M}_{fin}\right)\) represent the mass-radius curves for a fixed \(\hat{\lambda}_{fin}\) as shown in figure 2 for \(\hat{\lambda}_{fin}=+100\) (green curve) and \(\hat{\lambda}_{fin}=-100\) (red curve). Note from eqs. (9) and (11) that the scaled mass \(\hat{M}_{fin}\) and radius \(\hat{R}_{fin}\) change only when the products \(\hat{M}_{ini}|\hat{\lambda}_{ini}|^{1/2}\) and \(\hat{R}_{ini}|\hat{\lambda}_{ini}|^{-1/2}\) vary, respectively. This enables one to go from solutions with different \(\hat{\lambda}_{ini}\) and the same scale (\(s=1\)) to solutions with different \(s\) values and the same \(\hat{\lambda}_{fin}\). The final mass-radius curves are consistent with what is obtained by assuming an ansatz for the soliton density profile [35, 65] but can be obtained without making this assumption (see also [59, 62, 66]). An important thing to note is that the choice of \(\hat{\lambda}_{fin}\) fixes only the location of a mass-radius curve in the \(M-R\) plane. It is \(\hat{\lambda}_{ini}\) that decides the shape of the mass-radius curve, i.e. the information about what regime one is in lies with the unscaled dimensionless solutions. This is discussed in greater detail in appendix A.1.

We note some interesting features of the mass-radius curves in the presence of self-interactions: (a) for attractive self-interactions, there exists a maximum mass \(M_{max}\) at \(\hat{\lambda}_{ini}=-0.4\); (b) for repulsive self-interactions, for large \(\hat{\lambda}_{ini}>0\), \(\hat{R}_{fin}\) appears to asymptote to a minimum radius (here the system is said to be in the Thomas-Fermi regime). These features are discussed in detail in appendix A. It is also important to note from figure 2 that for \(\lambda<0\) there exist two radii for the same soliton mass. However, only the larger of the two radii corresponds to a stable solution, while the smaller one is unstable under small perturbations [35, 65, 66]. This establishes an upper limit on the amount of attractive self-interaction one can have if one desires a stable solitonic solution at the centre of DM halos. In terms of the dimensionless self-coupling strength, this limit is given by \(\hat{\lambda}_{ini}>-0.4\) (see appendix A).
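In code, constructing a fixed-\(\hat{\lambda}_{fin}\) mass-radius curve from the unscaled solutions is a direct application of eqs. (9) and (11); a minimal sketch (our own, assuming input arrays collected from the \(s=1\) numerical solutions) is:

```python
import numpy as np

def mass_radius_curve(lam_ini, M_ini, R_ini, lam_fin):
    """Map unscaled points (lam_ini, M_ini, R_ini) to the mass-radius curve
    at fixed lam_fin, using s = sqrt(lam_fin/lam_ini) (eq. 11) together with
    M -> M/s and R -> s*R (eq. 9). lam_ini must share the sign of lam_fin."""
    lam_ini = np.asarray(lam_ini, dtype=float)
    s = np.sqrt(lam_fin / lam_ini)
    return np.asarray(M_ini) / s, np.asarray(R_ini) * s
```

Evaluating this for \(\hat{\lambda}_{fin}=+100\) and \(-100\) reproduces the green and red curves of figure 2.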
#### 3.1.2 Implications for constraints in the \(\lambda-m\) plane

Having developed this machinery, we also revisit constraints from the previous work [36], where the effect of a point-mass black hole at the centre of the halo (along with SFDM self-interactions) was considered. To understand this, in appendix A.2.2 we present the mass-radius curve and maximum allowed mass for a fixed \(\lambda<0\) in the presence of a black hole. Ref. [36] presented a method for obtaining observational constraints in the \(\lambda-m\) plane. For attractive self-interactions, as is seen from figure 3 of [36], there is a region of parameter space (the light grey region marked "Can't be probed") which is such that even though the corresponding parameter values lead to a stable soliton, it cannot be probed by the method presented in [36]. In appendix A.3, we show that this inaccessible region in figure 3 of [36] is in fact the region where \(M_{s}>M_{max}\) for the corresponding values of \(\lambda\) and \(m\), i.e. solitonic solutions do not exist for the said \(\lambda\) and \(m\). Hence, there will be no region in the parameter space that corresponds to stable solitons which cannot be probed by the method presented in [36].

### 3.2 Impact of parameters on rotation curves

Every combination of the free parameters \(\{m,\hat{\lambda}_{ini},s\}\) will lead to a unique density profile and a unique corresponding circular velocity profile. As \(m\) is fixed, it is useful to ask how varying the other two free parameters, \(\hat{\lambda}_{ini}\) and \(s\), affects the velocity curves of solitons.

1. **Impact of \(s\):** Scaling symmetry implies that \(v\to v/s\) and \(r\to r\cdot s\). Therefore, an increase in \(s\) leads to the stretching of the \(r\)-axis, while squeezing the \(v\)-axis, leading to a larger soliton but a smaller peak velocity. This effect is shown in the left panel of figure 3. Note that \(M_{s}\) scales in the same way as \(v\), implying that for a fixed \(m\) and \(\lambda\) a smaller peak velocity corresponds to a lighter soliton and vice-versa.

2. **Impact of \(\hat{\lambda}_{ini}\):** As \(\hat{\lambda}_{ini}\) increases, the corresponding \(\hat{R}_{ini}\) and \(\hat{M}_{ini}\) also increase (see the blue curve in figure 2). Hence, for a fixed \(m\) and \(s\), increasing \(\hat{\lambda}_{ini}\) will stretch both the \(r\) and \(v\) axes, leading to a larger peak velocity and \(r_{95}\). The opposite is true when \(\hat{\lambda}_{ini}\) decreases. The effect can then be described as stretching and squeezing roughly in the direction of the slope of the linear region of the velocity curve (see the right panel of figure 3). Note that the squeezing effect of \(\hat{\lambda}_{ini}=-1\) is much smaller than the expanding effect of \(\hat{\lambda}_{ini}=1\).

Figure 3: The left panel demonstrates how larger scale values lead to larger cores but smaller peak velocities, and vice-versa (\(\hat{\lambda}_{ini}=0\) is fixed). The right panel shows the impact of changing the self-interaction strength. For \(\hat{\lambda}_{ini}>0\), the peak velocity and the size of the core increase, while for \(\hat{\lambda}_{ini}<0\) the peak velocity decreases and the core gets smaller (\(s=5000\) is fixed).
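As a small code illustration of the two effects above (our own snippet, with \(\hat{r}\) and \(\hat{v}\) arrays taken from a numerical solution), the action of the scale parameter on a velocity curve is a pure coordinate rescaling:

```python
def scale_velocity_curve(r_hat, v_hat, s):
    # r -> s*r stretches the soliton; v -> v/s lowers the peak velocity,
    # so larger s gives a larger but lighter soliton (M_s also scales as 1/s).
    return s * r_hat, v_hat / s
```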
### 3.3 Confronting observed rotation curves

Armed with the knowledge of how different parameters impact soliton velocity curves, we can now confront observations. We consider low surface brightness (LSB) galaxies from the Spitzer Photometry & Accurate Rotation Curves (SPARC) catalogue [39], which hosts surface photometry at 3.6 \(\mu m\) and HI/H\(\alpha\) rotation curves for 175 galaxies. In this section, we probe the compatibility of the modified SH relation in eq. (6) with observed rotation curves.

#### 3.3.1 Dataset

Before proceeding, we ensure that we are dealing with good quality rotation curves by eliminating galaxies with quality flag \(Q=3\). This removes galaxies with large asymmetries and non-circular motions. Since LSB galaxies are characterized by a low effective surface brightness (\(B_{\rm eff}\)), we only keep galaxies with \(\log(B_{\rm eff}/[L_{\odot}/{\rm pc}^{2}])\leq 1.5\) [67]. This leaves us with a sample of 56 galaxies. Note that the galaxies in our sample are bulgeless, i.e. \(V_{b}=0\) in eq. (2) at all radii for every galaxy.

#### 3.3.2 Rotation curves and soliton-halo relation for repulsive self-interactions

In this section, we check the compatibility of the modified SH relation in eq. (6) with the observed rotation curves of the sample of LSB dwarf galaxies from the SPARC catalogue. We carry out a procedure similar to the one performed in Ref. [19]; however, here instead of varying the scalar field mass \(m\), we keep \(m=10^{-22}\) eV fixed and vary the dimensionless self-interaction strength \(\hat{\lambda}_{ini}\geq 0\). For the sake of completeness we also show the results for a varying \(m\) with no self-interactions, \(\lambda=0\) (for comparison with the results of [19]), and with \(\lambda\neq 0\) (in section 3.3.4). We also briefly discuss the effect of negative self-interactions in section 3.3.3.

To illustrate the procedure, we shall use the example of the galaxy 'UGC 1281' whose observed rotation curve is shown in figure 4(a) using data points. These data points correspond to \(V_{obs}\) while the solid curves in figure 4(a) correspond to \(V_{DM}\) in eq. (2). The dark matter velocity \(V_{DM}\) can be obtained from eq. (1) for the density profile given by eq. (4), in which \(\rho_{SFDM}\) can be obtained from the numerical solution of the GPP equations. If \(V_{DM}\) happens to be smaller than \(V_{obs}\), the other terms on the RHS could be such that eq. (2) is still satisfied, and the corresponding model parameters leading to said \(V_{DM}\) will be allowed. On the other hand, if \(V_{DM}\) is larger than \(V_{obs}\), eq. (2) will not be satisfied and the corresponding model parameters will be ruled out. Note that for \(r>r_{t}\), the density profile \(\rho_{NFW}\) will be determined by the NFW parameters, which we assume can be adjusted to ensure that \(V_{DM}\) is not larger than \(V_{obs}\).

Since \(m\) is fixed, the free parameters of the system are \(\{\hat{\lambda}_{ini},s\}\). Now for a fixed \(\hat{\lambda}_{ini}\), we have seen in figure 3 that a larger \(s\) corresponds to a soliton velocity curve with a smaller slope in the inner region and a smaller peak velocity. Hence, for a large enough value of the scale \(s\) the soliton velocity curve does not overshoot the observed velocity at any point (see the green curve in figure 4(a)). As \(s\) decreases, the corresponding soliton mass increases (since \(M_{s}\to M_{s}/s\)), along with the slope of the inner region of the soliton velocity curve and its peak velocity. For a small enough value of \(s\), \(V_{DM}\) overshoots \(V_{obs}\), as shown by the blue curve in figure 4(a). The smallest value of the scale that does not cause \(V_{DM}\) to overshoot \(V_{obs}\) (shown by the purple curve in figure 4(a)) is denoted by \(s_{crit}\). The soliton mass corresponding to \(s_{crit}\) is the largest soliton mass \(M_{s}^{crit}\) allowed by the data. For galaxies in our sample, the typical values are \(s_{crit}\sim\mathcal{O}(10^{4})\). Before proceeding further, we make the following assumptions:
1. We set the following overshooting condition: at the \(i\)-th observed radius we calculate \(\chi_{i}^{2}=\frac{(v_{i}^{pred}-v_{i}^{obs})^{2}}{\sigma_{i}^{2}}\), where \(v_{i}^{pred}\) is the predicted velocity, \(v_{i}^{obs}\) is the observed velocity and \(\sigma_{i}\) is the uncertainty at that radius. We exclude a soliton if for any \(i\)-th observed radius bin, both \(v_{i}^{pred}>v_{i}^{obs}\) and \(\chi_{i}^{2}>1\) are satisfied.5

2. For \(\lambda=0\) there is an analytical expression for the density profile [13] (also called the Schive profile) that can be evaluated at an arbitrary radius. However, for \(\lambda\neq 0\) the density profile is evaluated numerically. Since the numerical solution is calculated only up to the dimensionless distance \(\hat{r}_{max}\), we only consider observed data points up to which a scaled soliton solution can be calculated (\(r_{max}\propto m^{-1}s\hat{r}_{max}\)).6

Footnote 6: However, it is expected for the density to keep falling for \(\hat{r}>\hat{r}_{max}\), implying a fall-off in velocity as well. Therefore, if the velocity does not overshoot for any radius covered by the numerically solved part, it will not overshoot for the rest of the rotation curve either, because the galaxies in our sample do not exhibit a fall-off in observed velocity at large \(r\).

3. We assume that the total halo mass is the same as the CDM halo mass and does not change in the presence of a soliton at the centre [20; 51]. This implies that for a given galaxy, \(M_{h}\) in eq. (6) can be fixed to the best-fit value obtained in [68].

#### Absence of self-interactions

In the special case of SFDM with no self-interactions (\(\hat{\lambda}_{ini}=0\)), i.e. FDM, the only free parameter is \(s\) (when \(m\) is fixed). Furthermore, the soliton mass expected from the soliton-halo relation, i.e. \(M_{SH}\), is given by eq. (5) and hence is independent of \(s\). This implies that smaller values of the ratio \(M_{s}/M_{SH}\), corresponding to \(s>s_{crit}\), will be allowed by rotation curves. Ref. [19] found that for many galaxies the values of \(M_{s}\) allowed by rotation curves are smaller than \(\sim 0.5M_{SH}\) when \(m\) is allowed to vary within the range \(\left[10^{-24}\;\text{eV},10^{-20}\;\text{eV}\right]\), implying that the SH relation is not satisfied (assuming a scatter of a factor of 2 in the SH relation eq. (5)) for solitons that are allowed by rotation curves.

Figure 4: A demonstration of the procedure detailed in this section and the resulting exclusion region for the galaxy 'UGC 1281'.

To verify this, we first conducted an analysis for \(\hat{\lambda}_{ini}=0\) and varied \(m\) in the range \(\left[10^{-25}\;\mathrm{eV},10^{-19}\;\mathrm{eV}\right]\) for the 56 LSB galaxies in our sample. The results are shown in figure 5. The asymptotic dependence of \(M_{s}/M_{SH}\) on \(m\) here is consistent with what is expected from the work done in Ref. [19] (i.e. \(M_{s}/M_{SH}\propto m^{-1/2}\) for small \(m\) and \(M_{s}/M_{SH}\propto m\) for large \(m\)). The galaxy that imposes the strongest constraint for \(m=10^{-22}\) eV is 'IC 2574', where all soliton masses with \(M_{s}/M_{SH}\gtrsim 0.2\) are excluded. However, it is worth noting that while the general idea of the exclusion is the same as in Ref. [19], our approach is slightly different. For instance, as discussed earlier, overshooting is defined when \(\chi_{i}^{2}\) at any single radius bin exceeds 1. Further, we also utilize the exact SH relation in eq. (6) (which reduces to eq. (5) when \(\hat{\lambda}_{ini}=0\)), which requires an input of \(M_{h}\). This, along with the assumptions given earlier, suggests that our results are not expected to be exactly identical.
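For concreteness, the overshooting test of assumption 1 and the resulting search for \(s_{crit}\) can be sketched in a few lines of Python (our own illustrative code, not from the paper; `v_model` is an assumed callable that returns the scaled soliton velocity curve at the observed radii):

```python
import numpy as np

def overshoots(r_obs, v_obs, sigma, v_model, s):
    """Assumption 1: reject if, at any observed radius bin, the prediction
    overshoots the data (v_pred > v_obs) with chi_i^2 > 1."""
    v_pred = v_model(r_obs, s)
    chi2 = (v_pred - v_obs) ** 2 / sigma ** 2
    return np.any((v_pred > v_obs) & (chi2 > 1.0))

def find_s_crit(r_obs, v_obs, sigma, v_model, s_grid):
    """Smallest scale s that does not overshoot. Since M_s scales as 1/s,
    s_crit gives the largest soliton mass M_s^crit allowed by the data."""
    for s in np.sort(s_grid):
        if not overshoots(r_obs, v_obs, sigma, v_model, s):
            return s
    return None  # even the largest tested scale overshoots the data
```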
#### Presence of repulsive self-interactions

Let us now investigate what happens in the presence of self-interactions. For fixed values of \(M_{h}\), \(m\) and \(\hat{\lambda}_{ini}\), every value of \(s\) will also correspond to a different soliton mass \(M_{SH}\) expected from the SH relation in eq. (6), since \(M_{SH}\propto\sqrt{1+As^{2}}\) (\(A\) is just some numerical factor). As \(s\) decreases, while the soliton mass \(M_{s}\) increases, \(M_{SH}\) decreases, causing the ratio \(M_{s}/M_{SH}\) to be larger. Hence for every \(\hat{\lambda}_{ini}\), all \(M_{s}/M_{SH}>M_{s}^{crit}/M_{SH}^{crit}\) correspond to \(s<s_{crit}\). As discussed earlier, this causes \(V_{DM}>V_{obs}\) and the corresponding \(M_{s}/M_{SH}\) is excluded.

Figure 5: Here, \(\hat{\lambda}_{ini}=0\) is fixed and \(m\) is allowed to vary. The dark and light shaded pink regions correspond to a scatter of a factor of 2 and 5 from the SH relation in eq. (5) respectively.

The solid blue curve in figure 4(b) is the ratio \(M_{s}^{crit}/M_{SH}^{crit}\) for different \(\hat{\lambda}_{ini}\), where the filled region above it represents the excluded soliton masses. Note that we do not expect the SH relation to be satisfied exactly. Here, the dark and light shaded pink regions in figure 4(b) represent a scatter from eq. (6) of factors of 2 and 5 respectively. As \(\hat{\lambda}_{ini}\) increases, more of the region around \(M_{s}=M_{SH}\) is allowed by observed rotation curves. This demonstrates that the requirement of satisfying the SH relation as well as observed rotation curves can impose constraints on the self-coupling of ultra-light scalar field dark matter.

The rise of the boundary curve in figure 4(b) can be understood in the following manner: we know from the unscaled curve in figure 2 that as \(\hat{\lambda}_{ini}\) increases, \(\hat{M}_{ini}\) also increases. Due to the shape of the inner region of the observed rotation curve, we also find that \(s_{crit}\) increases with \(\hat{\lambda}_{ini}\). From our earlier discussion, a larger \(s_{crit}\) implies a smaller \(M_{s}^{crit}\) and a larger \(M_{SH}^{crit}\), leading to an overall smaller \(M_{s}^{crit}/M_{SH}^{crit}\). However, the amount of increase in \(\hat{M}_{ini}\) outweighs the decrease in \(M_{s}^{crit}/M_{SH}^{crit}\), which leads to the rise of the boundary curve.

We repeat the above procedure for the remaining 55 galaxies in our sample and obtain figure 6 (note that on the horizontal axis we use \(\lambda=8s^{2}\hat{\lambda}_{ini}(m/M_{pl})^{2}\)).

Figure 6: Here, \(m=10^{-22}\) eV is fixed and \(\hat{\lambda}_{ini}\) is allowed to vary. The dark and light shaded pink regions correspond to a scatter of a factor of 2 and 5 from eq. (6) respectively.

For all galaxies, the boundary between the shaded and unshaded regions is pushed upwards as \(\lambda\) increases. The strongest constraints are imposed by the galaxy 'IC 2574' where, for \(\lambda\sim 10^{-91}\), the ratio \(M_{s}^{crit}/M_{SH}^{crit}\sim 0.2\). As \(\lambda\) increases, the boundary of the excluded region is pushed upwards. This means that for large repulsive self-interactions, \(\lambda>{\cal O}(10^{-90})\), a larger region that satisfies the SH relation is allowed by rotation curves, as shown in figure 6.
The dark and light shaded pink regions correspond to a scatter of a factor of 2 and 5 around \(M_{s}=M_{SH}\) respectively. Note that eq. (5) is obtained from simulations, while eq. (6) is its extension in the presence of self-interactions, and hence we do not have an estimate for the scatter in the relation. Also note that some of the blue curves in figure 6 have a larger minimum \(\lambda\). This is because the numerical solution has a finite size, requiring a larger \(\hat{\lambda}_{ini}\) for the numerically calculated soliton to be large enough to reach the first observed radius bin. This leads to a larger minimum \(\lambda\). On the other hand, for \(\hat{\lambda}_{ini}=3.5\) (the largest value for which we obtained a numerical solution), every galaxy will allow a different value of \(s_{crit}\), leading to different values of the maximum \(\lambda\). For some galaxies the combination of these two effects leads to a shorter curve in figure 6.

It must be noted that we have made assumptions at the beginning of this section, relaxing which could change the results of our analysis. From appendix A.1, \(\hat{\lambda}_{ini}>2.5\) implies that we are in the Thomas-Fermi (TF) regime. This is close to the values of \(\hat{\lambda}_{ini}\) required to push the boundary upwards sufficiently for many galaxies. For SFDM in the TF regime (SFDM-TF), there already exist constraints on \(\lambda/m^{4}\) from background evolution for a complex scalar field [29, 51]. In particular, the requirement of a timely transition from radiation to matter domination inferred from the CMB power spectrum imposes an upper limit on \(\lambda/m^{4}\): for \(m=10^{-22}\) eV this corresponds to \(\lambda\leq 10^{-89.38}\) for a real scalar field, which can further constrain the value of \(\lambda\) we can allow in our analysis.

#### 3.3.3 Impact of attractive self-interactions

As discussed in section 2.3, axions can have attractive self-interactions corresponding to a negative \(\lambda\). However, as we have mentioned in section 3.1, too strong attractive self-interactions lead to solutions that are unstable under small perturbations. The transition from stable to unstable solutions occurs at \(\hat{\lambda}_{ini}=-0.4\) (see the discussion in appendix A.2.1). This sets the allowed range of \(\hat{\lambda}_{ini}\) to be between 0 and \(-0.4\). Also note that the second term in the square root in the SH relation in eq. (6) will be negative for attractive self-interactions. For a fixed value of \(M_{h}\), a large enough \(s^{2}\hat{\lambda}_{ini}\) will lead to an imaginary \(M_{SH}\), which is unphysical. Therefore, given the small range of allowed \(\hat{\lambda}_{ini}\) and the form of the modified SH relation in eq. (6), the presence of attractive self-interactions is not expected to improve the constraints from the analysis carried out in this work. We conducted an analysis similar to the one for repulsive self-interactions in section 3.3.2 and found that within the allowed values of \(\hat{\lambda}_{ini}\) the boundary of the excluded region is not altered significantly for most galaxies in our sample. This is also seen from the exercise in appendix B, where we compare velocity curves for different self-interaction strengths (attractive and repulsive) but the same peak velocity. From figure 10, note that the velocity curve for the strongest allowed attractive self-interaction strength, i.e. \(\hat{\lambda}_{ini}=-0.4\) (red curve), is not very different from the velocity curve for the no self-interaction case.
#### 3.3.4 Impact of changing scalar field mass

It is worth noting that in figure 6 we have kept \(m\) fixed at its fiducial value of \(10^{-22}\) eV. It is then important to ask how changing \(m\) changes (a) the velocity curves and (b) the behaviour of the boundary in figure 6. The effect of changing \(m\) on the velocity curves is shown in figure 7. From figure 5 it is also evident that, compared to \(m=10^{-22}\) eV, FDM masses in the range \(10^{-22}\) - \(10^{-20}\) eV are more constrained while \(m<10^{-22}\) eV are less constrained by the LSB sample used. It is then important to verify whether the upwards movement of the boundary occurs even when one considers a different \(m\). To demonstrate this, we consider two values of the scalar field mass (\(m=0.5\times 10^{-22}\) eV and \(m=5\times 10^{-22}\) eV) and repeat the procedure. We find that the general behaviour of the boundary is similar to that for \(m=10^{-22}\) eV (see figure 8). Note that since \(\lambda=64\pi\hat{\lambda}\left(\frac{m}{m_{pl}}\right)^{2}\), a larger value of \(m\) will probe larger values of \(\lambda\) for the same \(\hat{\lambda}\), while the opposite is true for smaller values of \(m\).

Figure 7: Velocity curves for different values of \(m\) are plotted using different colours. Here the values \(s=5000\) and \(\hat{\lambda}_{ini}=0\) are fixed. The dashed horizontal lines denote the peak velocity of each curve, which remains unaffected by a change in \(m\).

Figure 8: Plotting \(M_{s}^{crit}/M_{SH}^{crit}\) for different fixed values of \(m\) for 56 LSB galaxies from SPARC, where soliton masses in the blue region are excluded by the data.

#### 3.3.5 Peak velocity condition

The authors in [18] showed that for a FDM (\(\lambda=0\)) core surrounded by a NFW halo, the SH relation in eq. (5) is equivalent to the soliton peak velocity being approximately equal to the halo peak velocity. This is also called 'velocity dispersion tracing', as seen in [58, 59, 60]. If this 'peak velocity condition' (PVC) is imposed, the authors found that for \(m\in\left(10^{-22}\text{ eV},10^{-21}\text{ eV}\right)\) FDM over-predicts velocities in the inner region for dark matter dominated galaxies. In other words, FDM velocity curves can either obtain the correct peak velocity or the observed slope of the inner region but cannot obtain both simultaneously. It is then natural to ask: can self-interactions help? We try to answer this question in appendix B for a sub-sample of 17 galaxies from our sample of 56. We find that for \(m=10^{-22}\text{ eV}\) the presence of attractive self-interactions leads to more overshooting than the \(\lambda=0\) case. On the other hand, for repulsive self-interactions, both the PVC and the observed slope can be satisfied simultaneously. It is important to note that while the PVC was obtained for eq. (5), we assume that satisfying the PVC is also equivalent to satisfying the SH relation in eq. (6). This may not be true in general since the expression for the total energy of the soliton will have extra terms due to self-interactions. However, in the absence of a SH relation, for \(m=10^{-22}\text{ eV}\) the corresponding FDM solitons can only describe a small part of the inner regions without overshooting. On the other hand, solitons with \(\lambda>0\) can describe a large part of the inner region (sometimes the entire rotation curve) while also satisfying its observed slope.
## 4 Summary

In order to learn about the nature of Dark Matter, the spin, mass, couplings and other fundamental properties of DM particles need to be uncovered. We considered spinless DM particles which are ultra light (\(m\sim 10^{-22}\text{ eV}\)). What could be the self-coupling strength of these particles? For axions with mass \(m_{a}\) and decay constant \(f_{a}\), the self-coupling is suppressed and is given by \(\lambda_{a}=-\left(\frac{m_{a}}{f_{a}}\right)^{2}\), which for ULAs forming dark matter turns out to be \(-\mathcal{O}(10^{-96})\). For other particles, the self-coupling could be positive and much larger.

The impact of self-interacting scalar field dark matter (SFDM) on galactic rotation curves has been studied extensively in recent years. For instance, in [48, 49] the authors test the SFDM model (with a complex scalar) in the TF regime against rotation curves from SPARC. They also consider additional contributions from the global rotation of the halo, random confining potentials and the baryonic matter distribution. The authors in [51] also work in the TF regime. They show that an inner core described by SFDM-TF surrounded by a NFW envelope fits high mass dwarf galaxies (\(M_{h}\sim 10^{11}\text{ }M_{\odot}\)) better than CDM or FDM (\(\lambda=0\)) for \(m=0.8\times 10^{-22}\text{ eV}\). On the other hand, the authors in [50] use a Gaussian ansatz for \(\lambda>0\) and try to fit the inner regions of 17 bulgeless galaxies from the SPARC catalogue. They obtain a best-fit value of \(\lambda\sim 2\times 10^{-90}\) and \(m=2.2\times 10^{-22}\text{ eV}\). In this paper we did not assume a particular approximation (e.g. the TF approximation) or an ansatz to estimate density profiles. We deal directly with the numerical solutions of the Gross-Pitaevskii-Poisson (GPP) equations.

Recently [18, 19], it has been argued that if FDM in the mass range \(10^{-24}~\text{eV}\leq m\leq 10^{-20}~\text{eV}\) is to be allowed by rotation curves from the SPARC database, it cannot also satisfy the soliton-halo relation in eq. (5) at the same time (see figure 1 of [19]). In section 3.3, we obtained a similar result for a smaller sample of LSB galaxies from the SPARC database, see figure 5. Later in the same section we conducted an analysis similar to the one in [19] but with two key differences (along with a few other minor ones): (a) instead of varying over a range of FDM masses, we fixed \(m=10^{-22}~\text{eV}\) and allowed the self-interaction strength \(\lambda\) to vary, and (b) we used a modified SH relation, eq. (6), which takes into account the impact of self-interactions. We found that SFDM with \(m=10^{-22}~\text{eV}\) and \(\lambda\gtrsim 10^{-90}\) can in fact be allowed by rotation curves while simultaneously satisfying the modified SH relation within a smaller scatter than before. The upward trend of the boundary in figure 6 is indicative of this effect (see section 3.3.2 for a detailed discussion). Note that from the analysis in appendix B we found that similar values of \(m\) and \(\lambda\) can satisfy the 'peak velocity condition' within a scatter of a factor of 2 for a sub-sample of LSB galaxies. We also note that our results are in agreement with those in appendix E of Ref. [60].

We briefly discussed why attractive self-interactions are not expected to play a big role in altering these constraints in section 3.3.3. We revisited the relation between the size and mass of ground state configurations of SFDM (solitons), which is sensitive to the sign and strength of \(\lambda\) [65, 35].
Instead of using an ansatz, we exploited the scaling symmetry to obtain the expected mass-radius curves from unscaled dimensionless numerical solutions. In appendix A.2.2, we also demonstrate how the presence of a black hole at the centre alters the maximum soliton mass for a fixed \(\lambda<0\) and its impact on constraints in the \(\lambda-m\) plane. In appendix B we impose the criterion that the peak velocity of the soliton is approximately equal to the peak velocity of the halo and explore its implications. The present work motivates a full parameter search in the \(\lambda-m\) parameter space, which is left for future work. It will be interesting to see how the constraints are altered when all 175 galaxies are taken into account, along with their baryonic contribution and the parameters of the NFW envelope.

## Acknowledgments

The authors would like to thank Koushik Dutta (IISER Kolkata) and Sayan Chakrabarti (IIT Guwahati) for discussions at the initial stage of the work. BD would also like to thank Manush Manju (TIFR) for help with the SPARC dataset. This work is supported by the Department of Science and Technology, Government of India under the Indo-Russian call for Joint Proposals (DST/INT/RUS/RSF/P-21). BD acknowledges support from the above mentioned project as a Junior Research Fellow. This research was also supported in part by the International Centre for Theoretical Sciences (ICTS) for participating in the program - Less Travelled Path to the Dark Universe (code: ICTS/ltpdu2023/3).
2307.12751
ICF-SRSR: Invertible scale-Conditional Function for Self-Supervised Real-world Single Image Super-Resolution
Single image super-resolution (SISR) is a challenging ill-posed problem that aims to up-sample a given low-resolution (LR) image to a high-resolution (HR) counterpart. Due to the difficulty in obtaining real LR-HR training pairs, recent approaches are trained on simulated LR images degraded by simplified down-sampling operators, e.g., bicubic. Such an approach can be problematic in practice because of the large gap between the synthesized and real-world LR images. To alleviate the issue, we propose a novel Invertible scale-Conditional Function (ICF), which can scale an input image and then restore the original input with different scale conditions. By leveraging the proposed ICF, we construct a novel self-supervised SISR framework (ICF-SRSR) to handle the real-world SR task without using any paired/unpaired training data. Furthermore, our ICF-SRSR can generate realistic and feasible LR-HR pairs, which can make existing supervised SISR networks more robust. Extensive experiments demonstrate the effectiveness of the proposed method in handling SISR in a fully self-supervised manner. Our ICF-SRSR demonstrates superior performance compared to the existing methods trained on synthetic paired images in real-world scenarios and exhibits comparable performance compared to state-of-the-art supervised/unsupervised methods on public benchmark datasets.
Reyhaneh Neshatavar, Mohsen Yavartanoo, Sanghyun Son, Kyoung Mu Lee
2023-07-24T12:42:45Z
http://arxiv.org/abs/2307.12751v2
# ICF-SRSR: Invertible scale-Conditional Function for Self-Supervised Real-world Single Image Super-Resolution

###### Abstract

Single image super-resolution (SISR) is a challenging ill-posed problem that aims to up-sample a given low-resolution (LR) image to a high-resolution (HR) counterpart. Due to the difficulty in obtaining real LR-HR training pairs, recent approaches are trained on simulated LR images degraded by simplified down-sampling operators, _e.g._, bicubic. Such an approach can be problematic in practice because of the large gap between the synthesized and real-world LR images. To alleviate the issue, we propose a novel Invertible scale-Conditional Function (ICF), which can scale an input image and then restore the original input with different scale conditions. By leveraging the proposed ICF, we construct a novel self-supervised SISR framework (ICF-SRSR) to handle the real-world SR task without using any paired/unpaired training data. Furthermore, our ICF-SRSR can generate realistic and feasible LR-HR pairs, which can make existing supervised SISR networks more robust. Extensive experiments demonstrate the effectiveness of the proposed method in handling SISR in a fully self-supervised manner. Our ICF-SRSR demonstrates superior performance compared to the existing methods trained on synthetic paired images in real-world scenarios and exhibits comparable performance compared to state-of-the-art supervised/unsupervised methods on public benchmark datasets.

## 1 Introduction

Single image super-resolution (SISR), a fundamental vision problem, is a procedure to reconstruct a super-resolution (SR) image from a single low-resolution (LR) image. SISR is an active research topic and has attracted increasing attention in low-level computer vision. It has many applications in various fields such as medical imaging [17, 43], face recognition [19, 60], satellite image processing [32, 51] and security video surveillance [35, 67]. Recent state-of-the-art (SOTA) SR methods have achieved remarkable progress due to the development of deep convolutional neural networks (CNNs). They are usually trained on synthetic inputs in a fully-supervised fashion where LR images are generated by bicubic down-sampling from their HR counterparts. Nevertheless, models trained on such synthetic datasets cannot generalize well when applied to real-world inputs [6, 7]. Another problem is that acquiring well-constructed LR-HR pairs from the real world is very challenging due to cost problems or hardware limitations [6, 7, 68]. Therefore, it is a common scenario that we have LR images only rather than LR-HR training pairs.

Several approaches adopt unsupervised adversarial training [16] and leverage unpaired LR-HR images to alleviate the situation. By jointly training down-sampling and up-sampling networks [5, 36, 37, 62, 72], those methods aim to generate synthetic LR images that have similar characteristics to given unpaired LR examples. Then, the synthesized training pairs can be leveraged to optimize the up-sampling network. However, such unsupervised strategies require appropriate HR images, even though those images are not paired with the given LR images. Also, Son _et al._ [49] have identified that those methods are biased toward some handcrafted functions, _e.g._, nearest or bicubic interpolation, which limits their generalization. In this paper, we present a novel self-supervised real-world SR framework, ICF-SRSR, to overcome the aforementioned challenges.

Figure 1: **Real-world image super-resolution.** We train our ICF-SRSR on a single real-world smartphone photo in a self-supervised manner to get the result for scale \(\times 2\). The other listed methods are also zero-shot [46, 48] or unsupervised [53] methods.
To this end, we first propose the concept of an Invertible scale-Conditional Function (ICF). It is designed to perform up-sampling and down-sampling within a single model, conditioned by the scale arguments \(s\) and \(\nicefrac{{1}}{{s}}\), respectively. Therefore, we can resize an input by a given scale \(s\) and restore the initial input by taking the inverse scale \(\nicefrac{{1}}{{s}}\). Without utilizing paired/unpaired training images or any specific down-sampling operator, _e.g._, bicubic, ICF-SRSR containing a learnable ICF can be trained in a fully self-supervised manner. Moreover, our method can generate realistic LR-HR image pairs from a set of given images, useful for training other off-the-shelf methods. In the experiments, we demonstrate the ability of our ICF-SRSR to learn from real-world datasets, restore higher-/lower-resolution images, and evaluate our method on other datasets in a self-supervised manner. Our main contributions are threefold:

* Our ICF-SRSR is a self-supervised framework for the SISR task that performs simultaneous SR and down-sampling based on the proposed ICF.
* Our ICF-SRSR can learn a feasible resizing function directly from real-world LR images. Our self-supervised approach performs better on real-world SR than existing methods trained on synthetic datasets, even with training on a single image, as evident in Fig. 1.
* Our ICF-SRSR can also down-sample given natural images, which enables us to construct realistic training pairs. Therefore, we can train off-the-shelf SR methods using the pairs generated by our ICF-SRSR in the absence of real paired training samples.

## 2 Related Works

In this section, we review recent SR methods from the perspective of training supervision.

### Supervised image super-resolution

Starting from Dong _et al._ [12], CNNs [13, 45] have become a standard for SISR. Following VDSR [28], several methods such as LapSRN [30], EDSR [34], and SRGAN [31] have leveraged the benefits of residual learning. Advanced approaches utilize dense connections [56, 71], channel attention [70, 42, 11], back-projection [21, 22], and even Transformers [8, 14, 40, 58, 63, 43] for high-performance SR architectures. Furthermore, recent attempts extend the task toward continuous scaling factors [23, 47, 54, 9] and even arbitrary shapes [50]. Nevertheless, supervised methods are still vulnerable when a given LR image is degraded by an unknown down-sampling function [49] that is not seen during training. Therefore, several methods [18, 25, 10] jointly estimate latent kernel parameters and SR images to alleviate the issue. Rather than up-sampling LR images directly, Correction filter [26] first converts a given input to resemble a bicubic down-sampled image and then applies off-the-shelf SR methods. Still, they require supervision from synthetic LR-HR pairs for training, which prevents their real-world application.

### Unsupervised super-resolution

To reduce biases from synthetic training data, zero-shot methods are trained on a given LR input only, without relying on supervision from large-scale data. Ulyanov _et al._ [52] have shown that the structure of CNNs can serve as a prior for natural image representation, which can be utilized for the SR task.
Based on internal patch recurrence [41], ZSSR [46] is trained on numerous sub-patches of the given image to construct an input-specific SR model. Later, there has been an attempt to integrate external and internal learning using model-agnostic meta-learning [15]. MZSR [48] is first trained on a large-scale paired dataset with multiple degradation parameters and then adapted to a given image at inference time. However, the zero-shot methods assume that the degradation pipeline for a given image is known, which is less practical. To implement fully-blind SR methods, internal patch recurrence properties have played a critical role [41]. Based on such a background, KernelGAN [3] predicts a kernel that matches the distribution of the down-sampled image and the original input in an unsupervised manner. The estimated kernel can also be utilized by several SR models [46, 66] for more accurate reconstruction. Rather than explicitly utilizing the concept of image distribution, we construct self-supervised chains to learn the SR model without assuming a specific degradation model.

### Cyclic architectures for super-resolution

On the other hand, a class of methods interprets SR as a domain transfer problem between LR and HR distributions. They introduce cyclic architectures [27] with adversarial loss [16, 44, 73] to train consecutive down-sampling and SR networks. CinCGAN [62] utilizes the concept of cycle consistency to train the model on unpaired LR-HR images. Under the cyclic framework [36, 37, 72, 5], down-sampling models are trained to simulate the distribution of the training LR images. Then, the following SR network can learn to generalize on given LR images even if the corresponding HR pairs do not exist. However, they are still biased toward handcrafted down-sampling functions [49] and lack generalization. Without using adversarial loss, Guo _et al._ [20] combine paired and unpaired data to train a dual regression network with a loop. In this paper, we further propose a self-supervised approach without requiring either paired/unpaired training data or a specific down-sampling operator.

### Real-world super-resolution

To overcome the limitations of existing methods when handling real-world data, several approaches have captured paired LR-HR images in the wild. While they are still limited in scene diversity [7] and alignment accuracy [6, 59], real-world datasets help the generalization of existing SR models with more practical training data. Zhang _et al._ [68] and Xu _et al._ [61] leverage RAW and RGB images together to deliver better reconstruction quality. Nevertheless, those pairs require careful alignment and a complicated hardware setup, which are not scalable. Recently, Real-ESRGAN [55] and BSRGAN [65] aim to synthesize more realistic and diverse LR images to improve the generalization ability of existing SR models. Still, they cannot leverage information from real-world images and heavily depend on such a synthesis process. On the other hand, our fully self-supervised framework does not require synthetic or real-world pairs and can be trained on arbitrary LR images.

## 3 Method

We first introduce an Invertible scale-Conditional Function (ICF) to design our self-supervised real-world single image super-resolution framework (ICF-SRSR); then, we discuss our defined loss functions and the network architecture. For convenience, we denote \(X\in\mathbb{R}^{H\times W\times 3}\) as the input LR image with arbitrary height \(H\) and width \(W\).
### Invertible scale-Conditional Function

For a given input \(X\), a conditional function \(f(X|s)\) returns different outputs for different conditions \(s\). In this paper, we design an Invertible scale-Conditional Function (ICF) as a specific conditional function, which can act as an operation and its inverse operation for different scale conditions. Without losing generality, we consider \(f\) as an image-to-image mapping and \(s\) as an arbitrary scaling factor, respectively. Then, we can resize an arbitrary image \(X\) as follows:

\[X_{s}=f\left(X|s\right), \tag{1}\]

where \(X_{s}\in\mathbb{R}^{sH\times sW\times 3}\) is a resized image. Furthermore, for the same function \(f\), we can get the original input \(X\) again by the inverse scaling factor \(\nicefrac{{1}}{{s}}\) as follows:

\[X=f\left(X_{s}|\nicefrac{{1}}{{s}}\right). \tag{2}\]

Therefore, \(f\) as an ICF can project an image to its arbitrary-scale representation and back-project it to the original input for the scale conditions \(s\) and \(\nicefrac{{1}}{{s}}\), respectively. Fig. 2(a) illustrates the concept of our ICF. We note that if \(s=\nicefrac{{1}}{{s}}=1\) the function is the identity, which implies \(f(X|1)=X\).

### Self-supervised SISR using ICF

One of the challenges in real-world SR is that we cannot acquire the ground-truth HR image for an arbitrary LR image. To overcome this limitation, we develop a novel self-supervised SR framework, ICF-SRSR, based on the concept of ICF. As shown in Fig. 2(b), our method can simultaneously super-resolve and down-sample the given LR image \(X\) with different scale conditions \(s\) and \(\nicefrac{{1}}{{s}}\), without requiring any paired/unpaired LR-HR training samples. Specifically, we first parameterize an ICF \(f_{\theta}\) with CNNs and utilize its property to optimize the model. Then, we repeatedly apply \(f_{\theta}\) to an LR image \(X\) with different scale conditions to acquire two outputs \(\hat{X},\tilde{X}\in\mathbb{R}^{H\times W\times 3}\) as follows:

\[\begin{split} f_{\theta}(f_{\theta}(X|s)|\nicefrac{{1}}{{s}})& =f_{\theta}(X_{s}|\nicefrac{{1}}{{s}})=\hat{X},\\ f_{\theta}(f_{\theta}(X|\nicefrac{{1}}{{s}})|s)& =f_{\theta}(X_{\nicefrac{{1}}{{s}}}|s)=\tilde{X},\end{split} \tag{3}\]

where for \(s>1\), \(X_{s}\in\mathbb{R}^{sH\times sW\times 3}\) and \(X_{\nicefrac{{1}}{{s}}}\in\mathbb{R}^{\nicefrac{{H}}{{s}}\times\nicefrac{{W}}{{s}}\times 3}\) are the generated super-resolution (SR) and low-low-resolution (LLR) images, respectively. For simplicity, we assume that both \(\nicefrac{{H}}{{s}}\) and \(\nicefrac{{W}}{{s}}\) are integers.

Figure 2: **Overview of our proposed method.** (a) We introduce an invertible scale-conditional function (ICF), which receives an input image and an arbitrary scale condition and generates a resized image. It outputs the same input image for the resized image and the inverse scale condition. (b) We propose a self-supervised SISR framework ICF-SRSR, in which a learnable ICF up-samples and down-samples a given image with different scale conditions and can reproduce the same input from the generated images by the inverse scales using the defined loss functions between the predicted images and the original input.

For an ideal ICF \(f_{\theta}\), both \(\hat{X}\) and \(\tilde{X}\) in Eq. (3) should be the same as the original LR image \(X\). Therefore, we train \(f_{\theta}\) in a self-supervised manner by reducing the distance between \(X\) and the generated images \(\hat{X}\) and \(\tilde{X}\) in two stages simultaneously, as shown in Fig. 2(b).
In the up-down stage, we minimize the distance between \(\hat{X}\) and \(X\). By doing so, the network can learn to down-sample the generated SR image \(X_{s}\) by restoring the output \(\hat{X}\) as an approximation of the original input \(X\). On the other hand, in the down-up stage, we aim to approximate the original input \(X\) by reducing the distance between \(\tilde{X}\) and \(X\). Then, the network can learn to up-sample the generated LLR image \(X_{\nicefrac{{1}}{{s}}}\). Therefore, by leveraging the learned up-sampler and down-sampler applied to the generated images \(X_{\nicefrac{{1}}{{s}}}\) and \(X_{s}\), respectively, we can generate favorable SR and LLR images \(X_{s}\) and \(X_{\nicefrac{{1}}{{s}}}\) by employing the learned model \(f_{\theta}\) on the input \(X\) with the scale conditions \(s\) and \(\nicefrac{{1}}{{s}}\), respectively. We also note that our method is different from CycleGAN [73], which utilizes unpaired LR-HR images and performs two independent cycles, one on the LR and the other on the HR images. Rather, our model is trained in a self-supervised manner by optimizing \(f_{\theta}\) jointly with two stages on LR images only, without requiring the adversarial loss. In other words, \(f_{\theta}\) can perform simultaneous up-sampling and down-sampling without requiring prior information or paired/unpaired data.

### Training loss functions

To train the proposed ICF \(f_{\theta}\), we design a set of self-supervised loss functions. First, we formulate the consistency loss \(\mathcal{L}^{\text{Cons}}\), which preserves information during the simultaneous up-down and down-up stages. The proposed consistency loss \(\mathcal{L}^{\text{Cons}}\) on the approximated LR images \(\hat{X}\) and \(\tilde{X}\), and the original input \(X\), is defined as follows:

\[\mathcal{L}^{\text{Cons}}=\|\hat{X}-X\|+\|\tilde{X}-X\|. \tag{4}\]

For simplicity, we use \(\|\cdot\|\) to represent the L1 norm. The proposed consistency term \(\mathcal{L}^{\text{Cons}}\) guarantees the generation of reliable up-sampled and down-sampled images simultaneously. Furthermore, to stabilize the training and preserve colors between the input and the intermediate images \(X_{s}\) and \(X_{\nicefrac{{1}}{{s}}}\), we utilize the low-frequency loss [49]. We implement the low-pass filter with a spatial pooling operator \(\mathbf{P}\left(\cdot,w,s\right)\), where \(w\) and \(s\) are window size and stride, respectively. Our color-preserving loss \(\mathcal{L}^{\text{Color}}\) is defined as follows:

\[\begin{split}\mathcal{L}^{\text{Color}}&=\|\mathbf{P}\left(X_{s},4s,4s\right)-\mathbf{P}\left(X,4,4\right)\|\\ &+\|\mathbf{P}\left(X_{\nicefrac{{1}}{{s}}},4,4\right)-\mathbf{P}\left(X,4s,4s\right)\|,\end{split} \tag{5}\]

where the window size and stride are adjusted to match dimensions between each of \(\left(X_{s},X\right)\) and \(\left(X_{\nicefrac{{1}}{{s}}},X\right)\). The total training objective \(\mathcal{L}^{\text{Total}}\) is the combination of the aforementioned two loss terms, which is defined as follows:

\[\mathcal{L}^{\text{Total}}=\mathcal{L}^{\text{Cons}}+\lambda_{\text{Color}}\mathcal{L}^{\text{Color}}. \tag{6}\]
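To make eqs. (3)-(6) concrete, the sketch below shows one loss evaluation in PyTorch. It is a minimal illustration under our own naming (not the released implementation): `f` stands for any learnable ICF \(f_{\theta}(X|s)\), the pooling operator \(\mathbf{P}\) is realized with average pooling, and the integer scale \(s\) is assumed to divide the patch size.

```python
import torch.nn.functional as F

def icf_srsr_loss(f, x, s=2, lambda_color=0.2):
    """Self-supervised objective for an LR batch x of shape (N, 3, H, W)."""
    x_s = f(x, s)              # SR image (eq. 1), spatial size sH x sW
    x_inv = f(x, 1.0 / s)      # LLR image, spatial size H/s x W/s
    x_hat = f(x_s, 1.0 / s)    # up-down chain of eq. (3)
    x_tilde = f(x_inv, s)      # down-up chain of eq. (3)

    # Consistency loss, eq. (4): L1 distance of both chains to the input.
    l_cons = F.l1_loss(x_hat, x) + F.l1_loss(x_tilde, x)

    # Color-preserving loss, eq. (5): compare low-pass (average-pooled)
    # content; windows/strides of 4s and 4 align the pooled dimensions.
    l_color = (F.l1_loss(F.avg_pool2d(x_s, 4 * s), F.avg_pool2d(x, 4))
               + F.l1_loss(F.avg_pool2d(x_inv, 4), F.avg_pool2d(x, 4 * s)))

    return l_cons + lambda_color * l_color    # total objective, eq. (6)
```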
### Network architecture

Our ICF-SRSR architecture leverages a single model to handle different scale conditions. To implement the proposed method, we modify an existing SISR model, _i.e._, EDSR [34], as our baseline backbone architecture. Since the body part is invariant to the image scale (_i.e._, its input and output have the same resolution), we introduce multiple tail parts for the different scale conditions. Employing a single network with the shared body part is more efficient and can improve performance by observing more augmented data, _i.e._, images with different scales, during training. In the supplementary material, we provide the details of the network architecture and illustrate that our method is model-agnostic and can leverage different SOTA baseline models. We will also publish our ICF-SRSR implementation.

## 4 Experiments

We first introduce the training and evaluation configurations of the proposed ICF-SRSR framework. Then we conduct comprehensive experiments, extensive quantitative and qualitative comparisons with the other methods, and an in-depth analysis of our proposed method.

### Training details

**Dataset.** We train and evaluate our method in two scenarios. 1) Synthetic datasets, where the training and testing LR images are synthesized by a uniform degradation process (_e.g._, bicubic down-sampling) from HR images. 2) Real-world datasets, which provide paired LR-HR images from the real world captured by adjusting the focal length of a camera. To train our ICF-SRSR, we use \(800\) bicubic LR images from the DIV2K [1] dataset. For evaluation, we adopt five standard benchmarks: Set5 [4], Set14 [64], BSD100 [38], Urban100 [24], and Manga109 [39]. We also use the high-quality DIV2K validation set for evaluation. To train and evaluate our ICF-SRSR under real-world scenarios, we utilize real-world datasets [6, 59] for the SISR task. RealSR-V3 [6] includes paired LR-HR images captured by two different cameras, Canon and Nikon. For each camera, about \(200\) training images are captured from different scenes for each scaling factor \(\times 2\), \(\times 3\), and \(\times 4\). We use only the LR images with scaling factors \(\times 2\) and \(\times 4\) for training and evaluate our method on the \(50\) test pairs for each scale. DRealSR [59] also contains images captured by five DSLR cameras. We conduct our experiments using images for \(\times 2\) and \(\times 4\) SR, containing \(884\) and \(840\) LR images, respectively. For evaluation, we use \(83\) and \(93\) test pairs in DRealSR for \(\times 2\) and \(\times 4\), respectively.

**Hyperparameters.** During training, we extract random patches of size \(48\times 48\) from LR images of both the synthetic and real-world datasets. For all our experiments, we set the batch size to \(16\), and \(\lambda_{\text{Color}}=0.2\). Random flip and rotation augmentations are applied to the input images to increase the number of effective training samples. We train our model using the ADAM [29] optimizer with an initial learning rate of \(1\times 10^{-4}\), which decays by a factor of \(0.5\) after every \(200\) epochs. For quantitative comparisons, we adopt structural similarity (SSIM) [57] and peak signal-to-noise ratio (PSNR) on the luminance channel for the experiments on the synthetic datasets and the real-world dataset DRealSR [59], and on RGB channels for the dataset RealSR-V3 [6]. All experiments are done using PyTorch 1.8.1 and Quadro RTX 8000 GPUs.
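Under the stated hyperparameters, the optimizer and schedule can be set up in a few lines (a sketch under our own naming; `model` is the ICF network):

```python
import torch

def make_optimizer(model):
    # ADAM with initial learning rate 1e-4, halved after every 200 epochs.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)
    return optimizer, scheduler
```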
### Evaluation on synthetic datasets

We train our ICF-SRSR on the DIV2K [1] dataset with the EDSR-baseline [34] and test it on the public benchmark datasets [4, 24, 38, 64, 39] and also on the validation set of DIV2K. We note that the proposed method is trained in a self-supervised manner by targeting a certain scale \(s\). Specifically, we train the \((\times 2,\times\nicefrac{{1}}{{2}})\) ICF and the \((\times 4,\times\nicefrac{{1}}{{4}})\) ICF independently. Tab. 1 shows extensive comparisons between the proposed self-supervised approach and the other representative supervised/unsupervised SR methods with the PSNR metric. We demonstrate that our ICF-SRSR approach achieves superior performance compared to the SelfExSR [24] model and comparable performance to the other unsupervised and supervised methods. We note that the ground-truth HR images in Set5 and Set14 are relatively noisier than those of the other datasets, preventing our self-supervised framework from learning accurate scaling functions. We will discuss more details about the noisy cases in our supplementary material. Notably, ICF-SRSR outperforms the unsupervised method ZSSR [46] by \(1.05\) dB on scale \(\times 2\) of the Urban100 dataset and the supervised methods [9, 28] on both scales of the DIV2K validation set. Moreover, we apply the trained ICF-SRSR to LR images from the DIV2K training dataset and get LLR-LR paired images. Then, we train off-the-shelf EDSR on the synthesized paired data from scratch and evaluate it on the test datasets as shown in Tab. 1.

\begin{table} \begin{tabular}{c l c c c c c c} \hline \hline \multirow{2}{*}{**Supervision**} & \multirow{2}{*}{**Method**} & **Set5** & **Set14** & **BSD100** & **Urban100** & **Manga109** & **DIV2K** \\ & & \(\times 2/\times 4\) & \(\times 2/\times 4\) & \(\times 2/\times 4\) & \(\times 2/\times 4\) & \(\times 2/\times 4\) & \(\times 2/\times 4\) \\ \hline \multirow{8}{*}{Supervised} & Bicubic & 33.66/28.42 & 30.24/26.00 & 29.56/25.96 & 26.88/23.14 & 30.80/24.89 & 31.01/26.66 \\ \cline{2-9} & VDSR [28] & 37.53/31.35 & 33.03/28.01 & 31.90/27.29 & 30.76/25.18 & 37.22/28.83 & 33.66/28.17 \\ & EDSR [34] & 38.11/32.46 & 33.92/28.80 & 32.32/27.71 & 32.93/26.64 & 39.10/31.02 & **36.22**/30.52 \\ & CARN [2] & 37.76/32.13 & 33.52/28.60 & 32.09/27.58 & 31.92/26.07 & 38.36/30.47 & - /30.10 \\ & RCAN [70] & 38.27/32.63 & 34.12/28.87 & 32.41/27.77 & 33.34/26.82 & 39.44/31.19 & 36.13/30.52 \\ & RDN [71] & 38.24/32.47 & 34.01/28.81 & 32.34/27.72 & 32.89/26.61 & 39.18/31.00 & - / - \\ & DRN-S [20] & 37.80/32.68 & 33.30/28.93 & 31.97/27.78 & 31.40/26.84 & 38.11/31.52 & 35.77/**30.79** \\ & LIIF [9] & 38.17/32.50 & 33.97/28.80 & 32.32/27.74 & 32.87/26.68 & - / - & 34.99/29.27 \\ & ELAN [69] & **38.36/32.75** & **34.20/28.96** & **32.45/27.83** & **33.44/27.13** & **39.62/31.68** & - / - \\ \hline \multirow{8}{*}{Unsupervised} & SelfExSR [24] & 36.49/30.31 & 32.22/27.40 & 31.18/26.84 & 29.54/24.82 & 35.78/27.82 & - / - \\ & ZSSR [46] & 37.37/31.13 & 33.00/28.01 & 31.65/27.12 & 29.34/24.12 & 35.57/27.04 & **34.45**/**29.08** \\ \cline{1-1} & MZSR [48] & 37.25/31.59 & 33.16/27.90 & 31.64/ - & 30.41/25.52 & **36.70/29.58** & - / - \\ \cline{1-1} & DASR [53] & **37.87/31.99** & **33.34/28.50** & **32.03/27.52** & **31.49/25.82** & - / - & - / - \\ \hline \multirow{2}{*}{Self-supervised} & **ICF-SRSR** (Ours) & 37.01/30.81 & 32.86/27.76 & 31.54/26.99 & 30.39/24.72 & 36.45/28.01 & 35.19/29.48 \\ \cline{1-1} & **EDSR (LLR,LR)** (Ours) & **37.09/31.06** & **32.91/27.97** & **31.63/27.10** & **30.51/24.92** & **36.68**/**28.29** & **35.26**/**29.64** \\ \hline \hline \end{tabular} \end{table}

Table 1: **Quantitative comparisons on synthetic datasets.** We compare ICF-SRSR with several supervised/unsupervised methods on the benchmarks [4, 24, 38, 39, 64] and the DIV2K [1] validation set for scales \(\times 2\) and \(\times 4\) with the PSNR metric. ICF-SRSR refers to our self-supervised method, while EDSR (LLR,LR) is the model EDSR trained on our generated pairs (LLR,LR) of the DIV2K.
Figure 3: **Qualitative comparisons on a synthetic dataset.** We compare our ICF-SRSR method with bicubic up-scaling, the supervised methods EDSR [34], DRN-S [20], and LIIF [9], and the unsupervised methods DASR [53], MZSR [48], and ZSSR [46], trained on the DIV2K [1] training set and evaluated on the DIV2K validation set for scale \(\times 2\).

The results demonstrate that EDSR (LLR, LR) trained on our generated pairs (LLR, LR) achieves superior performance compared to ICF-SRSR, which illustrates the merit of our method for generating useful training image pairs. Fig. 3 further visualizes the qualitative results of ICF-SRSR on two validation images from the DIV2K [1] dataset. Our method achieves comparable results to the supervised methods [9, 34] while restoring more details compared to the unsupervised methods [46, 48]. We note that the results of ZSSR [46] show lost information and scratched texts, and those of MZSR [48] include severe artifacts and color shifting. For an in-depth comparison, we also provide quantitative results with the SSIM metric in our supplementary material.

### Evaluation on real-world datasets

We train and evaluate ICF-SRSR for each scale \(\times 2\) and \(\times 4\) independently on the LR images of each Canon and Nikon camera from the real-world dataset RealSR-V3 [6] separately, and also on the LR images of the real-world dataset DRealSR [59], in a self-supervised manner. We further train the model EDSR [34] on our generated (LLR, LR) image pairs. We compare our method with the supervised methods [6, 20, 34, 56, 70] trained on real paired images, which serve as the upper bounds for the SR problem. On the other hand, we employ the supervised models EDSR [34], RRDB [56], IKC [18], BlindSR [10] and DRN-S [20], pre-trained on the synthetic DIV2K [1] dataset, to super-resolve the LR images in the testing sets of RealSR-V3 [6] and DRealSR [59]. Moreover, we utilize KernelGAN [3] to approximate the down-sampling kernel from a single LR image and use ZSSR [46] as a zero-shot SR method to apply to real LR images. Our extensive comparisons with the various methods trained on real and synthetic datasets are summarized in Tab. 2. We illustrate that our self-supervised method can achieve superior performance compared to the methods pre-trained on the synthetic datasets and the unsupervised method ZSSR [46]+KernelGAN [3] in terms of both PSNR and SSIM metrics, which emphasizes the fact that models trained on synthetic datasets with known degradations cannot perform well in real-world scenarios. We qualitatively compare our method with the various existing methods on the RealSR-V3 dataset and visualize the SR results and their corresponding error maps with respect to the GT (HR) in Fig. 4. We demonstrate that our self-supervised method can achieve comparable and sometimes better performance than the supervised method LP-KPN [6] trained on real paired images. We note that our method is generally more suitable for restoring texture and preserving color compared to the supervised method IKC [18] and the unsupervised method ZSSR [46]+KernelGAN [3], as evident in appearance and in the PSNR, SSIM, and mean absolute error (MAE) metrics. We show more qualitative results in the supplementary material.
### Ablation study

We conduct various ablation studies on the model design, down-sampling operators, few-shot learning, augmentation, and the effect of loss functions to better analyze our method.

**Model design.** We conduct an experiment to show the superiority of the developed baseline as a single conditional model compared to two independent models, and also the effect of training our two-stage framework compared to training each Up-Down and Down-Up stage separately. Our results on the synthetic dataset DIV2K [1] and the Canon and Nikon images from the real-world dataset RealSR-V3 [6] for scale \(\times 2\) show that training with two independent models or using only one stage (half) results in unsatisfactory performance, demonstrating the uniqueness of our method in using a single invertible scale-conditional model, as shown in Tab. 3.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Training Set**} & \multirow{2}{*}{**Supervision**} & \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**RealSR (Canon)**} & \multicolumn{2}{c}{**RealSR (Nikon)**} & \multicolumn{2}{c}{**DRealSR**} \\ & & & \(\times 2\) & \(\times 4\) & \(\times 2\) & \(\times 4\) & \(\times 2\) & \(\times 4\) \\ & & & (PSNR/SSIM) & (PSNR/SSIM) & (PSNR/SSIM) & (PSNR/SSIM) & (PSNR/SSIM) & (PSNR/SSIM) \\ \hline \hline \multirow{6}{*}{Synthetic} & \multirow{6}{*}{Supervised} & Bicubic & 30.35/0.876 & 25.80/0.744 & 29.66/0.854 & 25.50/0.718 & 32.67/0.877 & 30.56/0.820 \\ \cline{2-9} & & EDSR [34] & **30.58**/**0.880** & 26.05/0.754 & **30.00**/**0.861** & 25.89/0.735 & **32.82**/0.869 & **30.64**/**0.821** \\ & & RRDB [56] & - / - & 26.05/ - & - / - & 25.91/ - & - / - & 30.55/ - \\ & & IKC [18] & - / - & 25.71/0.751 & - / - & 25.27/**0.740** & - / - & - / - \\ & & BlindSR [10] & 27.99/0.822 & - / - & 26.68/0.794 & - / - & - / - & - / - \\ & & DRN-S [20] & 30.57/0.879 & **26.07**/**0.755** & 29.99/0.860 & **25.92**/0.736 & 32.81/**0.879** & 30.63/**0.821** \\ \hline \multirow{6}{*}{Real-world} & \multirow{6}{*}{Supervised} & EDSR [34] & 32.45/0.913 & 27.59/0.792 & 31.59/**0.888** & 27.14/0.771 & 34.24/**0.908** & **32.03**/0.855 \\ & & RRDB [56] & - / - & **27.90**/ - & - / - & **27.39**/ - & 33.89/0.906 & 31.92/0.856 \\ & & RCAN [70] & **32.69**/**0.919** & 27.66/0.793 & **31.61**/**0.888** & 27.09/0.771 & **34.34**/**0.908** & 31.85/**0.857** \\ & & LP-KPN [6] & - / - & 27.76/**0.807** & - / - & 26.34/**0.774** & 33.88/ - & 31.58/ - \\ & & DRN-S [20] & 32.50/0.912 & 27.79/0.805 & 31.43/0.884 & - / - & 33.91/0.898 & - / - \\ \hline & Unsupervised & ZSSR [46]+Kernel-GAN [3] & 28.79/0.826 & 23.68/0.673 & 27.54/0.799 & 22.46/0.645 & - / - & - / - \\ \hline & \multirow{2}{*}{Self-supervised} & **ICF-SRSR** (Ours) & 30.98/0.885 & 26.27/0.763 & 30.31/0.864 & 25.89/**0.742** & 32.87/0.880 & 30.65/0.821 \\ & & **EDSR (LLR,LR)** (Ours) & **31.13**/**0.888** & **26.32**/**0.764** & **30.33**/**0.865** & **25.92**/**0.742** & **32.91**/**0.881** & **30.68**/**0.823** \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative comparison on real-world datasets.** We compare our self-supervised ICF-SRSR and EDSR (LLR,LR), _i.e._, the model EDSR [34] trained on our generated paired dataset (LLR,LR), to several supervised/unsupervised methods trained on the synthetic DIV2K [1], real-world RealSR-V3 [6] and DRealSR [59] datasets for scales \(\times 2\) and \(\times 4\) with PSNR and SSIM metrics.
**Evaluation of down-sampling.** Due to the invertibility attribute of ICF, our method can be interpreted as a learnable down-sampler. Therefore, we analyze our model \(f_{\theta}\) as a down-sampling operator in three aspects. **First.** We train ICF-SRSR on HR images from RealSR-V3 [6] and evaluate the model on HR images of the test dataset to gather the generated down-sampled images. Then, we compare ground-truth LR images with our generated LR images, as well as LR images obtained by down-sampling functions, _e.g._, Nearest, Bicubic, Gaussian+Nearest, and Gaussian+Bicubic (\(\sigma=0.4\)). Tab. 4 provides a comparison of LR images for different down-sampling models based on PSNR. The values show the superiority of our learnable down-sampling method in generating more realistic LR images compared to those of other down-sampling operators. **Second.** We further analyze our learnable down-sampling operator \(f_{\theta}\) compared to non-learnable down-sampling approaches. We use our learnable down-sampling operator \(f_{\theta}\), bicubic down-sampling, and Gaussian (\(\sigma=0.4\)) filtering followed by nearest and bicubic down-sampling operators to generate the LLR images from given input LR images on the training sets. Then, we train the model EDSR on the generated paired images (LLR, LR) to learn to generate SR images given their LR counterparts. We summarize the results for scale \(\times 2\) on the benchmarks Set5 [4] and Set14 [64], and the Canon and Nikon sets of the RealSR-V3 [6] dataset, for both non-learnable and our learnable down-sampling operators in Tab. 5. The results indicate the effect of our learnable down-sampling operator in generating appropriate image pairs for training, which results in a significant improvement compared to known down-sampling operators. **Third.** By using different down-sampling methods, we first generate LR samples from the real training HR images and then train a vanilla EDSR model using the generated pairs, _i.e._, (LR, HR). As shown in Tab. 6, our synthesized pairs provide more suitable training data than those from the previous learnable down-sampling methods ADL [49] and DRN-S [20], as EDSR performs much better on the \(\times 2\) SR tasks of the real dataset RealSR-V3 [6].

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Down-sampling**} & \multicolumn{2}{c}{**Canon**} & \multicolumn{2}{c}{**Nikon**} \\ & \(\times 2\) & \(\times 4\) & \(\times 2\) & \(\times 4\) \\ \hline Nearest & 29.35 & 24.51 & 28.54 & 23.91 \\ Bicubic & 30.27 & 25.76 & 29.71 & 25.56 \\ Gaussian+Nearest & 29.62 & 24.65 & 28.87 & 24.09 \\ Gaussian+Bicubic & 30.61 & 25.95 & 30.12 & 25.81 \\ \hline **ICF-SRSR** & **32.46** & **28.93** & **32.12** & **29.15** \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation on down-sampling performance.**

Figure 4: **Qualitative comparisons on a real-world dataset.** We visualize the super-resolution results (first row) and their corresponding error maps with respect to the GT (second row) for an image captured by each Nikon and Canon camera. We compare our self-supervised method ICF-SRSR with the supervised method LP-KPN [6] and the unsupervised method ZSSR [46]+Kernel-GAN [3] trained on the RealSR-V3 [6] dataset and the supervised method IKC [18] trained on the synthetic dataset DIV2K [1] for scale \(\times 4\) with PSNR, SSIM, and MAE metrics.
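The "Second" experiment above reduces to: down-sample each LR image to an LLR image with some operator, then train an SR network on the resulting (LLR, LR) pairs. A minimal sketch of that pair-synthesis step follows; `block_average_downsample` is a deliberately simple stand-in for the learned operator \(f_{\theta}\) (or the bicubic/Gaussian variants), not the paper's implementation.

```python
import numpy as np

def block_average_downsample(img: np.ndarray, scale: int) -> np.ndarray:
    """Toy down-sampler: average non-overlapping scale x scale blocks."""
    h = img.shape[0] // scale * scale
    w = img.shape[1] // scale * scale
    c = img.shape[2]
    cropped = img[:h, :w]
    return cropped.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))

def make_training_pairs(lr_images, downsample, scale=2):
    """(LLR, LR) pairs: the LR image is the target and its down-sampled
    version is the network input."""
    return [(downsample(lr, scale), lr) for lr in lr_images]

lr_images = [np.random.rand(48, 48, 3) for _ in range(4)]
pairs = make_training_pairs(lr_images, block_average_downsample)
print(pairs[0][0].shape, pairs[0][1].shape)  # (24, 24, 3) (48, 48, 3)
```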
**Few-shot learning.** We train and evaluate our method on small datasets to show the advantage of our method in learning from only a few images without requiring a large-scale training dataset. Therefore, we train the model ICF-SRSR (Small) on the test sets of the synthetic datasets Set14 [64], BSD100 [38] and Urban100 [24] and also the real-world datasets RealSR-V3 [6] and DRealSR [59], and show the results on the corresponding test datasets in Tab. 7. We demonstrate that our method achieves only slightly lower performance when trained on very small datasets than our model ICF-SRSR (Large) trained on large-scale training datasets. **Multi-scale augmentation.** As we mention in Sec. 3.4, augmented data with different scales can lead to performance improvement. Therefore, when we train ICF-SRSR directly on the test samples, we adopt diverse scaling factors as well as their reciprocals to compensate for the limited number of training data. In Tab. 8, we show that increasing the number of inputs induced by various scaling factors, _e.g._, \(\times 2\), \(\times 4\), and \(\times 8\), and their inverses leads to superior performance on the RealSR-V3 [6] dataset. More details about our multi-scale augmentation strategy are described in our supplementary material. **Effects of loss functions.** We also analyze the effect of each loss function discussed in Sec. 3.3. As shown in Tab. 9, our novel self-supervised consistency loss \(\mathcal{L}^{\text{Cons}}\) can drastically improve the model performance when it is added to the color preserving loss \(\mathcal{L}^{\text{Color}}\) on both synthetic and real-world datasets. In our supplementary material, we further discuss the effect of the weight \(\lambda_{\text{Color}}\).

## 5 Conclusion

We propose ICF, a novel invertible scale-conditional function that receives an image and an arbitrary scaling factor and generates the resized image, and can reconstruct the same input image from the resized image and the inverse scaling factor. We then utilize ICF to design a self-supervised real-world single-image super-resolution framework, ICF-SRSR. Accordingly, our framework is able to generate up-sampled and down-sampled images simultaneously, where the generated down-sampled images can be used to construct paired images appropriate for training existing models. Extensive experiments demonstrate the strengths of our self-supervised method on both synthetic and real-world datasets, and its superior performance on the real-world datasets compared to supervised models trained on synthetic datasets. **Limitations and future works.** One remaining limitation is that we only apply our method to a few real-world datasets due to the lack of aligned LR-HR image pairs for evaluation in other real-world datasets. Therefore, we aim to provide a large-scale real-world dataset from various scenes for better evaluation in our future work. Moreover, we will investigate the applications of our defined ICF to self-supervised image warping and other image restoration tasks.
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Down-sampling** & **Set5** & **Set14** & **Canon** & **Nikon** \\ \hline Bicubic & 35.30 & 31.53 & 30.41 & 29.80 \\ Gaussian+Nearest & 30.79 & 28.39 & 29.41 & 28.60 \\ Gaussian+Bicubic & 35.43 & 31.84 & 30.47 & 29.86 \\ \hline **ICF-SRSR** & **37.09** & **32.91** & **31.13** & **30.33** \\ \hline \hline \end{tabular} \end{table} Table 5: **Comparison with non-learnable down-sampling operators to generate paired training data for SR task.**

\begin{table} \begin{tabular}{l c c} \hline \hline **Down-sampling** & **Canon (\(\times 2\))** & **Nikon (\(\times 2\))** \\ \hline ADL [49] & 30.76 & 30.44 \\ DRN-S [20] & 30.82 & 30.24 \\ \hline **ICF-SRSR** & **31.94** & **31.24** \\ \hline \hline \end{tabular} \end{table} Table 6: **Comparison with learnable down-sampling operators to generate paired training data for SR task.**

\begin{table} \begin{tabular}{l c c c} \hline \hline **Loss** & **DIV2K (\(\times 2\))** & **Canon (\(\times 2\))** & **Nikon (\(\times 2\))** \\ \hline \(\mathcal{L}^{\text{Color}}\) only & 30.31 & 29.12 & 28.38 \\ \(\mathcal{L}^{\text{Color}}\),\(\mathcal{L}^{\text{Cons}}\) & **35.19** & **30.98** & **30.31** \\ \hline \hline \end{tabular} \end{table} Table 9: **Effect of loss functions.**
2307.03884
Noisy Tensor Ring approximation for computing gradients of Variational Quantum Eigensolver for Combinatorial Optimization
Variational Quantum algorithms, especially Quantum Approximate Optimization and Variational Quantum Eigensolver (VQE) have established their potential to provide computational advantage in the realm of combinatorial optimization. However, these algorithms suffer from classically intractable gradients limiting the scalability. This work addresses the scalability challenge for VQE by proposing a classical gradient computation method which utilizes the parameter shift rule but computes the expected values from the circuits using a tensor ring approximation. The parametrized gates from the circuit transform the tensor ring by contracting the matrix along the free edges of the tensor ring. While the single qubit gates do not alter the ring structure, the state transformations from the two qubit rotations are evaluated by truncating the singular values thereby preserving the structure of the tensor ring and reducing the computational complexity. This variation of the Matrix product state approximation grows linearly in number of qubits and the number of two qubit gates as opposed to the exponential growth in the classical simulations, allowing for a faster evaluation of the gradients on classical simulators.
Dheeraj Peddireddy, Utkarsh Priyam, Vaneet Aggarwal
2023-07-08T03:14:28Z
http://arxiv.org/abs/2307.03884v1
Noisy Tensor Ring approximation for computing gradients of Variational Quantum Eigensolver for Combinatorial Optimization

###### Abstract

Variational Quantum algorithms, especially Quantum Approximate Optimization and Variational Quantum Eigensolver (VQE) have established their potential to provide computational advantage in the realm of combinatorial optimization. However, these algorithms suffer from classically intractable gradients limiting the scalability. This work addresses the scalability challenge for VQE by proposing a classical gradient computation method which utilizes the parameter shift rule but computes the expected values from the circuits using a tensor ring approximation. The parametrized gates from the circuit transform the tensor ring by contracting the matrix along the free edges of the tensor ring. While the single qubit gates do not alter the ring structure, the state transformations from the two qubit rotations are evaluated by truncating the singular values thereby preserving the structure of the tensor ring and reducing the computational complexity. This variation of the Matrix product state approximation grows linearly in number of qubits and the number of two qubit gates as opposed to the exponential growth in the classical simulations, allowing for a faster evaluation of the gradients on classical simulators.

## I Introduction

Quantum computing has long been touted for its potential to solve some complex problems much more efficiently than classical computers [1; 2]. Although the fruition of the idea is further into the future, researchers have been exploring the practical applicability of the current generation of quantum computers. Most current quantum processors are severely limited by small qubit counts, high noise levels, and inefficient error mitigation techniques, calling for a class of algorithms robust to noise and error. Variational Quantum Algorithms (VQA) have been studied widely for their resilience to the noise from decoherence, making them an ideal choice of algorithms for various applications on gate-based Noisy Intermediate Scale Quantum (NISQ) devices. Two such algorithms of prominence, Variational Quantum Eigensolver (VQE) and Quantum Approximate Optimization Algorithm (QAOA), evaluate the expected energy of a state resulting from a short parameterized circuit (frequently referred to as an ansatz) with respect to an observable defined by a given problem. A classical outer-loop optimizer tries to find the optimal circuit parameters that minimize the expected energy. While QAOA implements a fixed ansatz inspired by adiabatic quantum computing, VQE utilizes a variable ansatz, offering flexibility to engineer the ansatz based on the hardware constraints and the problem at hand. This work chooses to focus on VQE, inspired by the recent advances of variable ansatz in quantum machine learning [3]. VQE, initially developed by Peruzzo et al. [4], has seen a number of applications in condensed matter physics [5; 6; 7], quantum chemistry [8; 9; 10] and quantum mechanics [11; 12]. Optimization is one of the frontrunners among the applications being studied for potential quantum advantage from VQE and adjacent algorithms [13; 14; 15]. Combinatorial optimization is a class of problems of practical relevance, with applications spanning transportation, logistics, manufacturing, etc.
Studies have indicated that the exponentially growing state space and quantum entanglement can improve the chances of finding the right solution with a potential speedup [16; 17]. Even minor improvements to optimization problems from quantum algorithms can potentially have a large impact on society. In the context of VQE, a multi-qubit Hamiltonian is prepared with its ground state encoding the solution of the optimization problem, and the algorithm optimizes the circuit parameters to minimize the energy of the Hamiltonian. The algorithm has been extended to use filtering operators [13] and iterative approaches [18] to improve the performance on combinatorial optimization. The approach has also been validated on several practical applications of optimization (e.g., Job Shop Scheduling [19], Vehicle Routing [20]).

Despite promising prospects, VQAs, and more broadly quantum circuits, are hindered by a plethora of problems in the current era of quantum computing, with the primary impediments being the limited number of qubits, the physical cost of implementing quantum circuits, and decoherence noise. Hybrid algorithms also suffer from the asymmetric scaling of quantum and classical resources: circuit execution scales linearly in the number of qubits and circuit depth, while classical gradient evaluation scales exponentially. Note that the gradients of the variational parameters in VQAs were evaluated using either automatic or numeric differentiation until Schuld et al. [21] formalized the notion of gradients computed on quantum hardware, popularized as the parameter shift rule. This method estimates the gradients by computing the energy of the wave functions generated by identical circuits with the parameter for which the gradient is to be estimated shifted by certain values. The parameter shift rule alleviates the imbalance in the scalability, albeit at the cost of executing a much larger number of quantum circuits than the other methods. Given the inconsistency in evaluating the expected values from circuits due to decoherence and inefficient error mitigation techniques, on top of the statistical noise from measurement, a larger number of circuits can lead to inaccurate results.

In order to address the issues of scalability, accuracy and cost of execution, this manuscript proposes a classically simulated quantum circuit execution method that approximates the initial and intermediate quantum states using a low-rank tensor ring (TR) to compute the expected energy, which is in turn used to approximate the gradients of a VQE. Built upon the Matrix Product State (MPS) approximation of many body quantum states [22], the tensor ring VQE (TR-VQE) formulates a combinatorial optimization in the same way as a naive VQE, using the parameter shift rule to compute the gradients. However, the expected values of the shifted circuits used to compute the gradients are evaluated by approximating the initial quantum state with a TR as opposed to an MPS, where the single qubit and two qubit gates corresponding to the circuit ansatz are evaluated using tensor contractions. It must be noted that while a single qubit gate does not change the structure of the tensor network, a two qubit gate contracted with the two corresponding tensors can alter the network by increasing the tensor size or its rank. The proposed method retains the tensor ring structure and rank by truncated singular value decomposition of the higher order tensor resulting from the application of a two-qubit gate.
The consistent low-rank structure allows for an exponential speedup with respect to the number of qubits and circuit depth, compared to the MPS approximation and the brute-force simulation with the full state vector. This truncation, however, induces noise in the circuit executions similar to the decoherence in actual quantum computers. Therefore, classically simulating a noisy quantum computer instead of a perfect quantum computer only scales linearly in the number of qubits and circuit depth [23]. The MPS representation tries to simulate ideal quantum computation without noise, but the literature suggests that the noise in the current generation of quantum computers limits the amount of entanglement that can be built into a quantum state. Given the computational cost of simulating ideal quantum computers, this may not be an ideal prospect, since such simulations are not representative of noisy quantum computations. Moreover, given the robustness of VQAs to noise, this kind of noisy simulation with the benefits of scalability can be especially useful for machine learning and optimization. Furthermore, Liu et al. [24] highlights that the presence of noise in VQAs can naturally help the optimizer avoid saddle points. We posit that this advantage extends to TR-VQE as well due to the induced noise.

The proposed method is validated on multiple instances of the max-cut problem, compared against F-VQE [13] and a naive VQE using the parameter shift rule. The expected values of the circuit for the benchmarks are computed using simulations implementing a non-noisy MPS approximation, highlighting the improved performance of the noisy TR approximation over the MPS approximation. The rest of the manuscript is organized as follows: Section I.1 recounts the existing literature related to the use of tensor networks in approximating quantum circuits and their applications in QML. Section II.2 formulates the notion of VQE to solve the maximum cut problem introduced in Section II.1. Section III.1 discusses the proposed method used to compute the gradients of a variational quantum circuit using the TR approximation of a quantum state, and Section III.2 addresses the complexity analysis of the proposed method. The numerical simulations are explained in Section IV, followed by a discussion on limitations and future directions in Section V.

Figure 1: (Left) High-level architecture of a quantum-classical hybrid algorithm with the quantum processor implementing a circuit parameterized by \(\theta_{t}\) at iteration \(t\), which is used to compute the expected value with regard to the observable \(O\). The classical processor processes the gradients of the expected value with respect to the parameters \(\theta\) and updates them by gradient-descent-like methods. (Right) An example of a parameterized circuit with 4 qubits, with single qubit gates parameterized by \(\theta_{i}\) and entanglement encoded into the quantum state using the \(CX\) gates. A layer comprises a certain set of gates, and each layer can be repeated \(D\) times based on the complexity of the problem.

### Related Work

Since its inception, the tensor network approach has been much more widely explored in the context of classical simulation of quantum computations than the brute-force statevector simulation or other graphical and distributed methods [25; 26]. Matrix Product States especially are widely regarded for their ability to efficiently represent moderately entangled quantum many body states [22].
The idea has been further extended to techniques that efficiently simulate quantum circuits [27] by contracting tensor networks at a fraction of the cost of the statevector simulation, which holds the full \(2^{N}\)-sized vector. Building upon this literature, several variations have emerged for specific cases, such as Projected Entangled Pair States (PEPS) for two-dimensional circuits [28], Tree Tensor Networks (TTN) for circuits with tree-like connectivity [29], and the Multi-scale Entanglement Renormalization Ansatz (MERA) [30]. Note that the naive MPS-based circuit simulation (which will be referred to as the non-noisy MPS approximation in this manuscript), as formulated in [27] and widely implemented across quantum computing platforms like Qiskit, does not efficiently encode circular entanglement between the first and last qubits. Further, each application of a two-qubit gate contraction increases the tensor sizes, which in turn increases the computational complexity as the number of two-qubit gates in the circuit grows. To circumvent this shortcoming, Zhou et al. [23] proposed a truncated MPS approximation to simulate noisy quantum computers, which demonstrates a linear complexity in the number of qubits and circuit depth. The noisy simulation addresses the issue of increasing tensor size by approximating the larger tensor after the application of a two qubit gate with tensors of smaller size. The higher order tensor is decomposed into two lower order tensors by truncated singular value decomposition. This approximation preserves the tensor sizes after the application of each gate, unlike the previous iterations of MPS-based simulation.

A number of quantum-inspired tensor network methods have been explored in the machine learning literature for supervised learning. Huggins et al. [31] implements MPS and Tree Tensor Network models to solve binary classification. Other tensor network based methods using PEPS and MPS were demonstrated to be effective in image classification tasks [32; 33; 34]. The aforementioned literature mostly explores quantum-inspired classical machine learning techniques, but very few works have probed the utility of tensor networks in augmenting quantum machine learning techniques. Peddireddy et al. [35] extends the singular value thresholding method from Zhou et al. [23] to tensor rings implemented with variational quantum classifiers, demonstrating scalability and improved performance over the non-noisy MPS approximation. Tensor rings also encode circular entanglement more efficiently than MPS due to the ring structure. While Zhou et al. [23] evaluates the approximated expectations using a noisy MPS representation, they do not explore the notion of extending it to computing gradients of variational circuits. Therefore, the application of noisy circuit simulation to scale the classical optimization loop of VQE is still an open problem. Furthermore, extending this approximation method from MPS to tensor rings can also improve representability. This work builds upon [35] and [23] by adapting the noisy tensor ring representation to compute the approximate gradients of the parameters of a variational quantum eigensolver using the parameter-shift rule. Although the proposed TR-based representation computes less accurate gradients than non-noisy MPS-based representations, owing to the additional information that is removed in the form of truncated singular values, the TR-based approach scales much more efficiently.
## II Problem Setup

### Max-Cut Optimization Problem

This section briefly introduces the maximum cut (max-cut) problem and its mathematical formulation in the context of quantum computers. Max-cut is an NP-hard binary optimization problem with a history of applications in statistical physics, VLSI design, clustering, etc. Given an undirected graph \(G=(V,E)\), with \(V\) and \(E\) representing the nodes and edges of the graph, the problem aims to maximize the summed weights of the edges that are cut by grouping the nodes of the graph into two subsets and choosing the optimal subgroups. The mathematical definition follows the QUBO formulation [36]: consider a graph of \(n\) nodes with the weights of the edges given by \(w_{ij}\) for \((i,j)\in E\). The nodes of the graph are cut into two subgroups labelled \(+1\) and \(-1\).

Figure 2: **Illustration of a tensor train and tensor ring decomposition.** A higher order tensor with order \(d\) and dimensionality \(n_{k}\) at the \(k\)-th edge is decomposed into (i) a Tensor Ring with all the \(d\) tensors multiplied at the indices denoted by \(r_{k}\) and (ii) a Tensor Train whose border tensors are constrained to have an order of 2 while the internal tensors have an order of 3.

The problem attempts to maximize the objective function \(C(x)\) given by the sum of the weights of the edges connecting the nodes in \(+1\) to the nodes in \(-1\), which assumes the form:

\[C(x)=\sum_{i,j}w_{ij}x_{i}(1-x_{j}) \tag{1}\]

where \(x\in\{0,1\}^{n}\) and \((i,j)\in E\). The bitstring \(x\) corresponds to an instance of the grouping schema where \(x_{i}=0\text{ or }1\) represents the \(i\)-th node being assigned to the subgroup \(+1\) or \(-1\) respectively. In order to find the solution to the given objective function with a quantum computer, we construct an Ising Hamiltonian [37] corresponding to the function by substituting \(x_{i}\) with its matrix transformation \(\frac{I-Z_{i}}{2}\), where \(Z_{i}\) are the Pauli \(Z\) operators that act on qubit \(i\) and \(I\) is the identity matrix:

\[C(x)=\sum_{i,j}\frac{1}{4}w_{i,j}(I-Z_{i})(I+Z_{j}) \tag{2}\]

\[C(x)=\frac{1}{2}\sum_{i<j}w_{ij}-\frac{1}{2}\sum_{i<j}w_{ij}Z_{i}Z_{j} \tag{3}\]

Essentially, maximizing the objective of the given optimization problem is equivalent to minimizing the energy of the Ising Hamiltonian given by:

\[\mathcal{H}=\sum_{i,j}w_{i,j}Z_{i}Z_{j} \tag{4}\]

whose ground state corresponds to the solution of the optimization. The full Hamiltonian \(\mathcal{H}\in\mathbb{C}^{2^{n}}\) is never constructed explicitly but is represented using a combination of the Pauli Z operators.
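To make the correspondence concrete, the following sketch brute-forces a toy instance and checks that the bitstring maximizing the cut of Eq. (1) also minimizes the Ising energy of Eq. (4); the edge list here is a hypothetical example, not data from the paper.

```python
import itertools
import numpy as np

# Toy weighted graph: edges (i, j, w). Node labels x_i in {0, 1}.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.5)]
n = 4

def cut_value(x):
    # Eq. (1) restricted to undirected edges: an edge contributes w_ij
    # exactly when its endpoints fall in different subsets.
    return sum(w for i, j, w in edges if x[i] != x[j])

def ising_energy(x):
    # Eq. (4) with z_i = 1 - 2 x_i mapping bit 0/1 to spin +1/-1;
    # cut edges contribute -w_ij, so max-cut corresponds to min energy.
    z = 1 - 2 * np.asarray(x)
    return sum(w * z[i] * z[j] for i, j, w in edges)

# Brute-force check: the bitstring maximizing the cut minimizes the energy.
best = max(itertools.product([0, 1], repeat=n), key=cut_value)
print("best cut:", best, cut_value(best), "energy:", ising_energy(best))
```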
### Variational Quantum Eigensolver

VQE is one of the algorithms that utilizes parameterized quantum circuits to solve for an approximate solution of combinatorial optimization problems. Unlike QAOA, VQE does not enforce any constraints on the circuit ansatz and can therefore be altered to suit the hardware it is implemented on. The optimization problem is first translated to a qubit Hamiltonian \(\mathcal{H}\) whose eigenvalues correspond to the costs of various solutions, with the ground state being associated with the optimal solution of the problem. A quantum circuit with parameterized unitary rotations denoted by \(U(\theta)\) is applied to an initial state \(\ket{\psi_{0}}\) (generally chosen to be the basis state \(\ket{0}^{\otimes n}\)), resulting in a trial wavefunction:

\[\ket{\psi(\theta)}=U(\theta)\ket{\psi_{0}} \tag{5}\]

Here, \(U(\theta)\) represents a chosen ansatz \(U\) with variational parameters given by \(\theta\). The energy landscape of the Hamiltonian can be traversed using this wavefunction to estimate the expected energy. We choose the notation \(\langle H(\theta)\rangle\) to represent the expectation value of \(\ket{\psi(\theta)}\) with respect to the observable Hamiltonian \(\mathcal{H}\):

\[\langle H(\theta)\rangle=\bra{\psi(\theta)}\mathcal{H}\ket{\psi(\theta)} \tag{6}\]

The algorithm then updates the variational parameters of the circuit employing an outer-loop optimizer using gradient descent or other adjacent methods. The process is repeated until we arrive at a sufficiently low energy. The quality of the solution at the \(t\)-th iteration is evaluated using the approximation ratio, which is defined as follows:

\[\alpha=\frac{M-\langle H(\theta_{t})\rangle}{M-m} \tag{7}\]

where \(M\) represents the maximum possible Hamiltonian value and \(m\) the minimum. In other words, \(\alpha=1\) represents the optimal solution, and \(\alpha=0\) represents making no cuts. Most variational quantum algorithms including VQE are implemented as hybrid models that compute the expected value of the observable on a quantum computer while calculating gradients and updating the weights on a classical computer. The fundamental mechanics of the VQE algorithm is illustrated in Figure 1. Following the parameter shift rule [21; 38], when the variational parameters are components of a single qubit rotation gate, the gradient takes the following form:

\[\frac{\partial\left\langle H(\theta)\right\rangle}{\partial\theta^{i}}=\frac{1}{2}[\left\langle H(\theta+\tfrac{\pi}{2}\mathbb{1}_{i})\right\rangle-\left\langle H(\theta-\tfrac{\pi}{2}\mathbb{1}_{i})\right\rangle] \tag{8}\]

Given the choice of ansatz, we choose a circuit that only comprises \(CX\) (\(CNOT\)) gates and single qubit rotation gates, which form a universal gate set, thus simplifying the gradients to the closed form given in Equation 8, where \(\theta^{i}\) is the \(i\)-th element of \(\theta\), \(\left\langle H(\theta)\right\rangle\) corresponds to the energy of the Hamiltonian \(\mathcal{H}\) with respect to the wavefunction generated by the circuit \(U(\theta)\), and \(\mathbb{1}_{i}\) is a one-hot vector with the \(i\)-th value as \(1\).
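As a minimal illustration of Eq. (8), the sketch below applies the two-point shift rule to a black-box energy function; on the toy single-parameter model where \(\langle H(\theta)\rangle=\cos\theta_{0}\), the rule recovers the analytic derivative \(-\sin\theta_{0}\) exactly.

```python
import numpy as np

def parameter_shift_grad(energy, theta, shift=np.pi / 2):
    """Eq. (8): central gradient from two shifted energy evaluations
    per parameter. `energy` maps a parameter vector to <H(theta)>."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = shift
        grad[i] = 0.5 * (energy(theta + e) - energy(theta - e))
    return grad

# Toy check: <H> = cos(theta_0), so the rule must return -sin(theta_0).
energy = lambda t: np.cos(t[0])
theta = np.array([0.3])
print(parameter_shift_grad(energy, theta), -np.sin(0.3))
```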
## III Methodology

### Computing gradients using Tensor Rings

Since the gradients of VQE can be computed by implementing quantum circuits, it is crucial to be able to carry out the circuits efficiently. Although the parameter-shift method is faster than automatic differentiation, it requires a quantum processor to run identical copies of the ansatz with different parameters numerous times to arrive at the gradients (more discussion on this is provided in Section III.2). This could present an impediment given the limited availability of quantum computers and the cost of each implementation. Therefore, it is essential to study the utility of classical simulation of quantum circuits in assisting the optimization procedure. Tensor networks have been shown to be effective in approximating quantum many body systems and are thus a strong contender among the methods for efficiently simulating quantum circuits. A tensor network can be easily understood via Penrose diagrams or tensor network diagrams, where each diagram corresponds to a graph of multiple nodes with each node representing a tensor. A tensor is a multidimensional array, with its order denoting the number of its dimensions or edges.

A popular approximation strategy for quantum systems involves Matrix Product States (MPS) or Tensor Trains (TT), a class of tensor networks that aim to represent a higher order tensor as a chain of order-3 tensors (see Figure 2). This representation has the advantage of topological similarity with a multi-qubit system, where each tensor corresponds to a single qubit and the contraction between the tensors encodes the entanglement between the qubits. However, TTs are limited in their flexibility and representation ability due to the constraint on their border rank. Since the border ranks are much lower than the inner ranks, this representation may not be optimal for some specific quantum systems. Also, an optimal TT representation greatly depends on the order of the products, restricting the choice of ansatz. Note that the border rank constraints present the same hindrances in the application of TTs to classical datasets as well. In order to ameliorate these issues, researchers in the area of classical machine learning have adopted Tensor Rings (TR) to represent the data [39; 40]. The TR structure relaxes the rank constraints on the border tensors, increasing the expressibility of the tensors. TR decomposition multiplies the tensors circularly, therefore removing the variance to permutations of the multiplicative order. A notable advantage of the TR representation with respect to quantum states is flexibility in the choice of the ansatz. To explain this further, let us assume a circuit similar to the one shown in Figure 1b, where entanglement is introduced between the first and the last qubits using a \(CX\) between the said qubits. TR representations are a better fit to encode this kind of cyclic entanglement, therefore improving the set of ansatz choices for the problem. A quantum state \(\ket{\psi}\in\mathbb{C}^{2^{N}}\) can be approximated by a tensor ring with \(N\) tensors (corresponding to \(N\) qubits) circularly multiplied, with each tensor denoted by \(\tau(n)\):

\[\ket{\psi}=\sum_{i_{1}\dots i_{N}}\sum_{r_{1}\dots r_{N}}\tau(1)^{i_{1}}_{r_{N}r_{1}}\tau(2)^{i_{2}}_{r_{1}r_{2}}\dots\tau(N)^{i_{N}}_{r_{N-1}r_{N}}\ket{i_{1}i_{2}\dots i_{N}} \tag{9}\]

Here, the free indices \(i_{n}\in\{0,1\}\) span the \(2^{N}\) dimensional Hilbert space corresponding to the quantum state, whereas \(r_{n}\) represent the bond indices (indices connecting the tensors) with rank \(\chi_{n}\), which determines the quality of the approximation with entangled states, i.e., higher values of \(\chi_{n}\) are better able to represent strongly entangled states. The rank of the given tensor representation for \(\ket{\psi}\) is denoted by \((\chi_{1},\chi_{2},\ldots,\chi_{N})\). Throughout the manuscript we choose \(\chi_{n}=\chi\) for all \(n\), reducing the number of hyperparameters. The choice of \(\chi\), hereafter referred to as the tensor ring bond, significantly determines the representation ability and therefore the performance of the algorithm for a specific problem. Each tensor in the proposed TR representation is a third order tensor with dimensions \(\chi\times\chi\times 2\). The exponential reduction in storage complexity is apparent: where a full quantum state is represented by \(2^{N}\) parameters, its TR approximation can be represented using only \(2N\chi^{2}\) parameters.
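The parameter counting above is easy to verify: a ring of \(N\) order-3 tensors of shape \(\chi\times 2\times\chi\) stores \(2N\chi^{2}\) numbers, versus \(2^{N}\) for the full state vector. A short sketch follows; the index layout \((r_{n-1},i_{n},r_{n})\) is our own convention, not mandated by the text.

```python
import numpy as np

def tensor_ring(num_qubits: int, chi: int):
    """A tensor ring as a list of order-3 tensors tau(n), with index
    layout (r_{n-1}, i_n, r_n): bond dimension chi, physical dimension 2."""
    return [np.zeros((chi, 2, chi)) for _ in range(num_qubits)]

N, chi = 20, 8
tr = tensor_ring(N, chi)
tr_params = sum(t.size for t in tr)
print(f"TR parameters: {tr_params} (= 2*N*chi^2 = {2 * N * chi**2})")
print(f"Full statevector parameters: {2**N}")
```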
The approximation for a typical initialization for VQAs, i.e., \(\ket{0}^{\otimes N}\), can be easily computed to be a tensor ring with each tensor of dimension \(\chi\times\chi\times 2\), where the value of the tensor is 1 at the index (1,1,1) and 0 elsewhere, represented by \(\mathbb{1}_{(1,1,1)}\). However, if a different initialization is to be chosen, constructing an approximation may not be as straightforward, but efficient algorithms for TR decomposition have been studied at length in [41]. While a TR can represent a quantum state, it would also need to be transformed by parameterized rotations in order to function as specified in VQAs. Given the assumption of utilizing only single qubit gates and \(CX\) gates in order to simplify the parameter shift rule, it is sufficient to study the transformations of the TR corresponding to the aforementioned gate set. Unitary transformations of single qubits are represented by a \((2\times 2)\) matrix, which is a 2nd order tensor. The associated matrix multiplication can be implemented by contracting the unitary tensor along the free edge of the tensor corresponding to a qubit, as specified in the following equation:

\[\tau^{\prime}(n)^{i^{\prime}_{n}}_{r_{n-1}r_{n}}=\sum_{i_{n}}U_{i^{\prime}_{n}i_{n}}\tau(n)^{i_{n}}_{r_{n-1}r_{n}} \tag{10}\]

\(U_{i^{\prime}_{n}i_{n}}\) is the 2nd order tensor with indices \(i^{\prime}_{n}\) and \(i_{n}\) corresponding to the unitary matrix acting on the \(n\)-th qubit, which is contracted along the edge \(i_{n}\) with the \(n\)-th tensor denoted by \(\tau(n)\) spanning the indices \(r_{n-1},r_{n}\) and \(i_{n}\), resulting in the new tensor \(\tau^{\prime}(n)_{r_{n-1}r_{n}}\). Note that the transformation associated with a single qubit rotation (visually illustrated in Figure 3) does not alter the structure of the tensor ring, preserving the storage complexity.

Figure 3: Tensor Ring transformation with a single qubit gate
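A sketch of the \(\ket{0}^{\otimes N}\) initialization and the single-qubit contraction of Eq. (10) follows; we use 0-based indexing, so the manuscript's one-hot entry \((1,1,1)\) becomes index \((0,0,0)\), and the \((r_{n-1},i_{n},r_{n})\) index layout is again our own convention.

```python
import numpy as np

def zero_state_ring(num_qubits: int, chi: int):
    """TR approximation of |0...0>: each tensor is 1 at index (0, 0, 0)
    and 0 elsewhere (the manuscript's one-hot tensor, 0-indexed here)."""
    tensors = []
    for _ in range(num_qubits):
        t = np.zeros((chi, 2, chi))
        t[0, 0, 0] = 1.0
        tensors.append(t)
    return tensors

def apply_single_qubit_gate(tensors, gate, n):
    """Eq. (10): contract a 2x2 gate with the free (physical) index of
    tau(n); the ring structure and bond ranks are unchanged."""
    tensors[n] = np.einsum("ij,ajb->aib", gate, tensors[n])
    return tensors

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
tr = zero_state_ring(4, chi=4)
tr = apply_single_qubit_gate(tr, H, 0)
print(tr[0][0, :, 0])  # qubit-0 amplitudes [1/sqrt(2), 1/sqrt(2)]
```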
Since we assume a constant TR bond \(r_{m}=\chi\) and we know the dimensionality of \(i\) to be 2 ( the free indices span the quantum state), in this case, \(\tau^{\prime}\) has \(2\chi\) singular values. \(S_{r_{m}}\) is truncated resulting in a new diagonal matrix \(S^{\prime}_{r_{m}}\) with only the largest \(\chi\) values remaining. We also truncate \(X\) and \(Y\) accordingly to keep only the orthogonal vectors corresponding to the remaining singular values. We compute products of the matrices \(X,Y\) and \(S\) as follows to make up the new tensors at the sites \(m\) and \(n\) of the tensor ring. Note that while this method can only work with two qubit gates acting on adjacent qubits,this can be extended to a generic circuit using SWAP gates. \[\tau^{\prime}(m)^{i^{\prime}_{m}}_{r_{m-1}r_{m}}=X^{i^{\prime}_{m}}_{r_{m-1}r_ {m}}S^{\prime}_{r_{m}} \tag{14}\] \[\tau^{\prime}(n)^{i^{\prime}_{n}}_{r_{n-1}r_{n}}=Y^{i^{\prime}_{n}}_{r_{n-1}r_ {n}} \tag{15}\] Following the procedure specified, the resulting tensor ring would culminate with the same structure and dimensionality as before the procedure, preserving the storage complexity after each application of a two qubit rotation. It is to be noted, the specified operations at worst scale at \(O(\chi^{3})\), and without this approximation, the dimensionality of the tensor network approximation scales exponentially in the number of two-qubit rotations or the depth of the circuit, therefore increasing the computational complexity. Different stages of the two qubit rotation procedure with a TR is demonstrated in Figure 4. Given that an ansatz has been chosen for a variational algorithm (assuming the conditions of only constructing a circuit with parameterized single qubit gates and \(CX\) gates), it can be represented as a set of gates denoted by \(\mathbb{U}\), ordered by their position in the circuit i.e. a gate that is applied first to the quantum gate is placed at the beginning of the set, with the single qubit gates parameterized by \(\theta_{t}\). The final quantum state produced by the circuit can be approximated by a tensor ring that is initialized as \(\mathbb{1}_{(1,1,1)}\) and transformed with each gate in \(\mathbb{U}\) as specified in the procedure in the preceding paragraphs. In order to compute the expected energy with respect to the final quantum state, it must be decomposed into its linear sum of the expected energy of the unitary components of the Hamiltonian composed of Pauli matrices. \[\left\langle\psi(\theta)\right|\mathcal{H}\left|\psi(\theta)\right\rangle= \sum_{i,j}w_{i,j}\left\langle\psi(\theta)\right|Z_{i}Z_{j}\left|\psi(\theta)\right\rangle \tag{16}\] We propose to compute the expected energy with respect to a component \(Z_{p}Z_{q}\) using the TR representation by the application of single qubit Pauli Z gate at sites \(p\) and \(q\) and contracting it with the ring before the Z transformations along the edges that span the quantum Hilbert space (See Fig 5). 
Given that an ansatz has been chosen for a variational algorithm (assuming the conditions of only constructing a circuit with parameterized single qubit gates and \(CX\) gates), it can be represented as a set of gates denoted by \(\mathbb{U}\), ordered by their position in the circuit, i.e., a gate that is applied first to the quantum state is placed at the beginning of the set, with the single qubit gates parameterized by \(\theta_{t}\). The final quantum state produced by the circuit can be approximated by a tensor ring that is initialized as \(\mathbb{1}_{(1,1,1)}\) and transformed with each gate in \(\mathbb{U}\) as specified in the procedure in the preceding paragraphs. In order to compute the expected energy with respect to the final quantum state, it must be decomposed into the linear sum of the expected energies of the unitary components of the Hamiltonian composed of Pauli matrices:

\[\left\langle\psi(\theta)\right|\mathcal{H}\left|\psi(\theta)\right\rangle=\sum_{i,j}w_{i,j}\left\langle\psi(\theta)\right|Z_{i}Z_{j}\left|\psi(\theta)\right\rangle \tag{16}\]

We propose to compute the expected energy with respect to a component \(Z_{p}Z_{q}\) using the TR representation by applying single qubit Pauli Z gates at sites \(p\) and \(q\) and contracting the result with the ring before the Z transformations, along the edges that span the quantum Hilbert space (see Figure 5):

\[\tau^{\prime}(\theta)_{i_{1}\ldots,i^{\prime}_{p},\ldots,i^{\prime}_{q},\ldots i_{N}}=\sum_{i_{p},i_{q}}Z^{i^{\prime}_{p}}_{i_{p}}Z^{i^{\prime}_{q}}_{i_{q}}\tau(\theta)_{i_{1}\ldots,i_{p},\ldots,i_{q},\ldots i_{N}} \tag{17}\]

\[\left\langle\psi(\theta)\right|Z_{p}Z_{q}\left|\psi(\theta)\right\rangle=\sum_{i_{1},i_{2},\ldots i_{N}}\tau^{\prime}(\theta)_{i_{1},i_{2}\ldots i_{N}}\tau(\theta)_{i_{1},i_{2}\ldots i_{N}} \tag{18}\]

In the equations above, \(\tau(\theta)\) represents the final state produced by the ansatz \(\mathbb{U}\) parameterized by \(\theta\), approximated by a TR, and \(\tau^{\prime}(\theta)\) is produced after the Pauli Z transformations on the final state. Note that the indices \(i^{\prime}_{p}\) and \(i^{\prime}_{q}\) in \(\tau^{\prime}(\theta)\) have been renamed to \(i_{p}\) and \(i_{q}\) for a simplified representation. When computing the expected value, the order of the contractions becomes crucial to the computational complexity, but it has been established [42] that it can be computed effectively in \(O(N\chi^{3})\) steps. The total procedure to compute the expected value is presented in a more compact form in Algorithm 2. We utilize this algorithm to evaluate the gradients of the variational quantum eigensolver by computing the expected energy of the two circuits with shifted parameters, as shown in Algorithm 3. The gradients are then used to update the weights of the variational parameters in the same manner as the naive VQE.

Figure 5: Evaluating the expected energy with respect to a quantum state using the TR approximation

### Complexity

In terms of memory, we note that we construct and manipulate only a tensor ring with \(N\) tensors corresponding to \(N\) qubits, which grows at the scale of \(O(N\chi^{2})\) as opposed to the \(O(2^{N})\) of the full quantum state. Zhou et al. [23] establishes that the tensor network bond \(\chi\) can be chosen to be sufficiently low in order to simulate a noisy quantum computer at a linear computational complexity in the number of qubits \(N\) and circuit depth \(D\) (defined as the number of repeating parametrized blocks). The parameter shift rule, popularized for its ability to compute the gradients on a quantum computer, evaluates the gradients by computing the expectations with shifted weights. However, computing the expected values with an additive error \(\epsilon\) requires a many-fold implementation of the same circuit, generally in the order of \(O(1/\epsilon^{2})\), which adds to the statistical noise. The proposed method can compute each gradient classically with a single iteration of two circuits, each of which scales as \(O(ND\chi^{3})\), with an error rate controlled by \(\chi\). The error rate introduced by the truncation decreases with an increasing bond dimension \(\chi\) and generally saturates at a finite value in the order of \(10^{-2}\) per two qubit gate for circuits with large \(N\) and \(D\). This is in contrast to the error rate on a quantum computer characterized by the fidelity per two qubit gate, which exponentially decays in the overall number of gates in the circuit [23]. The finite fidelity per gate allows us to scale the proposed algorithm in circuit depth and qubits for larger applications. Automatic differentiation (AD), a tool prevalent in classical machine learning literature and applications, grows at least as fast as the forward pass of the network in terms of computational complexity. This indicates that classically computing the gradients of VQE by AD scales exponentially, as it would for classically computing the energy expectation of a circuit. It must be noted that the proposed method of tensor ring transformations can be used with AD as well, which again provides an exponential speedup in \(N\) and \(D\).
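To complement the discussion, here is a small sketch of the expectation evaluation of Eqs. (16)-(18): Pauli-Z gates are absorbed into a copy of the ring, and the two rings are contracted through per-site transfer matrices. For brevity this uses a simple \(O(N\chi^{4})\) contraction order rather than the \(O(N\chi^{3})\) schedule cited from [42], and it assumes a normalized state.

```python
import numpy as np

def tr_inner_product(bra, ket):
    """Eq. (18): contract two tensor rings over all free indices by
    composing per-site transfer matrices and tracing the ring closed."""
    E = None
    for a, b in zip(bra, ket):
        t = np.einsum("aib,cid->acbd", np.conj(a), b)  # (l, l', r, r')
        d0, d1, d2, d3 = t.shape
        t = t.reshape(d0 * d1, d2 * d3)
        E = t if E is None else E @ t
    return np.trace(E)

def expect_zz(tensors, p, q):
    """Eqs. (16)-(17): apply Pauli Z at sites p and q on a copy of the
    ring, then contract against the original state."""
    Z = np.diag([1.0, -1.0])
    bra = [t.copy() for t in tensors]
    for site in (p, q):
        bra[site] = np.einsum("ij,ajb->aib", Z, bra[site])
    return tr_inner_product(bra, tensors)

# Sanity check on the |0000> ring: <Z_p Z_q> = +1.
ring = []
for _ in range(4):
    t = np.zeros((4, 2, 4))
    t[0, 0, 0] = 1.0
    ring.append(t)
print(expect_zz(ring, 0, 2))  # 1.0
```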
## IV Experiments

To demonstrate the runtime performance and accuracy of the TR-VQE presented in Algorithm 3, we compare several instances of training TR-VQE for the MaxCut problem with Filtering VQE (F-VQE) [13] and a naive VQE implemented on the Qiskit framework (MPS-VQE). Both benchmarks use a non-noisy MPS representation to simulate the quantum computations from the circuit, as formulated in [22; 27], and F-VQE is additionally implemented with an identity filter to equate the number of parameters across all the experiments. A sampling noise is introduced in the implementation of MPS-VQE and F-VQE to compute the expected values from the circuit. As discussed before, MPS-VQE is expected to compute more accurate gradients than TR-VQE owing to the induced noise in the proposed TR representation. Therefore MPS-VQE converges faster, but takes longer runtimes per iteration, because the tensor sizes in MPS-VQE increase with circuit depth. F-VQE additionally implements filtering operators to change the optimization landscape, thereby improving the training convergence. Amaro et al. [13] claims that the inclusion of filtering operators leads to a faster and more reliable convergence to the optimal solution. This improvement, however, is dwarfed with larger circuits with a greater number of qubits (readers can refer to [13] for additional details on the implementation of F-VQE). We further collected data on TR-VQE to analyze how internal configurations, namely bond rank and graph size, i.e., the number of qubits, affect the performance relative to filtering and naive VQE. All of the graphs used were randomly generated with two to three edges per node, and uniformly distributed weights (between 1 and 10) and edge pairs. We use the same circuit ansatz for all experiments, with an initial parameterized layer of \(R_{y}\) gates on all qubits and a variational block repeated \(D\) times, where \(D\) represents the circuit depth. Each variational block contains a set of circular \(CX\) or \(CNOT\) gates followed by parameterized \(R_{y}\) gates on all qubits, followed by another set of \(CX\) and \(R_{y}\) gates. The circuit depth and the tensor ring rank are set to 1 and 10 respectively for all experiments, unless otherwise specified. Figure 6 indicates how each of the three algorithms performs in terms of iteration runtime across randomly generated graphs of varying sizes and different circuit ansatz. The results for each algorithm were averaged across 10 initializations, each with multiple unique MaxCut graphs of fixed size. For MPS-VQE and F-VQE, the number of shots used in the Hamiltonian evaluation was increased quadratically in graph size. Across varying graph sizes, TR-VQE's per-iteration runtime, computed as the time taken for computing the expected value of the Hamiltonian and updating the parameters from the evaluated gradients, is faster than both filtering and non-filtering VQE with smaller graphs and, by extension, smaller numbers of qubits.
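The ansatz just described can be written out as an explicit gate schedule; the sketch below is our reading of that description (the exact ordering inside a block is an assumption, not code from the paper).

```python
def build_ansatz(num_qubits, depth):
    """Gate schedule for the described ansatz: an initial Ry layer, then
    `depth` blocks of [circular CX ring, Ry layer, circular CX ring,
    Ry layer]. Entries are ('ry', qubit, param_index) or ('cx', ctrl, tgt)."""
    gates, p = [], 0
    for q in range(num_qubits):
        gates.append(("ry", q, p))
        p += 1
    for _ in range(depth):
        for _ in range(2):  # two CX+Ry sub-layers per variational block
            for q in range(num_qubits):
                gates.append(("cx", q, (q + 1) % num_qubits))
            for q in range(num_qubits):
                gates.append(("ry", q, p))
                p += 1
    return gates, p

gates, n_params = build_ansatz(num_qubits=4, depth=1)
print(n_params, "parameters,", len(gates), "gates")  # 12 parameters, 20 gates
```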
As illustrated in Figure 6b, the iteration runtimes of TR-VQE consistently improve by a large margin over the benchmarks when the number of qubits is increased. Figure 6a demonstrates the iteration runtime of each algorithm with increasing circuit depths for a graph with 10 nodes. TR-VQE again shows a significant improvement in runtime compared to MPS-VQE and F-VQE with an increasing number of layers. The results from both experiments are compatible with the theoretical claims of improved runtime complexity discussed in Section III.2. The runtime speedup can be attributed to the consistent rank and tensor sizes irrespective of the circuit depth, whereas in the naive MPS based approach the tensor sizes increase with the circuit depth. On the other hand, TR-VQE performs with near-equivalent accuracy to the other algorithms, despite the runtime speedup. Figure 7a displays per-iteration accuracy for the algorithms, averaging data from 10 runs on various randomly generated graphs with a fixed size of 10 nodes. The accuracy was compared using the approximation ratio at each iteration, computed as defined in Equation 7. The resulting data from Figure 7a indicate that TR-VQE performs similarly to F-VQE in terms of accuracy, diverging on average by no more than 3% at any point during training. When extended to variable graph sizes, TR-VQE once again performs on par with or better than the alternative algorithms. The data in Table 1 was collected using a TR-VQE bond rank of 10 and 1000 shots per circuit evaluation for MPS-VQE and F-VQE. Excluding an outlier at small graph sizes due to instability, MPS-VQE performed the most accurately due to the availability of more information, albeit at the cost of larger runtime. However, TR-VQE followed closely behind, with a large but inconsistent gap in accuracy between it and the least accurate F-VQE algorithm. We also plot the approximation ratio of TR-VQE with varying TR bond rank, and it is to be noted that TR-VQE performs almost as well as MPS-VQE at ranks as low as 12, indicating that an exponential speedup can be achieved at smaller ranks, improving the storage complexity. All experiments, including the benchmarks, see a wide variance in terms of accuracy with larger graph sizes due to a phenomenon called the barren plateau effect [43], which is informally defined as the impaired performance due to the exponential flattening of the loss landscape in the number of qubits. Martin et al. [44] demonstrate that the barren plateau effect persists in quantum MPS circuits, and we can therefore surmise that tensor ring circuits, as an extension of MPS, will face a similar challenge in training.

Figure 6: Runtime for each iteration of the optimization with (Left) varying circuit depth or number of layers and (Right) varying graph size or number of qubits across F-VQE, MPS-VQE and TR-VQE

Figure 7: (Left) Plot depicts the improvement of approximation ratio with the iterations for TR-VQE, F-VQE and MPS-VQE. A statistical noise is introduced in MPS-VQE and F-VQE with the expectations sampled over 1000 shots. (Right) Illustration of varying accuracy with different tensor ring bond rank.

\begin{table} \begin{tabular}{|c|c c c|} \hline **Graph Size** & **TR-VQE** & **MPS-VQE** & **F-VQE** \\ \hline 6 & 97.68\% & 83.26\% & 94.11\% \\ 8 & 93.47\% & 97.14\% & 91.55\% \\ 10 & 93.17\% & 94.26\% & 84.44\% \\ 16 & 90.70\% & 94.33\% & 93.83\% \\ \hline \end{tabular} \end{table} Table 1: Optimal approximation ratio (Equation 7, averaged across 10 different graphs for each graph size, trained over 100 iterations) of TR-VQE compared against those of MPS-VQE and F-VQE for various graph sizes. An additional sampling noise in the order of \(10^{-1.5}\) has been considered for MPS-VQE and F-VQE.
To assess the accuracy of the approximate gradients, we employ the \(l^{2}\)-norm to compare gradients obtained from state vector simulations and those generated using the TR-VQE method. The mean gradient distance, computed as the average norm difference across 500 randomly selected points on the optimization landscape, is used as a metric. We compare this metric with values obtained from noisy simulations that emulate the gradients on an actual quantum computer using noise models from the \(ibm\_montreal\) machine. We examine the mean gradient distance for various circuit depths and graph sizes. Figure 8 (Left) illustrates that the gradients produced by the TR-VQE method closely resemble those obtained from exact state vector simulations, with almost negligible differences. In contrast, gradients derived from quantum simulation deviate significantly from the exact gradients, a trend that becomes more pronounced as the number of qubits increases, as expected. As shown in Figure 8 (Middle), TR-VQE's effectiveness diminishes with higher circuit depths due to the cumulative impact of two-qubit gates. However, this performance decline can be mitigated by increasing the tensor rank, as demonstrated in Figure 8 (Right). In conclusion, gradients computed from approximate classical simulations can achieve accuracy comparable to those obtained from quantum computers. Consequently, they can be a valuable addition to the optimization process in hybrid algorithms.

Figure 8: (Left) Plot depicts the mean gradient distance of TR-VQE and noisy quantum simulation over problems of multiple graph sizes. The circuit depth and TR bond rank are set to 1 and 10, respectively. (Middle) Mean gradient distances of TR-VQE and noisy quantum simulation are plotted against the circuit depth. Graph size and bond rank are both set to 10. (Right) Mean gradient distance with increasing bond rank. The dotted line represents the same value from noisy quantum simulation. The circuit depth and graph size are set to 3 and 10, respectively.
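The mean-gradient-distance metric above reduces to averaging an \(l^{2}\)-norm over sampled parameter points; a sketch with stand-in gradient estimators follows (the estimators here are hypothetical placeholders, not the TR-VQE or hardware gradients).

```python
import numpy as np

def mean_gradient_distance(grad_fn_a, grad_fn_b, num_points, num_params, rng):
    """Average l2-norm between two gradient estimators over randomly
    sampled points of the optimization landscape."""
    dists = []
    for _ in range(num_points):
        theta = rng.uniform(0, 2 * np.pi, size=num_params)
        dists.append(np.linalg.norm(grad_fn_a(theta) - grad_fn_b(theta)))
    return float(np.mean(dists))

# Toy usage with two stand-in estimators of the same analytic gradient.
rng = np.random.default_rng(0)
exact = lambda t: -np.sin(t)
noisy = lambda t: -np.sin(t) + rng.normal(0, 0.01, t.shape)
print(mean_gradient_distance(exact, noisy, num_points=100, num_params=8, rng=rng))
```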
## V Conclusion

This work proposes a novel technique for combinatorial optimization with Variational Quantum Eigensolvers by approximating the circuit computations with noisy tensor ring contractions. The proposed algorithm uses the parameter-shift rule to evaluate the gradients that update the variational parameters, but computes the expectation values of the shifted circuits using a tensor ring approximation. The computational complexity of circuit evaluation grows linearly in the number of qubits and the circuit depth, which offers a quadratic speedup over perfect classical simulation. Evaluating gradients with TR-VQE can also eliminate the additive error present in circuit computations on quantum computers. We validate the algorithm on several instances of the Max-Cut problem and compare it with algorithms that use the full state information. The results demonstrate a vast improvement in runtime with respect to the number of qubits and circuit depth, validating the complexity analysis, at a minor cost in accuracy.

## Appendix A Commonly used gates

The matrix representations of some of the commonly used gates in the manuscript are listed below:

\[R_{x}(\theta)=\begin{bmatrix}\cos(\theta/2)&-i\sin(\theta/2)\\ -i\sin(\theta/2)&\cos(\theta/2)\end{bmatrix},\qquad R_{y}(\theta)=\begin{bmatrix}\cos(\theta/2)&-\sin(\theta/2)\\ \sin(\theta/2)&\cos(\theta/2)\end{bmatrix},\]

\[R_{z}(\theta)=\begin{bmatrix}e^{-i\theta/2}&0\\ 0&e^{i\theta/2}\end{bmatrix},\qquad H=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix},\]

\[CNOT=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{bmatrix},\qquad R(\alpha,\beta,\gamma)=\begin{bmatrix}\cos(\alpha/2)&-e^{i\gamma}\sin(\alpha/2)\\ e^{i\beta}\sin(\alpha/2)&e^{i(\beta+\gamma)}\cos(\alpha/2)\end{bmatrix}.\]
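For convenience, the matrices above can be transcribed directly into code; the following numpy snippet is a plain transcription of the listed gates together with a unitarity sanity check (the function names are ours, not part of the manuscript).

```python
import numpy as np

def Rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def Ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]])

def Rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def R(a, b, g):
    return np.array([[np.cos(a / 2), -np.exp(1j * g) * np.sin(a / 2)],
                     [np.exp(1j * b) * np.sin(a / 2),
                      np.exp(1j * (b + g)) * np.cos(a / 2)]])

# Sanity check: every listed gate is unitary (U^dagger U = I).
for U in (Rx(0.3), Ry(0.7), Rz(1.1), H, CNOT, R(0.2, 0.4, 0.6)):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))
```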
2302.10798
Learning a Consensus Sub-Network with Polarization Regularization and One Pass Training
The subject of green AI has been gaining attention within the deep learning community given the recent trend of ever larger and more complex neural network models. Existing solutions for reducing the computational load at inference time usually involve pruning the network parameters. Pruning schemes often create extra overhead, either by iterative training and fine-tuning for static pruning or by repeated computation of a dynamic pruning graph. We propose a new parameter pruning strategy for learning a lighter-weight sub-network that minimizes the energy cost while maintaining comparable performance to the fully parameterised network on given downstream tasks. Our proposed pruning scheme is green-oriented, as it only requires a one-off training to discover the optimal static sub-networks by dynamic pruning methods. The pruning scheme consists of a binary gating module and a novel loss function to uncover sub-networks with user-defined sparsity. Our method enables pruning and training simultaneously, which saves energy in both the training and inference phases and avoids extra computational overhead from gating modules at inference time. Our results on CIFAR-10 and CIFAR-100 suggest that our scheme can remove 50% of connections in deep networks with less than 1% reduction in classification accuracy. Compared to other related pruning methods, our method demonstrates a lower drop in accuracy for equivalent reductions in computational cost.
Xiaoying Zhi, Varun Babbar, Pheobe Sun, Fran Silavong, Ruibo Shi, Sean Moran
2023-02-17T09:37:17Z
http://arxiv.org/abs/2302.10798v4
# A New Baseline for GreenAI: Finding the Optimal Sub-Network via Layer and Channel Pruning ###### Abstract The concept of Green AI has been gaining attention within the deep learning community given the recent trend of ever larger and more complex neural network models. Some large models have billions of parameters, causing training to take up to hundreds of GPU/TPU-days. The estimated energy consumption can be comparable to the annual total energy consumption of a standard household. Existing solutions to reduce the computational burden usually involve pruning the network parameters; however, they often create extra overhead, either through iterative training and fine-tuning for static pruning or through repeated computation of a dynamic pruning graph. We propose a new parameter pruning strategy that finds an effective group of lightweight sub-networks minimizing the energy cost while maintaining performance comparable to the full network on given downstream tasks. Our proposed pruning scheme is green-oriented, in that it requires only a one-off training to discover the optimal static sub-networks by dynamic pruning methods. The pruning scheme consists of a lightweight, differentiable, and binarized gating module and novel loss functions to uncover sub-networks with user-defined sparsity. Our method enables pruning and training simultaneously, which saves energy in both the training and inference phases and avoids extra computational overhead from gating modules at inference time. Our results on CIFAR-10 and CIFAR-100 suggest that our scheme can remove \(\approx 50\%\) of connections in deep networks with \(<1\%\) reduction in classification accuracy. Compared to other related pruning methods, our method has a lower accuracy drop for equivalent reductions in computational cost. ## 1 Introduction The benefits of large, sparse, and over-parameterised models come at a significant energy cost, the price of their state-of-the-art (SOTA) performance [15]. For example, the vision transformer model (ViT-L16) with 307M parameters can achieve 99.42% accuracy on the CIFAR-10 dataset and 87.76% on the ImageNet dataset [14]. Training the ViT-L16 model requires 680 TPUv3-core-days 1 and 3672kWh of energy, equivalent to 32.5% of the annual energy consumption of an average US household [14, 15, 16]. Footnote 1: The product of the number of TPUv3 cores used and the training time in days Network pruning is one direction in the search for greener AI models, based on the assumption that over-parameterized networks can have parameters safely removed before or after training without significantly affecting network performance [13]. There are two common types of network pruning methods - static and dynamic. Static network pruning generates a unified sub-network for all data, while dynamic pruning computes different suitable sub-networks for different data samples. Static network pruning often requires a pre-defined neuron importance measure that thresholds trained neurons to be pruned [13, 12, 14, 15, 16]. Further fine-tuning or regrowing of the selected sub-network is often involved after training, which can potentially lead to further improvement in performance [12, 13, 14]. Dynamic pruning, on the other hand, applies a parameterized and learnable gate function that computes the neuron importance on the fly, leading to a different computational graph for each data sample.
The training phase optimizes the learnable gating functions with the empirical loss, and the inference phase computes the appropriate sub-network from the gates' forward-propagation results [20, 18, 19, 21]. From a Green AI perspective, neither the existing dynamic nor the static pruning approach is ideal. Dynamic pruning is not optimal for parallel computing due to the necessary indexing operations at inference, and it causes overhead from extra connection-importance computations. Static pruning can reduce computational resources at inference, but the iterative pruning-and-fine-tuning process consumes more computational resources and time during the training phase. One-shot pruning after training is no better than the iterative procedure, as its effectiveness heavily depends on the assumed priors, whose validity lacks verification prior to training [14]. Our pruning method is able to compute a smaller network without costing significant training resources, by simultaneously optimizing the network structure and parameters. This simultaneous optimization is realized with a _light-weight trainable binary gating module_ along with _polarising regularization_. The polarising regularization allows for the emergence of a stable sub-network that performs well for all data points at the end of training. The inference time is then reduced since the static sub-network is ready to use. We verify the scheme's validity on two types of pruning (layer and channel) on different ResNets [14], applied to two differently-sized datasets (CIFAR-10 and CIFAR-100 [15]). Comparisons with naive baselines and previous works are also presented. ## 2 Related Works ### Energy-Aware AI [17] were among the first authors to define the concepts of Green AI (environmentally friendly AI) and Red AI (heavily energy-consuming AI), and suggest that models should be evaluated beyond accuracy by taking into account their carbon emission and electricity usage, elapsed time, parameter count, and floating point operations (FPOs/FLOPs). [18] and [19] proposed frameworks to quantify the carbon emission resulting from the usage of specific AI models on various common devices. To reduce the carbon emission from model training, different approaches have been commonly used. For example, model quantization can be used to reduce the elapsed time and processor memory usage [1], while network distillation and network pruning approaches can be used to reduce the number of parameters and total FLOPs [10]. ### Network Pruning Network pruning aims to rank the importance of the edges in a neural network model in order to find a sub-network with the most important edges. There are two approaches to achieve this goal: _static_ or _dynamic_ methodologies. Static network pruning finds a unified sub-network at the end of training, and is usually followed by a fine-tuning procedure to further improve the sub-network's performance. This pruning scheme relies on calculated importance scores for the edges of interest. The edge importance can be calculated, for example, from the magnitude of an edge or its influence on the final output. In convolutional neural networks (CNNs), experiments on static feature-map pruning [13] and channel pruning [14] demonstrated a 30% reduction in FLOPs or a 2.5\(\times\) reduction in GPU time with only negligible performance degradation, or even improvement in some cases. [1] expanded the problem to multi-stage pruning to make the pruning approach adaptable to different size requirements.
This goal was achieved by training and fine-tuning the sub-networks with incremental size reductions while ensuring that the model accuracy stays the same each time the size requirement is reduced. Dynamic pruning, on the other hand, aims to find input-variant sub-networks; thus, input-dependent elements are usually added to the original network to compute the importance of the edges of interest. [20] proposed the adaptive inference graph, which computes the importance of CNN layers with a probabilistic and learnable gating module before each layer. [12] proposed a similar framework to prune CNN channels by using reinforcement learning to train an optimal channel-importance scorer. Combining both static and dynamic pruning methods can potentially achieve a greater impact on green AI. This approach can leverage static pruning's compatibility with parallel computing, saving energy especially on GPU computation. It can also leverage dynamic pruning's advantage of adapting different optimal sub-network structures to different inputs. [14], for example, proposed a sub-differentiable sparsification method where parameters can potentially be zeroed after optimization under stochastic gradient descent. However, the non-unified sub-networks still cause excess indexing computation in parallel computing. Our work focuses on the problem of finding a unified sub-network for data following a certain distribution, using the dynamically pruned sub-networks as intermediate states. To unify the sub-networks, some works proposed the concept of a dynamic neural graph, where each neural network is represented as a graph with nodes being the variables and edges being the connections [21]. The pruning procedure involves updating edge weights by back-propagation and selecting a fixed proportion to prune according to the weight magnitudes. In this scheme, the graph representation and corresponding pruning can cover recurrent neural networks (cyclic graphs), beyond the sequential networks (acyclic graphs) considered in most work. Later research extends these methods to find sub-networks without training the edge weights, proving that randomly weighted neural networks already contain sub-networks that give satisfactory performance [15]. Our findings also corroborate recent work on the existence of several 'lottery tickets', i.e. pruned sub-networks that can achieve similar accuracy as the original network [13]. To generate such networks, the IMP (Iterative Magnitude Pruning) scheme involves iterative pruning and training over multiple rounds until convergence. This is different from our proposed method, which performs simultaneous training and pruning in one training session and is therefore computationally cheaper to train. ### Discrete Stochasticity and Gradient Estimation To obtain a stable sub-network structure through gradient-based optimization, _i.e._ binary activation statuses for each connection, a gating module with differentiable, discrete latent variables is needed. Discrete variables often require gradient estimation when their gradients are not directly computable. One estimation approach is the Gumbel-Softmax (GS) estimator [15], which enables differentiation when sampling categorical data from a GS distribution. Gradients _w.r.t._ the categorical output distribution are well-defined for the GS distribution. This technique is often applied to generative models for sequences requiring sampling under multinomial distributions [17, 14, 15].
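As a concrete illustration of Gumbel-Softmax sampling for binary gates, the short PyTorch sketch below uses the library's `gumbel_softmax`; the two-logit parameterization and the hard/soft split are illustrative assumptions, not the exact setup of the cited works.

```python
import torch
import torch.nn.functional as F

# One pair of logits per gate; index 1 is read as the "keep" decision.
logits = torch.randn(8, 2, requires_grad=True)

# Soft, differentiable relaxation of the categorical sample.
soft_gates = F.gumbel_softmax(logits, tau=1.0, hard=False)[:, 1]

# hard=True discretizes the forward output to exact one-hot values while
# keeping the soft sample's gradient (a straight-through trick).
hard_gates = F.gumbel_softmax(logits, tau=1.0, hard=True)[:, 1]

hard_gates.sum().backward()
print(hard_gates)           # exact 0/1 values
print(logits.grad.shape)    # gradients still reach the logits
```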
A simpler approach is the straight-through estimator (STE) [1], which binarizes the stochastic output based on a threshold in the forward pass, and heuristically copies the gradient of the next layer to the estimator. Experiments show that neural networks gated by the STE give the lowest error rate among other differentiable gates (multinomial and non-multinomial) [1]. More details about the STE are presented in Section _Gating Modules and Straight-through Estimator_. ### Sparsity Regularizers In the network pruning task, a sparsity regularizer is often involved to encourage a higher pruning ratio during training, among which the \(l_{1}\)- and \(l_{2}\)-regularizers are the two most common. However, standard regularization functions might lead to unnecessary pruning or mis-estimation of network connectivity importance. Regularizers that take more network structure into consideration include \(l_{2,0}\)- and \(l_{2,1}\)-structured sparsity regularization [13], grouped (on samples or on feature maps) sparsity regularization [13], etc. [13] proposed a binarizing regularizer that encourages each network connection to approach either 1 or 0 for all samples. The binarizing idea can also be extended to continuous activation rates. For example, [15] integrated a polarization regularizer into network pruning to force the deactivation of neurons. Networks pruned under this setting achieve the highest accuracy even at high pruning rates compared to other pruning schemes. ## 3 Problem Setup: Simultaneous Parameter and Architecture Learning We denote a neural network with full connectivity in the form of a graph as \(\Phi:=(V,E)\), where \(V\) is a set of nodes and \(E\) is the set of edges \(E:=\{e^{(x,y)},\forall x,y\in V\}\). A sub-network with partial connectivity can thus be represented as \(\Phi^{\prime}=(V,E^{\prime})\) where \(E^{\prime}\subseteq E\). We also denote the transformation of a network as \(f_{\theta}(\cdot)\equiv f(\cdot;\theta)\), where \(\theta\) denotes all the parameters in the network. Each edge \(e\in E\) is associated with a weight \(\theta^{e}\). For the full network \(\theta=\theta_{\Phi}\), and for the sub-network \(\theta=\theta_{\Phi^{\prime}}\). A sub-network can be expressed in terms of the full network using an activation matrix \(\mathbf{W}_{e}\) with certain elements zeroed, _i.e._ \[\theta_{\Phi^{\prime}}=\mathbf{W}_{e}^{\top}\theta_{\Phi}, \tag{1}\] where each entry \(w_{e,c}\in\{0,1\}\) of the edge activation matrix \(\mathbf{W}_{e}\) is binary. In network pruning, we aim to find a sub-network \(\Phi^{\prime}\) and the optimal network parameters \(\theta_{\Phi^{\prime}}^{*}\) simultaneously. We estimate the optimal solution \(\theta_{\Phi^{\prime}}^{*}\) by minimising the empirical loss, _i.e._ \[\min_{\theta_{\Phi^{\prime}},\Phi^{\prime}}\mathcal{L}(f(\mathbf{x};\theta_{\Phi^{\prime}}),\mathbf{y}). \tag{2}\] Using the setting in Eq. 1, we can safely reformulate the above objective as: \[\min_{\theta_{\Phi},\mathbf{W}_{e}}\mathcal{L}(f(\mathbf{x};\mathbf{W}_{e}^{\top}\theta_{\Phi}),\mathbf{y}). \tag{3}\]
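As a toy numeric illustration of Eqs. 1-3, the sketch below masks a full parameter vector with a fixed binary activation pattern and back-propagates a stand-in loss; the shapes and the loss are hypothetical, chosen only to show that pruned entries receive zero gradient.

```python
import torch

theta = torch.randn(54, requires_grad=True)   # full-network weights theta_Phi
w = (torch.rand(54) > 0.5).float()            # a fixed binary activation W_e

theta_sub = w * theta                         # Eq. 1: masked sub-network weights
loss = ((theta_sub - 1.0) ** 2).mean()        # stand-in for L(f(x; W^T theta), y)
loss.backward()

# Gradients reach only the active weights; pruned entries get exactly zero.
print(theta.grad[w == 0].abs().max().item())  # 0.0
```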
## 4 Methodology In practice, the edge activation matrix \(\mathbf{W}_{e}\) is not learned as a whole, and the entries of the matrix are not independent. When training a sequential network, the activation of earlier connections can affect the outputs of later connections, and thus also affects the back-propagated gradients. A naive binary/categorical \(\mathbf{W}_{e}\) would prevent gradients from propagating back, as a function with constant value has zero gradient. Therefore, a gradient estimator is needed as the core gating element of each connection. We choose the straight-through estimator (STE), as introduced in Section _Discrete Stochasticity and Gradient Estimation_, as this core element. ### Network Architectures Figure 1 illustrates the design for integrating the gating module into ResNet. Our pruning scheme has slightly different workflows for the training phase and the testing phase. In training, the gating modules with learnable dense layers are trained as part of the network. At inference (for validation or testing), the resultant \(\mathbf{W}_{e}\) is loaded, which decides the subset of parameters to be selected - only the connections with a non-zero \(w_{e,c}\) will have their parameters loaded and be included in the forward pass. The choice of ResNet as the base network is based on the necessity of residual connections under our proposed scheme, to avoid potential termination of the computational path in the middle of the full network due to a deactivated full layer. Within the discussion of ResNet, we focus on CNN-centered layer and channel (feature map) pruning. However, we also argue that this methodology has the potential to be applied to any type of connection, even in less structured pruning (_e.g._ selected kernel-to-kernel connections between convolutional layers). While our method has similarities with dropout-based methods in ResNets, these involve pruning specific connections between nodes - from an architectural standpoint this does not necessarily reduce the number of FLOPs unless entire structures (such as layers or channels) are removed. ### Straight-through Estimator We chose the straight-through estimator (STE) as the binary head of the gating module. The forward path of the STE is a hard thresholding function: \[STE(x)=\begin{cases}1,&\text{if }x>0\\ 0,&\text{if }x\leq 0\end{cases}. \tag{4}\] The backward gradient reflects why it is named 'straight-through': \[\frac{\partial\mathcal{L}}{\partial x}=\frac{\partial\mathcal{L}}{\partial STE(x)}\cdot\frac{\partial STE(x)}{\partial x}=\begin{cases}\frac{\partial\mathcal{L}}{\partial STE(x)},&\text{if }|x|\leq 1\\ 0,&\text{if }|x|>1\end{cases}, \tag{5}\] where the insensitive state is triggered when \(|x|>1\). This is to avoid a possible scenario where a large gradient makes the STE output value stay at either 1 or 0 permanently. An immediately observable advantage of the STE as the gating head is that it is a lightweight module for both forward and backward propagation. In the forward pass, no computation other than a sign check is needed; in the backward pass, no computation is needed at all. The gradient estimation, often viewed as a coarse approximation of the true gradient under noise, has been proven to positively correlate with the population gradient, and therefore gradient descent helps to minimize the empirical loss [20].
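A minimal PyTorch implementation of Eqs. 4-5 can be written as a custom autograd function; this sketch is ours and omits the surrounding dense layers of the gating module.

```python
import torch

class BinaryGateSTE(torch.autograd.Function):
    """Straight-through estimator of Eqs. 4-5: hard threshold at 0 in the
    forward pass; the incoming gradient is copied through in the backward
    pass and zeroed where |x| > 1 (the 'insensitive' state)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(x.dtype)

x = torch.tensor([-1.5, -0.4, 0.0, 0.3, 2.0], requires_grad=True)
gate = BinaryGateSTE.apply(x)
gate.sum().backward()
print(gate)    # tensor([0., 0., 0., 1., 1.])
print(x.grad)  # tensor([0., 1., 1., 1., 0.])
```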
### Polarisation Regularizer During the dynamic-pruning-style training, the matrix \(\mathbf{W}_{e}(x)\) might not be the same for all \(x\in\mathcal{X}\). To encourage a unified edge activation matrix such that \(\mathbf{W}_{e}(x)=\mathbf{W}_{e}(x^{\prime}),\forall x,x^{\prime}\in\mathcal{X}\), we introduce a polarisation regularizer \(\mathcal{R}_{polar}(\{\mathbf{W}_{e}(x)\big{|}x\in\mathcal{X}\})\). The complete loss function is: \[\mathcal{L}(f(\mathbf{x}),\mathbf{y})=\mathcal{L}_{task}(f(\mathbf{x}),\mathbf{y})+\lambda\mathcal{R}_{polar}(\mathbf{W}_{e}(\mathbf{x})) \tag{6}\] where \(\mathcal{L}_{task}\) is the task loss, _e.g._ cross-entropy loss for classification tasks and mean-squared error for regression tasks, and \(\lambda\) is the scale factor for the polarisation regularizer. The general form of \(\mathcal{R}_{polar}(\mathbf{W}_{e}(\mathbf{x}))\) is an inverted parabola. Supposing \(\mathbf{W}_{e}(x)\in\mathbb{R}^{|\mathcal{C}|}\) is flattened over all covered connections \(c\in\mathcal{C}\): \[\mathcal{R}_{polar}(\mathbf{W}_{e}(\mathbf{x})):=\frac{1}{|\mathcal{C}|}(\mathbf{1}-\bar{\mathbf{W}}_{e}(\mathbf{x}))^{\top}\bar{\mathbf{W}}_{e}(\mathbf{x}), \tag{7}\] where \(\bar{\mathbf{W}}_{e}(x)=\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}\mathbf{W}_{e}(x)\) is the averaged edge activation matrix over all data samples. Given the range \(\bar{\mathbf{W}}_{e,c}\in[0,1]\), this inverted parabola ensures that an equivalent optimum is reached when \(\bar{\mathbf{W}}_{e,c}\) reaches either boundary of the range. Specifically, in our ResNet layer-pruning scenario, the regularisation term is written as: \[\mathcal{R}_{polar}:=\frac{1}{|L|}\sum_{ly\in L}(1-\bar{g}_{ly})\bar{g}_{ly}, \tag{8}\] where \(\bar{g}_{ly}=\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}g_{ly}(x)\in[0,1]\) is the average of the gating module outputs over all input samples of the layer. Similarly, in our ResNet channel-pruning scenario, the regularisation term is written as: \[\mathcal{R}_{polar}:=\frac{1}{|L|}\sum_{ly\in L}\frac{1}{|C|}\sum_{ch\in C}(1-\bar{g}_{ly,ch})\bar{g}_{ly,ch} \tag{9}\] where \(\bar{g}_{ly,ch}=\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}g_{ly,ch}(x)\in[0,1]\) is the average of the gating module outputs over all input samples for channel \(ch\in C\) in layer \(ly\in L\).
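Both regularizers are straightforward to express in code. The sketch below assumes the gate outputs are collected into a (batch, connections) tensor of values in [0, 1]; the λ values and the placeholder task loss are illustrative only, and the activation regularizer of Eq. 11 (introduced later for the baseline comparison) is included for completeness.

```python
import torch

def polarisation_reg(gates):
    # Eqs. 8-9: average over connections of (1 - g_bar) * g_bar, where g_bar
    # is the per-connection gate output averaged over the input samples.
    g_bar = gates.mean(dim=0)               # shape: (num_connections,)
    return ((1.0 - g_bar) * g_bar).mean()   # zero iff every g_bar is 0 or 1

def activation_reg(gates):
    # Eq. 11: overall average activation; lowering it raises the pruning rate.
    return gates.mean()

gates = torch.rand(32, 54, requires_grad=True)  # e.g. batch of 32, 54 layer gates
task_loss = torch.tensor(0.0)                   # placeholder for L_task
loss = task_loss + 1.0 * polarisation_reg(gates) + 0.1 * activation_reg(gates)
loss.backward()
print(polarisation_reg(gates).item(), activation_reg(gates).item())
```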
## 5 Experiments ### Datasets and Architecture Specifications We test the effectiveness of our proposed method with ResNet-110 [13] as the full network. We chose CIFAR-10 and CIFAR-100 [15] as our datasets, which have widely acknowledged test results on most variants of ResNet. Both datasets contain 50,000 images for training and 10,000 images for testing, all 32x32 color images. The CIFAR-10 dataset has 10 classes, and CIFAR-100 has 100 classes. Testing on both datasets shows the effectiveness of our method under both simple and complex data distributions. ResNet-110 has 54 residual layers, each consisting of 2 convolution layers. In layer pruning, we add the gating module at the beginning of each residual layer to decide whether the layer is to be computed or not. Figure 2 shows the designed position of the layer gating module. In channel pruning, we experiment with two layer designs and three positions of the gating module, and select the one with the best performance on the test set. Figure 3 shows the three possible positions of the gating module. Table 1 shows the detailed design of the gating module. Our experiments are conducted on one NVIDIA T4 GPU with 16GB memory. The batch size is set to 256 for CIFAR-10 and 64 for CIFAR-100. Training uses a staged decaying learning rate for 350 epochs (although convergence can usually be achieved before 250 epochs). The initial learning rate for both datasets is 0.1, and at each stage it decreases to 10% of its previous value. On CIFAR-10, the learning rate is adjusted at epochs 60, 120, and 160. On CIFAR-100, the learning rate is adjusted at epochs 125, 190, and 250. We chose stochastic gradient descent (SGD) as the optimizer, with a momentum of 0.9 and a weight decay of \(5\times 10^{-4}\). The networks and training procedures are implemented in PyTorch. When randomness is involved, we set the random seed to 1. We apply the network to the image classification task. The pruned networks are evaluated by top-1 accuracy and FLOPs (floating-point operations). The FLOPs count is approximated by the fvcore package 2. Footnote 2: [https://github.com/facebookresearch/fvcore](https://github.com/facebookresearch/fvcore)

Figure 1: Illustration of a gating module with binary decisions integrated into the original residual model. In training, the learnable gating modules are trained like other parts of the network. At inference, the gate decisions are pre-loaded, and only the network parameters whose gate decision is open are loaded and computed.

Figure 2: Illustration of layer-pruning gating modules in ResNet.

### Pruning Results Table 2 shows the results of the pruned networks on the CIFAR-10 and CIFAR-100 datasets. It is easily observable that with the layer pruning scheme, we are able to save at least \(\frac{1}{3}\) of the computation (FLOPs) while sacrificing less than 2% accuracy. Under the channel pruning scheme, we can save \(\frac{1}{4}\) of the computation (FLOPs) while sacrificing less than 3% accuracy. In general, we note that the layer-pruned models perform better than the channel-pruned models (i.e. there is a smaller accuracy drop) on both CIFAR-10 and CIFAR-100, even when relative differences in FLOPs are taken into account. We believe this is because, under the design of ResNet, the intermediate feature maps in each residual layer are sufficiently information-compact, and any removal from the feature maps can lead to information loss. The pruning ratio of channel pruning is nevertheless substantial, with an almost 50% FLOPs reduction. ### Comparison with Baselines We compare our method's performance with naive baselines and other methods from the literature. For naive baselines, we consider the following: * Naive Dropout ResNet-56: A standard classifier but with \(k\in\{20\%,30\%,50\%,60\%,80\%\}\) parameters randomly pruned during testing. * Naive Layer Pruned ResNet-56: The same classifier but with \(k\in\{20\%,30\%,50\%,60\%,80\%\}\) layer activations randomly set to \(0\) during testing. A visualized comparison with the naive baselines is shown in Figure 4, demonstrating our pruning method's ability to maintain network performance at a high pruning ratio. For comparison, we consider methods that also followed the idea of simultaneous pruning and learning, as listed in Table 3. Figure 5 shows the performance of our scheme against some methods found in the literature, all of which use ResNet-56 as the base network. We note that our scheme provides competitive results not only in terms of absolute accuracy, but also in terms of the accuracy drop resulting from pruning.
To better compare with these methods quantitatively at similar pruning rates, we control the pruning rate by an additional sparsity regularization term in the loss function that penalises non-zero activations. The resulting loss function is: \[\mathcal{L}(f(\mathbf{x}),\mathbf{y})=\mathcal{L}_{task}(f(\mathbf{x}),\mathbf{y})+\lambda_{polar}\mathcal{R}_{polar}(\mathbf{W}_{e}(\mathbf{x}))+\lambda_{act}\mathcal{R}_{act}(\mathbf{W}_{e}(\mathbf{x})) \tag{10}\] where \[\mathcal{R}_{act}(\mathbf{W}_{e}(\mathbf{x}))=\frac{1}{|L|}\sum_{ly\in L}\bar{g}_{ly} \tag{11}\] is the overall average layer activation, with the average taken across layers \(ly\in L\) and inputs \(x\in\mathcal{X}\). We now set \(\lambda_{polar}=1\), the gating function to Gumbel-softmax, and vary \(\lambda_{act}\in[0,1]\) to change the pruning rate. ### Ablation Studies We test the individual utility of the two major modules, the STE and the polarisation regularizer, through ablation studies. To test the utility of the STE, we replace it with sampling from a Bernoulli distribution and with Gumbel-softmax. When sampling from the Bernoulli distribution, we set a threshold equal to the mean of the gating module's outputs right after the last dense layer (_i.e._ right before the original STE). If the output is larger than the mean, we keep the layer; otherwise we prune the layer. Table 4 shows results for the three gating functions, experimented on CIFAR-10. We observed that, other than the STE, no other gating head function results in a perfectly unified and stable sub-network. We can thus conclude on the utility of the STE in terms of stabilising the dynamic sub-networks while keeping the expected performance. However, we should also note that the Gumbel-softmax has the potential to achieve better task performance while keeping a set of only lightly dynamic sub-networks, indicated by the low value of \(\mathcal{R}_{polar}\).

\begin{table} \begin{tabular}{c|c} \hline **Layer Gating Module** & **Channel Gating Module (K=2)** \\ \hline avg\_pool\_2d(output\_size=1) & flatten() \\ \hline dense(in\_dim=channel\_in, out\_dim=16) & dense(in\_dim=mid\_channel\(\times\)feature\_size, out\_dim=16) \\ \hline batch\_norm\_1d() & batch\_norm\_1d() \\ \hline ReLU() & ReLU() \\ \hline dense(in\_dim=16, out\_dim=1) & dense(in\_dim=16, out\_dim=1) \\ \hline STE() & STE() \\ \hline \end{tabular} \end{table} Table 1: Layer and channel gating module design. **Left**: “channel_in” is the input channel number for the first convolution layer in the residual layer. **Right**: “mid_channel” is the output channel number for the first convolution layer, equal to the input channel number for the second convolution layer. “feature_size” is the dimension of a flattened feature map. For other values of K, we simply vary the number of dense layers.

Figure 4: Comparison of our method with some naive baselines on CIFAR-10 with ResNet-56. **Left**: Pruning rate vs Top-1 accuracy. **Right**: % FLOPs reduction vs Top-1 accuracy. Here, the naive dropout method does not reduce FLOPs, so we omit it.

Figure 3: Illustration of channel-pruning gating modules in ResNet: the gating module (a) between the two convolution layers; (b) before the first convolution layer; (c) after the second convolution layer. K=1 or 2 in our experiments.
The discussion remains open as to whether a suitable unification tweak on the resultant dynamic sub-networks from Gumbel-softmax can further improve on the performance that the STE reaches. To test the utility of the polarisation regularizer, we experimented with a series of \(\lambda_{polar}\) values, from 0 to 3. We also experimented with gradually increasing the regularizer weight \(\lambda_{polar}\) during a training session, in order to verify whether a partially trained network would affect the pruning results. Figure 6 shows the layer-pruning evolution under different \(\lambda_{polar}\) settings, and Table 5 shows the performance of the resulting sub-networks. The layer-pruning evolutions show that as \(\lambda_{polar}\) increases, the convergence to a unified sub-network is accelerated.

\begin{table} \begin{tabular}{c c c} \hline \hline **Function** & **Top-1 accuracy (\%)** & \(\mathcal{R}_{polar}\) \\ \hline STE & 92.86 & **0.00** \\ Gumbel-softmax & **94.05** & 0.0039 \\ Bernoulli & 91.96 & 0.2454 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of different gating functions on CIFAR-10. \(\mathcal{R}_{polar}\) is taken at the end of the training session, each with the same number of epochs. A larger \(\mathcal{R}_{polar}\) corresponds to less unified sub-networks after convergence (not ideal). These experiments were performed on ResNet-110 with \(\lambda_{polar}=3\).

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Dataset** & **Model** & **Top-1 accuracy (\%)** & **Gate open ratio (\%)** & **FLOPs (M)(rel)** \\ \hline CIFAR-10 & baseline & 93.68 (0) & 100.00 & 255.3 (1) \\ & layer pruned & 92.82 (-0.86) & 53.70 & 137.7 (0.54) \\ & channel pruned & 91.01 (-2.67) & 48.76 & 189.9 (0.74) \\ \hline CIFAR-100 & baseline & 71.85 (0) & 100.00 & 255.3 (1) \\ & layer pruned & 70.01 (-1.84) & 66.67 & 171.1 (0.67) \\ & channel pruned & 66.91 (-4.94) & 51.14 & 135.41 (0.52) \\ \hline \hline \end{tabular} \end{table} Table 2: Results of pruned networks on the CIFAR-10 and CIFAR-100 datasets. Numbers in brackets in top-1 accuracy show the relative difference from the baseline model. FLOPs are counted in millions (M). Numbers in brackets in FLOPs show the relative ratio to the baseline model. The baseline model is ResNet-110.

Figure 5: Comparison between our scheme and related methods from the literature on CIFAR-10 with ResNet-56. **Left**: Pruning rate vs Top-1 accuracy. **Right**: % FLOPs reduction vs Top-1 accuracy drop.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Unpruned Accuracy** & **Pruned Accuracy** & **\% FLOPS Reduction** & **Accuracy Drop** \\ \hline **Ours** & 93.43 & **92.42 \(\pm\) 0.14** & 41.81 \(\pm\) 4.01 & **1.01 \(\pm\) 0.14** \\ AMC [11] & 92.80 & 91.90 & 50.00 & 1.10 \\ Importance [7] & **93.60** & 91.90 & 39.90 & 1.14 \\ SFP [11] & 93.59 & 92.26 & **52.60** & 1.33 \\ CP [11] & 92.80 & 91.80 & 50.00 & 1.00 \\ PFEC [11] & 93.04 & 91.31 & 27.60 & 1.73 \\ VCP [12] & 93.04 & 92.26 & 20.30 & 0.78 \\ \hline \hline \end{tabular} \end{table} Table 3: The performance of our method over \(5\) trials against some established, related methods from the literature at \(\approx 50\%\) FLOPs reduction (**Dataset**: CIFAR-10, **Model**: ResNet-56). We note that our method offers a competitive trade-off between accuracy and FLOPs while being simple to implement. For SFP, we consider only the pre-trained variant for a fair comparison, as the fine-tuning variant in the paper incurs extra computational costs that are not necessarily accounted for.
Figure 6(e) demonstrates a clear evolution pattern before and after \(\lambda_{polar}\) is turned on. We can thus conclude on the individual effect of the polarisation regularization. The test-set results, however, show no clear correlation between the \(\lambda_{polar}\) value and the resultant accuracy or gate open ratio. Therefore, the selection of \(\lambda_{polar}\) is mostly determined empirically. ### Channel Pruning Design Choices We tested multiple channel-pruning designs and selected the one with the best performance in classification accuracy and channel pruning ratio. Table 6 shows the pruning results under different gating module architectures and positions, tested on CIFAR-10. For gating module architectures (recall Table 1), we experimented with one dense layer and with two dense layers. For gating module positions (recall Figure 3), we experimented with the gating module in front of each residual layer, in the middle of the two convolution layers in a residual layer, and at the end of each residual layer. On CIFAR-10, the results show that while all designs achieve a similar channel pruning ratio, the design with 2 dense layers placed at the end of each residual layer (2FC-after) achieves the best classification accuracy, significantly higher than most others. However, the design with 1 dense layer placed between the two convolution layers (1FC-middle) also achieves a similar accuracy. ## 6 Conclusion We proposed a network pruning scheme that maintains performance while being more computationally efficient (and thus greener). Through simultaneous parameter and structure optimization, our pruning scheme finds a stable sub-network with performance on the downstream task similar to that of the full network. The scheme consists of a differentiable, lightweight binary gating module and novel regularisers that enforce unification (data-invariance) of the pruned sub-networks. Experiments on two types of pruned network elements (layer and channel) show that the scheme can find sub-networks with a significant reduction in FLOPs (\(>\)\(50\%\)) at a minimal sacrifice in downstream performance (\(<\)\(1\%\)). Compared to other similar methods, the sub-network accuracy and accuracy drop achieved by our method are among the best. With fine-tuning of our uncovered sub-networks, we anticipate further performance improvements - however, this is beyond the scope of our work, which aims to emphasise maximal performance gains within limited computational budgets. Beyond convolutional networks, we look forward to testing the applicability of our pruning scheme to other base networks, datasets and tasks. We hope that our encouraging results facilitate the transition towards energy-efficient deep learning models.

\begin{table} \begin{tabular}{c c c} \hline \hline **Model** & **top-1 accuracy (\%) (rel)** & **gate open ratio (\%)** \\ \hline baseline & 93.68 (0) & 100.00 \\ 1FC-before & 69.71 (-23.97) & 47.82 \\ 1FC-middle & 90.60 (-3.08) & 47.77 \\ 1FC-after & 83.89 (-9.79) & 41.91 \\ 2FC-before & 85.39 (-8.29) & 48.93 \\ 2FC-middle & 82.94 (-10.74) & 49.36 \\ 2FC-after & **91.01 (-2.67)** & 48.76 \\ \hline \hline \end{tabular} \end{table} Table 6: Channel pruning results under different gating module specifications on the CIFAR-10 dataset. Numbers in brackets in top-1 accuracy show the relative difference (rel) from the baseline model. “\(N\)FC” refers to \(N=\{1,2\}\) dense layer(s) in the gating module, and “before”, “middle”, and “after” refer to the three gating module positions illustrated in Figure 3.
Figure 6: Layer opening ratio during training under different \(\lambda\) values. Each row represents one layer among the 54 layers. Each column corresponds to one epoch. For \(\lambda\in\{0,1,2,3\}\), we include the first 50 training epochs. For \(\lambda\uparrow\), we take the parts before and after the \(\lambda\) change (separated by the pink line).

\begin{table} \begin{tabular}{c c c} \hline \hline **Spec** & **Top-1 accuracy (\%) (rel)** & **gate open ratio (\%)** \\ \hline baseline & 93.68 (0) & 100.00 \\ \(\lambda_{polar}=0\) & 90.79 (-2.89) & 18.52 - 53.70 \\ \(\lambda_{polar}=1\) & 92.44 (-1.24) & 53.70 \\ \(\lambda_{polar}=2\) & 91.67 (-2.01) & **40.74** \\ \(\lambda_{polar}=3\) & 91.85 (-1.83) & 57.41 \\ \(\lambda_{polar}\uparrow\) & **93.18 (-0.50)** & 55.56 \\ \hline \hline \end{tabular} \end{table} Table 5: Results of pruned networks under different \(\lambda_{polar}\) values on CIFAR-10 (ResNet-110). “\(\lambda_{polar}\uparrow\)” uses \(\lambda_{polar}=0\) for the first 125 epochs, \(\lambda=2\) for the next 65 epochs, and \(\lambda=3\) for the remaining epochs until the end of training.
2301.09799
LDMIC: Learning-based Distributed Multi-view Image Coding
Multi-view image compression plays a critical role in 3D-related applications. Existing methods adopt a predictive coding architecture, which requires joint encoding to compress the corresponding disparity as well as residual information. This demands collaboration among cameras and enforces the epipolar geometric constraint between different views, which makes it challenging to deploy these methods in distributed camera systems with randomly overlapping fields of view. Meanwhile, distributed source coding theory indicates that efficient data compression of correlated sources can be achieved by independent encoding and joint decoding, which motivates us to design a learning-based distributed multi-view image coding (LDMIC) framework. With independent encoders, LDMIC introduces a simple yet effective joint context transfer module based on the cross-attention mechanism at the decoder to effectively capture the global inter-view correlations, which is insensitive to the geometric relationships between images. Experimental results show that LDMIC significantly outperforms both traditional and learning-based MIC methods while enjoying fast encoding speed. Code will be released at https://github.com/Xinjie-Q/LDMIC.
Xinjie Zhang, Jiawei Shao, Jun Zhang
2023-01-24T03:47:37Z
http://arxiv.org/abs/2301.09799v3
# LDMIC: Learning-based Distributed Multi-view Image Coding ###### Abstract Multi-view image compression plays a critical role in 3D-related applications. Existing methods adopt a predictive coding architecture, which requires joint encoding to compress the corresponding disparity as well as residual information. This demands collaboration among cameras and enforces the epipolar geometric constraint between different views, which makes it challenging to deploy these methods in distributed camera systems with randomly overlapping fields of view. Meanwhile, distributed source coding theory indicates that efficient data compression of correlated sources can be achieved by independent encoding and joint decoding, which motivates us to design a learning-based distributed multi-view image coding (LDMIC) framework. With independent encoders, LDMIC introduces a simple yet effective joint context transfer module based on the cross-attention mechanism at the decoder to effectively capture the global inter-view correlations, which is insensitive to the geometric relationships between images. Experimental results show that LDMIC significantly outperforms both traditional and learning-based MIC methods while enjoying fast encoding speed. Code is released at [https://github.com/Xinjie-Q/LDMIC](https://github.com/Xinjie-Q/LDMIC). ## 1 Introduction Multi-view image coding (MIC) aims to jointly compress a set of correlated images captured from different viewpoints, and promises high coding efficiency for the whole image set by exploiting the inter-image correlation. It plays an important role in many applications, such as autonomous driving (Yin et al., 2020), virtual reality (Fehn, 2004), and robot navigation (Sanchez-Rodriguez and Aceves-Lopez, 2018). As shown in Figure 1(a), existing multi-view coding standards, e.g., H.264-based MVC (Vetro et al., 2011) and H.265-based MV-HEVC (Tech et al., 2015), adopt a joint coding architecture to compress different views. Specifically, they follow the predictive compression procedure of video standards, in which a selected base view is compressed by single image coding. When compressing a dependent view, both disparity estimation and compensation are employed at the encoder to generate the predicted image. Then the disparity information, as well as the residual errors between the input and predicted image, is compressed and passed to the decoder. In this way, the redundancy between different views is reduced sequentially. These methods depend on hand-crafted modules, which prevents the whole compression system from enjoying the benefits of end-to-end optimization. Inspired by the great success of learning-based single image compression (Balle et al., 2017, 2018; Minnen et al., 2018; Cheng et al., 2020), several recent works have investigated the application of deep learning techniques to stereo image coding, a special case of MIC. In particular, Liu et al. (2019), Deng et al. (2021) and Wodlinger et al. (2022), mimicking traditional MIC techniques, adopt a unidirectional coding mechanism and explicitly utilize disparity-compensated prediction in the pixel/feature space to reduce the inter-view redundancy. Meanwhile, Lei et al. (2022) introduces a bi-directional coding framework, called BCSIC, that jointly compresses the left and right images simultaneously to explore the content dependency within the stereo pair. These early studies demonstrate the potential of deep neural networks (DNNs) to save significant bit-rate in MIC.
However, there are several significant shortcomings hampering the deployment and application scope of existing MIC methods. **Firstly**, both the traditional and learning-based approaches demand inter-view prediction at the encoder, _i.e._, joint encoding, which requires the cameras to communicate with each other or to transmit the data to an intermediate common receiver, thereby consuming a tremendous amount of communication resources and increasing the deployment cost (Gehrig & Dragotti, 2007). This is undesirable in applications relevant to wireless multimedia sensor networks (Akyildiz et al., 2007). An alternative is to deploy special sensors such as stereo cameras as the encoder devices to acquire the data, but these devices are generally more expensive than monocular sensors and suffer from a limited field of view (FoV) due to the constraints on distance and position between the built-in sensors (Li, 2008). **Secondly**, most of the prevailing schemes, except BCSIC, are developed based on disparity correlations defined by the epipolar geometric constraint (Scharstein & Szeliski, 2002), which usually requires knowing the internal and external parameters of the cameras in advance, such as camera locations, orientations, and camera matrices. However, it is difficult for a distributed camera system without communication to access such prior knowledge of the cameras (Devarajan et al., 2008). For example, the specific location information of cameras in autonomous driving is usually not expected to be perceived by other vehicles or infrastructure, in order to avoid leaking the location and trajectory of individuals (Xiong et al., 2020). **Finally**, as shown in Table 1 and Figure 4, compared with state-of-the-art (SOTA) learning-based single image codecs (Minnen et al., 2018; Cheng et al., 2020), existing DNN-based MIC methods are not competitive in terms of rate-distortion (RD) performance, which is potentially caused by inefficient inter-view prediction networks. To address the above challenges, we resort to innovations in the image coding architecture. In particular, our inspiration comes from the Slepian-Wolf (SW) theorem (Slepian & Wolf, 1973; Wolf, 1973) on distributed source coding (DSC) 1. The SW theorem shows that separate encoding with joint decoding of two or more correlated sources can theoretically achieve the same compression rate as a joint encoding-decoding scheme under lossless compression. It has been extended to the lossy case by Berger (1978) and Tung (1978), providing inner and outer bounds on the achievable rate region. Based on these information-theoretic results on DSC, we develop a learning-based distributed multi-view image coding (LDMIC) framework. Specifically, **to avoid collaboration between different cameras**, as shown in Figure 1(b), each view image is mapped to the corresponding quantized latent representation by an individual encoder, while a joint decoder is used to reconstruct the whole image set, which successfully avoids communication among cameras and the usage of special sensors. This architectural innovation is theoretically supported by DSC theory. **Instead of disparity-based correlations**, we design a joint context transfer (JCT) module, based on a cross-attention mechanism agnostic to geometry priors, to exploit the global content dependencies between different views at the decoder, making our approach applicable to arbitrary multi-camera systems with overlapping FoV.
**Finally**, since the separate encoding and joint decoding scheme is implemented with DNNs, an end-to-end RD optimization strategy is leveraged to implicitly help the encoder learn to remove part of the inter-view redundancy, thus improving the compression performance of the overall system. In summary, our main contributions are as follows: Footnote 1: More details about the theorems and propositions of distributed source coding are provided in Appendix 6.4.

* To the best of our knowledge, this is the first work to develop a novel deep learning-based _view-symmetric_ framework for multi-view image coding. It decouples the inter-view operations at the encoder, which is highly desirable for distributed camera systems.
* We present a joint context transfer module at the decoder to explicitly capture inter-view correlations and generate more informative representations. We also propose an end-to-end encoder-decoder training strategy to implicitly make the latent representations more compact.
* Extensive experimental results show that our proposed framework is the first distributed codec achieving coding performance comparable to the SOTA joint encoding-decoding schemes, implying the effectiveness of the inter-view cross-attention mechanism compared with conventional disparity-based prediction. Moreover, our proposed framework outperforms the asymmetric coding framework NDIC (Mital et al., 2022b), which demonstrates the advantage of the view-symmetric design over the asymmetric one.

Figure 1: Overview of different multi-view image coding architectures, including (a) a joint encoding architecture and (b) the proposed symmetric distributed coding architecture.

## 2 Related Works **Single Image Coding.** In the past decades, various standard image codecs have been developed, including JPEG (Wallace, 1992), JPEG2000 (Skodras et al., 2001), BPG (Bellard, 2014), and VVC intra (Bross et al., 2021). They generally apply three key ideas to reduce redundancy: (i) transform coding, e.g., the discrete cosine transform, to decrease the spatial correlation; (ii) quantization of transform coefficients to remove the irrelevancy related to the human visual system; and (iii) entropy coding to lessen the statistical correlation of the coded symbols. Unfortunately, these components are separately optimized, making it hard to achieve optimal coding efficiency. Recently, end-to-end image compression has attracted increasing interest; it is built upon the transform coding paradigm with nonlinear transforms and powerful entropy models for higher compression efficiency. Nonlinear transforms are used to produce compact representations, such as generalized divisive normalization (GDN) (Balle et al., 2015), the self-attention block (Cheng et al., 2020), wavelet-like invertible transforms (Ma et al., 2020) and stacks of residual bottleneck blocks (He et al., 2022). To approximate the distribution of latent representations, many advanced entropy models have been proposed. For example, Balle et al. (2017, 2018) first put forward the factorized and hyper prior entropy models. The auto-regressive context model (Minnen et al., 2018) was then combined with the hyper prior to effectively reduce the spatial redundancy of images, at the expense of high decoding latency. To improve the decoding speed, Minnen & Singh (2020) and He et al. (2021) investigate channel-wise and spatial-wise context versions, respectively. These existing works serve as important building blocks for our scheme.
**Multi-view Image Coding.** Conventional MIC standards (Vetro et al., 2011; Tech et al., 2015) are derived from key-frame compression methods designed for multi-view video codecs. Since these methods are still in the development stage and only support the YUV420 format, they are uncompetitive against single image codecs that allow the YUV444 or RGB format. Meanwhile, existing learning-based MIC approaches (Liu et al., 2019; Deng et al., 2021; Wodlinger et al., 2022; Lei et al., 2022) mainly focus on stereo images, and it is difficult to effectively extend them to the general multi-view scenario. Moreover, they can only handle a fixed number of views. In contrast, our framework applies average pooling to merge the information between multiple views, making it insensitive to the number of viewpoints. **Distributed Source Coding.** Some works have developed multi-view compression methods based on DSC. They are typically built on the setting of coding with side information (Zhu et al., 2003; Thirumalai et al., 2007; Chen et al., 2008; Wang et al., 2012), where one view is selected as a reference and compressed independently. For the other views, the joint decoder uses the reference as side information to capture the inter-view correlations and reduce the coding rate. Recent learning-based distributed multi-view image compression concentrates on this asymmetric paradigm (Ayzik & Avidan, 2020; Whang et al., 2021; Wang et al., 2022; Mital et al., 2022a;b). Nevertheless, this architecture suffers from a high transmission cost for the primary sensor, since it requires a hierarchical relationship between the cameras, leading to unbalanced coding rates among them (Tosic & Frossard, 2009). Different from the above works, we consider the more practical symmetric coding pattern illustrated in Figure 1(b), where all cameras have equal status. While traditional symmetric coding schemes (Thirumalai et al., 2008; Gehrig & Dragotti, 2009) utilize disparity-based estimation at the decoder to reduce the transmission cost, we dispense with disparity-compensated prediction and adopt the cross-attention mechanism (Vaswani et al., 2017) to capture the global relevance between different views, which effectively improves the compression performance and broadens the application scope. To the best of our knowledge, our study is the first to apply DNNs to symmetric distributed coding and to achieve RD performance comparable to joint encoding-decoding schemes. ## 3 Proposed Method ### The Overall Architecture of LDMIC Figure 2 depicts the network architecture of the proposed method. Let \(\mathbb{K}=\{1,\cdots,K\}\) denote the image index set. Given a group of multi-view images \(\mathbf{x}_{\mathbb{K}}=\{\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{K}\}\), each image \(\mathbf{x}_{k}\) is independently mapped to the corresponding representation \(\mathbf{y}_{k}\) by the encoder \(E_{k}\) with shared network parameters. Then \(\mathbf{y}_{k}\) is quantized to \(\hat{\mathbf{y}}_{k}\). After receiving all the quantized representations \(\mathbf{\hat{y}}_{\mathbb{K}}\), the joint decoder \(JD\) exploits the inter-view correlations among \(\mathbf{\hat{y}}_{\mathbb{K}}\) to reconstruct the whole image set \(\mathbf{\hat{x}}_{\mathbb{K}}\).
The compression procedure is described as \[\mathbf{y}_{k}=E_{k}(\mathbf{x}_{k};\mathbf{\phi}),\quad\mathbf{\hat{y}}_{k}=Q(\mathbf{y}_{k}),\quad\forall k\in\mathbb{K},\qquad\mathbf{\hat{x}}_{\mathbb{K}}=JD(\mathbf{\hat{y}}_{\mathbb{K}};\mathbf{\theta}), \tag{1}\] where \(\mathbf{\phi}\) and \(\mathbf{\theta}\) are the optimized parameters of the encoder and decoder. Since the quantizer \(Q\) is not differentiable, we apply the mixed quantization approach proposed in Minnen and Singh (2020) during training. Specifically, the latent representation \(\mathbf{y}_{k}\) with additive uniform noise is taken as the input to the entropy model for estimating the bitrate, while the rounded representation with a straight-through-estimation (STE) gradient flows to the joint decoder for reconstruction. To apply entropy coding to reduce the statistical correlation of the quantized representation \(\mathbf{\hat{y}}_{k}\), each element \(\hat{y}_{k,i}\) is modelled as a univariate Gaussian random variable with mean \(\mu_{k,i}\) and standard deviation \(\sigma_{k,i}\) by introducing side information \(\hat{z}_{k,i}\), where \(i\) denotes the position of each element in a vector-valued signal. The probability distribution \(p_{\mathbf{\hat{y}}_{k}|\mathbf{z}_{k}}\) of \(\mathbf{\hat{y}}_{k}\) is expressed as follows: \[p_{\mathbf{\hat{y}}_{k}|\mathbf{z}_{k}}(\mathbf{\hat{y}}_{k}|\mathbf{z}_{k})\sim\mathcal{N}(\mathbf{\mu}_{k},\mathbf{\sigma}_{k}^{2}). \tag{2}\] Meanwhile, a context model is combined with the entropy model to effectively reduce the spatial redundancy of the latent \(\mathbf{\hat{y}}_{k}\). The choice of context model depends on the specific needs of different applications. We choose an auto-regressive model (Minnen et al., 2018) for better coding efficiency and a checkerboard model (He et al., 2021) for faster coding speed.

Figure 2: The proposed LDMIC framework with an auto-regressive entropy model. \(\mathbf{\hat{y}}_{\mathbb{K}\setminus\{k\}}\) and \(\mathbf{h}_{\mathbb{K}\setminus\{k\}}\) represent the sets of all view features except the \(k\)-th view feature \(\mathbf{\hat{y}}_{k}\) and \(\mathbf{h}_{k}\), respectively. Convolution/deconvolution parameters are formatted as (number of output channels, kernel size, stride). Q denotes quantization. AE and AD represent the arithmetic encoder and decoder, respectively.
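The mixed quantization described above can be sketched in a few lines of PyTorch; this is a generic rendition of the noise-for-rate / STE-round-for-decoder convention of Minnen and Singh (2020), not the authors' exact code.

```python
import torch

def mixed_quantize(y):
    """Training-time surrogate for the quantizer Q: additive uniform noise
    models quantization for the rate (entropy model) branch, while
    straight-through rounding feeds the joint decoder."""
    y_noisy = y + torch.empty_like(y).uniform_(-0.5, 0.5)  # rate estimation input
    y_ste = y + (torch.round(y) - y).detach()              # round, STE gradient
    return y_noisy, y_ste

y = torch.randn(1, 192, 16, 16, requires_grad=True)
y_noisy, y_ste = mixed_quantize(y)
y_ste.sum().backward()   # gradients flow straight through the rounding
print(y.grad.abs().sum() > 0)
```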
### Joint Context Transfer Module Due to the overlap between the cameras' FoV, there exist significant inter-view correlations in the feature space, which inspires us to propose a joint context transfer (JCT) module that exploits this property to generate more informative representations. As shown in Figure 3, the proposed JCT module receives multi-view features \(\mathbf{f}_{\mathbb{K}}\) as inputs, learns an inter-view context for each view feature, and refines the input features based on the corresponding inter-view contexts. Note that there are \(K\) parallel paths in the JCT module. Each path shares the same network parameters and follows the three-step process described below to obtain the refined representations \(\mathbf{f}_{\mathbb{K}}^{*}\). **Feature extraction.** We first utilize two residual blocks to extract the representative feature \(\mathbf{f}_{k}^{{}^{\prime}}\) from the \(k\)-th view \(\mathbf{f}_{k}\). Each residual block, as depicted in Figure 3, is composed of two consecutive convolution layers with Leaky ReLU activation functions. **Multi-view fusion.** All the representations \(\mathbf{f}^{{}^{\prime}}_{\mathbb{K}}\) from the feature extraction module except \(\mathbf{f}^{{}^{\prime}}_{k}\) are aggregated into a preliminary context \(\mathbf{\tilde{f}}^{{}^{\prime}}_{k}\) via simple average pooling over the dimension indexing the input features: \[\mathbf{\tilde{f}}^{{}^{\prime}}_{k}=\frac{1}{K-1}\sum_{i\in\mathbb{K}\backslash\{k\}}\mathbf{f}^{{}^{\prime}}_{i}, \tag{3}\] where \(\mathbb{K}\backslash\{k\}=\{1,\cdots,k-1,k+1,\cdots,K\}\). Through this aggregation operation, we achieve fusion between any number of view features. In addition, more complex pooling approaches could be developed to further improve the performance. After obtaining the aggregated context, we apply a multi-head cross-attention module to exploit the dependency between \(\mathbf{f}^{{}^{\prime}}_{k}\) and \(\mathbf{\tilde{f}}^{{}^{\prime}}_{k}\). Since the original attention module incurs high memory and computational cost for inputs with a large spatial dimension, we adopt the resource-efficient attention of Shen et al. (2021). Specifically, we use a \(1\times 1\) convolution layer and a reshape operation to transform \(\mathbf{f}^{{}^{\prime}}_{k}\in\mathbb{R}^{H\times W\times d}\) and \(\mathbf{\tilde{f}}^{{}^{\prime}}_{k}\in\mathbb{R}^{H\times W\times d}\) into the query \(\mathbf{Q}_{k}=\mathrm{Conv}(\mathbf{f}^{{}^{\prime}}_{k})\in\mathbb{R}^{n\times h\times d_{1}}\), key \(\mathbf{K}_{k}=\mathrm{Conv}(\mathbf{\tilde{f}}^{{}^{\prime}}_{k})\in\mathbb{R}^{n\times h\times d_{1}}\) and value \(\mathbf{V}_{k}=\mathrm{Conv}(\mathbf{\tilde{f}}^{{}^{\prime}}_{k})\in\mathbb{R}^{n\times h\times d_{2}}\), where \(n=H\times W\) and \(h\) denotes the number of heads. The notations \(d\), \(d_{1}\) and \(d_{2}\) are the channel dimensions of the input, key (query) and value in a head, respectively. Then the multi-head cross-attention module is applied as: \[\mathbf{A}_{k,i}=\sigma_{row}(\mathbf{Q}_{k,i})(\sigma_{col}(\mathbf{K}_{k,i})^{\mathsf{T}}\mathbf{V}_{k,i}),\forall i=1,\cdots,h,\qquad\mathbf{f}^{{}^{\prime}}_{\mathbb{K}\backslash\{k\}\to k}=\mathrm{Conv}(\mathbf{A}_{k,1}\oplus\cdots\oplus\mathbf{A}_{k,h}), \tag{4}\] where \(\sigma_{row}\) (\(\sigma_{col}\)) denotes applying the softmax function along each row (column) of the matrix, and \(\oplus\) is channel-wise concatenation. The context \(\mathbf{f}^{{}^{\prime}}_{\mathbb{K}\backslash\{k\}\to k}\) relevant to the \(k\)-th view feature is extracted and will be injected into the current feature in the next step. **Refinement.** Based on the learned inter-view context \(\mathbf{f}^{{}^{\prime}}_{\mathbb{K}\backslash\{k\}\to k}\), the input feature \(\mathbf{f}_{k}\) is refined into a more informative feature \(\mathbf{f}^{*}_{k}\): \[\mathbf{f}^{*}_{k}=\mathbf{f}_{k}+F(\mathbf{f}^{{}^{\prime}}_{\mathbb{K}\backslash\{k\}\to k}\oplus\mathbf{f}^{{}^{\prime}}_{k}), \tag{5}\] where \(F(\cdot)\) consists of two consecutive residual blocks. As shown in Figure 2, the JCT module is placed before the first and third deconvolution layers to connect the decoding streams of different views for feature aggregation and transformation.

Figure 3: Illustration of the \(k\)-th path in the proposed joint context transfer module. \(\mathbf{f}^{{}^{\prime}}_{\mathbb{K}\backslash\{k\}}\) denotes the set of all view representations except the current view representation \(\mathbf{f}^{{}^{\prime}}_{k}\).
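To make Eqs. 3-4 concrete, the following PyTorch sketch implements the average-pooling fusion and the resource-efficient cross-attention (softmax over query rows and key columns, so the \(n\times n\) attention map is never materialised); the \(1\times 1\) convolution projections, head reshaping and refinement blocks are omitted, and all shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def efficient_cross_attention(q, k, v):
    """Linear-complexity attention in the style of Shen et al. (2021).
    Shapes: q, k are (heads, n, d1); v is (heads, n, d2)."""
    q = F.softmax(q, dim=-1)             # sigma_row: softmax along each row
    k = F.softmax(k, dim=-2)             # sigma_col: softmax along each column
    context = k.transpose(-2, -1) @ v    # (heads, d1, d2), never (n, n)
    return q @ context                   # (heads, n, d2)

def jct_fuse(features, k):
    """Eq. 3: average the features of all views except view k to form the
    preliminary context. `features` is a list of (C, H, W) tensors."""
    others = [f for i, f in enumerate(features) if i != k]
    return torch.stack(others).mean(dim=0)

# Toy usage: 3 views, then attention with 4 heads over 64 spatial tokens.
feats = [torch.randn(8, 8, 8) for _ in range(3)]
ctx = jct_fuse(feats, k=0)
q = torch.randn(4, 64, 16)
kk = torch.randn(4, 64, 16)
v = torch.randn(4, 64, 16)
out = efficient_cross_attention(q, kk, v)
print(ctx.shape, out.shape)  # torch.Size([8, 8, 8]) torch.Size([4, 64, 16])
```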
### Training

The target of LDMIC is to optimize the trade-off between the number of encoded bits and the reconstruction quality. Therefore, a training loss composed of two metrics is used: \[L=\lambda D+R=\lambda\sum_{k=1}^{K}d(\mathbf{x}_{k},\mathbf{\hat{x}}_{k})+\sum_{k=1}^ {K}\left(R(\mathbf{\hat{y}}_{k})+R(\mathbf{\hat{z}}_{k})\right) \tag{6}\] where \(d(\mathbf{x}_{k},\mathbf{\hat{x}}_{k})\) is the distortion between \(\mathbf{x}_{k}\) and \(\mathbf{\hat{x}}_{k}\) under a given metric, such as mean squared error (MSE). \(R(\mathbf{\hat{y}}_{k})\) and \(R(\mathbf{\hat{z}}_{k})\) represent the estimated compression rates of the latent representation \(\mathbf{\hat{y}}_{k}\) and the corresponding hyper representation \(\mathbf{\hat{z}}_{k}\), respectively. \(\lambda\) is a hyperparameter that controls the trade-off between the bit rate cost \(R\) and the distortion \(D\).

Figure 3: Illustration of the \(k\)-th path in the proposed joint context transfer module. \(\mathbf{f}^{{}^{\prime}}_{\mathbb{K}\backslash\{k\}}\) denotes the set of all the view representations except for the current view representation \(\mathbf{f}^{{}^{\prime}}_{k}\).
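As an illustration of the objective in Eq. (6), a training step might compute the loss as below. This is a hedged sketch: how the entropy models expose per-view likelihoods is our assumption for illustration, not a detail taken from the paper.

```python
import torch
import torch.nn.functional as F

def rd_loss(x, x_hat, likelihoods_y, likelihoods_z, lam):
    """Rate-distortion objective L = lam * D + R (Eq. 6), summed over all views.
    likelihoods_y / likelihoods_z: lists of per-view likelihood tensors from the
    entropy models for the latents y_k and hyper-latents z_k (assumed interface)."""
    num_pixels = x.numel() / x.shape[1]            # B * H * W for input (B, C, H, W)
    distortion = F.mse_loss(x_hat, x)              # D: mean squared error
    bits = sum((-p.log2()).sum() for p in list(likelihoods_y) + list(likelihoods_z))
    rate = bits / num_pixels                       # R in bits per pixel
    return lam * distortion + rate
```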
## 4 Experiments

### Experimental Setup

**Datasets.** To compare with the recently developed learning-based stereo image compression methods, two common stereo image datasets, _i.e._, InStereo2K (Bao et al., 2020) and Cityscapes (Cordts et al., 2016), are chosen to evaluate the coding efficiency of the proposed framework. Apart from these stereo image datasets related to 3D scenes, we also select a pedestrian surveillance dataset, _i.e._, WildTrack (Chavdarova et al., 2018), acquired by seven randomly placed cameras with overlapping FoV, to demonstrate the potential of our proposed framework in distributed camera systems without an epipolar geometric relationship between images. More details about the datasets are provided in Appendix 6.5. **Benchmarks.** The competing baselines can be split into three categories: (1) The _separate_ model independently compresses each image; its typical SOTA representatives are BPG (Bellard, 2014), VVC-intra (Bross et al., 2021), Minnen et al. (2018) and Cheng et al. (2020). For BPG and VVC-intra, we disable chroma subsampling. (2) The _joint_ model has access to a set of multi-view images and explicitly utilizes the inter-view redundancy to achieve a high compression ratio. According to the performance comparisons in Wodlinger et al. (2022), conventional video standards can be applied to MIC, where each set of multi-view images is compressed as a multi-frame video sequence by using both HEVC (Sullivan et al., 2012) and VVC (Bross et al., 2021) with the _lowdelay_P_ configuration as well as the YUV444 input format. We also test MV-HEVC (Tech et al., 2015) with the multi-view intra mode. Apart from that, we report the results of several recent DNN-based stereo image codecs on the InStereo2K and Cityscapes datasets, including DSIC (Liu et al., 2019), two variants of HESIC (Deng et al., 2021), BCSIC (Lei et al., 2022), and SASIC (Wodlinger et al., 2022). (3) The _distributed_ model only uses the joint decoder to implicitly reduce the inter-view dependency. We compare our method with NDIC based on asymmetric DSC (Mital et al., 2022) to demonstrate the superiority of symmetric DSC. More details on the baseline settings are given in Appendix 6.5. **Metrics.** The distortion between the reconstructed and original images is measured by the peak signal-to-noise ratio (PSNR) and the multi-scale structural similarity index (MS-SSIM) (Wang et al., 2003). Besides assessing RD curves, we compute the Bjontegaard Delta bitrate (BDBR) (Bjontegaard, 2001) results to represent the average bitrate savings at the same distortion level. **Implementation Details.** We train our models with five different \(\lambda\) values, where \(\lambda=256,512,1024,2048,4096\) (\(8,16,32,64,128\)) under MSE (MS-SSIM). The MSE-optimized models are trained from scratch for 400 epochs on InStereo2K/Cityscapes and 700 epochs on WildTrack.

Figure 4: Rate-distortion curves of our proposed methods compared against various competitive baselines.

On the WildTrack dataset, our framework improves the reconstruction of each view by using additional information from other cameras. It is observed that the traditional video codecs perform worse than the corresponding intra-frame ones due to the many heterogeneous overlapping regions, which make it difficult for standard video codecs to effectively capture the inter-view redundancy using compensation-based predictions. However, our proposed framework relies on the cross-attention mechanism to exploit the correlations of different views from the perspective of global receptive fields, thereby providing up to 31.21% and 67.77% bitrate savings in PSNR and MS-SSIM, respectively. These remarkable results demonstrate that the proposed LDMIC framework is a promising solution to meet the compression needs of distributed camera systems. The RD curves for the multi-camera case, provided in Appendix 6.1, show trends similar to those of the two-camera case. Moreover, compared with the asymmetric DSC-based NDIC, the proposed method saves 55.67%, 47.5% and 35.15% of the bits in PSNR on the three datasets (InStereo2K, Cityscapes, WildTrack). For the proposed-fast variant with the checkerboard entropy model, the improvements are also considerable, _i.e._, 43.66%, 35.66% and 30.63%. This set of results indicates that the usage of bi-directional information based on symmetric DSC can better exploit the inter-view correlations to bring higher coding gains. Additionally, our methods have better compression efficiency in MS-SSIM than in PSNR, which is partly caused by exploiting the inter-view correlations in the feature space rather than the pixel space at the decoder. Thus, the network tends to focus on structural information instead of pixel-level information. **Computational complexity.** Table 2 shows the computational complexity of seven image codecs running on an Intel Xeon Gold 6230R processor with a base frequency of 2.10 GHz, using a single CPU core, including the number of FLOPs, the model parameters and the coding latency. Different from the joint models, our methods, designed based on DSC, decouple the inter-view operations at the encoder, which allows image-level parallel processing. Therefore, the proposed-fast variant enjoys about \(1.36\sim 10.95\) and \(1.41\sim 4.35\) times encoding and decoding speedup against the learned joint schemes (_i.e._, DSIC, HESIC, HESIC+, SASIC). Even if the auto-regressive entropy model is used, the encoding of our method is still faster than that of both DSIC and SASIC based on the hyperprior. Moreover, our proposed-fast variant, which has better coding efficiency, achieves coding times similar to the other DSC-based method, NDIC, demonstrating the superiority of symmetric DSC in both coding speed and compression efficiency. For a more detailed comparison between our methods and the traditional codecs, please refer to Appendix 6.2.

Figure 5: Ablation study. _Joint Enc-Dec_ and _Sep Enc-Dec_ denote inserting and removing the JCT module at the encoder and decoder, respectively. _Concatenation_, _SAM_ and _BiCTM_ represent different inter-view operations to replace the proposed JCT module at the decoder. _W/O Joint Training_ is to fix the pretrained encoder including the entropy model and only train the joint decoder.
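The BDBR values reported above follow the standard Bjontegaard procedure: fit log-rate as a cubic polynomial in quality for each codec, integrate both fits over the overlapping quality range, and convert the mean log-rate gap into a percentage. A minimal NumPy sketch of this standard calculation (not the authors' evaluation script) is:

```python
import numpy as np

def bd_rate(r_anchor, psnr_anchor, r_test, psnr_test):
    """Bjontegaard Delta bitrate: average bitrate change (%) of the test codec
    vs. the anchor at equal quality, from four or more RD points per codec."""
    p1 = np.polyfit(psnr_anchor, np.log(r_anchor), 3)  # log-rate as cubic in PSNR
    p2 = np.polyfit(psnr_test, np.log(r_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))         # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    int1 = np.polyval(np.polyint(p1), hi) - np.polyval(np.polyint(p1), lo)
    int2 = np.polyval(np.polyint(p2), hi) - np.polyval(np.polyint(p2), lo)
    avg_diff = (int2 - int1) / (hi - lo)               # mean log-rate difference
    return (np.exp(avg_diff) - 1) * 100                # percent bitrate change
```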
Figure 6: Visual examples from the InStereo2K dataset, where we assemble all channels of the latent representation \(Q(\mathbf{y}_{k}-\mathbf{\mu}_{k})\) to display the feature map.

### Ablation study

**Inter-view Fusion.** To verify the contribution of the JCT module to fusing inter-view information, a set of ablation experiments is conducted on the InStereo2K dataset, with the RD curves shown in Figure 5. Specifically, we allow (forbid) both the encoder and the decoder to access the inter-view context, which provides an upper (lower) bound on the performance of the proposed method and is denoted by _Joint (Sep) Enc-Dec_. In this case, the PSNR with (without) the JCT module at the encoder (decoder) improves (drops) by about 0.16dB (0.73dB) at the same bpp level. We further report the compression results when the JCT module is directly replaced by other inter-view fusion operations, such as concatenation in Mital et al. (2022b), the stereo attention module (SAM) in Wodlinger et al. (2022) and the bi-directional contextual transform module (Bi-CTM) in Lei et al. (2022). These operations increase the bitrate by 32.73%, 27.99%, and 10.11%, respectively, compared with our method. The experimental results indicate that our proposed JCT module has a powerful capability for capturing inter-view correlations and generating more informative representations. **Joint Training Strategy.** In this paper, we exploit the benefit of joint training to implicitly help the encoder learn to remove part of the inter-view redundancy. Thus, the latent representation is expected to be more compact. To investigate this effect, we perform an experiment in which only the joint decoder is trained, with the pre-trained encoder and entropy model fixed. As shown in Figure 5, our approach outperforms the _W/O Joint Training_ method by 0.225 dB. In Figure 6, we provide further visual comparisons. It is noted that the latent feature maps obtained with the joint training strategy contain more elements with low magnitudes, which require far fewer bits to encode. **Number of views.** Table 3 shows the impact of different numbers of views on coding efficiency. We compare the bitrate of cameras C1 and C2 when incorporating different numbers of views during decoding. The bitrate saving increases gradually as more information is received from different cameras. Because we use only a simple average pooling to merge multi-view information into the inter-view context, the coding gains are marginal when incorporating more views. It is possible to further improve the compression gains of our framework by using more complex aggregation approaches.

## 5 Discussion

In this paper, we presented a novel end-to-end distributed multi-view image coding framework nicknamed LDMIC. Our proposal inherits the advantages of traditional distributed compression in image-level parallelization, which is desirable for distributed camera systems. Meanwhile, leveraging the insensitivity of the cross-attention mechanism to epipolar geometric relations, we develop a joint context transfer module to account for global correlations between images from different viewpoints.
Experimental results demonstrate the competence of LDMIC in achieving higher coding gains than existing learning-based joint and separate encoding-decoding schemes. Moreover, compared with learned joint models, the LDMIC fast variant enjoys a much lower coding complexity with on-par compression performance. To the best of our knowledge, this is the first successful attempt at a distributed coding architecture that competes with the joint coding paradigm in the lossy compression setting. Based on the proposed framework, there are two clear directions to be explored in the future. On one hand, as mentioned in Section 4.3, it is interesting to investigate how to more effectively incorporate different view information to generate a better inter-view context. On the other hand, it is worth exploring how to extend the framework to multi-view video compression. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Number of cameras & \(K=2\) & \(K=3\) & \(K=4\) & \(K=5\) & \(K=6\) & \(K=7\) \\ \hline Bitrate saving (\%) & 0 & 0.0053 & 0.0801 & 1.0919 & 1.4161 & 1.5004 \\ \hline \hline \end{tabular} \end{table} Table 3: Bitrate savings for two-view images with cameras C1 and C2 as the number of viewpoints increases on the WildTrack dataset. The case of \(K=2\) is set as the anchor. #### Acknowledgments This work was supported by the NSFC/RGC Collaborative Research Scheme (Project No. CRS_HKUST603/22).
2310.10589
Delayed Massive-Star Mechanical Feedback at Low Metallicity
The classical model of massive-star mechanical feedback is based on effects at solar metallicity (Zsun), yet feedback parameters are very different at low metallicity. Metal-poor stellar winds are much weaker, and more massive supernova progenitors likely collapse directly to black holes without exploding. Thus, for ~0.4 Zsun we find reductions in the total integrated mechanical energy and momentum of ~40% and 75%, respectively, compared to values classically expected at solar metallicity. But in particular, these changes effectively delay the onset of mechanical feedback until ages of ~10 Myr. Feedback from high-mass X-ray binaries could slightly increase mechanical luminosity between ages 5-10 Myr, but it is stochastic and unlikely to be significant on this timescale. Stellar dynamical mechanisms remove most massive stars from clusters well before 10 Myr, which would further promote this effect; this process is exacerbated by gas retention implied by weak feedback. Delayed mechanical feedback implies that radiation feedback therefore dominates at early ages, which is consistent with the observed absence of superwinds in some extreme starbursts. This scenario may lead to higher star-formation efficiencies, multiple stellar populations in clusters, and higher Lyman continuum escape. This could explain the giant star-forming complexes in metal-poor galaxies and the small sizes of OB superbubble shells relative to their inferred ages. It could also drive modest effects on galactic chemical evolution, including on oxygen abundances. Thus, delayed low-metallicity mechanical feedback may have broad implications, including for early cosmic epochs.
Michelle C. Jecmen, M. S. Oey
2023-10-16T17:11:26Z
http://arxiv.org/abs/2310.10589v1
# Delayed Massive-Star Mechanical Feedback at Low Metallicity ###### Abstract The classical model of massive-star mechanical feedback is based on effects at solar metallicity (Z\({}_{\odot}\)), yet feedback parameters are very different at low metallicity. Metal-poor stellar winds are much weaker, and more massive supernova progenitors likely collapse directly to black holes without exploding. Thus, for \(\sim 0.4\) Z\({}_{\odot}\) we find reductions in the total integrated mechanical energy and momentum of \(\sim 40\%\) and \(75\%\), respectively, compared to values classically expected at solar metallicity. But in particular, these changes effectively delay the onset of mechanical feedback until ages of \(\sim 10\) Myr. Feedback from high-mass X-ray binaries could slightly increase mechanical luminosity between ages 5-10 Myr, but it is stochastic and unlikely to be significant on this timescale. Stellar dynamical mechanisms remove most massive stars from clusters well before 10 Myr, which would further promote this effect; this process is exacerbated by gas retention implied by weak feedback. Delayed mechanical feedback implies that radiation feedback therefore dominates at early ages, which is consistent with the observed absence of superwinds in some extreme starbursts. This scenario may lead to higher star-formation efficiencies, multiple stellar populations in clusters, and higher Lyman continuum escape. This could explain the giant star-forming complexes in metal-poor galaxies and the small sizes of OB superbubble shells relative to their inferred ages. It could also drive modest effects on galactic chemical evolution, including on oxygen abundances. Thus, delayed low-metallicity mechanical feedback may have broad implications, including for early cosmic epochs. Stellar Feedback (1602) -- Starburst Galaxies (1570) -- Massive Stars (732) -- Metallicity (1031) -- Dwarf Irregular Galaxies (417) -- Lyman-Alpha Galaxies (978) -- Interstellar Medium Wind (848) -- Young Massive Clusters (2049) -- Galaxy winds (626) + Footnote †: journal: AAS ## 1 Introduction Mechanical feedback from massive stars and young star clusters plays a pivotal role in the evolution of star-forming galaxies. Supersonic stellar winds from OB stars and their supernovae (SNe) release large amounts of mechanical energy (\(\sim 3\times 10^{38}\) erg s\({}^{-1}\) per solar mass of stars; Keller et al., 2014), which strongly affect the surrounding interstellar medium (ISM). The shocks generate superbubbles that are pressure-driven by a hot (\(>10^{6}\) K), low-density component which piles up cooler ISM into large, ionized and/or neutral shells. In starbursts, the super star clusters (SSCs) drive galactic superwinds that blow hot gas and newly synthesized supernova products into the circumgalactic medium (CGM). Thus, massive-star feedback can be responsible for the morphology, kinematics, ionization balance, and metallicity of the ISM and CGM. Furthermore, expanding superbubbles and superwinds can trigger second, and even third, waves of star formation as they interact with the surrounding ISM (e.g., Oey et al., 2005). This classical model for mechanical feedback was formulated for effects at solar metallicity and assumes an approximately constant mechanical luminosity over the cluster lifetime. However, this is not the case at low metallicity, where the initial mechanical feedback of a cluster is at least an order of magnitude lower than the maximum value it reaches well into its lifetime (Leitherer et al., 2014).
In this work, we show that this effect is likely to be important and could fundamentally impact galaxy evolution in multiple ways. It is well understood that mechanical feedback at low metallicity is reduced for the first \(\sim\)3 Myr due to weak stellar winds (see, e.g., the review by Vink, 2022). Recent observational and theoretical evidence suggests that widely used low-metallicity stellar wind mass-loss rates are too high. Mass-loss rates obtained from H\(\alpha\) and UV absorption features reveal values that are smaller than theoretical mass-loss rates from, e.g., Vink et al. (2001) by a factor of \(\sim\)3 (Bouret et al., 2012; Surlan et al., 2013; Puls et al., 2008). Recent numerical models continue to support this moderate decrease (e.g., Gormaz-Matamala et al., 2022; Vink and Sander, 2021). However, recent studies which account for wind micro-clumping and weak winds show a much more dramatic decrease from the Vink et al. (2001) formula of up to one, or even two, orders of magnitude (Bjorklund et al., 2022; Rickard et al., 2022; Ramachandran et al., 2019). Observations in the Small Magellanic Cloud also support much weaker winds than expected (Ramachandran et al., 2019). Another major effect at low metallicity is that SNe start later than at solar values. The classical paradigm is that all massive stars above \(\sim\)8 \(M_{\odot}\) explode, starting at an age of \(\sim\)3 Myr and continuing steadily until the low-mass limit for core collapse is reached around 45 Myr. However, core-collapse supernova (CCSN) models find that many potential progenitors do not explode but instead form black holes (BHs). This effect is well established for low metallicity (Zhang et al., 2008; O'Connor and Ott, 2011; Patton and Sukhbold, 2020). There is no strong consensus on the maximum progenitor mass for SN explosions. Sukhbold et al. (2016) find that only 10% of stars more massive than 20 M\({}_{\odot}\) successfully explode, and O'Connor and Ott (2011) find a strict upper limit of 30 \(M_{\odot}\) at very low metallicities. Additionally, mechanical feedback models overlook dynamical processes, which remove stars, particularly massive stars, before they contribute to the total mechanical feedback of the cluster. These processes are driven by massive binaries, which are more prevalent at low metallicity. When star formation ends, a cluster is not in dynamical equilibrium. As it re-virializes, a significant number of stars will naturally evaporate (e.g., Brinkmann et al., 2017). In addition, the massive stars segregated in the dense cluster core are likely to be dynamically ejected (e.g., Oh et al., 2015; Oh and Kroupa, 2016). If ejected early enough or fast enough, the stars can travel to distances where their mechanical feedback no longer contributes to the aggregate as manifested by a superbubble or superwind. Taken together, these effects imply that mechanical feedback is profoundly different at low metallicity than assumed by the classical paradigm. In this paper we show that the combination of weak stellar winds, fewer supernovae, and the removal of stars by dynamical processes effectively delays the onset of mechanical feedback until cluster ages of \(\sim\) 10 Myr. This has profound effects on the character of massive-star feedback and implies that radiation dominates over mechanical feedback at early ages, which has significant consequences, many of which remain to be understood.
## 2 Starburst99 Models We use Starburst99, a well-established evolutionary synthesis code, to model the mechanical feedback of star-forming galaxies with varying SN progenitor masses (Leitherer et al., 2014). It was previously thought that all stars between 8 and 120 \(M_{\odot}\) end their lives as SNe. However, it is now believed that many low-metallicity progenitors with extended core structures experience direct core collapse without SNe (e.g., O'Connor and Ott, 2011; Zhang et al., 2008; Heger et al., 2003; Sukhbold et al., 2016). The explodability of a progenitor depends on its stellar structure, which is a function of mass and metallicity. To explore the effect of limiting SNe from massive progenitors, we calculate the mechanical luminosity and momentum injection rate over time for three cluster models with varying SN progenitor masses and metallicities. In what follows, the "classical model" has a maximum SN progenitor mass of 120 M\({}_{\odot}\) and solar metallicity. The "unrestricted SNe model" is identical to the classical model but has subsolar metallicity. The "restricted SNe model" has a maximum SN progenitor mass of 20 M\({}_{\odot}\), in line with the predictions of Sukhbold et al. (2016), and has subsolar metallicity. For all models we retain the typical minimum SN progenitor mass of 8 M\({}_{\odot}\), and assume that all stars with masses between the minimum and maximum values explode. We use two stellar evolutionary models published by the Geneva group: one excluding stellar rotation and one with a rotational velocity of 40% of the break-up velocity (Georgy et al., 2013; Ekstrom et al., 2012). All models include stellar rotation unless stated otherwise. The models at subsolar metallicity have heavy-element abundances of \(Z=0.002\) and \(Z=0.004\) for their evolutionary model (Georgy et al., 2013) and atmospheric model (Meynet et al., 1994), respectively. All models use instantaneous star formation with a Kroupa initial mass function for a 10\({}^{6}\) M\({}_{\odot}\) cluster. ### Mechanical Feedback Figure 1 shows the mechanical luminosity (left) and momentum injection rate (right) over time due to stellar winds and SNe for different limits on explodability and metallicity. The dashed blue line shows the classical mechanical feedback scenario, where strong stellar winds dominate until SNe start at \(\sim\)3 Myr. The magnitude of the stellar wind contribution roughly equals that of the SN contribution, thereby generating a fairly constant value with time. This is not the case for the subsolar models. The solid red and black lines show the subsolar restricted and unrestricted SNe models, respectively. Since stellar winds are significantly weaker at low metallicity, the start of SNe is clearly identifiable by a sharp increase of an order of magnitude or more in both plots. Since the restricted SNe model limits the SN progenitor range to masses \(<\) 20 M\({}_{\odot}\), the onset of SNe is delayed to an age of \(\sim\)10 Myr. Stellar winds now dominate the mechanical luminosity and the momentum injection rate for this much longer initial period. This causes a great reduction in both the mechanical luminosity and the momentum injection rate: not only are low-metallicity stellar winds less powerful than SNe, but after \(\sim\)3 Myr, as the most massive stars begin to expire, the remaining lower-mass stars have exponentially weaker stellar winds. Furthermore, we expect the mechanical feedback from stellar winds to be even lower than estimated here, since the adopted mass-loss rates are now thought to be too high (e.g., Vink, 2022; Bjorklund et al., 2022; Gormaz-Matamala et al., 2022; Rickard et al., 2022; Ramachandran et al., 2019). Thus, these combined effects delay the start of strong SN mechanical feedback until a cluster age of \(\sim\) 10 Myr.
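The impact of restricting the SN progenitor range can be checked with a back-of-the-envelope IMF calculation. The sketch below uses the standard two-segment Kroupa slopes above 0.08 M\({}_{\odot}\) and assumes, as in our models, that every star in the adopted progenitor range explodes; it recovers roughly 10\({}^{4}\) core-collapse events for a 10\({}^{6}\) M\({}_{\odot}\) cluster, of which about 70% have progenitors below 20 M\({}_{\odot}\).

```python
import numpy as np
from scipy.integrate import quad

def kroupa(m):
    """Kroupa IMF, dN/dm (unnormalized), continuous across the break at 0.5 Msun."""
    if m < 0.5:
        return m ** -1.3
    return 0.5 * m ** -2.3          # 0.5 * 0.5**(-2.3) = 0.5**(-1.3) at the break

m_lo, m_hi = 0.08, 120.0
mass_norm, _ = quad(lambda m: m * kroupa(m), m_lo, m_hi, points=[0.5])
A = 1e6 / mass_norm                 # normalization for a 1e6 Msun cluster

n_full, _ = quad(kroupa, 8.0, 120.0)       # unrestricted: 8-120 Msun explode
n_restricted, _ = quad(kroupa, 8.0, 20.0)  # restricted: 8-20 Msun explode
print(f"SNe (8-120 Msun): {A * n_full:.0f}")          # ~1.1e4 events
print(f"SNe (8-20  Msun): {A * n_restricted:.0f}")    # ~7.8e3 events
print(f"fraction retained: {n_restricted / n_full:.2f}")  # ~0.72
```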
The effect of varying the maximum SN progenitor mass is shown in Figure 2 for the mechanical luminosity and momentum injection rate, for subsolar models. The models with less massive maximum SN progenitors have exponentially larger delays in the onset of SN feedback, and explodability is also stochastic. Thus, we should take 10 Myr as a nominal value for the feedback delay. Figure 3 shows the total integrated energy (left) and cumulative momentum (right) for the models in Figure 1. The effect from weak, low-metallicity stellar winds is shown by the difference between the classical model (blue) and the subsolar, unrestricted SNe model (red). The effect from only restricting SN progenitor masses is shown by the difference between the subsolar, unrestricted SNe model (red) and the restricted SNe model (black). These differences are apparent in the total integrated energy of the cluster at early ages and become less pronounced as the cluster ages. However, for the total momentum injected, the difference between the models remains large over the duration of mechanical feedback input, since, as seen in Figure 1, the momentum injection rate decreases significantly, and more steeply than the energy injection rate. Thus, the initial momentum loss due to the absence of early SNe causes a significant decrease in the total, cumulative momentum feedback that the system never recovers. Figure 4 gives the cumulative energy (\(E\)) and cumulative momentum (\(p\)) for the restricted SNe model at subsolar metallicity, normalized by the classical model (blue) and subsolar unrestricted SNe model (red). For the first \(\sim\)10 Myr, both the cumulative energy and momentum of the restricted SNe model are only \(\sim 10\%\) that of the classical model. Ultimately, the cumulative energy of the restricted SNe model reaches \(\sim 60\%\) of the classically expected energy. In comparison, the cumulative momentum of the restricted SNe model only reaches \(\sim 25\%\) of the classically expected momentum.

Figure 1: Starburst99 models of mechanical luminosity (left) and momentum injection rate (right) from winds and SNe for a 10\({}^{6}\) M\({}_{\odot}\) cluster (cf. Leitherer et al., 2014). The classical (blue) and subsolar unrestricted SNe (red) models have SN progenitor masses 8 – 120 M\({}_{\odot}\). The subsolar, restricted SNe (black) model has SN progenitor masses of 8 – 20 M\({}_{\odot}\). The stellar evolutionary tracks include rotation for both solar (Ekström et al., 2012) and subsolar (Georgy et al., 2013) metallicity. The green dotted lines in the left panel show mechanical feedback for the binary population synthesis models of Rappaport et al. (2005) for an HMXB population of 10 M\({}_{\odot}\) black holes with secondary stars drawn from a Salpeter (1955) IMF. Models S and B show \(L_{\rm{mech}}=L_{\rm{Edd}}\) and 10\(L_{\rm{Edd}}\), respectively (see text).

Figure 4 also shows models that assume non-rotating stars. Rotation increases the stellar wind power, but it also extends stellar evolution and therefore effectively shifts the maximum SN progenitor to slightly lower masses. The overall effect is a slight reduction in total energy injection.
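The cumulative quantities in Figures 3 and 4 follow from direct time integration of the Starburst99 injection rates. A minimal post-processing sketch is below; the file name and column layout are hypothetical, chosen only for illustration.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Assumed columns: time in yr, mechanical luminosity in erg/s,
# momentum injection rate in dyn.
t_yr, L_mech, pdot = np.loadtxt("sb99_power.txt", unpack=True)

t_s = t_yr * 3.156e7                                     # years -> seconds
E_cum = cumulative_trapezoid(L_mech, t_s, initial=0.0)   # cumulative energy, erg
p_cum = cumulative_trapezoid(pdot, t_s, initial=0.0)     # cumulative momentum, g cm/s
```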
## 3 Additional mechanisms ### Accretion Driven Feedback Justham & Schawinski (2012) also noted the expected delay in SN feedback due to the most massive stars collapsing directly to black holes. They therefore suggest that mechanical feedback from high-mass X-ray binaries (HMXBs) generated by some of these black holes could potentially be important at times before SN feedback dominates. However, the binary population synthesis models of Rappaport et al. (2005) predict that the total expected mechanical luminosity from HMXBs at ages \(<10^{7}\) yr is only on the order of \(\log L_{\rm{mech}}/{\rm{erg\,s^{-1}}}\sim 38-39\) for a \(10^{6}\) M\({}_{\odot}\) stellar population. The left panel of Figure 1 shows mechanical feedback from the Rappaport et al. models overplotted, normalized to the \(10^{4}\) core-collapse events in our Starburst99 models.

Figure 3: Total cumulative integrated energy (left) and total momentum injected (right) from winds and SNe. Models are the same as in Figure 1.

Figure 2: Mechanical luminosity (left) and momentum injection rate (right) over time for a subsolar \(10^{6}\) M\({}_{\odot}\) cluster. Models with varying maximum SN progenitor masses are shown with different colors as indicated, in increments of 10 M\({}_{\odot}\). The model with a maximum SN progenitor of 20 M\({}_{\odot}\) (dashed line) corresponds to the default subsolar, restricted SNe model in Figure 1.

We assume that the X-ray luminosity \(L_{X}\sim L_{\rm mech}\) is on the order of the Eddington luminosity \(L_{\rm Edd}\) (e.g., Pinto and Kosec, 2022; Justham and Schawinski, 2012). Predictions (e.g., King and Muldrew, 2016) and observations (e.g., Kosec et al., 2018; Tao et al., 2019) suggest that \(L_{\rm mech}\) could also be \(10-100\times L_{\rm Edd}\), while on the other hand, \(L_{\rm Edd}\) may be a substantial upper limit to \(L_{X}\) (e.g., Kosec et al., 2018). The figure shows models where \(L_{\rm mech}=L_{\rm Edd}\) (S) and \(10L_{\rm Edd}\) (B). We see that HMXB feedback could increase \(L_{\rm mech}\) in the minima of the troughs at ages 5 – 10 Myr seen in Figure 1, and the effect would also be slightly enhanced at low metallicity (e.g., Renzo et al., 2019). It is important to note that both HMXB feedback and SNe are stochastic. But in general, the order-of-magnitude reduction in \(L_{\rm mech}\) preceding the onset of SNe remains. Individual ultraluminous X-ray sources (ULXs) have larger \(\log L_{X}/{\rm erg\,s^{-1}}\sim 39-40\), which is the \(L_{\rm Edd}\) of black holes with masses up to \(\sim 100\) M\({}_{\odot}\). Thus, individual objects could be important, but due to their stochasticity and unknown \(L_{\rm mech}\), it is unclear whether they are an important systematic effect (Justham and Schawinski, 2012).
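For reference, the Eddington scale used above follows from \(L_{\rm Edd}=4\pi GMm_{p}c/\sigma_{T}\); a quick check in cgs units:

```python
import math

# Eddington luminosity in cgs: L_Edd = 4*pi*G*M*m_p*c / sigma_T
G, m_p, c, sigma_T = 6.674e-8, 1.6726e-24, 2.998e10, 6.652e-25
Msun = 1.989e33

def L_edd(M_in_Msun: float) -> float:
    return 4.0 * math.pi * G * (M_in_Msun * Msun) * m_p * c / sigma_T

print(f"{L_edd(10):.2e} erg/s")   # ~1.3e39 erg/s for a 10 Msun black hole,
                                  # i.e. log L ~ 39, matching the HMXB scale quoted above
```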
### Cluster Evaporation Another mechanism that affects the output of mechanical feedback from a given cluster is the removal of stars by dynamical processes. Dynamical evolution can remove massive stars before they explode as SNe, thereby further reducing mechanical feedback. Massive stars are more likely to be ejected, and if ejected early and fast enough, their stellar winds and possible SNe will be located outside the cluster's sphere of influence, nominally the superbubble or outflow radius. Dynamical processes which remove stars depend on several key initial conditions (see, e.g., the review by Portegies Zwart et al., 2010), including the initial parameters for cluster density, star formation efficiency, mass segregation, cluster mass, and binary population (e.g., Oh and Kroupa, 2016; Brinkmann et al., 2017). Oh and Kroupa (2016) find that the initial cluster density is the most influential parameter in their study, with the densest clusters ejecting 50% of their O stars by age 3 Myr while the least dense ones eject only 4.5% at 3 Myr. Primordial mass segregation enhances this effect, since such clusters already have a pre-existing dense core of massive stars where most dynamical ejections occur (e.g., Oh and Kroupa, 2016). However, the amount and timescale of gas dispersal significantly affect the stellar density, which in turn strongly affects the stellar ejection rate (Pfalzner and Kaczmarek, 2013). Similarly, Brinkmann et al. (2017) find that clusters with the same initial density can have widely differing bound fractions depending on gas dispersal parameters. _Thus, weak early feedback promotes stellar dynamical ejections,_ further removing sources of mechanical feedback. The stellar ejection fraction peaks at moderately massive clusters (\(10^{3.5}\)\(M_{\odot}\)), including for models with different initial radii (Oh et al., 2015). Brinkmann et al. (2017) find that massive clusters (\(>10^{5}\) M\({}_{\odot}\)) have an 80% bound fraction, while moderately massive clusters (\(5\times 10^{3}\) M\({}_{\odot}\)) have only a 20% bound fraction for currently expected gas expulsion parameters. As noted above, the most massive stars tend to sink to the cluster center, promoting their ejection.

Figure 4: The total cumulative integrated energy (left) and total momentum injected (right) for the subsolar, restricted SNe model normalized by the classical, solar model (blue) and unrestricted, subsolar model (red). The solid and dotted lines show models including and excluding stellar rotation, respectively.

Numerical simulations by, e.g., Oh et al. (2015); Oh & Kroupa (2016); Brinkmann et al. (2017) suggest that most massive stars are ejected within the first 5 Myr. Furthermore, the efficiency of dynamical ejections peaks after 1 Myr for all moderately massive, mass-segregated models simulated by Oh & Kroupa (2016). These massive ejected stars must be outside the superbubble radius to not contribute to the mechanical feedback of the cluster. Velocity distributions of Oh & Kroupa (2016) show that \(<~{}25\%\) of ejected O-stars and B-stars will remain bound at 3 Myr. If SNe do not start until an age of 10 Myr, the remaining ejected yet local SN progenitors have another 7 Myr to escape the cluster. Ejected stars could easily travel 50 – 100 pc in only a few Myr. Therefore, the overall effect of dynamical ejections is to enhance the delay in the onset of massive-star mechanical feedback at low metallicity. Accounting for dynamical ejections would further decrease, and potentially extend, the weak mechanical feedback at ages \(\lesssim 10\) Myr.
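The travel distances quoted above follow from the conversion 1 km s\({}^{-1}\approx 1.02\) pc Myr\({}^{-1}\); for example:

```python
# Distance covered by an ejected star: d [pc] ~ v [km/s] * t [Myr] * 1.023
KMS_PC_PER_MYR = 1.023

for v_kms, t_myr in [(10, 7), (30, 3), (50, 2)]:   # illustrative ejection velocities
    d_pc = v_kms * KMS_PC_PER_MYR * t_myr
    print(f"{v_kms:>2} km/s over {t_myr} Myr -> {d_pc:5.0f} pc")
    # 10 km/s over 7 Myr -> ~72 pc; 30 km/s over 3 Myr -> ~92 pc;
    # 50 km/s over 2 Myr -> ~102 pc
```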
## 4 Discussion ### Evidence and Implications Thus, at early times \(<10\) Myr in low-\(Z\) systems, we expect only very weak mechanical feedback from metal-poor stellar winds in gas-rich environments. These conditions promote catastrophic cooling (e.g., Silich et al., 2004; Wunsch et al., 2008; Krumholz et al., 2017), pressure confinement (e.g., Silich et al., 2007; Oey & Garcia-Segura, 2004; Smith et al., 2006), or delayed launching (Danehkar et al., 2021) of any adiabatic mechanical feedback or superwinds. Observational evidence for missing or suppressed superwinds is building; nearby starbursts such as NGC 5253 (Turner et al., 2017), M82 (Smith et al., 2006; Westmoquette et al., 2013), and Mrk 71 (Oey et al., 2017; Komarova et al., 2021) show no evidence of classical, energy-driven superwinds, and instead show giant molecular clouds within a few pc of the super star clusters, with evidence of metal retention in one instance (Westmoquette et al., 2013). High nebular ionization seen in species like He ii, C iv, and O vi observed in such systems may be evidence that superwinds are being suppressed by weak mechanical input and/or catastrophic cooling (e.g., Danehkar et al., 2021, 2022; Oey et al., 2023). Moreover, some of the most extreme, metal-poor, Green Pea galaxies, a population including many Lyman-continuum emitters (Flury et al., 2022), show the lowest superwind velocities (Jaskot et al., 2017). If mechanical feedback effectively starts later at low metallicity than at solar metallicity, it should have profound effects on the structure and evolution of starbursts and their host galaxies. In particular, it implies that radiation feedback dominates over mechanical feedback for the first \(\sim 10\) Myr (e.g., Freyer et al., 2003; Krumholz & Matzner, 2009). Weaker stellar winds and fewer SNe imply increased gas retention and less negative feedback to disrupt ongoing star formation. This effect may be amplified by a negative cycle in which poor gas expulsion enhances stellar densities and thus stellar ejections, further weakening both radiative and mechanical feedback (Section 3.2), which may then be insufficient to suppress ongoing star formation. This leads to an increase in the star formation efficiency, star formation rate, and timescale for star formation in a given region (e.g., Shima et al., 2017). It is consistent with the higher star formation efficiencies seen in super star clusters (e.g., Turner et al., 2015; Herrera & Boulanger, 2017; Oey et al., 2017), and it has been linked to the formation of multiple stellar populations found in globular clusters (Krause et al., 2012; Lochhaas & Thompson, 2017; Silich & Tenorio-Tagle, 2018). Additionally, the standard paradigm for Lyman-continuum (LyC) escape from starburst regions assumes that mechanically driven superwinds clear channels in the ISM, allowing LyC to travel without being absorbed (e.g., Heckman et al., 2001). Our new model for delayed superwinds poses a problem for this scenario of LyC escape, since at age \(\sim 10\) Myr, the emission rate of LyC photons is \(>100\times\) lower than at unevolved times (e.g., Leitherer et al., 2014). Hence there must be another mechanism to create channels for LyC escape in young, gas-rich conditions. Jaskot et al. (2019) propose that large quantities of retained gas naturally cool and clump in the absence of strong mechanical feedback, and therefore create gaps between the clumps, providing channels for LyC to escape. This scenario is consistent with the "picket fence" geometry that is often invoked for LyC emitters (Heckman et al., 2001, 2011; Rivera-Thorsen et al., 2017). It is also consistent with simulations by, e.g., Rogers & Pittard (2013) and Dale et al.
(2013), who find that on the order of 50% or more of LyC photons can escape from star-forming giant molecular clouds due to their inhomogeneous, clumpy nature and radiation-dominated feedback. Moreover, Kimm et al. (2019) find that LyC escape is enhanced in metal-poor clouds, in particular. Observations of the local starburst Mrk 71-A appear to support this scenario. This object provides the strongest evidence of catastrophic cooling where adiabatic mechanical feedback is suppressed, as evidenced by the gas kinematics (Komarova et al., 2021; Oey et al., 2017) and direct observation of strong radiative cooling (Oey et al., 2023). Komarova et al. (2021) find that LyC radiation drives a fast superwind out to hundreds of pc from the parent SSC in this object. This strongly suggests that the radiation is able to escape even further beyond this region and is therefore optically thin, despite the suppression of energy-driven mechanical feedback. Since the SSC is obscured by high-density gas, including molecular gas (Oey et al., 2017), the presence of the LyC-driven wind therefore implies that the radiation is able to escape through gaps in this gas, consistent with the model of Jaskot et al. (2019). A more straightforward consequence of delayed mechanical feedback is that the total mechanical energy from a given cluster will be reduced. In Figure 4 we show the total cumulative energy of the restricted SN model relative to the classical solar model and the subsolar, unrestricted SN model. For the first 10 Myr, the energy from the restricted SN model is just 10% of that expected from the classical solar model. Around this age, the SNe begin to explode and quickly dominate the total energy. However, while the total mechanical energy in the metal-poor models is lower than at Z\({}_{\odot}\), the reduction is less than a factor of two, and therefore order-of-magnitude estimates for mechanical feedback are still valid. Nevertheless, the fact that it is effectively delayed for 10 Myr may have fundamental consequences. For example, delayed SN feedback may resolve the superbubble growth-rate discrepancy (e.g., Oey, 2009), in which superbubbles around OB associations in the Magellanic Clouds and the Milky Way appear to be too small for their observed stellar populations (e.g., Oey, 1995, 1996; Cooper et al., 2004; Smith and Wang, 2004). This discrepancy could be resolved if the mechanical luminosity were reduced by a factor of ten (Oey, 1996). Figure 1 shows that excluding SNe from the first \(\sim 10\) Myr indeed reduces the mechanical power by about the right amount during this early period when stellar winds dominate and O stars are prevalent. The cited studies of objects in the Magellanic Clouds and Milky Way all have ages in this range. Similarly, if a given cluster's mechanical power is overestimated, then superbubble ages are correspondingly underestimated for given observed radii. This is especially true for objects estimated to be \(<15\) Myr old. On a global scale, the porosity of the neutral ISM will be correspondingly reduced if superbubbles are smaller (Oey and Clarke, 1997; Clarke and Oey, 2002). Simplistically, there will be \(\sim 40\%\) less hot gas generated than expected from prior models, modestly affecting the phase balance of the ISM. Interestingly, the observed ISM porosities of Local Group galaxies estimated by Oey and Clarke (1997) tend to be lower than the predicted values based on the observed H ii region luminosity function.
The reduction in hot gas and superwinds may be exacerbated by radiative cooling enhanced by weak feedback (e.g., Danehkar et al., 2022). Our study complements the large body of work simulating the effects of SN feedback on host galaxies (see, e.g., Keller et al., 2022, and references therein). Keller and Kruijssen (2022) specifically explore how SN parameters, including the time of SN onset and the duration of SNe, affect the regulation of star formation and galactic outflows. They find that longer delay times indeed enhance the star formation efficiency of individual clouds, as noted above. Based on individualized timing and location of momentum, energy, and chemical enrichment from stars, Andersson et al. (2023) agree that reduced mechanical feedback leads to colder and more fragmented disks and thus much higher star formation rates. Semenov et al. (2021) and Keller et al. (2022) find similar results for the presence or absence of early, pre-SN feedback, which has significant effects on the ISM structure, subsequent star formation, and the nature of superwinds. Gutcke et al. (2021) find that allowing for variable SN feedback reduces the total energy, and thus dwarf galaxies will expel slightly less gas and metals. They confirm that more spatially distributed SNe inhibit the development of superwind outflows (Clarke and Oey, 2002) and mass loading; this effect is enhanced when progenitor stars are ejected from clusters. We have shown that all of these effects are especially prevalent in metal-poor, dwarf galaxies, which therefore _may explain why dwarf irregular galaxies have such large star-forming complexes and high specific star-formation rates._ These processes may also be linked to the compactness of blue compact dwarf galaxies and Green Peas, many of which appear to be responsible for Lyman continuum emission (e.g., Flury et al., 2022; Jaskot et al., 2019). Weak mechanical feedback is also suggested to be a driving factor in star formation observed at cosmic dawn in JWST observations (Dekel et al., 2023). ### Element Yields and Abundances Under the restricted SN scenario, the SN nucleosynthesis rate will be somewhat lower than generally expected, causing galactic chemical evolution to take place a bit more slowly and with modified element enrichment patterns. In particular, the \(\alpha\)/Fe enrichment rate will be slower, since the more massive stars are those that dominate \(\alpha\)-element yields. Eliminating SNe from the upper IMF also affects the relative element yields. Figure 5 shows the production of C, N, O, Mg, Si, and Fe during the modeled timeframe for our three models. The Starburst99 SN yields are from Woosley & Weaver (1995), and stellar wind yields are from the Geneva evolution models. Table 1 gives the total yields for all elements calculated by Starburst99, relative to O, for our model populations. The first row shows the yield reduction for the subsolar-metallicity, restricted SNe model relative to the unrestricted model where all massive stars explode. Additionally, Figure 6 shows the cumulative production of C, N, O, and Fe at subsolar metallicity, comparing the restricted and unrestricted SN models. The data show that restricting the range of SN progenitors causes significant changes in production among the shown elements, and these may have noticeable effects on abundance patterns of \(\alpha\)-elements and other species due to massive stars.
In particular, O and Mg are produced at only 20% the rate for full-IMF SNe (Table 1), since they are disproportionately generated by the most massive stars. This is more than a factor of 2 below the typical reduction factors for other elements. Interestingly, standard galactic chemical evolution models are not able to fully explain the observed trends: models for the Milky Way slightly overpredict the solar O abundance relative to other species (see, e.g., review by Prantzos, 2008).

Figure 5: Production of C, N, O, Mg, Si and Fe by number over time, for the three model populations of 10\({}^{6}\) M\({}_{\odot}\). Line types are as in Figure 3.

An underproduction of O might also be relevant to the enhanced N/O ratios seen for young systems dominated by primary N evolution (see, e.g., review by Maiolino & Mannucci, 2019), and similar enhancements in C/O at low metallicity are suggested to be linked to the same processes dominating early N evolution (e.g., Maiolino & Mannucci, 2019; Berg et al., 2016). On the other hand, the Fe yield changes the least, with a production rate of 86% that of unrestricted SNe. Since Fe production is dominated by Type Ia SNe, the effect of massive-star production on its long-term chemical abundance patterns is further minimized. We also note that the effects of suppressed feedback are likely unimportant for the interpretation of abundances in extremely metal-poor (EMP) stars. Although these are dominated by core-collapse SN production, the interpretation is that they reflect the products of individual SN events (e.g., Frebel & Norris, 2015), and therefore the collective production of a young population, as modeled here, is not relevant. Since the number of SNe is dominated by the lower-mass range of progenitors according to the IMF, changing the maximum SN progenitor mass does not have a strong effect on galactic chemical evolution. As noted above, it results in changes of factors of 2 – 3 in some element yields, which is modest relative to evolutionary abundance patterns. However, as shown in Table 1 and Figure 6, these effects may still be significant. More comprehensive investigation, including the production of other elements, is needed to fully understand the effect of restricting the range of SN progenitors. ### Uncertainties The scenario we have presented is subject to a number of uncertainties that affect the timescale for the onset of SN feedback and the magnitude of the preceding stellar wind feedback. The main source of uncertainty centers on which stars successfully explode as core-collapse SNe. While there is a consensus that most metal-poor high-mass stars collapse directly to BHs without SN explosions (e.g., Zhang et al., 2008; O'Connor & Ott, 2011; Sukhbold et al., 2016; Muller et al., 2016; Patton & Sukhbold, 2020), exactly which progenitor masses explode and why has yet to be definitively determined. O'Connor & Ott (2011) find that a progenitor's core compactness at bounce determines its final fate. At low metallicities, this parameter is what causes the failure of stars more massive than \(\sim\)20 M\({}_{\odot}\) to explode. Ertl et al. (2016) find a related explosion criterion that produces similar results, and find that only \(\sim\)10% of stars more massive than 20 M\({}_{\odot}\) generate SNe (Sukhbold et al., 2016). Keller & Kruijssen (2022) also stress the importance of the _minimum_-mass SN progenitor, which strongly influences the total energy and momentum injected by a given IMF.
The stellar structure, explosion mechanics, and progenitor mass range are also important for nucleosynthetic yields (e.g., Prantzos, 2008). There are also certain mass ranges that tend to explode. Sukhbold et al. (2016) find that SN progenitors less massive than 15 M\({}_{\odot}\) explode easily, while those in the ranges 15 – 23 M\({}_{\odot}\), 27 – 30 M\({}_{\odot}\), or \(>\) 35 M\({}_{\odot}\) explode rarely. For their one-dimensional core-collapse models, the vast majority of SN progenitors with masses \(>\) 20 M\({}_{\odot}\) do not explode. We adopted this value as a threshold for our simple parameterization above that assumes all stars with higher masses collapse directly into black holes. On the other hand, 3-D models show higher turbulence and convection, which may enhance explodability (Fields & Couch, 2021). Moreover, most models of explodability agree that the masses of exploding stars are not a continuous distribution. Although not fully understood yet, the explosion mechanism of core-collapse supernovae is thought to be inherently stochastic (e.g., Cardall & Budiardja, 2016). This introduces variance into the explosion timescale, yields, and SN mass threshold, so the start of CCSN feedback will be more gradual than modeled in Figure 1, and some SNe are expected to occur before 10 Myr. In addition, very massive stars are predicted to end their lives in rare, energetic (\(<10^{53}\) erg) pair-instability supernovae (PISNe). Heger & Woosley (2002) find that stars with helium core masses 64 – 133 M\({}_{\odot}\) produce PISNe.

Figure 6: Cumulative production of C, N, O, and Fe for the models at subsolar metallicity. Dashed and solid lines show unrestricted and restricted SN models, respectively.

This corresponds to initial masses of 140 – 260 M\({}_{\odot}\), of which we have \(\sim 100\) in our \(10^{6}\) M\({}_{\odot}\) cluster (Kasen et al., 2011; Heger et al., 2003; Hirschi, 2017). If these predictions are correct, this would produce significant mechanical luminosity from the cluster at early ages. However, a single PISN event has yet to be reliably identified, even with \(>1,000\) SNe detected per year and a predicted relative event rate of PISNe to CCSNe of 1%. Takahashi (2018) therefore suggests that there may be fewer PISNe than expected due to shell convection occurring earlier in the core carbon-burning phase, whereby PISN progenitors can avoid pair-creation instability. Moreover, there are still significant uncertainties in stellar wind parameters, especially at low metallicity. Uncertain mass-loss rates and wind velocities are a major problem in creating accurate stellar evolution models and population synthesis models. The most commonly used formula for mass loss is that of Vink et al. (2001), which is now believed to be an overestimate. Accounting for clumping and wind hydrodynamics has been shown to lower theoretical mass-loss rates by at least a factor of 2 – 3 (Bouret et al., 2012; Surlan et al., 2013; Puls et al., 2008). However, UV spectral observations of O-stars in the massive young SMC regions NGC 346 and SMC-SGS 1 show even more dramatic mass-loss rate reductions from the theoretical values of Vink et al. (2001), by factors closer to an order of magnitude or more (Rickard et al., 2022; Ramachandran et al., 2019). Another important uncertainty originates from the omission of binary stars from our analysis. Accounting for massive binary evolution will extend the lives of binary mass gainers and therefore add SNe at later ages.
It will also increase the numbers of HMXBs and ULXs, and their feedback. We suggested above (Section 3.1) that neither of these effects will significantly change the weak mechanical feedback at early times, but further study is needed to clarify their contribution. Massive, interacting binaries lead to increased LyC emission and stellar winds from stripped stars, possibly increasing both radiative and mechanical feedback. However, binaries also drive ejection of massive stars from their clusters, which is likely to offset this effect. Furthermore, binarity is now believed to dominate massive star evolution. Binaries dominate dynamical processes leading to stellar ejections, and further work is needed to quantify the loss of mechanical feedback due to this effect, which, as we argued above, is likely substantial. Binary mass transfer will greatly increase the rotation velocities of mass gainers, while also greatly reducing the masses of donors. The SB99 models have only two stellar rotational velocities, 0 and 40% of the ZAMS break-up velocity. As is evident in Figure 4, faster-rotating stars further delay the onset of mechanical feedback. Binary mass transfer could cause much higher rotation velocities, which would thus further shift the start of strong mechanical feedback to somewhat later times. Binary mergers could also affect the mechanics and likelihood of SN explosions and their nucleosynthetic yields. ## 5 Conclusion In summary, massive-star feedback at low metallicity differs greatly from the classical paradigm at solar metallicity. At low metallicity, stellar winds are weak, and the more massive stars fail to explode as core-collapse SNe. We have compared the evolution of mechanical luminosity and momentum injection for low-metallicity models to those predicted by the classical, Z\({}_{\odot}\) feedback paradigm. Our restricted SNe model limits SNe to progenitor masses \(<20\) M\({}_{\odot}\), as compared to the classical, unrestricted limit. We find that low-metallicity Starburst99 models predict that mechanical feedback is effectively delayed by roughly 10 Myr. We additionally discussed the contribution of HMXBs to our low-metallicity, delayed feedback model. We found that accretion-driven feedback could slightly increase the mechanical luminosity of the cluster only between ages of 5 – 10 Myr, but the order-of-magnitude reduction during this time remains. Furthermore, dynamical mechanisms remove stars from clusters, reducing their mechanical feedback contribution. This is an often overlooked, but likely significant, process. The effect is enhanced for clusters that are mass-segregated, dense, have high binary fractions, and have moderate masses (Oh et al., 2015). Such clusters at low metallicity will deviate the most from the classical mechanical feedback paradigm. For example, the moderately massive and densest cluster modeled by Oh & Kroupa (2016) ejected 50% of its O-stars by 3 Myr. A potentially important process that remains to be explored is that _gas retention due to weak feedback likely promotes ejection of OB stars, exacerbating the effect._ This occurs because gas retention further promotes mass segregation and high cluster densities. Accounting for the dynamical mechanisms that remove stars will thus further decrease the expected total mechanical feedback of a given cluster. A delay in the onset of mechanical feedback implies that radiation dominates over mechanical feedback at early ages.
This corresponds to the scenario where mechanically driven superwinds are suppressed by catastrophic cooling or pressure confinement, as has been observed in several local starburst systems. This has a variety of important implications. There is increased gas retention, which increases the rate and timescale of star formation. This is consistent with the higher star formation efficiencies in super star clusters and may be linked to the multiple stellar populations in globular clusters. _Delayed feedback may also offer a simple explanation for why metal-poor, dwarf irregular galaxies have such large star-forming complexes._ Retention of dense gas near super star clusters likely leads to clumping, thereby generating the picket-fence geometry that may be conducive to LyC escape through inter-clump regions. Overestimated early mechanical luminosity can explain why superbubbles are often found to be too small for their observed parent stellar populations. Similarly, superbubble ages that are inferred from the observed radii and stellar populations are therefore underestimated. The cumulative, large-scale effect of reduced initial mechanical luminosity is to reduce the total integrated energy and production of hot gas from clusters by \(\sim\)40% and to reduce the total momentum injected by 75% relative to the classical model. This modestly modifies the phase balance of the ISM and CGM. It also has a modest effect on galactic chemical evolution by slowing the \(\alpha\)/Fe evolution rate and slightly modifying abundance patterns. In particular, the production of O is reduced by a factor \(\gtrsim 2\) relative to other elements. These effects are subject to a number of uncertainties, in particular those involving the stellar progenitors that successfully explode as core-collapse SNe and the myriad effects of binary evolution, which are unaccounted for in our models. However, this model of delayed mechanical feedback cohesively explains many observations of metal-poor star-forming regions and starburst galaxies, including the character of star formation in metal-poor dwarf galaxies and starbursts. Since the associated feedback processes play a key role in our universe, both at early epochs and the present, this effect is broadly relevant and worth deeper investigation. We wish to thank Carl Fields, Edmund Hodges-Kluck, Anne Jaskot, Evan Kirby, Lena Komarova, Claus Leitherer, and Vadim Semenov for helpful comments and discussions. Additionally, we thank the anonymous referee for valuable suggestions which improved the depth of the paper. This work was supported by NASA HST-GO-16261 and the University of Michigan.
2305.17050
Exploiting Abstract Meaning Representation for Open-Domain Question Answering
The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database. Current systems leverage Pretrained Language Models (PLMs) to model the relationship between questions and passages. However, the diversity in surface form expressions can hinder the model's ability to capture accurate correlations, especially within complex contexts. Therefore, we utilize Abstract Meaning Representation (AMR) graphs to assist the model in understanding complex semantic information. We introduce a method known as Graph-as-Token (GST) to incorporate AMRs into PLMs. Results from Natural Questions (NQ) and TriviaQA (TQ) demonstrate that our GST method can significantly improve performance, resulting in up to 2.44/3.17 Exact Match score improvements on NQ/TQ respectively. Furthermore, our method enhances robustness and outperforms alternative Graph Neural Network (GNN) methods for integrating AMRs. To the best of our knowledge, we are the first to employ semantic graphs in ODQA.
Cunxiang Wang, Zhikun Xu, Qipeng Guo, Xiangkun Hu, Xuefeng Bai, Zheng Zhang, Yue Zhang
2023-05-26T16:00:16Z
http://arxiv.org/abs/2305.17050v1
# Exploiting Abstract Meaning Representation for Open-Domain Question Answering

Cunxiang Wang\({}^{\clubsuit}\), Zhikun Xu\({}^{\heartsuit}\), Qipeng Guo\({}^{\heartsuit}\), Xiangkun Hu\({}^{\diamond}\), Xuefeng Bai\({}^{\clubsuit}\), Zheng Zhang\({}^{\diamond}\) and Yue Zhang\({}^{\clubsuit}\) \({}^{\spadesuit}\)Zhejiang University, China \({}^{\clubsuit}\)School of Engineering, Westlake University, China \({}^{\heartsuit}\)Fudan University, China; \({}^{\diamond}\)Amazon AWS AI {wangcunxiang, zhangyue}@westlake.edu.cn \({}^{\dagger}\) The corresponding author.

###### Abstract

The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database. Current systems leverage Pretrained Language Models (PLMs) to model the relationship between questions and passages. However, the diversity in surface form expressions can hinder the model's ability to capture accurate correlations, especially within complex contexts. Therefore, we utilize Abstract Meaning Representation (AMR) graphs to assist the model in understanding complex semantic information. We introduce a method known as Graph-as-Token (GST) to incorporate AMRs into PLMs. Results from Natural Questions (NQ) and TriviaQA (TQ) demonstrate that our GST method can significantly improve performance, resulting in up to 2.44/3.17 Exact Match score improvements on NQ/TQ respectively. Furthermore, our method enhances robustness and outperforms alternative Graph Neural Network (GNN) methods for integrating AMRs. To the best of our knowledge, we are the first to employ semantic graphs in ODQA. 1

Footnote 1: We release our code and data at [https://github.com/wangcunxiang/Graph-aS-Tokens](https://github.com/wangcunxiang/Graph-aS-Tokens)

## 1 Introduction

Question Answering (QA) is a significant task in Natural Language Processing (NLP) Rajpurkar et al. (2016). Open-domain QA (ODQA) Chen et al. (2017), particularly, requires models to output a singular answer in response to a given question using a set of passages that can total in the millions. ODQA presents two technical challenges: the first is _retrieving_ Karpukhin et al. (2020) and _reranking_ Fajcik et al. (2021) relevant passages from the dataset, and the second is generating an answer for the question using the selected passages. In this work, we focus on the _reranking_ and _reading_ processes, which necessitate fine-grained interaction between the question and passages. Existing work attempts to address these challenges using Pretrained Language Models (PLMs) Glass et al. (2022). However, the diverse surface form expressions often make it challenging for the model to capture accurate correlations, especially when the context is lengthy and complex. We present an example from our experiments in Figure 1. In response to the question, the reranker incorrectly ranks a confusing passage first, and the reader generates the answer _"2015-16"_. The error arises from the PLMs' inability to effectively handle the complex semantic structure. Despite _"MVP"_, _"Stephen Curry"_ and _"won the award"_ appearing together, they are not semantically related. In contrast, in the AMR graph, it is clear that _"Stephen Curry"_ wins over _"international players"_, not the _"MVP"_, which helps the model avoid the mistake.
The baseline model may fail to associate "Most Valuable Player" in the passage with "MVP" in the question, which may be why the baseline does not rank it in the Top10. To address this issue, we adopt structured semantics (i.e., Abstract Meaning Representation Banarescu et al. (2013) graphs, shown on the right of Figure 1) to enhance Open-Domain QA. While previous work has integrated graphs into neural models for NLP tasks, adding additional neural architectures to PLMs can be non-trivial, as training a graph network without compromising the original architecture of PLMs can be challenging Ribeiro et al. (2021). Converting AMR graphs directly into text sequences and appending them can be natural, but leads to excessively long sequences, exceeding the maximum processing length of the transformer. To integrate AMR into PLMs without altering the transformer architecture and at a manageable cost, we treat nodes and edges of AMR Graphs aS Tokens (GST) in PLMs. This is achieved by projecting the embeddings of each node/edge, which consist of multiple tokens, into a single token embedding and appending them to the textual sequence embeddings. This allows for integration into PLMs without altering the main model architecture. This method does not need to integrate a Graph Neural Network into the transformer architecture of PLMs, which is commonly done when integrating graph information into PLMs Yu et al. (2022); Ju et al. (2022). The GST method is inspired by Kim et al. (2022) in the graph learning domain, who use token embeddings to represent nodes and edges for the transformer architecture in graph learning tasks. However, their method is not tailored for NLP tasks, does not consider the textual sequence embeddings, and only handles certain types of nodes/edges, whereas we address unlimited types of nodes/edges consisting of various tokens. Specifically, we select BART and FiD as baselines for the reranking and reading tasks, respectively. To integrate AMR information, we initially embed each question-passage pair into text embeddings. Next, we parse the pair into a single AMR graph using AMRBART Bai et al. (2022). We then employ the GST method to embed the graph nodes and graph edges into graph token embeddings and concatenate them with the text embeddings. Lastly, we feed the concatenated text-graph embeddings as the input embeddings to a BART-based Lewis et al. (2020) reranker to rerank or a FiD-based Izacard and Grave (2020) reader to generate answers. We validate the effectiveness of our GST approach using two datasets - Natural Questions Kwiatkowski et al. (2019) and TriviaQA Joshi et al. (2017). Results indicate that AMR enhances the models' ability to understand complex semantics and improves robustness. BART-GST-reranker and FiD-GST outperform BART-reranker and FiD on the reranking and reading tasks, respectively, achieving improvements of up to 5.9 in Top5 score and 3.4 in Top10 score, and a 2.44 increase in Exact Match on NQ. When the test questions are paraphrased, models equipped with GST prove more robust than the baselines. Additionally, GST outperforms alternative GNN methods, such as Graph-transformer and Relational Graph Convolution Network (RGCN) Schlichtkrull et al. (2018), for integrating AMRs. To the best of our knowledge, we are the first to incorporate semantic graphs into ODQA, thereby achieving better results than the baselines.

## 2 Related Work

Open-domain QA. Open-Domain Question Answering (ODQA) Chen et al.
(2017) aims to answer one factual question given a large-scale text database, such as Wikipedia. It consists of two steps. The first is _dense passage retrieval_ Karpukhin et al. (2020), which retrieves a certain number of passages that match the question. In this process, a _reranking_ step can be used to filter out the best matching passages (Fajcik et al., 2021; Glass et al., 2022).

Figure 1: An example from our experiments. The top-middle square contains the question and the gold standard answer. The middle section shows a confusing passage with an incorrect answer generated by the baseline model and ranked first by the baseline reranker. The bottom-middle section presents a passage with the gold standard answer, which is ranked within the top ten by our reranker but not by the baseline. Important information is highlighted.

The second is _reading_, which finds the answer by reading the most matching passages (Izacard and Grave, 2020; Lewis et al., 2020). We focus on reranking and reading, and integrate AMR into those models.

Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a formalism for representing the semantics of a text as a rooted, directed graph. In this graph, nodes represent basic semantic units such as entities and predicates, and edges represent the relationships between them (an illustrative example is given at the end of this section). Compared with free-form natural language, AMR graphs are more semantically stable, as sentences with the same semantics but different expressions can be mapped to the same AMR graph (Bai et al., 2021; Naseem et al., 2021). In addition, AMR graphs are believed to carry more structured semantic information than pure text (Naseem et al., 2021). Previous work has incorporated AMR graphs into neural network models. For example, Bai et al. (2021) adopt the Graph-transformer (Yun et al., 2019) to integrate AMRs into the transformer architecture for dialogue understanding and generation. AMR-DA (Shou et al., 2022) uses AMRs as a data augmentation approach, which first parses the text into AMRs and then regenerates text from the AMRs. Bai et al. (2022) use AMR graphs with rich semantic information to redesign the pre-training tasks, which results in improvements on downstream dialogue understanding tasks. However, none of them targets Open-Domain QA or applies the GST technique, which does not need to implement extra architectures in the PLMs and thus avoids the incompatibility of different model architectures.

Integrating Structures into PLMs for ODQA. Some work also tries to integrate structural information into PLMs for ODQA. For example, GRAPE (Ju et al., 2022) inserts a Relation-aware Graph Neural Network into the T5 encoders of FiD to encode knowledge graphs and enhance the output embeddings of the encoders; KG-FiD (Yu et al., 2022) uses the knowledge graph to link different but correlated passages, reranks them before and during the reading, and only feeds the output embeddings of the most correlated passages into the decoder. However, existing work concentrates on the knowledge graph as the source of structural information, and no previous work has considered AMRs for ODQA.

LLMs in Open-Domain Question Answering (ODQA). Research has been conducted that utilizes pre-trained language models (PLMs) to directly answer open-domain questions without retrieval (Yu et al., 2023; Wang et al., 2021; Ye et al., 2021; Rosset et al., 2021). The results, however, have traditionally not been as effective as those achieved by the combined application of DPR and FiD.
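To make the AMR formalism above concrete, here is a standard illustration in the style of the examples in Banarescu et al. (2013) (our own addition, not a figure from this paper): the sentence _"The boy wants the girl to believe him"_ is written in PENMAN notation as

(w / want-01 :ARG0 (b / boy) :ARG1 (b2 / believe-01 :ARG0 (g / girl) :ARG1 b))

where variables such as _b_ denote nodes (concepts), role labels such as _:ARG0_ denote typed edges, and the re-use of _b_ as both the wanter and the one believed shows why the structure is a graph rather than a tree.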
It was not until the emergence of ChatGPT that direct answer generation via internal parameters appeared to be a promising approach. In a study conducted by Wang et al. (2023), the performances of Large Language Models (LLMs), such as ChatGPT (versions 3.5 and 4), GPT-3.5, and Bing Chat, were manually evaluated and compared with that of DPR+FiD across the NQ and TQ test sets. The findings demonstrated that FiD surpassed ChatGPT-3.5 and GPT-3.5 on the NQ test set and outperformed GPT-3.5 on the TQ test set, affirming the relevance and effectiveness of the DPR+FiD approach even in the era of LLMs.

## 3 Method

We introduce the Retrieval and Reading of Open-Domain QA and their baselines in Section 3.1, AMR graph generation in Section 3.2 and our method Graph-aS-Token (GST) in Section 3.3.

### Baseline

Retrieval. The retrieval model aims to retrieve \(N_{1}\) passages from \(M\) reference passages (\(N_{1}\ll M\)) given the question \(q\). Only fast algorithms, such as BM25 and DPR (Karpukhin et al., 2020), can be used to retrieve from the large-scale database, and complex but accurate PLMs cannot be directly adopted. So, the retrieval results are often not very accurate. One commonly used remedy is to apply a reranking process to refine the retrieval results, where we can use PLMs to encode the correlations, which is usually more accurate. Formally, reranking requires the model to sort out the \(N_{2}\) passages most correlated with \(q\) from the \(N_{1}\) passages (\(N_{2}<N_{1}\)). For each passage \(p\) in the retrieved passage set \(P_{N_{1}}\), we concatenate \(q\) and \(p\) and embed them into text sequence embeddings \(X_{qp}\in\mathbb{R}^{L\times H}\), where \(L\) is the max token length of the question and passage pair and \(H\) is the dimension. We use a pretrained language model to encode each \(\mathbf{X_{qp}}\) and a classification head to calculate a correlation score between \(q\) and \(p\): \[s_{qp}=PLM(\mathbf{X_{qp}}) \tag{1}\] where \(PLM\) denotes the pretrained language model and the commonly used Multi-Layer Perceptron (MLP) is used as the classification head. We use the cross entropy as the loss function, \[\mathcal{L}=\frac{1}{N_{q}}\sum_{q}\Big{[}\frac{1}{N_{pos}+N_{neg}}\sum_{p}l_{qp}\Big{]}=-\frac{1}{N_{q}(N_{pos}+N_{neg})}\sum_{q}\sum_{p}\big{[}y_{qp}\log(s_{qp})+(1-y_{qp})\log(1-s_{qp})\big{]}, \tag{2}\] where \(N_{pos}\) and \(N_{neg}\) are the numbers of positive and negative passages for training one question, respectively (a minimal code sketch of this scoring objective follows below). To identify the positive/negative label of each passage for the question, we follow Karpukhin et al. (2020), checking whether at least one answer appears in the passage. We choose the \(N_{2}\) passages which are ranked in the Top-\(N_{2}\) for the reading process.

Reading. The reader needs to generate an answer \(a\) given the question \(q\) and \(N_{2}\) passages. In this work, we choose the Fusion-in-Decoder (FiD) model Izacard and Grave (2020) as the baseline reader model. The FiD model uses \(N_{2}\) separate T5 encoders Raffel et al. (2020) to encode the \(N_{2}\) passages and concatenates the encoder hidden states to feed into one T5 decoder to generate the answer. Similar to reranking, we embed the question \(q\) and each passage \(p\) into text sequence embeddings \(\mathbf{X_{qp}}\in\mathbb{R}^{L\times d_{H}}\), where \(L\) is the max token length of the question and passage pair and \(d_{H}\) is the dimension.
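The following is a minimal, illustrative sketch of the reranking objective of Eqs. (1)-(2) referenced above. It is our own code, not the authors' release; the pooling choice and the names (`Reranker`, `cls_head`) are assumptions:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class Reranker(nn.Module):
    """PLM encoder plus an MLP classification head, as in Eq. (1)."""
    def __init__(self, name="facebook/bart-large"):
        super().__init__()
        self.plm = AutoModel.from_pretrained(name)
        d_h = self.plm.config.hidden_size            # d_H
        self.cls_head = nn.Sequential(               # the MLP classification head
            nn.Linear(d_h, d_h), nn.Tanh(), nn.Linear(d_h, 1))

    def forward(self, input_ids, attention_mask):
        out = self.plm(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]         # first-token pooling (one simple choice)
        return torch.sigmoid(self.cls_head(pooled)).squeeze(-1)  # s_qp in (0, 1)

tok = AutoTokenizer.from_pretrained("facebook/bart-large")
model = Reranker()
batch = tok(["who won the MVP? </s> a positive passage ...",
             "who won the MVP? </s> a negative passage ..."],
            return_tensors="pt", padding=True, truncation=True)
s_qp = model(**batch)                                # one score per (q, p) pair
labels = torch.tensor([1.0, 0.0])                    # y_qp for a positive / negative passage
loss = nn.functional.binary_cross_entropy(s_qp, labels)  # the cross entropy of Eq. (2)
```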
Next, we feed the embeddings into the FiD model to generate the answer \[a=FiD([\mathbf{X_{qp_{1}}},\dots,\mathbf{X_{qp_{i}}},\dots,\mathbf{X_{qp_{N_{2}}}}]) \tag{3}\] where \(a\) is a text sequence.

### AMR

We concatenate each question \(q\) and passage \(p\), and parse the resulting sequence into an AMR graph \(G_{qp}=\{V,E\}\), where \(V,E\) are the nodes and edges, respectively. Each edge carries a relation type, so \(E=\{(u,r,v)\}\), where \(u,r,v\) represent the head node, the relation and the tail node, respectively.

### Graph aS Token (GST)

As shown in Figure 2, we project each node \(n\) or edge \(e\) in one AMR graph \(G\) into a node embedding \(\mathbf{x^{n}}\) or edge embedding \(\mathbf{x^{e}}\). We adopt two methods to project each node or edge embedding to one token embedding, namely MLP projection and attention projection. After the projection, we append the node embeddings \(\mathbf{X^{N}}=[\mathbf{x^{n}_{1}},\dots,\mathbf{x^{n}_{n_{n}}}]\) and edge embeddings \(\mathbf{X^{E}}=[\mathbf{x^{e}_{1}},\dots,\mathbf{x^{e}_{n_{e}}}]\) to the corresponding text sequence embeddings \(\mathbf{X^{T}}=[\mathbf{x^{t}_{1}},\dots,\mathbf{x^{t}_{n_{t}}}]\). So, the resulting sequence embedding is \[\mathbf{X}=[\mathbf{X^{T}},\mathbf{X^{N}},\mathbf{X^{E}}] \tag{4}\]

Figure 2: The structure of our Graph-aS-Token method. The input consists of the text and the AMR graph of one passage; the output is a united embedding.

Initialization. We explain how we initialize the embeddings of nodes and edges here. As each node \(n\) and relation \(r\) contain plural tokens (an example for the node 'ordinal-entity' is shown at the left and bottom of Figure 2), \(n=[t_{1},\dots,t_{n}]\) and \(r=[t_{1},\dots,t_{r}]\), and each edge \(e\) contains two nodes and one relation, we have \(e=[[t_{1},\dots,t_{u}],[t_{1},\dots,t_{r}],[t_{1},\dots,t_{v}]]\). For edges and nodes, we first embed their internal tokens into token embeddings. For edges, we have \[\mathbf{x^{e1}}=[[\mathbf{x^{u}_{1}},\dots,\mathbf{x^{u}_{n_{u}}}],[\mathbf{x^{r}_{1}},\dots,\mathbf{x^{r}_{n_{r}}}],[\mathbf{x^{v}_{1}},\dots,\mathbf{x^{v}_{n_{v}}}]] \tag{5}\] For nodes, we have \[\mathbf{x^{n1}}=[\mathbf{x^{n}_{1}},\dots,\mathbf{x^{n}_{n}}] \tag{6}\]

MLP Projection. The process is illustrated in the MLP Projection part of Figure 2. As each AMR node can have more than one token, we first average its token embeddings. For example, for a head node \(u\), \(\mathbf{x^{u}}=AVE([\mathbf{x^{u}_{1}},\dots,\mathbf{x^{u}_{n_{u}}}])\in\mathbb{R}^{d_{H}}\). The same is done for the relation and the tail node. Then, we concatenate the two node embeddings and the relation embedding together as the edge embedding, \[\mathbf{x^{e2}}=[\mathbf{x^{u}},\mathbf{x^{r}},\mathbf{x^{v}}]\in\mathbb{R}^{3d_{H}} \tag{7}\] Next, we use an \(\mathbb{R}^{3d_{H}\times d_{H}}\) MLP layer to project \(\mathbf{x^{e2}}\in\mathbb{R}^{3d_{H}}\) into \(\mathbf{x^{e}}\in\mathbb{R}^{d_{H}}\), and the final edge embedding is \[\mathbf{x^{e}}=MLP(\mathbf{x^{e2}})=MLP([\mathbf{x^{u}},\mathbf{x^{r}},\mathbf{x^{v}}]) \tag{8}\] Similarly, we first average the node token embeddings, \(\mathbf{x^{n1}}=AVE([\mathbf{x^{n}_{1}},\dots,\mathbf{x^{n}_{n}}])\). To reuse the MLP layer, we copy the node embedding two times and concatenate, so \(\mathbf{x^{n2}}=[\mathbf{x^{n1}},\mathbf{x^{n1}},\mathbf{x^{n1}}]\in\mathbb{R}^{3d_{H}}\).
Last, we adopt an MLP layer to obtain the final node embedding \[\mathbf{x^{n}}=MLP(\mathbf{x^{n2}})\in\mathbb{R}^{d_{H}} \tag{9}\] We have also tried to assign separate MLP layers to nodes and edges, but preliminary experiments showed that this does not improve the results.

Attention Projection. We use a one-layer self-attention to project nodes and edges into embeddings, which is shown in the Attn Projection part of Figure 2. The edge embedding is calculated as \[\mathbf{x^{e}}=Att_{E}([\mathbf{x^{u}_{1}},\dots,\mathbf{x^{u}_{n_{u}}},\mathbf{x^{r}_{1}},\dots,\mathbf{x^{r}_{n_{r}}},\mathbf{x^{v}_{1}},\dots,\mathbf{x^{v}_{n_{v}}}]) \tag{10}\] Similarly, the node embedding is calculated as \[\mathbf{x^{n}}=Att_{N}([\mathbf{x^{n}_{1}},\dots,\mathbf{x^{n}_{n}}]), \tag{11}\] where \(Att_{E}\) and \(Att_{N}\) each denote one self-attention layer, for edges and nodes, respectively. We take the first token (an additional token) embedding from the self-attention output as the final embedding. We only modify the input embeddings from \(\mathbf{X}=\mathbf{X^{T}}\) to \(\mathbf{X}=[\mathbf{X^{T}},\mathbf{X^{N}},\mathbf{X^{E}}]\). The remaining details of the models, such as the transformer architecture and the training paradigm, are kept the same as the baselines. Our model can directly use the PLMs to encode AMR graphs, without incompatibility between GNN parameters and PLM parameters.

## 4 Experiments

### Data

We choose two representative Open-Domain QA datasets, namely Natural Questions (NQ) and TriviaQA (TQ), for the experiments. Data details are presented in Appendix Table 9. Since retrieval results have a large impact on the performance of downstream reranking and reading, we follow Izacard and Grave (2020) and Yu et al. (2022) and fix the retrieval results for each experiment to make the reranking and reading results comparable across different models. In particular, we use the DPR model initialized with parameters in Izacard and Grave (2020)2 to retrieve 100 passages for each question. Then we rerank them into 10 passages, which means \(N_{1}=100,N_{2}=10\).

Footnote 2: [https://dl.fbaipublicfiles.com/FiD/pretrained_models/nq_retriever.tar.gz](https://dl.fbaipublicfiles.com/FiD/pretrained_models/nq_retriever.tar.gz) [https://dl.fbaipublicfiles.com/FiD/pretrained_models/tag_retriever.tar.gz](https://dl.fbaipublicfiles.com/FiD/pretrained_models/tag_retriever.tar.gz)

We generate the AMR graphs using AMRBART (Bai et al., 2022) (the AMRBART-large-finetuned-AMR3.0-AMRParsing checkpoint). 3

Footnote 3: [https://huggingface.co/xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing](https://huggingface.co/xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing)

### Model Details

We choose the BART model as the reranker baseline and the FiD model (implemented on the T5 model (Raffel et al., 2020)) as the reader baseline, and adopt the GST method on them. For each model in this work, we use its Large checkpoint, such as BART-large and FiD-large, for reranking and reading, respectively. In the reranking process, we evaluate the model on the dev set per epoch, and use Top10 as the pivot metric to select the best-performing checkpoint for the test. For the reading, we evaluate the model per 10000 steps, and use Exact Match as the pivot metric. For training the rerankers, we set the number of positive passages to 1 and the number of negative passages to 7. We run experiments on 2 Tesla A100 80G GPUs.

### Metric

Following Glass et al.
(2022) and Izacard and Grave (2020), we use Top-N to indicate the reranking performance and Exact Match for the reading performance. However, Top-N is unsuitable for indicating the overall reranking performance over all positive passages, so we also adopt two further metrics, namely Mean Reciprocal Rank (MRR) and Mean Hits@10 (MHits@10). The MRR score is the Mean Reciprocal Rank of all positive passages; higher scores indicate that the positive passages are ranked higher overall. MHits@10 indicates the percentage of positive passages that are ranked in the Top10; higher scores indicate that more positive passages are ranked in the Top10. Their formulations are in Appendix Section A.5. Note that the MRR and MHits@10 metrics are comparable only when the retrieved data are exactly the same.

### Preliminary Experiments

We present the reranking performance of four baseline PLMs, including BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), ELECTRA Clark et al. (2020) and BART Lewis et al. (2020), on NQ and TQ in Appendix Table 8. BART outperforms the other three models in every metric on both NQ and TQ. So, we choose it as the reranker baseline and apply our Graph-aS-Token method to it in the following reranking experiments.

### Main Results

The main results are presented in Table 1. Our method can effectively boost the performance on both reranking and reading.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Reranker \(\backslash\) Dataset} & \multicolumn{3}{c|}{**Natural Questions**} & \multicolumn{3}{c}{**TriviaQA**} \\ \cline{2-7} & \multicolumn{1}{c|}{Reranking} & \multicolumn{1}{c|}{Reading} & \multicolumn{1}{c|}{Reranking} & \multicolumn{1}{c|}{Reading} \\ \cline{2-7} & Top5 & Top10 & EM & Top5 & Top10 & EM \\ \hline w/o reranker + FiD-reader & & & 49.47/50.66 & & & 69.02/69.50 \\ w/o reranker + FiD-GST-A & 73.7/74.6 & 79.5/80.3 & 50.12/51.11 & 78.07/78.1 & 81.5/81.8 & 70.17/70.39 \\ w/o reranker + FiD-GST-M & & & 50.06/50.97 & & & 69.98/70.10 \\ \hline BART-reranker + FiD-reader & & & 50.33/51.33 & & & 71.16/71.33 \\ BART-reranker + FiD-GST-A & 78.7/78.6 & 83.0/83.3 & 50.80/52.38 & 83.2/83.2 & 85.2/85.1 & 71.93/72.05 \\ BART-reranker + FiD-GST-M & & & 50.76/52.24 & & & 72.12/72.24 \\ \hline BART-GST-A + FiD-reader & & & 50.68/52.18 & & & 71.54/71.71 \\ BART-GST-A + FiD-GST-A & & & 51.05/52.80 & & **83.5/83.3** & **85.3/85.3** & **72.63/72.67** \\ \hline BART-GST-M + FiD-reader & **79.6/80.0** & 83.3/**83.7** & 51.11/52.13 & & & 71.47/71.62 \\ BART-GST-M + FiD-GST-M & & **51.40/53.10** & 83.1/82.9 & 85.0/85.1 & 72.58/72.61 \\ \hline \hline \end{tabular} \end{table} Table 1: Reranking and reading results on the dev/test set of NQ and TQ. In each cell, the left is on the dev while the right is on the test. For the BART/FiD with GST-M/A in the first column, they are equipped with AMR graphs via the GST method; -M indicates the MLP projection while -A is the attention projection.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline \multirow{2}{*}{Reranker \(\backslash\) Dataset} & \multicolumn{2}{c|}{**Natural Questions**} & \multicolumn{2}{c}{**TriviaQA**} \\ \cline{2-5} & MRR & MH@10 & MRR & MH@10 \\ \hline w/o reranker & 20.2/18.0 & 37.9/34.6 & 12.1/12.3 & 25.5/25.9 \\ \hline BART-reranker & 25.7/23.3 & 49.3/45.8 & 16.9/17.0 & 37.7/38.0 \\ \hline BART-GST-A & 28.1/24.7 & 52.7/48.2 & **17.7/17.8** & **39.3/39.9** \\ \hline BART-GST-M & **28.4/25.0** & **53.2/48.7** & 17.5/17.6 & 39.1/39.5 \\ \hline \hline \end{tabular} \end{table} Table 2: Overall reranking results on NQ and TQ. In each cell, the left is dev and the right is test.

Reading. As shown in the reading columns of Table 1, our method can boost the FiD performance, no matter whether there is a reranker and whether the reranker is with AMR or not. Without reranking, FiD-GST-A achieves 51.11/70.39 EM on NQ/TQ test, which is 0.45/0.89 EM higher than the baseline FiD; with reranking, 'BART-GST-M + FiD-GST-M' achieves 53.10/72.61 EM on NQ/TQ test, 1.77/1.27 EM better than 'BART-reranker + FiD'. With the same reranker, FiD-GST is better than the baseline FiD; for example, 'BART-reranker + FiD-GST-A' achieves 52.38/72.05 on NQ/TQ test, which is 1.05/0.72 higher than the 51.33/71.33 of 'BART-reranker + FiD'. Overall, our GST models have achieved improvements of up to 2.44 EM (53.10 vs 50.66) on NQ test and 3.17 (72.67 vs 69.50) on TQ test.

Reranking. As shown in the reranking columns of Table 1, BART-GST-M can achieve 80.0/83.7 scores in Top5/Top10, which improve by 5.4/3.4 on NQ-test compared to DPR and by 1.4/0.4 compared to BART-reranker. BART-GST-A achieves 79.3/83.3 scores in Top5/Top10, which outperform DPR by 4.7/3.0 on NQ-test, showing that our GST method is effective. We present the results for the MRR and MHits@10 metrics in Table 2. Our GST method can help positive passages rank higher in the Top10. On NQ, BART-GST-M has 7.0/14.1 advantages in MRR/MHits@10 over DPR and 1.7/2.9 advantages over BART-reranker; on TQ, BART-GST-A has 5.5/14.0 advantages in MRR/MHits@10 over DPR and 0.8/1.9 advantages in MRR/MHits@10 over BART-reranker. The overall reranking results can also explain why, even when the Top10 results are similar and the readers are the same, the passages reranked by BART-GST can lead to better reading performance. For example, on NQ test, the reading performance of 'BART-GST-M + FiD' is 0.80 better than that of 'BART-reranker + FiD'.

### Analysis

Robustness. To evaluate the robustness of the baseline and our models, we paraphrase the test questions of NQ and TQ, and evaluate the paraphrased test questions and the original ones with the same model checkpoint. We use a widely-used paraphraser, namely _Parrot Paraphraser_ [1], to paraphrase the test questions. The results are shown in Table 3. The performance drops in reranking and reading of our GST models are smaller than those of the baseline models, even though our models start from better performance. For reranking, the drop of our BART-GST-A is -1.9/-1.3/-1.4/-2.1 for Top5/Top10/MRR/MHits@10, which is smaller than the baseline's -2.6/-1.5/-1.8/-2.2. For reading, the -3.21 EM drop of FiD-GST-M is also smaller than the -3.90 of the baseline FiD. This shows that our GST method can not only improve performance but also improve robustness, which indicates that adding structural information can help models resist the erroneous influence of sentence transformations.

Comparison with FiD-100. We also compare the reranking+reading paradigm with the directly-reading paradigm. For the latter, the FiD reader is directly trained and evaluated on 100 retrieved passages without reranking. The results are shown in Table 4. Without our GST method, the reranking+reading paradigm (FiD-10 w/ BART reranker) is worse than FiD-100 without reranking, which is 71.33 vs 71.78 on the test. However, with our GST method, the reranking+reading paradigm outperforms FiD-100.
For example, FiD-GST-M-10 w/ BART-GST-M reranker has better performance on NQ test than FiD-100, which is 53.10 vs 52.88, and FiD-GST-A-10 w/ BART-GST-A reranker vs FiD-100 on TQ test is 72.67 vs 71.78. To our knowledge, we are the first to make FiD-10 beat FiD-100.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline & NQ dev & NQ test & TQ dev & TQ test \\ \hline FiD-10 & 49.47 & 50.66 & 69.02 & 69.50 \\ \hline FiD-100 & **51.60** & 52.88 & 71.61 & 71.88 \\ \hline FiD-10 & \multirow{2}{*}{50.33} & \multirow{2}{*}{51.33} & \multirow{2}{*}{71.16} & \multirow{2}{*}{71.33} \\ w/ BART-reranker & & & & \\ \hline FiD-GST-A-10 & \multirow{2}{*}{51.03} & \multirow{2}{*}{52.80} & \multirow{2}{*}{**72.63**} & \multirow{2}{*}{**72.67**} \\ w/ BART-GST-A reranker & & & & \\ \hline FiD-GST-M-10 & \multirow{2}{*}{51.30} & \multirow{2}{*}{**53.10**} & \multirow{2}{*}{72.58} & \multirow{2}{*}{72.61} \\ w/ BART-GST-M reranker & & & & \\ \hline \hline \end{tabular} \end{table} Table 4: Reading experiments with and without reranking. The first two rows are trained/evaluated with DPR data while the rest are with reranked data.

\begin{table} \begin{tabular}{c|c|c|c} \hline \hline & Orig Test & New Test & Drop \\ \hline \multirow{2}{*}{BART-reranker} & 78.6/83.3 & 76.2/81.8 & -2.6/-1.5 \\ & 23.3/45.8 & 21.5/43.6 & -1.8/-2.2 \\ \hline \multirow{2}{*}{BART-GST-A} & 79.3/83.3 & 77.4/82.0 & **-1.9**/-1.3 \\ & 24.7/48.2 & 23.2/46.1 & **-1.4**/-**2.1** \\ \hline \multirow{2}{*}{BART-GST-M} & 80.0/83.7 & 78.0/82.4 & -2.0/-1.3 \\ & 25.0/48.7 & 23.4/46.3 & -1.6/-2.4 \\ \hline \multicolumn{4}{c}{A: Robustness of rerankers. Each cell contains Top5/Top10/MRR/MHits@10 as the metrics.} \\ \hline & Orig Test & New Test & Drop \\ \hline FiD-reader & 50.66 & 46.76 & -3.90 \\ \hline FiD-GST-A & 51.11 & 47.84 & -3.27 \\ \hline FiD-GST-M & 50.97 & 47.76 & **-3.21** \\ \hline \multicolumn{4}{c}{B: Robustness of readers. Exact Match as the metric. To avoid the influence of different reranking results, we use the same DPR results to train and eval.} \\ \hline \hline \end{tabular} \end{table} Table 3: Robustness of rerankers and readers. We conduct experiments on NQ. _Orig Test_ is the original test questions while _New Test_ means the paraphrased test questions. _Drop_ is the difference from the original test to the paraphrased test; a smaller absolute number indicates better robustness.

Influence of AMR Quality. We explore how AMR graph quality influences the performance of our models in this section, by using the AMRBART-base-finetuned-AMR3.0-AMRParsing checkpoint,4 which is a smaller version. We compare the reranking performance of BART-GST with either superior or inferior graphs on NQ and TQ. We use each kind of graph to train its own reranking model. The results are shown in Table 5.

Footnote 4: [https://huggingface.co/xfbai/AMRBART-base-finetuned-AMR3.0-AMRParsing](https://huggingface.co/xfbai/AMRBART-base-finetuned-AMR3.0-AMRParsing)

Our models still work with inferior AMR graphs, but the performance is not as good as with the superior ones, in both reranking and reading. This indicates that when the quality of the AMR graphs is higher, the GST models can potentially achieve better performance.

Ablation of Nodes/Edges. We ablate nodes and edges in our models to explore whether nodes or edges contribute more to the results. We conduct reranking experiments on NQ. The results are shown in Table 6.
As can be seen, nodes and edges are both useful for the GST method, where 'BART-GST-M (only nodes)' and 'BART-GST-M (only edges)' both outperform the baseline BART-reranker in MRR/MHits@10 on NQ test, which are 24.2/48.7 vs 24.7/47.4 vs 23.3/45.8, respectively. However, 'BART-GST-M (only edges)' is better than 'BART-GST-M (only nodes)' in the four metrics on NQ, partly due to the fact that edges also contain node information.

Case Study. We present two cases from our experiments in Figure 3. In the upper one, for the negative passage, the baseline may consider _"a ban on smoking in all closed public areas"_ the same as _"the smoking ban in public places"_, which are actually different; for the positive passage, the baseline may not take _"act regulated smoking in public area"_ as _"the smoking ban in public places"_, while our model does. In the lower one, the baseline reader ignores that the competition is _"for the opportunity to play in the Super Bowl"_ rather than _"in the Super Bowl"_, and because the number of similar passages with _"Philadelphia Eagle"_ is larger than that of the positive passage, the baseline reader finds the incorrect passage, which leads to the incorrect answer. In contrast, our model focuses on the only positive passage and answers the question correctly.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline & Top5 & Top10 & MRR & MH@10 \\ \hline BART-reranker & 78.7/78.6 & 83.0/83.3 & 25.7/23.3 & 49.3/45.8 \\ \hline BART-GST-M & 79.6/80.0 & 83.3/83.7 & 28.4/25.0 & 53.2/48.7 \\ \hline BART-GST-M & \multirow{2}{*}{78.5/78.9} & \multirow{2}{*}{82.9/83.1} & \multirow{2}{*}{27.6/24.2} & \multirow{2}{*}{51.8/47.3} \\ only nodes & & & & \\ \hline BART-GST-M & \multirow{2}{*}{78.6/79.3} & \multirow{2}{*}{83.0/83.3} & \multirow{2}{*}{27.9/24.7} & \multirow{2}{*}{52.4/47.4} \\ only edges & & & & \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation of nodes and edges in our GST method on NQ. We choose BART-GST-M because it performs better on NQ.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline & Top5 & Top10 & MRR & MH@10 \\ \hline BART-reranker & 78.7/78.6 & 83.0/83.3 & 25.7/23.3 & 49.3/45.8 \\ \hline BART-GST-M & \multirow{2}{*}{79.6/80.0} & \multirow{2}{*}{83.3/83.7} & \multirow{2}{*}{28.4/25.0} & \multirow{2}{*}{53.2/48.7} \\ (superior AMRs) & & & & \\ \hline BART-GST-M & \multirow{2}{*}{79.5/79.3} & \multirow{2}{*}{83.5/83.1} & \multirow{2}{*}{28.4/24.7} & \multirow{2}{*}{52.9/47.8} \\ (inferior AMRs) & & & & \\ \hline \hline \end{tabular} In reranking. \begin{tabular}{l|c} \hline \hline & Exact Match \\ \hline FiD-reader & 48.47/50.66 \\ \hline FiD-GST-A (superior AMRs) & 50.12/51.11 \\ \hline FiD-GST-A (inferior AMRs) & 49.95/50.83 \\ \hline \hline \end{tabular} In reading. \end{table} Table 5: Influence of superior AMR graphs, which are generated by a larger model, and inferior AMR graphs, which are generated by a smaller model.

Figure 3: Two cases from our experiments for reranking and reading, respectively. We highlight important information in questions and passages.

### Alternative Graph Methods

We have also tried several other methods to integrate AMRs into PLMs, but their performance is worse than that of our Graph-aS-Token method.
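For reference, before turning to these alternatives, here is a minimal sketch of the MLP variant of the GST projection from Section 3.3 (our own illustrative code; the dimension and tensor names are assumptions):

```python
import torch
import torch.nn as nn

d_h = 1024                                   # PLM hidden size d_H (an assumption)
mlp = nn.Linear(3 * d_h, d_h)                # shared R^{3 d_H} -> R^{d_H} projection

def edge_token(head_tok_emb, rel_tok_emb, tail_tok_emb):
    """Eqs. (7)-(8): average each part's token embeddings, concatenate, project."""
    u = head_tok_emb.mean(dim=0)             # x^u, shape [d_H]
    r = rel_tok_emb.mean(dim=0)              # x^r
    v = tail_tok_emb.mean(dim=0)             # x^v
    return mlp(torch.cat([u, r, v], dim=-1))  # x^e, shape [d_H]

def node_token(node_tok_emb):
    """Eq. (9): the averaged node embedding is tripled to reuse the same MLP."""
    n = node_tok_emb.mean(dim=0)
    return mlp(torch.cat([n, n, n], dim=-1))

# append graph tokens to the text embeddings: X = [X^T, X^N, X^E], Eq. (4)
X_text = torch.randn(256, d_h)               # placeholder text sequence embeddings
nodes = [node_token(torch.randn(3, d_h))]    # a toy node with 3 sub-tokens
edges = [edge_token(torch.randn(2, d_h), torch.randn(1, d_h), torch.randn(2, d_h))]
X = torch.cat([X_text, torch.stack(nodes), torch.stack(edges)], dim=0)
```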
Here we take two representative examples, which are the Relational Graph Convolution Network (RGCN) (Schlichtkrull et al., 2018) for the reranker and the Graph-transformer (Yun et al., 2019) for FiD. All those methods require alignments between text tokens and graph nodes, for which only some nodes can be successfully aligned.

Stacking RGCN above the Transformer. The model architecture consists of a transformer encoder and an RGCN model, where the RGCN is stacked on top of the transformer. After the vanilla forward pass of the transformer encoder, AMR graphs abstracted from queries and passages in advance are constructed with node embeddings initialized from the transformer output. Then they are fed into the RGCN model, and the final output of the [CLS] node is used for scoring. For the text embeddings of one question-passage pair, the encoder hidden states are \[\mathbf{H}=Encoder(X_{qp})\] For one node \(n\), its initial embedding is \[\mathbf{h^{0}}=MeanPooling(\mathbf{H_{start:end}})\] where \(start\) and \(end\) are the start and end positions of the text span aligned with the node. The update of the node embedding at each layer \(l\) is \[\mathbf{h_{i}^{l+1}}=\sigma\Big{(}W_{0}^{l}\mathbf{h_{i}^{l}}+\sum_{r\in R}\sum_{j\in N_{i}^{r}}\frac{1}{c_{i,r}}W_{r}^{l}\mathbf{h_{j}^{l}}\Big{)},\qquad c_{i,r}=|N_{i}^{r}|\] where \(R\) is the set of edge types and \(N_{i}^{r}\) stands for the group of nodes which connect with node \(i\) in relation \(r\). So the correlation score of \(q\) and \(p\) is \[s_{qp}=ClsHead(h_{[CLS]}^{L})\] The results are presented in Table 7; it is clear that the RGCN-stacking method is inferior to the GST method. Some metrics of RGCN-stacking, including Top5, Top10 and MRR, are worse than the baseline, meaning the RGCN method is not feasible for integrating AMRs into PLMs, though it looks reasonable and practical.

Graph-transformer. We apply the graph-transformer architecture to the FiD model for reading. We follow the graph-transformer architecture in Bai et al. (2021), whose main idea is to use AMR information to modify the self-attention scores between text tokens. However, we find stacking challenging for PLMs because the newly initialized graph architectures are not compatible with the architectures of PLMs, leading to non-convergence during training. Although tricks such as incremental training and separate tuning can lead to convergence, the results are still below the baseline model, let alone GST.

Flattening AMR Graphs. We have also tried to directly flatten AMR graphs into text sequences, but the resulting sequences are always beyond the maximum processing length (1024) of the transformer. So, we have to cut off some nodes and edges to fit into the transformer, but the results show that this does not work well and yields only a very slight improvement, while the computational cost is tens of times that of the baseline.

## 5 Conclusion

In this study, we successfully incorporated Abstract Meaning Representation (AMR) into Open-Domain Question Answering (ODQA) by innovatively employing a Graph-aS-Token (GST) method to assimilate AMRs with pretrained language models. The reranking and reading experiments conducted on the Natural Questions and TriviaQA datasets have demonstrated that our novel approach can notably enhance the performance and resilience of Pretrained Language Models (PLMs) within the realm of ODQA.
\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline & Top5 & Top10 & MRR & MH@10 \\ \hline BART-reranker & 78.7/78.6 & 83.0/83.3 & 25.7/23.3 & 49.3/45.8 \\ \hline BART-GST-M & 79.6/80.0 & 83.3/83.7 & 28.4/25.0 & 53.2/48.7 \\ \hline RGCN-Stacking & 78.6/78.2 & 82.3/83.0 & 26.1/23.1 & 49.5/46.0 \\ \hline \hline \end{tabular} \end{table} Table 7: Comparison between the baseline, GST and RGCN-Stacking in reranking on NQ.

## Acknowledgement

This publication has emanated from research conducted with the financial support of the Pioneer and "Leading Goose" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003.

## Limitations

Our Graph-aS-Token (GST) method can increase the time and GPU memory cost; we provide a quantitative analysis in Appendix Section A.4. We train the models with only one random seed. We do not conduct a large number of hyper-parameter tuning experiments, but use a fixed set of hyper-parameters to make the baseline and our models comparable.

## Ethics Statement

No consideration.
2306.01362
Deep-Inelastic Scattering: What do we know ?
A survey is given on the current status of the theoretical description of unpolarized and polarized deep--inelastic scattering processes in Quantum Chromodynamics at large virtualities.
Johannes Blümlein
2023-06-02T08:37:20Z
http://arxiv.org/abs/2306.01362v1
# Deep-Inelastic Scattering: What do we know?

###### Abstract

A survey is given on the current status of the theoretical description of unpolarized and polarized deep-inelastic scattering processes in Quantum Chromodynamics at large virtualities. Dedicated to the Memory of Harald Fritzsch.

## 1 Introduction

About 50 years ago Quantum Chromodynamics (QCD), the theory of strong interactions, was found [1, 2, 3, 4, 5], and Harald Fritzsch played a major role in this. After the proof of renormalizability [6, 7] of the Yang-Mills theories [8] and the proof of the anomaly freedom [9] of the \(SU_{2L}\times U_{1Y}\times SU_{3c}\) Standard Model, systematic perturbative calculations became possible, allowing predictions for experimental precision measurements of several hard scattering processes under certain kinematic conditions. The first calculation concerned the running of the strong coupling constant [3, 4], proving asymptotic freedom, a conditio sine qua non for higher order calculations in perturbative QCD. One of the key processes is deeply inelastic scattering (DIS) of leptons off nucleons, both in the unpolarized and polarized case. These processes allow one to determine the various quark flavor and gluon densities [10, 11, 12, 13] and to reveal higher twist contributions in the region of low virtualities \(Q^{2}\) and/or large values of the Bjorken variable \(x\) [14, 15], with \(x=Q^{2}/(2p\cdot q)\), \(Q^{2}=-q^{2}\), \(q\) the 4-momentum transfer from the lepton to the nucleon, and \(p\) the nucleon momentum. Deep-inelastic scattering has been the method to observe scaling [16], predicted in Ref. [17] and leading to the parton model [18]. It allows one to measure the strong coupling constant \(\alpha_{s}(M_{Z}^{2})\) at high precision [19] from the scaling violations of the deep-inelastic structure functions [20]. Furthermore, their heavy flavor contributions allow a precision measurement of the charm quark mass \(m_{c}\) [21]. In this survey we will sketch the path that the QCD corrections to deep-inelastic scattering have taken, from the beginnings of the light-cone picture [22] and the (naive) parton model [18] until today, concerning the running of the coupling, the anomalous dimensions, and the different massless and massive Wilson coefficients in both the unpolarized and polarized case.1 The corrections to the QCD \(\beta\)-function, being a zero-scale quantity, are simpler to calculate than the anomalous dimensions for general values of the Mellin variable \(N\), which are somewhat simpler than the massless Wilson coefficients, followed in complexity by the analytic results on the massive Wilson coefficients.

Footnote 1: For earlier reviews see Refs. [23, 24, 25].

For the unpolarized anomalous dimensions the first order corrections were known in 1974, the second order corrections in 1980 (with a final correction in 1991), and the third order results in 2004. The main massless Wilson coefficients were known in correct form in 1980 at one loop, in 1992 at two loops and in 2005 at three loops. In the massive case the one-loop results were known in 1976, the two-loop results emerged between 1992 and 1996, and the asymptotic three-loop results between 2010 and today, in the single- and two-mass cases. The first fixed Mellin moments of these quantities were typically available earlier, where they had been calculated, and a series of lower moments has been calculated at four-loop order since 2016. The QCD \(\beta\) function is known to five-loop order since 2016/17.
This time-line shows the challenge, which in part required new mathematical developments and certainly great efforts in computer algebra. We will describe this development for the anomalous dimensions, the Wilson coefficients and the renormalization-group quantities in the following, which form the assets needed to describe the scaling violations of the deep-inelastic structure functions. Furthermore, we will add some remarks on the Drell-Yan process, since these data are needed to fix the light sea-quark distributions in QCD analyses. We will also comment on technical and mathematical challenges connected to these analytic calculations; the corresponding methods became indispensable from about 1998 onward and were essential to achieve the present status.

## 2 Scaling violations of DIS structure functions

The measurement of the fundamental parameters of the Standard Model, such as \(\alpha_{s}(M_{Z}^{2})\) or the heavy quark masses \(m_{Q}\), requires clear conditions. One has to choose a kinematic region for which definite theoretical predictions can be made. Here one requirement is that non-perturbative and perturbative effects can be clearly separated and the perturbative corrections can be carried out safely. In deep-inelastic scattering one is therefore advised to consider the kinematic region of the dominance of twist [26] \(\tau=2\) operators, which means that the virtuality \(Q^{2}\) of the process must be large to approach the Bjorken limit [17]. The effect of the higher twist contributions [14, 15] is then suppressed. One might choose to include only data with \(Q^{2}>25\) GeV\({}^{2}\) and \(W^{2}=Q^{2}(1-x)/x>12.5\) GeV\({}^{2}\) in the analysis, despite the high statistics measured below these scales, to avoid biases from incompletely known power corrections and other corrections. It is known, cf. Ref. [15], that one may obtain _different_ Standard Model parameters if these cuts are weakened, pointing to corrections that are not under control. Moreover, one may use the asymptotic heavy quark Wilson coefficients in this region [27], which can be calculated analytically, cf. Section 6. Under these conditions one obtains the single-particle factorization [28] between the process-dependent Wilson coefficients and the single parton distribution functions, which is otherwise not the case. In going systematically to higher and higher orders there are also no limitations in approaching the small \(x\) region, since the calculation within QCD is complete. Small \(x\) approaches have to rely on factorization between the perturbative and non-perturbative contributions [28] as well and refer to the twist expansion for consistent renormalization. As has been shown in [29, 30], several sub-leading small \(x\) series have to be known to obtain results which are phenomenologically stable. In the following we will concentrate on the theoretical calculations under the above conditions. Let us consider the dynamics in Mellin \(N\) space. The Mellin transform of a function \(\hat{f}(x)\) is given by \[f(N)=\int_{0}^{1}dx\,x^{N-1}\,\hat{f}(x). \tag{1}\] In the twist-2 approximation the deep-inelastic structure functions \(F_{i}\) obey the representation \[F_{i}(N,Q^{2})=\sum_{l=q,g}C_{i,l}(N,a_{s},Q^{2}/\mu^{2})\cdot f_{l}(N,a_{s}), \tag{2}\] where \(C_{i,l}(N,Q^{2}/\mu^{2})\) denote the renormalized Wilson coefficients, \(f_{l}\) the renormalized parton distribution functions, and \(a_{s}=\alpha_{s}/(4\pi)\).
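As a simple illustration of the convention in (1) (our own example), the Mellin transform of \(\hat{f}(x)=1-x\) is \[\int_{0}^{1}dx\,x^{N-1}(1-x)=\frac{1}{N}-\frac{1}{N+1}=\frac{1}{N(N+1)}.\] The utility of \(N\) space lies in the fact that Mellin convolutions factorize, \(\int_{0}^{1}dx\,x^{N-1}(\hat{f}\otimes\hat{g})(x)=f(N)\,g(N)\), which is why the factorized structure functions (2) take the form of simple products in Mellin space.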
Introducing the operator [31] \[{\mathscr{D}}(\mu^{2}):=\mu^{2}\frac{\partial}{\partial\mu^{2}}+\beta(a_{s}(\mu^{2}))\frac{\partial}{\partial a_{s}(\mu^{2})}-\gamma_{m}(a_{s}(\mu^{2}))m(\mu^{2})\frac{\partial}{\partial m(\mu^{2})}, \tag{3}\] with \[\beta(a_{s}(\mu^{2}))=\mu^{2}\frac{\partial a_{s}(\mu^{2})}{\partial\mu^{2}},\quad\gamma_{m}(a_{s}(\mu^{2}))=-\frac{\mu^{2}}{m(\mu^{2})}\frac{\partial m(\mu^{2})}{\partial\mu^{2}}, \tag{4}\] one obtains the following renormalization group equations (RGEs) from (2): \[\sum_{j}\left[{\mathscr{D}}(\mu^{2})\delta_{ij}+\gamma_{ij}^{\rm S,NS}-n_{\psi}\gamma_{\psi}-n_{A}\gamma_{A}\right]f_{j}(N,\mu^{2})=0, \tag{5}\] \[\sum_{j}\left[{\mathscr{D}}(\mu^{2})\delta_{ij}+\gamma_{J_{1}}+\gamma_{J_{2}}-\gamma_{ij}^{\rm S,NS}\right]C_{j}\left(N,\frac{Q^{2}}{\mu^{2}}\right)=0. \tag{6}\] Here \(\gamma_{\psi}\), \(\gamma_{A}\) and \(\gamma_{J_{1,2}}\) denote the anomalous dimensions of the external quarks, the gluons, and the currents, where the latter can be non-zero if the currents are not conserved. \(\gamma_{ij}^{\rm S,NS}\) denote the anomalous dimensions of the local operators (10)-(12). The scale dependence is due to \(a_{s}(\mu^{2})\), \[\mu^{2}\frac{da_{s}(\mu^{2})}{d\mu^{2}}=-\sum_{k=0}^{\infty}\beta_{k}a_{s}^{k+2}(\mu^{2}), \tag{7}\] where one finally considers \(\mu^{2}=Q^{2}\).

## 3 Zero scale quantities

The renormalization group equations for the massless and massive operator matrix elements and the Wilson coefficients also describe the scale dependence of the strong coupling constant and of the heavy quark masses. These are zero-scale quantities and they have to be calculated to the respective order in perturbation theory. The running of the coupling constant \(\alpha_{s}(\mu^{2})\) is described by (7). Similar equations hold for the other quantities. In the \(\overline{\rm MS}\) scheme one may compute the different \(Z\)-factors renormalizing QCD in the case that there are no composite operators. The one-loop result for the QCD \(\beta\)-function has been calculated in Refs. [3, 4], the two-loop corrections in Refs. [32], the three-loop contributions in Refs. [33], the four-loop corrections in Refs. [34], and most recently the five-loop corrections in Refs. [35]. The effect of asymptotic freedom observed in [3, 4] is refined by the higher order corrections for the number of active quark flavors \(N_{F}\leq 6\) realized in nature. The other \(Z\)-factors needed to renormalize QCD were calculated at higher orders in Refs. [36, 37, 38, 39, 40] and are now also available at five-loop order. The running of the heavy quark masses and the wave function renormalization have been calculated in Refs. [41]. Herewith all necessary \(Z\)-factors occurring in processes without local operators are known.

## 4 Anomalous dimensions and splitting functions

The QCD evolution of the twist-2 parton densities is ruled by the anomalous dimensions in \(N\) space, \(\gamma_{ij}(N)\), or the splitting functions in \(x\) space, where the latter are the inverse Mellin transform of the former.
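As a classical illustration of this correspondence (recalled here for orientation; normalization and sign conventions differ between references), the leading-order non-singlet splitting function and its Mellin transform read \[P_{qq}^{(0)}(x)=C_{F}\left[\frac{2}{(1-x)_{+}}-1-x+\frac{3}{2}\,\delta(1-x)\right],\qquad\gamma_{qq}^{{\rm NS},(0)}(N)=C_{F}\left[4S_{1}(N)-3-\frac{2}{N(N+1)}\right],\] with \(C_{F}=4/3\) and the harmonic sum \(S_{1}(N)=\sum_{k=1}^{N}1/k\), in a sign convention matching (8) below. Since \(S_{1}(N)\simeq\ln N\) at large \(N\), already this lowest-order anomalous dimension encodes the characteristic growth of the scaling violations at large \(x\).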
The evolution equations derive from the RGE (5) in the non-singlet and singlet cases, \[\frac{df_{i}^{\rm NS}(N,\mu^{2})}{d\ln(\mu^{2})}=-\gamma_{qq}^{\rm NS}(N,a_{s})f_{i}^{\rm NS}(N,\mu^{2}), \tag{8}\] \[\frac{d}{d\ln(\mu^{2})}\left[\begin{array}{c}\Sigma(N,\mu^{2})\\ G(N,\mu^{2})\end{array}\right]=-\left[\begin{array}{cc}\gamma_{qq}(N,a_{s})&\gamma_{qg}(N,a_{s})\\ \gamma_{gq}(N,a_{s})&\gamma_{gg}(N,a_{s})\end{array}\right]\left[\begin{array}{c}\Sigma(N,\mu^{2})\\ G(N,\mu^{2})\end{array}\right], \tag{9}\] with \(i=1,2,3\) and \(a_{s}=a_{s}(\mu^{2})\). The twist-2 parton densities are given as forward matrix elements \(\langle i|O_{k}|j\rangle\) of the composite operators \[O_{q;r;\mu_{1},\ldots,\mu_{N}}^{\rm NS}(0)=i^{N-1}{\bf S}\left[\bar{\psi}\gamma_{\mu_{1}}D_{\mu_{2}}\ldots D_{\mu_{N}}\frac{\lambda_{r}}{2}\psi\right], \tag{10}\] \[O^{\rm S}_{q;\mu_{1},\ldots,\mu_{N}}(0)=i^{N-1}{\bf S}\left[\bar{\psi}\gamma_{\mu_{1}}D_{\mu_{2}}\ldots D_{\mu_{N}}\psi\right], \tag{11}\] \[O^{\rm S}_{g;\mu_{1},\ldots,\mu_{N}}(0)=2i^{N-2}{\bf S}\,{\bf Sp}\left[F^{a}_{\mu_{1}\alpha}D_{\mu_{2}}\ldots D_{\mu_{N-1}}F^{\alpha,a}_{\mu_{N}}\right], \tag{12}\] in the unpolarized case (with similar expressions in the polarized case). Here the indices \(q\) and \(g\) refer to the quark and gluon field operators, respectively, and \(\lambda_{r}\) denotes the Gell-Mann matrix of the corresponding light flavor representation; \(\psi\) is the quark field, \(D_{\mu}\) the covariant derivative, \(F^{a}_{\alpha\beta}\) the Yang-Mills field strength tensor, \({\bf S}\) the symmetrization operator for all Lorentz indices and \({\bf Sp}\) the color trace, where the index \(a\) is the color index in the adjoint representation. One calculates both fixed Mellin moments, using certain techniques, and the complete functions for general values of \(N\). Here either the even or the odd values of \(N\) contribute, depending on the respective amplitude crossing relations, cf. [23, 42].

### Fixed Moments

The first information one obtains on the anomalous dimensions is given by their fixed moments. One may calculate them by differentiating the forward Compton amplitude with respect to the proton momentum \(p\). In this way one works without reference to, but equivalently to, the local twist-two operators. The method has the advantage that the moments of the respective massless Wilson coefficients can also be obtained in this way. In the massless case the Mincer algorithm has been used [43] to three-loop order. In the massive case one uses the package MATAD [44]. At two-loop order this has been done in Ref. [45]. The method has later been expanded to three-loop order in Refs. [46], reaching an intermediate technical limit with the calculation of the 16th moment in the flavor non-singlet case in 2003. In the case of massive operator matrix elements (OMEs), moments between \(N=10\) and \(N=16\) were calculated in Ref. [47] at three-loop order. In the two-mass case moments were calculated in Refs. [48]. The method implies an exponential rise of the number of terms to be calculated and therefore terminates at a given order, depending on the complexity of the given problem. More recently, also a series of lower moments at four- and five-loop order have been calculated by basically the same method in Refs. [49, 50], using Forcer [51], now having reached \(N=20\) in the four-loop case. This provides the most far-reaching information at the moment.
For simpler structures the number of moments obtained allows the reconstruction of the general \(N\) results under certain assumptions [49], in particular the assumption that only harmonic sums [52] contribute. As known from the light-cone expansion [22], the Mellin moments are the genuine quantities in describing the scaling violations of DIS structure functions. In any approach to the calculation of DIS anomalous dimensions and Wilson coefficients one may transform the integration-by-parts identities (IBP) [53] into difference equations for the master integrals and, related to that, for the amplitude. By using the method of arbitrarily high moments [54] one may calculate large numbers of moments very effectively. Even in the massive case we have recently generated 15,000 moments at three-loop order. The method of Ref. [54] then allows one to use the method of guessing [55] to find the associated recurrence, provided a sufficient number of moments is available. Here no special assumptions on the mathematical structure of the results are made, unlike in the approach used, e.g., in Ref. [49]. The obtained recurrences are then inspected using difference ring theory algorithms as implemented in the package Sigma [56]. In the case of first-order factorizing problems the general \(N\) solution can be calculated. In all other cases the first-order factors can be separated off. This method could be applied to all anomalous dimensions and massless Wilson coefficients to three-loop order, which are first-order factorizable problems. This also applies to various massive Wilson coefficients at three-loop order, as will be discussed below. The calculation of the Mellin moments by using the differentiation method is very different from other approaches. Therefore these results provide firm checks on the results for the case of general \(N\) recurrences, without making special assumptions.

### Results at General \(N\)

The leading order unpolarized anomalous dimensions were calculated in Refs. [20] and in the polarized case in [57]. A partonic approach has been used in Ref. [58], which is related to Refs. [20, 57] by a Mellin transform.3 The next-to-leading order anomalous dimensions and splitting functions were computed in Refs. [59, 60] resp. in Refs. [61]. Finally, the next-to-next-to-leading order ones were computed in Refs. [62, 63, 64, 65, 66, 67, 68, 69, 70] and Refs. [65, 71, 72, 73] in the unpolarized and polarized cases. Simpler color-factor contributions at four loops are available at general values of \(N\) [50, 74]. Here different techniques have been used, such as the forward Compton amplitude [62, 63, 65, 71], massive on-shell OMEs [66, 67, 72], massless off-shell OMEs [64, 70, 73], and different hard scattering cross sections with on-shell amplitudes [68, 69]. All contributions can be expressed in terms of harmonic sums, or in \(x\)-space, by the corresponding Mellin inversion, in terms of harmonic polylogarithms [75]. One may now ask which order of corrections is still important for current experimental precision analyses. Present day DIS data have an accuracy of up to O(1%). Future data, e.g. at the EIC [77], will reach at least this level. As shown in Figure 1, the NNLO corrections are not enough, in particular in the smaller \(x\) and large \(x\) regions. This applies also to the high luminosity data from the LHC. Therefore the four-loop splitting functions will have to be calculated.

Footnote 3: For further one–loop results see Ref. [25], Section 7.
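As a toy illustration of this 'guessing' step (our own sketch; the production tools of Refs. [54, 55] use many thousands of moments and modular arithmetic), one can determine a recurrence with polynomial coefficients from a list of exact moments by solving a homogeneous linear system:

```python
import sympy as sp

def guess_recurrence(seq, order, degree):
    """Toy guesser: find polynomials p_i(n) of degree <= `degree` with
    sum_i p_i(n) a(n+i) = 0, given the initial terms seq = [a(1), a(2), ...]."""
    n = sp.Symbol('n')
    c = [[sp.Symbol(f'c{i}{j}') for j in range(degree + 1)]
         for i in range(order + 1)]
    p = [sum(c[i][j] * n**j for j in range(degree + 1)) for i in range(order + 1)]
    eqs = [sum(p[i].subs(n, k) * seq[k - 1 + i] for i in range(order + 1))
           for k in range(1, len(seq) - order + 1)]
    A, _ = sp.linear_eq_to_matrix(eqs, [x for row in c for x in row])
    null = A.nullspace()        # a non-trivial null vector encodes a candidate recurrence
    if not null:
        return None
    v = null[0]
    return [sp.expand(sum(v[i * (degree + 1) + j] * n**j
                          for j in range(degree + 1))) for i in range(order + 1)]

# moments of the harmonic sum S_1(N) = 1 + 1/2 + ... + 1/N
moments = [sum(sp.Rational(1, k) for k in range(1, N + 1)) for N in range(1, 15)]
print(guess_recurrence(moments, order=2, degree=1))
# expect, up to overall normalization:
# (n+1) a(n) - (2n+3) a(n+1) + (n+2) a(n+2) = 0
```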
## 5 Massless Wilson Coefficients For the massless Wilson coefficients the one-loop corrections were given in [78, 79]. The two-loop corrections were computed in Refs. [45, 60, 80] and the three-loop corrections were calculated in Refs. [63, 65, 81]. First color-factor contributions of \(O(N_{F}^{2})\) at four loops were computed recently in Ref. [82]. Up to the three-loop level all these quantities can be represented in terms of harmonic sums in Mellin space, and the following 60 harmonic sums contribute [65], after algebraic reduction [83] \[\begin{array}{l}\{S_{1};S_{2},S_{-2};S_{3},S_{-3},S_{2,1},S_{-2,1};S_{4},S_{-4},S_{-2,2},S_{3,1},S_{-3,1},S_{2,1,1},S_{-2,1,1};S_{5},\\ S_{-5},S_{-2,3},S_{2,3},S_{2,-3},S_{-2,-3},S_{2,2,1},S_{-2,1,-2},S_{-2,2,1},S_{4,1},S_{-4,1},S_{2,1,-2},S_{3,1,1},\\ S_{-3,1,1},S_{2,1,1,1},S_{-2,1,1,1};S_{6},S_{-6},S_{-3,3},S_{4,2},S_{4,-2},S_{-4,2},S_{5,1},S_{-5,1},\\ S_{-2,-2,-2},S_{-2,2},S_{-2,-3,1},S_{-2,3,1},S_{-3,1,-2},S_{-3,-2,1},S_{-3,2,1},S_{-4,1,1},S_{2,3,1},S_{3,1,-2},\\ S_{3,2,1},S_{4,1,1},S_{-2,-2,1,1},S_{-2,1,1,2},S_{-2,2,1,1},S_{2,2,1,1},S_{3,1,1,1},S_{-3,1,1,1},\\ S_{2,1,1,1,1},S_{-2,1,1,1,1}\}.\end{array} \tag{13}\] The harmonic sums are recursively defined by \[S_{b,\vec{a}}(N)=\sum_{k=1}^{N}\frac{(\mathrm{sign}(b))^{k}}{k^{|b|}}\,S_{\vec{a}}(k),\quad S_{\emptyset}=1,\;\;b,a_{i}\in\mathbb{Z}\backslash\{0\},\;\;N\in\mathbb{N}\backslash\{0\}. \tag{14}\] ## 6 Massive Wilson Coefficients The massive Wilson coefficients receive single-mass and two-mass contributions (due to both charm- and bottom-quark corrections being present). In the following we will mainly discuss the asymptotic regime \(Q^{2}\gg m_{Q}^{2}\). In this case one obtains the following representation for the five contributing massive Wilson coefficients up to three-loop order [47] \[L_{2,q}^{\text{NS}}(N_{F})=a_{s}^{2}\left[A_{qq,Q}^{\text{NS},(2)}(N_{F})+\hat{C}_{2,q}^{\text{NS},(2)}(N_{F})\right]\]
\[+ a_{s}^{3}\Big{[}\;A_{qq,Q}^{\text{NS},(3)}(N_{F})+A_{qq,Q}^{\text{NS},(2)}(N_{F})C_{2,q}^{\text{NS},(1)}(N_{F})+\hat{C}_{2,q}^{\text{NS},(3)}(N_{F})\Big{]}\] \[\tilde{L}_{2,q}^{\text{PS}}(N_{F}) = a_{s}^{3}\;\Big{[}\;\bar{A}_{qq,Q}^{\text{PS},(3)}(N_{F})+A_{qq,Q}^{(2)}(N_{F})\;\tilde{C}_{2,q}^{(1)}(N_{F}+1)+\hat{C}_{2,q}^{\text{PS},(3)}(N_{F})\Big{]}\] \[\tilde{L}_{2,g}^{\text{S}}(N_{F}) = a_{s}^{2}A_{gg,Q}^{(1)}(N_{F})\tilde{C}_{2,g}^{(1)}(N_{F}+1)\] \[+ a_{s}^{3}\Big{[}\;\bar{A}_{gg,Q}^{(3)}(N_{F})+A_{gg,Q}^{(1)}(N_{F})\;\tilde{C}_{2,g}^{(2)}(N_{F}+1)+A_{gg,Q}^{(2)}(N_{F})\] \[\qquad\qquad\cdot\tilde{C}_{2,g}^{(1)}(N_{F}+1)+\;A_{Qg}^{(1)}(N_{F})\;\tilde{C}_{2,q}^{\text{PS},(2)}(N_{F}+1)+\hat{C}_{2,g}^{(3)}(N_{F})\Big{]}\] \[H_{2,q}^{\text{PS}}(N_{F}) = a_{s}^{2}\Big{[}\;A_{Qq}^{\text{PS},(2)}(N_{F})+\;\tilde{C}_{2,q}^{\text{PS},(2)}(N_{F}+1)\Big{]}\] \[+ a_{s}^{3}\Big{[}\;A_{Qq}^{\text{PS},(3)}(N_{F})+\;\tilde{C}_{2,q}^{\text{PS},(3)}(N_{F}+1)+A_{gq,Q}^{(2)}(N_{F})\;\tilde{C}_{2,g}^{(1)}(N_{F}+1)\] \[\qquad\qquad+A_{Qq}^{\text{PS},(2)}(N_{F})\;C_{2,q}^{\text{NS},(1)}(N_{F}+1)\Big{]}\] \[H_{2,g}^{\text{S}}(N_{F}) = a_{s}\;\Big{[}\;A_{Qg}^{(1)}(N_{F})+\;\tilde{C}_{2,g}^{(1)}(N_{F}+1)\Big{]}\] \[+ a_{s}^{2}\Big{[}\;A_{Qg}^{(2)}(N_{F})+\;A_{Qg}^{(1)}(N_{F})\;C_{2,q}^{\text{NS},(1)}(N_{F}+1)+\;A_{gg,Q}^{(1)}(N_{F})\] \[\qquad\qquad\cdot\tilde{C}_{2,g}^{(1)}(N_{F}+1)+\;\tilde{C}_{2,g}^{(2)}(N_{F}+1)\Big{]}\] \[+ a_{s}^{3}\Big{[}\;A_{Qg}^{(3)}(N_{F})+\;A_{Qg}^{(2)}(N_{F})\;C_{2,q}^{\text{NS},(1)}(N_{F}+1)+\;A_{gg,Q}^{(2)}(N_{F})\] \[\qquad\qquad\cdot\tilde{C}_{2,g}^{(1)}(N_{F}+1)+\;A_{Qg}^{(1)}(N_{F})\;\Big{[}C_{2,q}^{\text{NS},(2)}(N_{F}+1)+\;\tilde{C}_{2,q}^{\text{PS},(2)}(N_{F}+1)\Big{]}\] \[\qquad\qquad+\;A_{gg,Q}^{(1)}(N_{F})\;\tilde{C}_{2,g}^{(2)}(N_{F}+1)+\;\tilde{C}_{2,g}^{(3)}(N_{F}+1)\Big{]}, \tag{19}\] where \(\tilde{f}(N_{F})=f(N_{F})/N_{F}\) and \(\hat{f}(N_{F})=f(N_{F}+1)-f(N_{F})\). Here the \(C_{i}\) are the respective contributions of the massless Wilson coefficients and the \(A_{ij}^{(k)}\) are the massive \(k\)-loop OMEs. In the following we will deal with neutral-current interactions and the structure functions \(F_{2}(x,Q^{2})\) and \(F_{L}(x,Q^{2})\) [84]. Higher-order massive Wilson coefficients have also been calculated for charged-current processes. It turns out that beyond two-loop order several new mathematical quantities beyond the harmonic sums contribute, cf. Section 9. ### Single mass corrections The one-loop corrections can be calculated for general values of \(Q^{2}\) and were obtained in Refs. [85] in the unpolarized and polarized cases. The tagged-heavy-flavor two-loop corrections were calculated numerically in Refs. [86]. Note that these corrections do not refer to the inclusive structure functions. The latter were calculated in the case of asymptotic scales \(Q^{2}\gg m_{Q}^{2}\) in Refs. [87, 88, 89, 90, 27, 91]. The flavor non-singlet contributions can also be obtained in closed form for general values of \(Q^{2}\), cf. [87, 27, 90]. This is also the case for the pure-singlet contributions [92], and one may obtain a systematic expansion of the contributing power corrections of \(O((m_{Q}^{2}/Q^{2})^{k})\). Here root-valued alphabets play a role, and the results are in part given by incomplete elliptic integrals, which are iterative integrals, unlike complete elliptic integrals. The analytic asymptotic two-loop results depend only on harmonic sums [52], as do the logarithmic scale corrections to three-loop order.
The latter corrections were obtained in Refs. [93]. The three-loop corrections to the unpolarized asymptotic Wilson coefficients were computed in Refs. [94] and in the polarized case in Refs. [93, 95]. Massive OMEs also determine the transition matrix elements in the variable flavor number scheme (VFNS). The corrections up to two-loop order were calculated in Refs. [91, 96, 97] and the single- and two-mass VFNS were given in [91, 97, 98]. At three-loop order the massive OMEs beyond those contributing to the massive Wilson coefficients were calculated in Refs. [93, 66, 99] for the unpolarized case and in Refs. [93, 99, 100] in the polarized case. The transition relations in the single-mass variable flavor number scheme [97] are given by \[f_{k}(N_{F}+1,\mu^{2},m^{2},N)+f_{\overline{k}}(N_{F}+1,\mu^{2},m^{2},N)=\] \[A^{\rm NS}_{qq,Q}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\cdot\left[f_{k}(N_{F},\mu^{2},N)+f_{\overline{k}}(N_{F},\mu^{2},N)\right]+\tilde{A}^{\rm PS}_{qq,Q}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\] \[\cdot\Sigma(N_{F},\mu^{2},N)+\tilde{A}_{qg,Q}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\cdot G(N_{F},\mu^{2},N), \tag{20}\] \[f_{Q}(N_{F}+1,\mu^{2},m^{2},N)+f_{\overline{Q}}(N_{F}+1,\mu^{2},m^{2},N)=\] \[A^{\rm PS}_{Qq}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\cdot\Sigma(N_{F},\mu^{2},N)+A_{Qg}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\cdot G(N_{F},\mu^{2},N)\] (21) \[\Sigma(N_{F}+1,\mu^{2},m^{2},N)=\left[A^{\rm NS}_{qq,Q}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)+N_{F}\tilde{A}^{\rm PS}_{qq,Q}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\right.\] \[\left.+A^{\rm PS}_{Qq}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\right]\cdot\Sigma(N_{F},\mu^{2},N)\] \[+\left[N_{F}\tilde{A}_{qg,Q}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)+A_{Qg}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\right]\] \[\cdot G(N_{F},\mu^{2},N)\] (22) \[\Delta(N_{F}+1,\mu^{2},m^{2},N)=f_{k}(N_{F}+1,\mu^{2},m^{2},N)+f_{\overline{k}}(N_{F}+1,\mu^{2},m^{2},N)\] \[-\frac{1}{N_{F}+1}\Sigma(N_{F}+1,\mu^{2},m^{2},N)\] (23) \[G(N_{F}+1,\mu^{2},m^{2},N)=A_{gq,Q}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\cdot\Sigma(N_{F},\mu^{2},N)\] \[+A_{gg,Q}\left(N_{F},\frac{\mu^{2}}{m^{2}},N\right)\cdot G(N_{F},\mu^{2},N). \tag{24}\] In this way one may also define heavy-quark parton densities \(f_{Q(\overline{Q})}\) in the region \(\mu^{2}\gg m_{Q}^{2}\). In the two-mass case one may separate the genuine two-mass contributions to \(f_{c}\) and \(f_{b}\), cf. [48]. ### Two-mass corrections The two-mass corrections to the different massive OMEs, except that of \((\Delta)A_{Qg}^{(3)}\), can be calculated in terms of iterative integrals over square-root-valued alphabets, in which the real mass ratio \(\eta=m_{c}^{2}/m_{b}^{2}\) appears, in \(x\)-space. In addition, new special constants associated with these functions contribute. The corrections, except those for \(A_{Qg}^{(3)}\) in the unpolarized case, were calculated in Refs. [48, 101, 102]. In the polarized case the three-loop corrections were computed in Refs. [48, 103, 104]. The VFNS in the two-mass case has been given in [48]. It extends the one given in Section 6.1 and accounts for the fact that the mass ratio \(m_{c}^{2}/m_{b}^{2}\) is not small. ## 7 Scheme-invariant evolution From a systematic and theoretical point of view, the best way to measure the strong coupling constant in deep-inelastic scattering is through the evolution of a given structure function itself.
This requires specific experimental conditions, which were sometimes not available at some of the deep-inelastic facilities in the past. Having proton and deuteron data available in the same \((x,Q^{2})\)-bins and performing the deuteron wavefunction corrections allows one to measure the following non-singlet structure functions \[F_{2}^{\rm NS}(x,Q^{2}) = F_{2}^{p}-F_{2}^{d}=\frac{x}{6}C_{q}^{\rm NS+}\otimes[u+\bar{u}-d-\bar{d}], \tag{25}\] \[xg_{1}^{\rm NS}(x,Q^{2}) = x(g_{1}^{p}-g_{1}^{d})=\frac{x}{6}\Delta C_{q}^{\rm NS+}\otimes[\Delta u+\Delta\bar{u}-\Delta d-\Delta\bar{d}], \tag{26}\] see Ref. [105].4 The massless and the massive non-singlet Wilson coefficients are available to three-loop order [63, 65, 107], including the two-mass corrections [48]. The scale evolution of the non-singlet combination of the parton distribution functions forming a _single_ input density requires four-loop anomalous dimensions. The investigation of moments shows that these quantities can be extremely well constrained by a Padé approximant of the lower-order anomalous dimensions, implying a negligible theory error. The above equations can be rewritten in terms of evolution operators \((\Delta)E^{\rm NS}\), \[F_{2}^{\rm NS}(x,Q^{2}) = E^{\rm NS}(x,Q^{2},Q_{0}^{2})\otimes F_{2}^{\rm NS}(x,Q_{0}^{2}), \tag{27}\] \[xg_{1}^{\rm NS}(x,Q^{2}) = x[\Delta E^{\rm NS}(x,Q^{2},Q_{0}^{2})\otimes g_{1}^{\rm NS}(x,Q_{0}^{2})]. \tag{28}\] Here the evolution operators can be calculated analytically in Mellin space in the analyticity region of \(N\in\mathbb{C}\). The \(x\)-space result is then obtained by a single numerical contour integral around the singularities of the problem, cf. [29, 108]. Measuring the input structure functions at \(Q_{0}^{2}\) with correlated errors, the evolution from \(Q_{0}^{2}\) to the higher scales \(Q^{2}\) depends only on a single parameter, the strong coupling constant \(a_{s}(M_{Z}^{2})\) or the QCD scale \(\Lambda_{\rm QCD}\). The charm-quark mass may be fixed within errors in this process and accounted for by error propagation. A measurement of this kind is proposed for future facilities, like the EIC [77] or the LHeC [109]. Footnote 4: Scheme-invariant evolution equations in the singlet case were considered in Refs. [78, 106]. ## 8 The Drell-Yan process The Drell-Yan process of hadronic lepton pair production \(pp\to\gamma^{*}/Z^{*}+X\) with subsequent leptonic decay of the virtual gauge bosons [110], or the associated charged-current processes, probe quark-antiquark initial states at leading order. Therefore this process is particularly sensitive to the sea-quark distributions and yields information complementary to deep-inelastic scattering in disentangling the different light-flavor distributions. The one-loop corrections to this process were calculated in Refs. [111] around 1980. The two-loop corrections were completed in 1990 in Ref. [112]. A subset of the Wilson coefficients is also related to the initial-state QED corrections of \(e^{+}e^{-}\to\gamma^{*}/Z^{*}+X\), for massive electrons in the limit \(m_{e}^{2}/s\to 0\), where \(s\) denotes the cms energy, cf. Ref. [113]. As for all massless and massive two-loop single-scale Wilson coefficients, it has been shown in Ref. [114] that also in the case of the unpolarized and polarized Drell-Yan processes and Higgs-boson production only six functions are needed in Mellin \(N\)-space to describe these quantities. Here only harmonic sums [52] contribute. The three-loop corrections were calculated in Refs. [69].
Here also elliptic integrals contribute to the scattering cross section if it is expressed in the variable \(\hat{s}/s\), where \(\hat{s}\) is the cms energy of the virtual gauge boson. In the experimental analysis one has to use differential distributions, such as those encoded in the packages DYNNLO, FEWZ, MATRIX and MCFM [115]. ## 9 Conclusions Perturbative QCD has evolved significantly over the last 50 years and has proven to be the correct theory of the strong interactions at high virtualities. While reviews like Ref. [116] in 1973 were still reluctant to invoke \(SU(3)_{c}\) as part of the Standard Model, QCD now allows for highly precise predictions. These analytic results required new mathematical and computer-algebraic technologies to be obtained. On the side of computer algebra we would like to mention in particular the IBP methods [53], Forcer [51], the packages Sigma [56] and HarmonicSums [117], the method of arbitrarily high moments [54], and the method to perform the inverse Mellin transform without giving an explicit general-\(N\) expression [118]. On the mathematics side, new developments set in around 1998 with harmonic sums [52], generalized harmonic sums [119], cyclotomic harmonic sums [120], finite and infinite binomial sums [121, 122], the related iterated integrals [119, 75, 120, 121], special numbers, e.g. Ref. [123], and methods related to \({}_{2}F_{1}\)-solutions [124] and complete elliptic integrals [125, 124]. For a survey on these methods see Refs. [126]. Here the main question is: What can be integrated analytically, and how? An important aspect in this context is anti-differentiation [127]. This development is still ongoing, and we look forward to the brilliant new results to come. Finally, we would like to comment on fundamental parameters of the Standard Model, such as \(\alpha_{s}(M_{Z}^{2})\) and \(m_{c}\), which can already be determined from the present high-precision data. These determinations will be improved further once the calculation of all the three-loop heavy flavor corrections is completed. In earlier analyses we obtained in the non-singlet and singlet cases the following values for \(\alpha_{s}(M_{Z}^{2})\)e Footnote e: Working under cuts comparable to ours, lower values of \(\alpha_{s}^{\rm N^{2}LO}(M_{Z}^{2})=0.1136\) and \(0.1150\) were also obtained in Refs. [128, 129]. Furthermore, we agree with the results of Ref. [11], \(\alpha_{s}^{\rm N^{2}LO}(M_{Z}^{2})=0.1136\pm 0.0004\). Also a series of other measurements at N\({}^{2}\)LO delivers values significantly below the world average, cf. Ref. [19]. \[\alpha_{s}^{\rm N^{2}LO,NS}(M_{Z}^{2}) = 0.1141\pm 0.0022\ \ \mbox{[13]}, \tag{29}\] \[\alpha_{s}^{\rm N^{2}LO}(M_{Z}^{2}) = 0.1140\pm 0.0009\ \ \mbox{[12]}, \tag{30}\] and the charm quark mass \[m_{c}(m_{c})=1.252\pm 0.018\ {\rm GeV}\ \ \mbox{[12]}. \tag{31}\] Note that there is still a 0.07 GeV theory error involved in the latter, which will be significantly reduced once the massive three-loop corrections are completely available. The result is very well compatible with the four-loop result based on \(e^{+}e^{-}\) annihilation data, \[m_{c}(m_{c})=1.279\pm 0.008\ {\rm GeV}\ \ \mbox{[130]}. \tag{32}\] Dedicated measurements at future high-luminosity DIS facilities, like the EIC [77] and the LHeC [109], are expected to improve these results further and to finally resolve the problem of the still conflicting results on \(\alpha_{s}(M_{Z}^{2})\) from different measurements [19, 131].
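To illustrate the single-contour Mellin inversion referred to in Section 7 and in the conclusions, here is a schematic numerical sketch; the test function, the straight contour, and the truncation point are assumptions of this example and not the optimized contours of Refs. [29, 108, 118].

```python
# Schematic numerical inverse Mellin transform: recover f(x) = x^A (1-x)^B
# from its exact moments M(N) = B(N+A, B+1) along a straight contour Re N = c.
from mpmath import mp, mpc, mpf, gamma, quad, re, pi

mp.dps = 20
A, B = 1, 5  # test density f(x) = x * (1-x)^5

def M(Nc):
    # Mellin moments: Beta function ratio of Gamma functions
    return gamma(Nc + A) * gamma(B + 1) / gamma(Nc + A + B + 1)

def inverse_mellin(x, c=2.0, tmax=60):
    # f(x) = (1/pi) * Int_0^inf Re[ x^(-(c+it)) M(c+it) ] dt;
    # |M| ~ t^-(B+1), so truncating at tmax is adequate here
    g = lambda t: re(x**(-mpc(c, t)) * M(mpc(c, t)))
    return quad(g, [0, 10, 30, tmax]) / pi

for x in (0.1, 0.3, 0.7):
    xe = mpf(x)
    print(x, inverse_mellin(xe), xe**A * (1 - xe)**B)  # columns agree
```

For realistic PDF evolution the moments \(M(N)\) are the evolved Mellin-space densities, and tilted contours are used to accelerate convergence.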
2305.07243
Better speech synthesis through scaling
In recent years, the field of image generation has been revolutionized by the application of autoregressive transformers and DDPMs. These approaches model the process of image generation as a step-wise probabilistic process and leverage large amounts of compute and data to learn the image distribution. This methodology of improving performance need not be confined to images. This paper describes a way to apply advances in the image generative domain to speech synthesis. The result is TorToise -- an expressive, multi-voice text-to-speech system. All model code and trained weights have been open-sourced at https://github.com/neonbjb/tortoise-tts.
James Betker
2023-05-12T04:19:49Z
http://arxiv.org/abs/2305.07243v2
# Better speech synthesis through scaling ###### Abstract In recent years, the field of image generation has been revolutionized by the application of autoregressive transformers and DDPMs. These approaches model the process of image generation as a step-wise probabilistic process and leverage large amounts of compute and data to learn the image distribution. This methodology of improving performance need not be confined to images. This paper describes a way to apply advances in the image generative domain to speech synthesis. The result is TorToise - an expressive, multi-voice text-to-speech system. All model code and trained weights have been open-sourced at [https://github.com/neonbjb/tortoise-tts](https://github.com/neonbjb/tortoise-tts). ## 1 Background ### Text-to-speech The field of text-to-speech (TTS) research has been largely constrained to the development of efficient models trained on relatively small datasets. This choice has been driven by: 1. The desire to build efficient speech generation models that can be deployed at scale and thus must have a high sampling rate. 2. The unavailability of very large, transcribed speech datasets. 3. Challenges scaling the encoder-decoder model architectures traditionally used in TTS. #### 1.1.1 Neural MEL Inverters Most modern text-to-speech systems operate on speech data that is encoded as a MEL spectrogram. There are many compelling reasons to operate in this encoding space, but for neural networks, the most compelling reason is that it is highly spatially compressed. The MEL configuration used by the Tacotron, for example, operates at 256x compression over raw audio waveform data sampled at 22kHz, but contains most of the information found in that data. Because of this, an entire body of research has been dedicated to finding high-quality ways to decode MEL spectrograms back into audio waveforms. A synthesizer that performs this task is generally called a "vocoder", but I more generally refer to it as a "MEL inverter" in this paper. Modern MEL inverters built on neural networks are incredibly sophisticated. They produce waveforms that are nearly indistinguishable from recorded waveforms to human ears, and they are highly generalizable outside of their training set. I capitalize on this work by using an implementation of Univnet (Kim, 2021) as a final stage for my text-to-speech system. ### Image generation While TTS systems largely focus on latency, this has not been the case in other domains. For example, with image generation, more focus has been applied to training models that generate high quality results, regardless of the sampling time. For the purposes of this paper, I dive into two bodies of research: #### 1.2.1 Dall-E DALL-E (Ramesh et al., 2021) showed how an autoregressive decoder can be applied to text-to-image generation. This is particularly appealing because of the vast quantity of research that has been poured into scaling decoder-only models in the NLP domain. Two important problems persist with DALL-E: first, it relies on full-sequence self-attention, which carries a cost of \(O(N^{2})\) compute and memory, where N is the sequence length. This is particularly troublesome when dealing with modalities like images or audio, which have large sequence lengths when dealt with naively. Second, traditional autoregressive approaches require operating in the discrete domain. Images are encoded into sequences of discrete tokens using a quantizing autoencoder.
DALL-E then models these sequences of tokens using an autoregressive prior model. This is a strength of DALL-E in terms of expressiveness, but it comes at the cost of requiring a decoder which can convert these image tokens back into the pixel values that actually comprise an image. It is my opinion that the learned VQVAE decoder used by DALL-E is principally responsible for the blurry incoherence exhibited by most of its samples. #### 1.2.2 DDPMs The generative model space has long been plagued by models that either exhibit mean-seeking behavior (resulting in blurriness) or mode collapse (resulting in a lack of diversity or generalization). Denoising diffusion probabilistic models (DDPMs; Ho et al., 2020) have recently arisen as the first type of generative model capable of producing crisp, coherent and diverse images. These models have been shown to be quite effective at using low-quality guidance signals to reconstruct the high-dimensional space that those guidance signals were derived from. Put another way, they are great at super-resolution. There are two important caveats to DDPMs: 1. Traditional approaches to DDPMs rely on fixed output shapes that are known before sampling begins. As a concrete example relevant to this paper, DDPMs cannot learn to convert text into audio signals because they cannot solve the implicit alignment problem between text and audio. 2. DDPMs must be sampled from over multiple iterations. This sampling process consumes a great deal of compute, and means sampling from a DDPM will always incur a significant latency cost. #### 1.2.3 Re-ranking DALL-E introduced the process of "re-ranking" the outputs of autoregressive models. This process samples randomly from the autoregressive model and picks the highest quality output of k outputs for downstream use. Such a procedure requires a strong discriminator: a model that can tell good text/image pairings from bad. DALL-E used CLIP (Radford et al., 2021), a model trained with a contrastive text and image pairing objective. ## 2 Methods ### Joining Autoregressive Decoders and DDPMs To review some of the conclusions drawn above: 1. Autoregressive models are strong at converting between unaligned domains like vision, text and speech. 2. DDPMs operate in the continuous domain, which allows them to model expressive modalities. Both types of models have demonstrated the ability to scale performance with additional compute and data. It becomes evident that when posed with a problem like generating continuous data like speech spectrograms or images, a marriage of these two approaches might have some distinct advantages. Specifically, in inference, the autoregressive model will be used to convert a sequence of text tokens to a sequence of tokens representing the output space (in our case, speech tokens). The DDPM will then be used to decode these tokens into a high quality representation of speech. ### Applying Autoregression+DDPMs to TTS To build out the previously proposed system, we need to train the following neural networks: 1. An autoregressive decoder which predicts a probability distribution for speech tokens, conditioned on text. 2. A contrastive model similar to CLIP which is used to rank outputs of the autoregressive decoder. 3. A DDPM which can convert speech tokens back into speech spectrograms. The architectures and training process for all of these networks largely follow the procedures found in their respective literature.
Details can be found in Appendix B. #### 2.2.1 Conditioning Input A unique design choice made with TorToise is an additional input which is provided to both the autoregressive generator and the DDPM, which I term the speech conditioning input. The speech conditioning input starts as one or more audio clips of the same speaker as the target. These clips are converted to MEL spectrograms and fed through an encoder consisting of a stack of self-attention layers. The autoregressive generator and the DDPM have their own conditioning encoders, both of which are learned alongside their respective networks. The output of these layers is averaged to produce a single vector. The vectors from all of the encoded conditioning clips are then averaged again before being fed as an input into the autoregressive or diffusion networks. Figure 1: TorToise-v2 architectural design diagram. Inputs of text and a reference audio clip (for speaker cloning) flow through a series of decoding and filtering networks to produce high-quality speech. The intuition behind the conditioning input is that it provides a way for the models to infer vocal characteristics like tone and prosody, such that the search space of possible speech outputs corresponding to a given textual input is greatly reduced. #### 2.2.2 The "TorToise Trick" For the majority of the training procedure, the DDPM is trained to convert discrete speech codes into MEL spectrograms. After this process has converged, I fine-tune the DDPM on the autoregressive latent space, which is pulled from the AR model outputs instead of the speech codes. This is described in detail in Appendix B. The logic here is that the AR latent space is far more semantically rich than discrete tokens. By fine-tuning on this latent space, we improve the efficiency of the downstream diffusion model. I liken this to recent work showing that training decoder models conditioned on frozen text encoders produces large efficiency gains. This fine-tuning is one of the greatest contributors to model output quality of any of the tweaks I made to the various model training processes. ### Clvp As mentioned earlier, a good strategy for gathering expressive outputs from generative models is using a qualitative discriminator to re-rank several outputs, then choosing only the best. DALL-E uses CLIP for this. This same type of approach used for CLIP can be applied to speech: after all, most TTS datasets are simply pairings of audio clips and text. By training a model on these pairs in a contrastive setting, the model becomes a good discriminator for speech. For TorToise, I train the Contrastive Language-Voice Pretrained Transformer, or CLVP. It has many of the same properties as CLIP, but notably serves as a scoring model for use in re-ranking TTS outputs from the AR model. To make this work efficiently in inference, I trained CLVP to pair discretized speech tokens with text tokens. This way, CLVP can rerank multiple AR outputs without the expensive diffusion model being invoked. ## 3 Training These models were trained on a small cluster of 8 NVIDIA RTX-3090s over the period of 1 year. Specifics on how these models are trained can be found in Appendix B. ## 4 Inference Process Once the four models of the framework are fully trained, the inference procedure is as follows: 1. Feed the conditioning inputs and the text into the autoregressive model and decode a large number of output candidates. 2. Use CLVP to produce correlation scores between each speech candidate and text. 3.
Choose the top k speech candidates, and for each candidate: 4. Decode to a MEL spectrogram using the DDPM. 5. Convert to a waveform using a conventional vocoder. When decoding the autoregressive model, nucleus sampling is used with P=0.8, repetition penalty=2 and softmax temperature=0.8. Sampling from DDPMs is a highly studied and rapidly changing field. At the time TorToise was designed, I found the sampling configuration with the best balance between quality and inference speed to be as follows: 1. Algorithm: DDIM (Song et al., 2022) 2. Schedule: Linear 3. Sampling steps: 64 4. Conditioning-Free Guidance constant: 2 ## 5 The Dataset Since my goal was to train what is essentially a large language model, I needed a lot of data. I started with the LibriTTS (Zen et al., 2019) and HiFiTTS (Bakhturina et al., 2021) datasets, which combined contain 896 hours of transcribed speech. I built an additional, "extended" dataset of 49,000 hours of speech audio from audiobooks and podcasts scraped from the internet. Details on how this dataset was built are in Appendix I. The official LibriTTS test split was used for validation purposes. ## 6 Experiments Text-to-speech systems are challenging to experimentally compare because many state-of-the-art systems are closed source with few samples to compare against. To this end, I built my own evaluation suite which uses CLVP to produce a distance metric between real samples and generated samples, similar to the FID score used for images. I also use an open source wav2vec model to characterize the "intelligibility" of a speech segment. I have open sourced this work here. Beyond this, comparisons between the samples generated from TorToise and those generated by other papers can be found here. ## 7 Conclusion TorToise is the latest in a line of recent state-of-the-art breakthroughs that use general model architectures. Almost no part of TorToise was designed specifically for audio processing, yet it outperforms all previous TTS models in realism. It does this by: 1. Embracing generalist architectures like stacks of transformer layers. 2. Leveraging a large, high-quality dataset. 3. Training at large-ish scale and high batch size. My main take-away from this project is how incredibly strong the results are from adhering to the above 3 points. It seems likely to me that any digitized modality is subject to generative modeling using this framework.
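The inference procedure above can be condensed into a short, hedged Python sketch. The class and method names below (`ar.sample`, `clvp.score`, `ddpm.decode`, `vocoder`) are illustrative stand-ins chosen for this sketch and do not correspond to the actual tortoise-tts API; only the sampling hyperparameters are taken from the text.

```python
# Illustrative sketch of the TorToise inference loop: AR sampling, CLVP
# re-ranking, then DDPM decoding and vocoding for the surviving candidates.
import torch

def synthesize(text_tokens, cond_mels, ar, clvp, ddpm, vocoder,
               num_candidates=64, top_k=1):
    # 1. sample many speech-token candidates from the autoregressive model
    candidates = [ar.sample(text_tokens, cond_mels, top_p=0.8,
                            repetition_penalty=2.0, temperature=0.8)
                  for _ in range(num_candidates)]
    # 2. re-rank with CLVP: contrastive text / speech-token correlation scores
    scores = torch.tensor([clvp.score(text_tokens, c) for c in candidates])
    best = [candidates[i] for i in scores.topk(top_k).indices]
    # 3. decode each surviving candidate: DDPM -> MEL, then vocoder -> waveform
    waves = []
    for speech_tokens in best:
        mel = ddpm.decode(speech_tokens, cond_mels,
                          sampler="ddim", steps=64, cfg_scale=2.0)
        waves.append(vocoder(mel))
    return waves
```

Note how the re-ranking happens on cheap discrete tokens, so the expensive diffusion decode runs only for the top-k candidates — the design point emphasized in the CLVP section.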
2305.08170
The GRAVITY young stellar object survey -- XI. Probing the inner disk and magnetospheric accretion region of CI Tau
Aims: We aim at spatially and spectrally resolving the innermost scale of the young stellar object CI Tau to constrain the inner disk properties and better understand the magnetospheric accretion phenomenon. Methods: The high sensitivity offered by the combination of the four 8-m telescopes of the VLTI allied with the spectral resolution of the K-band beam combiner GRAVITY offers a unique capability to probe the sub-au scale of the CI Tau system, tracing both dust and gas emission regions. We develop a geometrical model to fit the interferometric observables and constrain the physical properties of the inner dusty disk. The continuum-corrected pure line visibilities have been used to estimate the size of the Br$\gamma$ emitting region. Results: From the K-band continuum study, we report a highly inclined resolved inner dusty disk, with an inner edge located at a distance of $21\pm2\,R_\star$ from the central star, which is significantly larger than the dust sublimation radius (R$_{sub}= 4.3$ to $8.6\,R_\star$). The inner disk appears misaligned compared to the outer disk observed by ALMA and the non-zero closure phase indicates the presence of a bright asymmetry on the south-west side. From the differential visibilities across the Br$\gamma$ line, we resolve the line emitting region, and measure a size of $4.8^{+0.8}_{-1.0}$ $R_\star$. Conclusions: The extended inner disk edge compared to the dust sublimation radius is consistent with the claim of an inner planet, CI Tau b, orbiting close-in. The inner-outer disk misalignment may be induced by gravitational torques or magnetic warping. The size of the Br$\gamma$ emitting region is consistent with the magnetospheric accretion process. Assuming it corresponds to the magnetospheric radius, it is significantly smaller than the co-rotation radius, which suggests an unstable accretion regime that is consistent with CI Tau being a burster.
GRAVITY Collaboration, A. Soulain, K. Perraut, J. Bouvier, G. Pantolmos, A. Caratti o Garatti, P. Caselli, P. Garcia, R. Garcia Lopez
2023-05-14T14:38:53Z
http://arxiv.org/abs/2305.08170v1
# The GRAVITY Young Stellar Object survey+ ###### Abstract Context: T Tauri stars are known to be the cradle of planet formation. Most exoplanets discovered to date lie at the very inner part of the circumstellar disk (\(<1\) au). The innermost scale of Young Stellar Objects is therefore a compelling region to be addressed, and long-baseline interferometry is a key technique to unveil their mysteries. Aims: We aim at spatially and spectrally resolving the innermost scale (\(\leq 1\) au) of the young stellar system CI Tau to constrain the inner disk properties and better understand the magnetospheric accretion phenomenon. Methods: The high sensitivity offered by the combination of the four 8-m class telescopes of the Very Large Telescope Interferometer (VLTI) allied with the high spectral resolution (R \(\sim\) 4000) of the K-band beam combiner GRAVITY offers a unique capability to probe the sub-au scale of the CI Tau system, tracing both dust (continuum) and gas (Br\(\gamma\) line) emission regions. We develop a physically motivated geometrical model to fit the interferometric observables (visibilities and closure phases (CP)) and constrain the physical properties of the inner dusty disk. The continuum-corrected pure line visibilities have been used to estimate the size of the Hydrogen I Br\(\gamma\) emitting region. Results: From the K-band continuum study, we report a highly inclined (\(i\sim 70^{\circ}\)) resolved inner dusty disk, with an inner edge located at a distance of \(21\pm 2\,R_{\star}\) from the central star, which is significantly larger than the dust sublimation radius (\(R_{\rm sub}\) = 4.3 to 8.6\(\,R_{\star}\)). The inner disk appears misaligned compared to the outer disk observed by ALMA and the non-zero closure phase indicates the presence of an asymmetry that could be reproduced with an azimuthally modulated ring with a brighter south-west side. From the differential visibilities across the Br\(\gamma\) line, we resolve the line emitting region, and measure a size of \(4.8^{+0.8}_{-1.0}\,R_{\star}\). Conclusions: The extended inner disk edge compared to the dust sublimation radius is consistent with the claim of an inner planet, CI Tau b, orbiting close-in. The inner-outer disk misalignment may be induced by gravitational torques or magnetic warping. The size of the Br\(\gamma\) emitting region is consistent with the magnetospheric accretion process. Assuming it corresponds to the magnetospheric radius, it is significantly smaller than the co-rotation radius (\(R_{\rm cor}\) = \(8.8\pm 1.3\,R_{\star}\)), which suggests an unstable accretion regime that is consistent with CI Tau being a burster. ## 1 Introduction The power of long baseline near-infrared interferometry to investigate the inner regions of young stellar systems has been amply demonstrated in the past years (Dullemond & Monnier, 2010). The inner disk structure (GRAVITY Collaboration et al., 2021), associated outflows (GRAVITY Collaboration et al., 2017), and the accretion process (Gravity Collaboration et al., 2020) can all be probed on an angular scale of less than one milliarcsecond (mas), which corresponds to a region extending a few stellar radii around the central star at the distance of the closest star-forming regions.
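For orientation, the sub-mas angular scales discussed in this paper translate into physical scales as in the following small helper — an illustrative addition, assuming the distance (160.3 pc) and stellar radius (2 \(R_{\odot}\)) adopted later in the text:

```python
# Convert an interferometric angular scale (mas) into au and stellar radii
# at CI Tau (assumed: d = 160.3 pc, R_star = 2 R_sun; 1 R_sun ~ 0.00465 au).
D_PC = 160.3
R_STAR_AU = 2 * 0.00465

def mas_to_au(theta_mas, d_pc=D_PC):
    # 1 arcsec at 1 pc subtends 1 au, hence 1 mas <-> 1e-3 * d[pc] au
    return theta_mas * 1e-3 * d_pc

def mas_to_rstar(theta_mas):
    return mas_to_au(theta_mas) / R_STAR_AU

print(mas_to_au(1.0), mas_to_rstar(1.0))  # ~0.16 au and ~17 R_star per mas
```

One milliarcsecond thus corresponds to roughly 17 stellar radii at CI Tau, which is why sub-mas precision is required to probe the star-disk interaction region.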
On this scale, accretion in classical T Tauri stars (i.e., Class II young stellar objects with \(M_{\star}<2\,M_{\odot}\)) occurs along funnel flows due to the strong stellar magnetic field (\(\approx\) kG) that channels the infalling gas (e.g., Romanova & Owocki, 2015; Hartmann et al., 2016; Bouvier et al., 2007). The inner disk is disrupted at the magnetospheric or truncation radius (typically at \(\sim 5\,R_{\star}\)), where the magnetic pressure of the stellar field balances the thermal and/or ram pressure of the accreting matter (Bessolaz et al., 2008; Blinova et al., 2016; Pantolmos et al., 2020). The observational evidence for the magnetospheric accretion process in young stars, while quite convincing and widely accepted, has so far been mostly indirect. It relies on measurements of magnetic field strength and topology (e.g., Donati & Landstreet, 2009) and mass accretion rate estimates (e.g., Manara et al., 2021; Alcala et al., 2021). It is probed through a number of spectral diagnostics, including the emission line spectrum of T Tauri stars that forms, at least in part, in the magnetic funnel flows (e.g., Bouvier et al., 2020), and the UV continuum excess arising from the accretion shock at the stellar surface (e.g., Espaillat et al., 2022). In recent years, the increased sensitivity of long baseline interferometers has opened a new window to the star-disk interaction region, with results that provide a direct estimate of the extent of the magnetospheric cavity and support the magnetospheric accretion paradigm (Gravity Collaboration et al., 2020; Bouvier et al., 2020; Gravity Collaboration et al., 2023). We present here the results from VLTI/GRAVITY observations of the young stellar system CI Tau. CI Tau is a 2 Myr-old (Guilloteau et al., 2014), 0.9 \(M_{\odot}\) (Simon et al., 2019) classical T Tauri star, located at a distance of 160.3 \(\pm\) 0.4 pc (Gaia Collaboration et al., 2022) in the Taurus molecular cloud. It is known to harbour a strong, mostly poloidal magnetic field up to 3.7 kG and exhibits a variable mass accretion rate of the order of 2\(\times 10^{-8}\,M_{\odot}\,\)yr\({}^{-1}\) (Donati et al., 2020). On the large scale, CI Tau is surrounded by a circumstellar disk that extends up to 200 au on millimetre continuum images, and features a succession of dusty rings, with gaps located at radii \(\sim\) 13, 39, and 100 au, suggestive of on-going planet formation (Clarke et al., 2018). Indeed, CI Tau is the only accreting T Tauri star for which a hot super-Jupiter (\(M_{\rm p}=11.3\,M_{\rm Jup}\)) has been claimed from radial velocity variations (Johns-Krull et al., 2016), although the planetary origin of the radial velocity signal has been questioned (Donati et al., 2020). The goal of the VLTI/GRAVITY observations we report here was to investigate the star-disk interaction region of this intriguing young system, to derive the properties of the dusty inner disk on a scale of 0.1 au or less from continuum K-band visibilities and phases, and to investigate the magnetospheric accretion region through the analysis of differential interferometric quantities measured across the Br\(\gamma\) line profile.
Section 2 describes the observations and data reduction, Section 3 presents the derivation of the properties of the inner disk and of the Br\(\gamma\)-line emitting region through model-fitting, and Section 4 discusses the results in light of the possible existence of CI Tau b, compares the inner disk properties to the outer disk structure, and confronts the interferometric results with magnetospheric accretion models. Conclusions are presented in Section 5. ## 2 Observations We observed CI Tau at two epochs on January 9\({}^{\rm th}\) 2021 and February 23\({}^{\rm rd}\) 2022 in the K-band with the GRAVITY instrument (Gravity Collaboration et al., 2017), combining the four Unit Telescopes (UTs) of the ESO Very Large Telescope Interferometer (VLTI) installed in Paranal, Chile. This program was part of the GTO large program dedicated to Young Stellar Objects (YSOs). The maximum baseline accessible with the UTs is 130 m, which corresponds to a maximal angular resolution of \(\lambda/2B_{max}\approx 1.5\) mas at 2.2 \(\mu\)m. Both epochs were carried out using the single-field on-axis mode, where 50% of the flux is sent to the fringe tracker (FT) and 50% to the scientific instrument (SC): the instrument tracks the fringes on the science target itself to stabilize them at a frequency of 900 Hz (Lacour et al., 2019), enabling longer integration on the SC, in particular for faint targets. Data were obtained in high spectral resolution mode (R \(\sim\) 4000). GRAVITY covers a spectral range from 1.9 to 2.4 \(\mu\)m, including the neutral-hydrogen Br\(\gamma\) line at 2.1661 \(\mu\)m. Weather conditions were excellent during the two nights; we recorded eleven and six 5-min long files on the object in 2021 and 2022, respectively (Table 1). We observed two calibrators before (HD 31464) and after (HD 40003) the observations to accurately estimate the atmospheric transfer function and calibrate the interferometric observables. We used the SearchCal tool (Chelli et al., 2016) to establish our calibrator list, which offers a way to search for objects that are single stars, bright, unresolved and close to the target. Due to technical issues during the first epoch, one of the telescopes (UT2) was down during the observations, which reduced the number of exploitable baselines from six to three. The data reduction was performed using the ESO GRAVITY pipeline1 (Lapeyrere et al., 2014). For each file, we extracted six (three) complex visibilities and four (one) closure phase measurements in 2022 (2021), dispersed over six spectral channels for the FT and about 1600 for the SC, respectively. The bluest part of the fringe tracker being contaminated by the metrology laser working at 1.908 \(\mu\)m, we discarded the first channel from our analysis. Finally, we recovered the differential visibilities and phases in the Br\(\gamma\) line region from the SC data. The error bars supplied by the pipeline are known to be underestimated and do not include residual calibration effects (Bouvier et al., 2020; Gravity Collaboration et al., 2021). To be conservative, we refined our uncertainties by computing the total rms over the files for both observables, which yields constant uncertainties of 2% for the visibility and 0.7 degrees for the closure phases. The final uncertainties being similar between the two epochs, we adopted the same error bars for all observations.
Normalising uncertainties between our two epochs allows us to mitigate the effects of different weather conditions and adaptive optics correction, and to attribute the same weight to the 2021 and 2022 data sets. Footnote 1: [https://www.eso.org/sci/software/pipelines/gravity](https://www.eso.org/sci/software/pipelines/gravity). ## 3 Results In this section we report the method used to derive the main properties of the emitting regions both in the K-band continuum and across the Br\(\gamma\) line. ### The inner dusty disk #### 3.1.1 Geometrical model To model the continuum complex visibility, we follow the same approach as adopted by Lazareff et al. (2017) and Gravity Collaboration et al. (2021), which consists of representing the system as a three-component model: an unresolved point-like star (\(s\)), as we are not able to resolve the stellar photosphere, a circumstellar dusty disk (\(d\)), and a fully resolved component (\(h\)). Each element is represented by a complex visibility function (\(V_{s}\), \(V_{d}\) and \(V_{h}\)) and accounted for in the whole system by their flux contributions (\(F_{s}\), \(F_{d}\) and \(F_{h}\)): \[V_{\rm tot}(\mathbf{B}/\lambda)=\frac{F_{s}V_{s}+F_{d}V_{d}(\mathbf{B}/\lambda)+F_{h}V_{h}}{F_{s}+F_{d}+F_{h}}, \tag{1}\] where \(V_{s}=1\) for a point source, \(V_{h}=0\) for a fully resolved component, \(\mathbf{B}/\lambda\) is the spatial frequency in rad\({}^{-1}\) at the different baselines \(\mathbf{B}\), and \(F_{s}+F_{d}+F_{h}=1\). We consider wavelength-independent flux contributions (non-chromatic model). The extended component \(V_{h}\) is commonly used to mimic the effect of the scattered light (Pinte et al., 2008), which decreases the visibility at the zero spatial frequency (\(\mathbf{B}/\lambda=0\)). This last component appears to contribute significantly in the case of YSOs, such as transitional disks (Lazareff et al., 2017) or T Tauri stars (Anthonioz et al., 2015; Gravity Collaboration et al., 2021). As Lazareff et al. (2017), we model the dusty disk contribution by a circular ring defined by a radius \(a_{r}\), an inclination \(i\), and a position angle \(PA\). To describe a smooth inner rim radial profile, we convolve the ring model with a 2-d Gaussian model. In the following, we present the convolution effect by using the ratio between the Gaussian kernel \(a_{k}\) and the half-flux radius \(a\) as \(w=a_{k}/a\) in percent. Finally, we add an azimuthal brightness modulation along the ring described by cosine and sine amplitudes \(c_{1}\) and \(s_{1}\). This modulation can be used to represent a non-uniform azimuthal disk profile responsible for a non-zero closure phase signature. In practice, \(c_{1}\) and \(s_{1}\) can vary between -1 and 1, allowing us to drag the brightest portion (if any) around the disk in polar coordinates. #### 3.1.2 Fitting strategy For the first epoch, the FT data were not fully exploitable due to a relatively low coherence time (\(\sim\) 2-3 ms) that degraded the signal-to-noise ratio significantly. To address this, we used the SC data instead and calculated the observables averaged over 300 spectral channels, which reproduces the spectral resolution of the FT camera (R \(\sim\) 30). For the second epoch, the weather conditions were optimal with a coherence time around 7 ms, but we adopted the same approach as in 2021 to get consistent results between the two epochs.
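A compact numerical sketch of the three-component model of Eq. (1) is given below as an illustration, not as the fitting code used in the paper: the ring is modeled as an infinitesimally thin inclined circle whose visibility is a Bessel function, the Gaussian smoothing acts multiplicatively in the Fourier domain (here with width \(\sigma=w\,a\), one possible convention), and the azimuthal modulation terms \(c_1, s_1\) are omitted for brevity. Parameter defaults follow the 2021 values quoted later in Table 2.

```python
# Star + Gaussian-smoothed inclined ring + resolved halo visibility model.
import numpy as np
from scipy.special import j0

MAS = np.pi / 180 / 3600e3  # mas -> rad

def v_total(u, v, Fs=0.55, Fd=0.36, Fh=0.09,
            a_mas=1.25, w=0.17, incl=71.0, pa=148.0):
    # rotate baselines into the disk frame and project for inclination
    pa_r, i_r = np.radians(pa), np.radians(incl)
    up = u * np.cos(pa_r) + v * np.sin(pa_r)
    vp = (-u * np.sin(pa_r) + v * np.cos(pa_r)) * np.cos(i_r)
    q = np.hypot(up, vp)                      # effective spatial frequency [rad^-1]
    a = a_mas * MAS
    ring = j0(2 * np.pi * a * q)              # infinitesimally thin ring
    kernel = np.exp(-2 * (np.pi * w * a * q) ** 2)  # Gaussian smoothing
    Vd = ring * kernel
    return (Fs + Fd * Vd) / (Fs + Fd + Fh)    # star: V = 1, halo: V = 0

# e.g. a 130 m baseline at 2.2 microns:
B = 130.0
print(v_total(B / 2.2e-6, 0.0))
```

The star term keeps the visibility well above zero even when the ring is fully resolved, which is why the independent prior on \(F_s\) described next is so valuable for breaking the size-flux degeneracy.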
To estimate the properties of the continuum emitting region, we perform the fit over several steps to avoid any local \(\chi^{2}\) minima and to robustly estimate the associated uncertainties. Since the circumstellar disk is only partially resolved by the interferometer (\(V\sim 0.8\)), its flux contribution \(F_{d}\) and its size \(a_{r}\) are partly degenerate (Lazareff et al., 2017). To get an independent estimate of the relative contributions of the disk and the star in the K-band, we used the near-infrared veiling measured as described in Sousa et al. (2023). At the time of our 2021 observations, the infrared veiling amounted to \(r_{K}=0.83\pm 0.04\) (A. Sousa, priv. comm.), which yields an estimate of \(F_{s}=1/(1+r_{K})=55\%\) around 2.2 \(\mu\)m. To consider the star's intrinsic variability, we adopted a typical error of 5% on this measurement. Besides, we independently evaluated the stellar contribution by fitting the target's spectral energy distribution. We collected the photometry measurements from EPIC (B, V and R bands, Howell et al., 2014; Huber et al., 2017), Gaia DR3 (G\({}_{bp}\), G and G\({}_{rp}\) bands, Gaia Collaboration et al., 2022) and 2MASS (J, H, and K bands, Skrutskie et al., 2003, 2006). We adopted the stellar parameters and the visual extinction (\(A_{V}=0.65\)) determined by Donati et al. (2020) and used the accurate distance estimate from Gaia DR3 (160.3 \(\pm\) 0.4 pc, Gaia Collaboration et al., 2022). We thus derived \(F_{s}=55\%\), quite consistent with the veiling measurement. We therefore used this value for the star contribution as a prior during the fitting process, with a 5% tolerance. This additional constraint lifts the degeneracy between the ring's size and its flux contribution. We carried out an initial parameter search using the Levenberg-Marquardt method2. We estimated the geometrical parameters with and without azimuthal modulation, by using or not using the closure phases. Given the lower \(\chi^{2}\) values obtained with the asymmetric case (1.6 versus 2.1 for the total \(\chi^{2}\) value, and 0.5 versus 2.9 when considering the CP only), we adopted this model to fit the data using a Monte-Carlo Markov Chain (MCMC) approach3. We used 200 walkers for 2000 iterations and rejected the first 1000 iterations as the burn-in time. The 1-\(\sigma\) uncertainty associated with each parameter is computed from the final distribution of walkers using the 16th, 50th, and 84th percentiles. Footnote 3: Available with **emcee** (Foreman-Mackey et al., 2013). #### 3.1.3 Inner disk properties For the 2021 data set, the model converges toward an elongated thin ring model with an inner rim radius of \(a_{r}=0.20\pm 0.02\) au. We estimate a width-to-radius ratio \(w\) smaller than 28%, indicating a resolved inner gap. The major-to-minor axis elongation corresponds to a relatively high inclination of \(i=71\pm 1^{\circ}\) at a position angle of \(PA=148\pm 1^{\circ}\) counted from North to East. The dusty disk contribution is constant between the two epochs (\(F_{d}=36\pm 2\%\)) and the halo contribution remains between \(8\) and \(10\%\). For the second epoch, the limited time of observation (\(\sim\)1h) corresponds accordingly to a short range of spatial frequencies (30 vs. 75 arcsec\({}^{-1}\), see Figs. 1 and 2). This prevents us from resolving the inner gap (\(w\) close to 1), and from constraining the orientation of the system in an unambiguous way. Figure 1: Model image of CI Tau’s inner disk in 2021. The position of the star is depicted and has been removed to show the disk structure. The upper-left inset shows the u-v coverage. The colour circles represent the baselines: UT1-UT3 (_pink_), UT3-UT4 (_green_) and UT1-UT4 (_blue_). \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline MJD & Date & Time (UT) & Configuration & \(N\) & Seeing (”) & \(\tau_{0}\) (ms) & Calibrators \\ \hline 59223.12 & 2021-01-09 & 01:35–04:03 & UT1-UT3-UT4 & 11 & 0.68–1.05 & 2.7–5.7 & HD 31464, HD 40003 \\ 59633.04 & 2022-02-23 & 00:39–01:26 & UT1-UT2-UT3-UT4 & 6 & 0.36–0.53 & 5.8–8.1 & HD 31464, HD 40003 \\ \hline \end{tabular} \end{table} Table 1: Journal of the VLTI/GRAVITY observations. In order to derive the lower limit of the system’s inclination for the second epoch, we performed a \(\chi^{2}\)-minimum search
In order to derive the lower limit of the system's inclination for the second epoch, we performed a \(\chi^{2}\)-minimum search Figure 1: Model image of CI Tau’s inner disk in 2021. The position of the star is depicted and has been removed to show the disk structure. The upper-left inset shows the u-v coverage. The colour circles represent the baselines: UT1-UT3 (_pink_), UT3-UT4 (_green_) and UT1-UT4 (_blue_). \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline MJD & Date & Time (UT) & Configuration & \(N\) & Seeing (”) & \(\tau_{0}\) (ms) & Calibrators \\ \hline 59223.12 & 2021-01-09 & 01:35–04:03 & UT1-UT3-UT4 & 11 & 0.68–1.05 & 2.7–5.7 & HD 31464, HD 40003 \\ 59633.04 & 2022-02-23 & 00:39–01:26 & UT1-UT2-UT3-UT4 & 6 & 0.36–0.53 & 5.8–8.1 & HD 31464, HD 40003 \\ \hline \end{tabular} 1 \end{table} Table 1: Journal of the VLTI/GRAVITY observations. (see App. B for details). The inner dusty disk properties are presented in Table 2. The values of inclination and position angle for the second epoch correspond to those obtained from the \(\chi^{2}\) search (Fig. B.1). The MCMC-posterior distribution obtained for the second epoch converges to a very high inclination (close to 90\({}^{\circ}\), Fig. C.2) that prevents us from determining the asymmetric modulation (\(c_{j}\), \(s_{j}\) compatible with zero). The inner rim radius estimate from the 2022 data set appears significantly smaller than the one derived for the 2021 data set. The inner gap being unresolved in 2022, the inner disk size could be underestimated. Figure 1 displays the best-fit model image as determined by GRAVITY in 2021. The non-zero closure phases are consistent with the presence of an asymmetry in the inner rim located in the South-West part. The data-model comparison and the MCMC distributions are presented and discussed in Appendix A and C. ### The Br\(\gamma\) line emitting region GRAVITY's high spectral resolution allows us to resolve the Br\(\gamma\) line profile at 2.1661 \(\mathrm{\SIUnitSymbolMicro m}\). This spectral feature is the privileged tracer of the star-disk interaction, attributed to the magnetospheric accretion process (Hartmann et al., 1994). Following Weigelt et al. (2007), Kraus et al. (2008) and Gravity Collaboration et al. (2023), we compute the continuum-subtracted observables, the so-called pure line visibilities, by using the emission line profile provided by GRAVITY. This differential observable is only sensitive to the Br\(\gamma\) emitting region and remove all contributions from the star and disk, assuming no photospheric absorption is present in the line region, which is adequate for cooler T Tauri stars. The pure line visibility \(V_{line}(\lambda)\) is computed as: \[V_{Line}(\lambda)=\frac{F_{L/C}(\lambda)V_{Tor}(\lambda)-V_{Cont}}{F_{L/C}( \lambda)-1}, \tag{2}\] \(F_{L/C}\) denotes the total line-to-continuum flux ratio as taken from the normalised spectrum (Fig. 2), \(V_{Cont}\) is the visibility computed in the continuum, and \(V_{Tor}\) is the total complex quantities measured by GRAVITY. In order to enhance the signal across the Br\(\gamma\) line, we combine the 11 files available for the first epoch in 2021. The u-v plane rotation occurring during the observational sequence remained relatively small (\(<10\) degrees) and thus the files can be combined without degrading the scientific signal significantly. Unfortunately, the data quality in 2022 were not sufficient to reach the required signal-to-noise ratio to detect the differential signal. 
Figure 2 presents the Br\(\gamma\) emission line profile, the total differential visibility and the extracted pure-line visibility. A significant signal is only detected for the most extended baseline (UT1-UT4, 126.16 m) with a 3-\(\sigma\) detection in the visibility amplitude. We did not detect any significant differential phase signal in either epoch or on any baseline. The pure line visibilities across the Br\(\gamma\) line profile range from 0.90 to 0.93, indicating a more compact emitting region than the inner disk seen in the continuum. In order to estimate the characteristic size of the Br\(\gamma\) emitting region, we averaged the five pure line visibilities over the spectral channels and derived a unique visibility measurement of \(V_{\mathrm{Br}\gamma}=0.92\pm 0.03\). Based on a simple geometric Gaussian disk model (Berger and Segransan, 2007), we extracted the half-flux radius (or half width at half maximum, HWHM) corresponding to \(V_{\mathrm{Br}\gamma}\). Figure 3 presents the visibility curve of a 2-d Gaussian model compared to the extracted pure line visibility. The visibility uncertainty of 0.03 is directly reported on the visibility curve model (blue shaded area), which yields asymmetric errors on the half-flux radius estimate. We thus derive a Br\(\gamma\) emission region radius of \(R_{\mathrm{Br}\gamma}=0.28^{+0.05}_{-0.06}\) mas, which corresponds to \(0.045^{+0.008}_{-0.009}\) au at the distance of CI Tau, or \(4.8^{+0.8}_{-1.0}\)\(R_{\star}\) for a stellar radius of 2 \(R_{\odot}\). The Br\(\gamma\) emitting region is thus significantly more compact than the continuum disk radius. \begin{table} \begin{tabular}{l l l} \hline \hline Parameters & 2021 & 2022 \\ \(F_{d}\) [\%] & \(36\pm 2\) & \(35\pm 5\) \\ \(F_{h}\) [\%] & \(9.2\pm 0.6\) & \(8.3\pm 0.2\) \\ \(i\) [\({}^{\circ}\)] & \(71\pm 1\) & \(\geq 70\) \\ \(PA\) [\({}^{\circ}\)] & \(148\pm 1\) & \(140^{+16}_{-12}\) \\ \(c_{1}\) & \(0.94^{+0.04}_{-0.08}\) & - \\ \(s_{1}\) & \(-0.75^{+0.09}_{-0.12}\) & - \\ \(w\) [\%] & \(17^{+11}_{-6}\) & unresolved \\ \(a_{r}\) [mas] & \(1.25\pm 0.13\) & \(0.81\pm 0.13\) \\ \(a_{r}\) [au]1 & \(0.20\pm 0.02\) & \(0.13\pm 0.02\) \\ \(a_{r}\) [\(\mathrm{R_{\star}}\)]2 & \(21\pm 2\) & \(14\pm 2\) \\ \(\chi^{2}_{r}\) & 1.56 & 0.87 \\ \hline \end{tabular} \end{table} Table 2: Best-fit parameters of the K-band continuum VLTI/GRAVITY data of CI Tau obtained in 2021 and 2022 with 1\(\sigma\) error bars. Figure 2: Br\(\gamma\) line observables. **Top:** The normalized spectral line profile averaged over the four telescopes with GRAVITY. **Bottom:** Differential visibilities from the UT1-UT4 baseline of the CI Tau observation in 2021. The small blue dots and error bars represent the total visibility. The larger coloured dots indicate the pure line visibilities after the subtraction of the continuum contribution (see Eq. 2). The continuum estimate and the associated uncertainty are shown as red dashed lines. The Gaussian model used to fit the total visibility is represented as a blue line. ### Mass-accretion rate and truncation radius To estimate the instantaneous mass-accretion rate at the time of GRAVITY observations, we computed the Br\(\gamma\) line luminosity and used the line-to-accretion luminosity relations from Alcala et al. (2017). We measured the equivalent width of the Br\(\gamma\) line on the GRAVITY spectrum, EW\({}_{\rm Br\gamma}\), and estimated the extinction-corrected nearby continuum flux from the 2MASS K-band magnitude (Skrutskie et al., 2006).
With EW\({}_{\rm Br\gamma}=7.9\pm 0.4\) Å and a continuum flux of \(3.3\times 10^{-13}\) W m\({}^{-2}\) \(\mu\)m\({}^{-1}\), we derive a line luminosity of \((2.07\pm 0.10)\times 10^{-4}\)\(L_{\odot}\) at \(160.3\pm 0.4\) pc. The accretion luminosity can then be derived from the empirical relationship (Alcala et al., 2017): \[\log\left(\frac{L_{acc}}{L_{\odot}}\right)=a\log\left(\frac{L_{line}}{L_{\odot}}\right)+b, \tag{3}\] with \(a=1.19\pm 0.10\) and \(b=4.02\pm 0.51\). Finally, the accretion luminosity can be converted into an instantaneous mass-accretion rate using the following relation (Hartmann et al., 1998): \[\dot{M}_{acc}=\left(1-\frac{R_{\star}}{R_{\rm Br\gamma}}\right)^{-1}L_{acc}\frac{R_{\star}}{GM_{\star}}, \tag{4}\] which assumes that the energy released by the infalling material confined within the magnetosphere is entirely converted into accretion luminosity. Adopting the GRAVITY size of the Br\(\gamma\) emitting region \(R_{\rm Br\gamma}=4.8\,R_{\star}\) for the magnetosphere radius, we derive a mass accretion rate of \(\dot{M}_{acc}=3.9^{+12.8}_{-3.0}\times 10^{-8}\)\(M_{\odot}\) yr\({}^{-1}\) (\(\log(\dot{M}_{acc})=-7.4\pm 0.6\)). The size of the magnetospheric accretion region, characterised by the magnetic truncation radius \(R_{\rm tr}\), is driven by the strength of the magnetic field and the mass accretion rate (Hartmann et al., 2016): \[\frac{R_{\rm tr}}{R_{\odot}}=12.6\,\frac{B_{\star}^{4/7}R_{2}^{12/7}}{M_{0.5}^{1/7}\dot{M}_{-8}^{2/7}}, \tag{5}\] where \(B_{\star}\) is the surface field strength of the dipolar magnetic field at the stellar equator in kG, \(R_{2}\) is the stellar radius in units of 2 \(R_{\odot}\), \(M_{0.5}\) is the stellar mass in units of 0.5 M\({}_{\odot}\) and \(\dot{M}_{-8}\) is the mass-accretion rate in units of \(10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\). Using the stellar parameters of CI Tau reported by Donati et al. (2020), for a magnetic field of 0.85 kG4, a stellar radius of 2.0\(\pm\)0.3 R\({}_{\odot}\), a mass of 0.90\(\pm\)0.02 M\({}_{\odot}\) (Simon et al., 2019), and the mass-accretion rate derived from Eq. 4, we compute a truncation radius \(R_{\rm tr}=3.6\pm 1.5\,R_{\star}\), in agreement with the interferometric half-flux radius derived above for the Br\(\gamma\) line emitting region. We therefore conclude that most of the Br\(\gamma\) emission originates from the magnetospheric accretion region. Footnote 4: We use the polar magnetic field value of 1.7 kG divided by two to retrieve the value at the equator. ## 4 Discussion The high spatial and spectral resolution of GRAVITY allows us to detect and characterise the inner region of the CI Tau system with unprecedented precision. Figure 4 illustrates the characteristic sizes of the system. In this section, we discuss how the GRAVITY results shed light on the global structure of the inner system. ### The inner dust cavity The continuum analysis of the two epochs of observation yields an inner dusty rim located between 14 and 21 \(R_{\star}\) from the central star. This direct measurement appears to be significantly larger than the estimate of the dust sublimation radius. The K-band emission of T Tauri stars is supposed to be dominated by the directly irradiated front of the dusty disk rim (Dullemond and Monnier, 2010). For a given stellar luminosity, we assess the radius at which dust grains in thermal equilibrium remain below the sublimation temperature of 1500 K for silicates.
We used the relation from Monnier and Millan-Gabet (2002) to determine the sublimation radius (\(R_{\rm sub}\)) in au: \[R_{\rm sub}=1.1\,\sqrt{Q_{R}}\,\sqrt{\frac{L_{\star}}{1000\,L_{\odot}}}\left(\frac{1500}{T_{\rm sub}}\right)^{2}, \tag{6}\] with \(Q_{R}\) the absorption efficiency ratio of the dust between incident and reemitted field, and \(T_{\rm sub}\) the sublimation temperature. Monnier and Millan-Gabet (2002) assess that the absorption efficiency \(Q_{R}\) depends on the dust properties and the effective temperature of the central star. For an effective temperature of 4200 K (Donati et al., 2020) and a typical grain size distribution ranging from 0.03 to 1 \(\mu\)m, \(Q_{R}\) ranges from 1 to 4. For a stellar luminosity of 1.26 L\({}_{\odot}\) (Donati et al., 2020), the 1500 K sublimation radius ranges from 0.04 to 0.08 au, i.e., 4.3 to 8.6 \(R_{\star}\). The inner disk rim location we derive is therefore at least twice as far as the sublimation radius (see Fig. 4), when considering a sublimation temperature of 1500 K. One potential explanation for an extended inner dust cavity is the presence of a hypothetical close-in planet. CI Tau is so far the only Class II pre-main sequence star claimed to host a hot Jupiter, CI Tau b, with a mass of \(\sim\)11.3 Jupiter masses (Johns-Krull et al., 2016; Flagg et al., 2019). If such a planet exists, it could significantly affect the inner region of the disk. Muley and Dong (2021) demonstrated that a massive candidate planet orbiting at 0.08 au leads to the formation of an inner gap ranging from 0.1 to 0.2 au depending on the eccentricity of the planet, fully compatible with our observation. Figure 3: Comparison between the observed Br\(\gamma\) visibility (orange dot) and a visibility curve predicted for a Gaussian disk model of the emitting region (blue curve). The blue shaded area depicts the uncertainty on the size relative to the visibility error. ### The inner and outer disk misalignment Young stellar objects such as the CI Tau system harbour a large outer disk structure. Clarke et al. (2018) retrieve the geometrical properties of CI Tau's outer disk on a scale from 1 to 100 au using the Atacama Large Millimeter/Submillimeter Array (ALMA). The outer disk consists of multiple rings seen at an inclination of \(i_{\rm out}=50^{\circ}\) and a position angle of \(PA_{\rm out}\simeq 11^{\circ}\) from North to East. In comparison, the inner disk orientation we derive from GRAVITY features \(i_{\rm in}\simeq 70^{\circ}\) and \(PA_{\rm in}=148^{\circ}\). The two disks thus appear significantly misaligned. Such a misalignment has been recently reported on a few targets among a large sample of YSO (Bohn et al., 2022). Following Min et al. (2017); Bohn et al. (2022), we can thus measure the misalignment angle between the inner and outer disks as: \[\Delta\theta(i_{\rm in},PA_{\rm in},i_{\rm out},PA_{\rm out})=\arccos\left[\sin(i_{\rm in})\sin(i_{\rm out})\cos(PA_{\rm in}-PA_{\rm out})+\cos(i_{\rm in})\cos(i_{\rm out})\right] \tag{7}\] The misalignment angle \(\Delta\theta\) corresponds to the angle between the two normal vectors defined by the planes of the inner and outer disk. Additionally, we do not know which side of the inner disk is closest to the observer. Two misalignment angles can therefore be calculated, namely \(\Delta\theta_{1}\sim 109^{\circ}\) or \(\Delta\theta_{2}\sim 42^{\circ}\) for CI Tau. In both cases, the inner and outer disks appear to be significantly misaligned. 
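For reference, Eq. (7) can be evaluated directly. Below is a minimal sketch (Python) using the 2021 inner-disk orientation of Table 2 and the ALMA outer-disk values quoted above; it reproduces the two solutions:

```python
import numpy as np

def misalignment(i_in, pa_in, i_out, pa_out):
    """Angle between the normal vectors of the inner and outer disks (Eq. 7), in degrees."""
    i1, p1, i2, p2 = np.radians([i_in, pa_in, i_out, pa_out])
    c = np.sin(i1) * np.sin(i2) * np.cos(p1 - p2) + np.cos(i1) * np.cos(i2)
    return np.degrees(np.arccos(c))

# GRAVITY inner disk (2021 epoch) and ALMA outer disk orientations
i_in, pa_in = 71.0, 148.0
i_out, pa_out = 50.0, 11.0

# The near side of the inner disk is unknown: flipping the sign of the
# inner inclination gives the second solution.
print(misalignment(+i_in, pa_in, i_out, pa_out))  # ~109 deg (Delta theta_1)
print(misalignment(-i_in, pa_in, i_out, pa_out))  # ~42 deg  (Delta theta_2)
```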
While such a significant misalignment may induce a shadow projected onto the outer disk (Bohn et al., 2022), such a shadow is not detected in scattered light images of CI Tau's disk (Garufi et al., 2022). Various physical processes can induce a substantial misalignment between the inner and the outer disks. Gravitational torques caused by the presence of low-mass (Arzamasskiy et al., 2018) or high-mass (Xiang-Gruess and Papaloizou, 2013) planets can force the precession of the inner disk and physically disconnect it from the outer disk. For the massive case (\(>1~{}M_{\rm Jup}\)), if the companion's angular momentum is significantly greater than that of the disk, the inner disk can develop a warped inner structure with an inclination of up to \(\simeq 20^{\circ}\) relative to the outer part. Recent 3-d simulations reinforce this assumption for planets massive enough to carve gaps (Nealon et al., 2018). Inner-outer disk misalignments are not only observed as a consequence of massive companions. Differential angular momentum across the disk can induce a tilt between the spin vectors of the various components (star, inner and outer disks) (Epstein-Martin et al., 2022). The magnetic star-disk interaction can also warp the close-in region and be responsible for an inclined inner disk, up to a \(40^{\circ}\) inclination with respect to the stellar-spin axis (Romanova et al., 2021). Finally, an external infall of gaseous material could affect the outer disk region and induce a misalignment (Kuffmeier et al., 2021). A detailed review of the misalignment processes and shadowing effects is provided in Benisty et al. (2022). ### The magnetospheric accretion region From the exquisite precision of the differential visibilities achievable with GRAVITY, we are able to spatially resolve the characteristic size of the Br\(\gamma\) line emitting region. With a half-flux radius of \(0.045^{+0.008}_{-0.009}\) au, a large fraction of the Br\(\gamma\) line emission appears to originate from a region extending over \(4.8^{+0.8}_{-1.0}~{}R_{\star}\) around the star. A quantitative comparison between the Br\(\gamma\) half-flux radius and the co-rotation radius can be used as a simple criterion to determine the physical origin of the observed Br\(\gamma\) line emission. If the Br\(\gamma\) emission appears as a compact source, smaller than the co-rotation radius, the origin is consistent with the magnetospheric accretion scenario. In contrast, if the Br\(\gamma\) emission is significantly larger than the co-rotation radius, other mechanisms such as disk winds or outflows are likely to contribute to the observed Br\(\gamma\) profile (Gravity Collaboration et al., 2020). The co-rotation radius is defined as the one where the angular velocity of the rotating disk matches the angular velocity of the star: \[R_{\rm cor}=(GM_{\star})^{1/3}(P_{\rm rot}/2\pi)^{2/3} \tag{8}\] For a rotational period \(P_{\rm rot}=9.00\pm 0.05\) days (Donati et al., 2020) and a mass of \(0.90\pm 0.02~{}\rm M_{\odot}\) (Simon et al., 2019), we compute a co-rotation radius \(R_{\rm cor}=8.8\pm 1.3~{}R_{\star}\). We find here that the Br\(\gamma\) half-flux radius is significantly smaller than the co-rotation radius, which argues in favour of most of the line flux arising from the magnetospheric accretion process. Furthermore, based on spectro-polarimetric magnetic field measurements, Donati et al. (2020) estimated a range of values between 3.7 and 6.3 \(R_{\star}\) for the magnetospheric truncation radius of CI Tau, which is consistent with our GRAVITY measurement of \(3.6\pm 1.5\)\(R_{\star}\). We caution, however, that the interferometric Br\(\gamma\) half-flux radius derived from a 2-d Gaussian model may underestimate the full extent of the magnetospheric accretion region (Tessore et al., 2023). From our truncation radius estimate (Sect. 3.3), we derive a ratio of \(R_{\rm tr}/R_{\rm cor}=0.41\pm 0.18\), and the system is thus likely to be in an unstable accretion regime (\(R_{\rm tr}/R_{\rm cor}\lesssim 0.7\), Blinova et al., 2016). In magnetic star-disk interactions, unstable accretion is the outcome of an interchange instability where the gas penetrates the stellar magnetosphere through equatorial tongues (Romanova et al., 2008) in addition to the stable funnel flows (i.e., stable accretion). Such accretion tongues are expected to deposit matter at random places on the stellar surface, usually close to the stellar equator, a feature that can possibly explain the stochastic photometric behaviour of the system known as a burster (Roggero et al., 2021; Cody et al., 2022). Figure 4: A schematic view of the innermost region of the CI Tau system. The sizes and their uncertainties derived from the GRAVITY observations are represented for the Br\(\gamma\) emitting region (_blue circle_) and for the K-band continuum (_orange circle_). The purple circle (and shaded area) depicts the truncation radius and its uncertainty derived from the GRAVITY Br\(\gamma\) emission line (Sect. 3.3). Additional characteristic scales associated with YSO are depicted: the sublimation radius (\(R_{\rm sub}\), _pink line_) for a range of absorption \(Q_{R}\) from 1 to 4 and the co-rotation radius (\(R_{\rm cor}\), _green line_). The stable-unstable magnetospheric accretion regimes are indicated with a boundary around 70% of \(R_{\rm cor}\) (Blinova et al., 2016). ## 5 Conclusion We have used the VLTI/GRAVITY instrument to probe the innermost scales of the young system CI Tau. Investigating the K-band spectral domain at high spectral resolution allows us to study the system in the continuum, to probe the dust emission, and within the Br\(\gamma\) line, to trace the gas emission simultaneously. Below, we summarise our major results. (i) From the continuum analysis, we report the detection of a highly inclined resolved inner disk, whose inner edge is located at a distance of \(21\pm 2\)\(R_{\star}\) from the central star. The measured inner rim position appears to be significantly farther than the theoretical sublimation radius (4-8 \(R_{\star}\) for a typical sublimation temperature of silicates of 1500 K), a result which might support the presence of a close-in massive planetary companion. (ii) The inner disk exhibits a strong misalignment relative to the outer disk seen at submillimeter wavelengths with ALMA. Such a misalignment could be induced by magnetic warping or by gravitational torques induced by a close-in massive companion. (iii) We constrained the half-flux radius of the Br\(\gamma\) emitting region to be at a distance of 4.8 \(R_{\star}\) from the central star, which is consistent with the magnetospheric accretion paradigm. The Br\(\gamma\) size is significantly smaller than the co-rotation radius, which points to an unstable accretion regime, presumably at the origin of the stochastic photometric variability of the system. 
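As a numerical cross-check of points (i)-(iii), the quantitative chain of Sects. 3.2-3.3 (Eqs. 3-5 and 8) can be reproduced in a few lines. This is a sketch with approximate cgs constants, not part of the original analysis; the printed values agree with the text to rounding:

```python
import numpy as np

# Constants (cgs) and CI Tau parameters used in the text
Lsun, Rsun, Msun, G = 3.828e33, 6.957e10, 1.989e33, 6.674e-8
yr = 3.156e7
Rstar, Mstar = 2.0 * Rsun, 0.90 * Msun

# Eq. 3: Br-gamma line luminosity -> accretion luminosity
L_line = 2.07e-4                                          # [Lsun]
L_acc = 10 ** (1.19 * np.log10(L_line) + 4.02) * Lsun     # [erg/s]

# Eq. 4: accretion luminosity -> mass-accretion rate, with R_Brg = 4.8 R*
R_brg = 4.8 * Rstar
Mdot = L_acc * Rstar / (G * Mstar) / (1.0 - Rstar / R_brg)  # [g/s]
Mdot_m8 = Mdot / (Msun / yr) / 1e-8                       # in 1e-8 Msun/yr

# Eq. 5: truncation radius (B in kG, R in 2 Rsun, M in 0.5 Msun)
R_tr = 12.6 * 0.85**(4/7) / (1.8**(1/7) * Mdot_m8**(2/7))  # [Rsun]

# Eq. 8: co-rotation radius for P_rot = 9.00 d
P = 9.00 * 86400.0
R_cor = (G * Mstar)**(1/3) * (P / (2 * np.pi))**(2/3)      # [cm]

# ~3.9 (1e-8 Msun/yr), ~3.6 R*, ~8.8 R*  ->  R_tr/R_cor ~ 0.41
print(round(Mdot_m8, 1), round(R_tr * Rsun / Rstar, 1), round(R_cor / Rstar, 1))
```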
The interferometric precision achievable today with GRAVITY at the VLTI allows us to characterise the inner scales of the CI Tau system with an unprecedented sensitivity. Given the high variability of this system, a temporal follow-up represents the most promising opportunity to investigate the dynamics of the star-disk interaction process, and to ascertain the origin of the Br\(\gamma\) emission. This work represents a first step to understand the star-planets-disk interactions occurring on sub-au scales in young stellar objects. ###### Acknowledgements. We acknowledge support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 742095; _SPIDI_: Star-Planets-Inner Disk-Interactions, [https://www.spidi-eu.org](https://www.spidi-eu.org)). We thank A. Sousa for providing the infrared veiling measurements. We thank M. Benisty for the fruitful discussion about the disk misalignment and for confirming the misalignment value. We thank A. Wojtczak for the pure line derivation and the benchmark of our algorithms. We thank in particular the SPIDI crew for providing ideas and triggering discussions on the accretion phenomenon (B. Tessore, R. Manick). A.C.G. has been supported by PRIN-INAF MAIN-STREAM 2017 "Protoplanetary disks seen through the eyes of new-generation instruments" and PRIN-INAF 2019 "Spectroscopically tracing the disk dispersal evolution (STRADE)". This work has made use of data from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). This research made use of the NASA Astrophysics Data System; SciPy (Virtanen et al., 2020); NumPy (Harris et al., 2020); matplotlib (Hunter, 2007); and Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2018). This research has made use of the Jean-Marie Mariotti Center LITpro, Aspro2 and SearchCal services, co-developed by CRAL, IPAG and LAGRANGE. Footnote 3: [http://www.jmmc.fr/searchcal](http://www.jmmc.fr/searchcal)
2307.10146
Altermagnetic surface states: towards the observation and utilization of altermagnetism in thin films, interfaces and topological materials
The altermagnetism influences the electronic states, allowing the presence of non-relativistic spin-splittings. Since altermagnetic spin-splitting is present along specific k-paths of the 3D Brillouin zone, we expect that the altermagnetic surface states will be present on specific surface orientations. We unveil the properties of the altermagnetic surface states considering three representative space groups: tetragonal, orthorhombic and hexagonal. We calculate the 2D projected Brillouin zone from the 3D Brillouin zone. We study the surfaces with their respective 2D Brillouin zones, establishing where the spin-splittings with opposite sign merge, annihilating the altermagnetic properties, and on which surfaces the altermagnetism is preserved. Looking at the three principal surface orientations, we find that for several cases two surfaces are blind to the altermagnetism, while the altermagnetism survives for one surface orientation. Which surface preserves the altermagnetism depends also on the magnetic order. We show that an electric field orthogonal to the blind surface can activate the altermagnetism. Our results predict which surfaces to cleave in order to preserve altermagnetism in surfaces or interfaces, and this paves the way to observe non-relativistic altermagnetic spin-splitting in thin films via spin-resolved ARPES and to interface the altermagnetism with other collective modes. We open future perspectives for the study of altermagnetic effects on the trivial and topological surface states.
Raghottam M Sattigeri, Giuseppe Cuono, Carmine Autieri
2023-07-19T17:20:47Z
http://arxiv.org/abs/2307.10146v2
# Altermagnetic surface states: towards the observation and utilization of altermagnetism in thin films, interfaces and topological materials ###### Abstract The altermagnetism influences the electronic states, allowing the presence of non-relativistic spin-splittings. Since altermagnetic spin-splitting is present along specific \(k\)-paths of the 3D Brillouin zone, we expect that the altermagnetic surface states will be present on specific surface orientations. We unveil the properties of the altermagnetic surface states considering three representative space groups: tetragonal, orthorhombic and hexagonal. We calculate the 2D projected Brillouin zone from the 3D Brillouin zone. We study the surfaces with their respective 2D Brillouin zones, establishing where the spin-splittings with opposite sign merge, annihilating the altermagnetic properties, and on which surfaces the altermagnetism is preserved. Looking at the three principal surface orientations, we find that for several cases two surfaces are blind to the altermagnetism, while the altermagnetism survives for one surface orientation. Which surface preserves the altermagnetism depends also on the magnetic order. We qualitatively show that an electric field orthogonal to the blind surface can activate the altermagnetism. Our results predict which surfaces to cleave in order to preserve altermagnetism in surfaces or interfaces, and this paves the way to observe non-relativistic altermagnetic spin-splitting in thin films via spin-resolved ARPES and to interface the altermagnetism with other collective modes. We open future perspectives for the study of altermagnetic effects on the trivial and topological surface states. ## I Introduction Interfaces and surfaces have been the focus of intense research in the past few decades, both from the point of view of pure science and for the creation of new devices [1; 2; 3; 4; 5; 6]. This increased interest in interfaces and surfaces is due to the discovery of new properties and new phases that do not exist in the bulk, and to technological progress that allows handling and manipulating materials at low dimensions and small thicknesses [7; 8; 9]. Very recently, the spin-splitting in the electronic bands, typical of ferromagnets, was found in commensurate compounds with crystal-symmetry compensated magnetic order. [10] This magnetic phase was called altermagnetism (AM) [10; 11; 12; 13] or antiferromagnetism with non-interconvertible spin-structure motif pair [14]. The presence of AM requires the electronic charge of the spin-up (down) atoms to be mapped onto the charge of the spin-down (up) atoms without translations, inversion, or their combinations, but only with rototranslations or mirrors [15; 13]. Therefore, it is observable only for certain space groups and, more precisely, in the magnetic space groups of type-I and type-III [16]. If the system is metallic, AM systems can exhibit a spontaneous anomalous Hall effect (AHE) even in the absence of a net magnetic moment, due to the presence of a non-relativistic spin-splitting in momentum space [17; 18]. The direction of the Hall vector is governed by the Neel vector, defined as the difference between the magnetization vectors on the two different antiferromagnetic sublattices. [19; 20; 21; 22] Regarding technological applications, altermagnets may assume a leading role in realizing concepts in spincaloritronics [23], THz emissions in spintronic systems [24], THz spin current dynamics in the g-wave altermagnet hematite [25] and an efficient spin-charge conversion [26]. Furthermore, they can be used in Josephson junctions [27] and to generate neutral currents for spintronics [28]. However, there is scope for further exploration to identify techniques to enhance domain structure control in altermagnetic systems as far as spintronic applications are concerned. [29] As a solution to control altermagnetic compounds, circularly polarized light can be employed, which exploits the _unique_ magneto-optical responses for efficient detection, induction and switching applications. [30] Figure 1: Brillouin zone with high-symmetry points for the tetragonal RuO\({}_{2}\). With subscripts 1 and 2, we indicate the two points in the \(k\)-space that have opposite non-relativistic spin-splitting, which are S\({}_{1}\) and S\({}_{2}\) in this example. In magenta, we highlight the \(k\)-path S\({}_{1}\)-\(\Gamma\)-S\({}_{2}\) where the altermagnetic spin-splitting is maximized. We project the bulk Brillouin zone on the principal surfaces (100), (010) and (001). The projected high-symmetry points have an overline. Given the geometrical position of the \(k\)-points with opposite non-relativistic spin-splitting, the altermagnetic surface states survive on the (001) surface (colored in green), while the other two surfaces (colored in yellow) are blind to AM. The surfaces blind to AM are simple antiferromagnetic surfaces with double degeneracy. The projected high-symmetry points carry a subscript if the band structure connecting them to \(\overline{\Gamma}\) still preserves the altermagnetic properties; otherwise, they show no subscript. In this work, our objective is to have a general understanding of the evolution of the AM properties going toward surfaces and interfaces. Since AM strongly depends on the magnetic space group, we expect a non-trivial behavior on the surfaces, where the dimensionality is reduced and the symmetries are affected. A hint that the dimensionality is relevant for the altermagnetic properties was obtained from the orbital-selective AM in the quasi-two-dimensional Ca\({}_{2}\)RuO\({}_{4}\) system, where the three-dimensional d\({}_{xz}\)/d\({}_{yz}\) bands show AM spin-splitting while the two-dimensional d\({}_{xy}\) bands do not [31]. In more detail, in this paper we describe the evolution of AM on the surfaces and interfaces by projecting the 3D Brillouin zone on the 2D Brillouin zone. A representative example is reproduced in Fig. 1. Considering the principal surface orientations, the AM survives on one surface and gets annihilated on the other two orientations due to the merging of the \(k\)-points with opposite altermagnetic spin-splitting. Which surface is altermagnetic depends on the magnetic space group, the magnetic order and the details of the crystal structure. Here, we calculate the surface states for three of the most common space groups in which AM is present: the orthorhombic space group no. 62, the hexagonal space group no. 194 and the tetragonal space group no. 136. Additionally, the space group no. 62 hosts nonsymmorphic symmetries, and it was recently shown that a large AHE can be produced in this space group in the case of metals. [32; 21] The paper is organized as follows: the second section is dedicated to the results and discussion; three subsections are dedicated to the results for the three different space groups and a fourth subsection is devoted to the discussion. Finally, we draw conclusions with future prospects. 
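To make the merging criterion of Fig. 1 concrete, the following minimal sketch projects a pair of opposite-splitting \(k\)-points onto the three principal surfaces. The reduced coordinates are illustrative placeholders (not values from our calculations), the axes are treated as orthogonal, and reciprocal-lattice translations are neglected:

```python
import numpy as np

# Illustrative reduced k-coordinates: points along [110]-type directions carry
# one sign of the spin-splitting, points along [1-10]-type directions the other.
s, u = 0.3, 0.3
same = [( s,  s,  u), (-s, -s, -u)]          # splitting sign of S1
opposite = [( s, -s,  u), (-s,  s,  u),
            ( s, -s, -u), (-s,  s, -u)]      # splitting sign of S2

def blind(normal_axis, same, opposite):
    """A surface is blind to AM if every projected point of one set
    coincides with a projected point of the opposite-splitting set."""
    keep = [i for i in range(3) if i != normal_axis]
    proj = lambda k: tuple(np.round([k[i] for i in keep], 6))
    return {proj(k) for k in same} <= {proj(k) for k in opposite}

for axis, name in [(0, "(100)"), (1, "(010)"), (2, "(001)")]:
    print(name, "blind to AM:", blind(axis, same, opposite))
# -> (100) and (010) blind, (001) altermagnetic (cf. Fig. 1)
```

Which projections coincide depends on the actual positions of the opposite-splitting points, which is why the altermagnetic surface changes with the space group and the magnetic order.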
## II Results and discussion Before studying the altermagnetic surface states, we need to divide the surfaces of the antiferromagnets into two categories depending on the interplay between the magnetic order and the surface. We have the spin-polarized surface presented in the schematic Fig. 2(a) and the non spin-polarized surface shown in the schematic Fig. 2(b). The spin-polarized surfaces cannot be used to investigate the altermagnetism, because the spin-polarization of the surface would be confused with the spin-polarization from the altermagnetism. Therefore, experimental studies should be focused on the non spin-polarized surfaces. We will see that the (001) surfaces of MnTe and A-type LaMnO\({}_{3}\) are spin-polarized and therefore are not suitable for the experimental investigation of altermagnetism. Figure 2: (a) Schematic representation of a spin-polarized surface, where the surface layer has a net magnetic moment. (b) Schematic representation of a non spin-polarized surface, where the surface layer has zero total magnetic moment. ### Orthorhombic space group Pbnm(62): the case of LaMnO\({}_{3}\) In this first subsection, we investigate the altermagnetic surface states of the space group no. 62. As a testbed compound for this space group, we choose LaMnO\({}_{3}\), which belongs to the large family of perovskites [33]. The experimental magnetic ground state of LaMnO\({}_{3}\) is the A-type AFM. Therefore, we consider LaMnO\({}_{3}\) within this magnetic order and calculate the altermagnetic surface states without relativistic effects (see Supporting Information (SI) section-I for the computational details). We plot the symmetries of the Brillouin zone in Fig. 3(a). With subscripts 1 and 2, we indicate the two points in the \(k\)-space that have opposite non-relativistic spin-splitting, namely the couples of \(k\)-points R\({}_{1}\)/R\({}_{2}\) and T\({}_{1}\)/T\({}_{2}\) in this example. The positions of the R\({}_{1}\)/R\({}_{2}\) and T\({}_{1}\)/T\({}_{2}\) points are difficult to predict a priori, but they can be easily obtained within first-principles calculations of the band structure from \(\Gamma\) towards all the equivalent points. The positions of the R\({}_{1}\) and R\({}_{2}\) points strongly depend on the space group, the magnetic order [31] and the details of the crystal structure, such as the Wyckoff positions of the magnetic atoms [21]. To study the altermagnetic surface states on the principal surface orientations, we project the positions of R\({}_{1}\), R\({}_{2}\), T\({}_{1}\) and T\({}_{2}\) onto those surfaces. In general, when the \(k\)-points with opposite non-relativistic spin-splitting merge on the projected surface Brillouin zone, the AM gets annihilated, as shown in Fig. 1. From the projection of the 3D Brillouin zone on the 2D surface Brillouin zone of the A-type phase of LaMnO\({}_{3}\), we obtain that the (010) and (001) surfaces are blind to the AM and host double-degenerate antiferromagnetic surface states. On the (100) projected Brillouin zone, R\({}_{1}\) does not merge with R\({}_{2}\). On the contrary, the \(k\)-paths R\({}_{1}\)-\(\Gamma\)-R\({}_{2}\) and T\({}_{1}\)-\(\Gamma\)-T\({}_{2}\) merge in the surface projected \(\overline{\text{T}}_{1}\)-\(\overline{\text{T}}_{2}\)\(k\)-path, as shown in Fig. 3(a). Therefore, we study the (100) surface orientation, where we expect to find the altermagnetic surface states. The bulk band structure of LaMnO\({}_{3}\) along the path T\({}_{1}\)-\(\Gamma\)-T\({}_{2}\) is represented in Fig. 
3(b). The altermagnetic spin-splitting halfway between \(\Gamma\) and the T points is of the order of 30 meV at the valence band maxima. The band structure of the spin-up channel along \(\Gamma\)-T\({}_{1}\) (or T\({}_{2}\)) is the same as the band structure of the spin-down channel along \(\Gamma\)-T\({}_{2}\) (or T\({}_{1}\)). The altermagnetic surface states for spin-up and spin-down are represented in Fig. 3(c,d), respectively. The altermagnetic properties are entirely preserved on the (100) surfaces, and even the size of the non-relativistic spin-splitting on the surface is unchanged with respect to the bulk spin-splitting. Indeed, analogously to the bulk, the (100) surface band structure of the spin-up channel along \(\overline{\Gamma}\)-\(\overline{\text{T}}_{1}\) (or \(\overline{\text{T}}_{2}\)) is the same as the band structure of the spin-down channel along \(\overline{\Gamma}\)-\(\overline{\text{T}}_{2}\) (or \(\overline{\text{T}}_{1}\)). The symmetries of the Brillouin zone depend on the magnetic order; a different magnetic order will change the positions of R\({}_{1}\) and R\({}_{2}\), changing which surfaces are blind to the AM [31]. The (001) surface of the A-type magnetic order is spin-polarized. The Mn atoms with opposite spins are stacked along the z-axis and are connected by a vector (0,0,c/2). When we create the slab with the tight-binding model, we have the (001) orientation with two different terminations. In this case, the Mn\({}_{\uparrow}\) and Mn\({}_{\downarrow}\) atoms are equivalent with respect to the surfaces (100) and (010). More generally, if the vector normal to the surface is orthogonal to the vector connecting spin-up and spin-down atoms, then spin-up and spin-down are equivalent. In the case of the (001) surface, the slab is composed of alternated MnO\({}_{2}\) and LaO layers, with spins alternating up and down for the A-type magnetic order. The slab will be composed of the following order of layers: Mn\({}_{\uparrow}\)O\({}_{2}\)/LaO/..../Mn\({}_{\downarrow}\)O\({}_{2}\)/LaO. Therefore, on the (001) surface the Mn\({}_{\uparrow}\) and Mn\({}_{\downarrow}\) atoms are inequivalent, since Mn\({}_{\uparrow}\) is on the surface and Mn\({}_{\downarrow}\) on the subsurface. Increasing the number of layers, this difference gets reduced but does not vanish, leaving the spin-up and spin-down band structures slightly inequivalent; this effect is completely unrelated to the AM. We define this case as surface uncompensated antiferromagnetism. This surface uncompensated antiferromagnetism is described in SI(section-III). ### Hexagonal space group P6\({}_{3}\)/mmc(194): the case of MnTe The structural ground state of bulk MnTe is the \(\alpha\)-phase with a hexagonal crystal structure [34; 35] that is magnetic below the Neel temperature \(T_{\text{N}}=310\) K [34]. MnTe is one of the prototype systems for AM [13; 15; 36; 37; 38] due to its large spin-splitting, the large AHE due to the intrinsic p-doping [18], and valence bands strongly sensitive to the orientations of the magnetic moments [39]. The symmetries of the Brillouin zone for hexagonal MnTe are shown in Fig. 4(a). With subscripts 1 and 2, we indicate the two points in the \(k\)-space that have opposite non-relativistic spin-splitting. We project the positions of the L\({}_{1}\) and L\({}_{2}\) points on the principal surface orientations in order to investigate the AM. 
At the surfaces (1\(\overline{1}\)0) and (001), the L\({}_{1}\) and L\({}_{2}\) points merge; therefore, AM is annihilated and these surfaces are blind to altermagnetism. On the contrary, the (110) surface states present AM because L\({}_{1}\) does not merge with L\({}_{2}\), as shown in Fig. 4(a). Also in this case, as for LaMnO\({}_{3}\), the (001) surface is spin-polarized, since the magnetic atoms with opposite spin are connected by a vector (0,0,c/2). The (001) surface shows surface uncompensated magnetism in the same fashion as for LaMnO\({}_{3}\) (see SI(section-III)). The bulk band structure of MnTe along the path L\({}_{1}\)-\(\Gamma\)-L\({}_{2}\), where the altermagnetic spin-splitting is present, is represented in Fig. 4(b). Figure 3: (a) Symmetries of the Brillouin zone for the orthorhombic LaMnO\({}_{3}\) with the position of R\({}_{1}\), R\({}_{2}\), T\({}_{1}\) and T\({}_{2}\) and their projections on the (100) surface orientation. In magenta, we highlight the \(k\)-paths T\({}_{1}\)-\(\Gamma\)-T\({}_{2}\) and R\({}_{1}\)-\(\Gamma\)-R\({}_{2}\) where the altermagnetic spin-splitting is maximized. (b) Bulk band structure along the \(k\)-path T\({}_{1}\)-\(\Gamma\)-T\({}_{2}\); in blue we represent the spin-up channel and in red the spin-down channel. Altermagnetic surface states on the (100) surface orientation for the (c) spin-up and (d) spin-down channel. The Fermi level is set to zero. In the surface band structure, the red color means large spectral weight while the blue color means zero spectral weight. The band structure of the spin-up channel along \(\Gamma\)-L\({}_{1}\) (or L\({}_{2}\)) is the same as the band structure of the spin-down channel along \(\Gamma\)-L\({}_{2}\) (or L\({}_{1}\)). The same properties of the bulk are preserved in the altermagnetic surface states on the (110) surface for the spin-up and spin-down channels, as shown in Fig. 4(c,d), respectively. Analogously to the bulk, the (110) surface band structure of the spin-up channel along \(\overline{\Gamma}\)-\(\overline{\text{L}}_{1}\) (or \(\overline{\text{L}}_{2}\)) is the same as the band structure of the spin-down channel along \(\overline{\Gamma}\)-\(\overline{\text{L}}_{2}\) (or \(\overline{\text{L}}_{1}\)). Several investigations were done on the interface between CrSb or MnTe (belonging to the same space group) and topological insulators in order to create axion insulators. [40; 41; 42] Since the (001) surface of MnTe is blind to AM, the AM is not present in the (001) heterostructures of topological insulators and MnTe that were previously studied [42]. ### Tetragonal space group P4\({}_{2}\)/mnm(136): the case of RuO\({}_{2}\) In this subsection, we calculate the altermagnetic surface states of the RuO\({}_{2}\) compound, which belongs to the space group no. 136 and is one of the most studied systems in the field of AM owing to its large spin-splitting, large Neel temperature and metallicity leading to the AHE [13; 15]. Here, we report the Brillouin zone and the electronic properties in Fig. 5. From the symmetries of the Brillouin zone in Fig. 5(a), we derive the presence of altermagnetic surface states for the (001) surface orientation. The altermagnetic surface states are absent on the two other principal surface orientations, since the \(k\)-points with opposite non-relativistic spin-splitting characters annihilate each other when projected on the surfaces, making the (100) and (010) surfaces blind to AM. 
The non-relativistic spin-splitting observed along the \(k\)-path S\({}_{1}\)-\(\Gamma\)-S\({}_{2}\) in the electronic bulk properties presented in Fig. 5(b) is in agreement with the literature. As compared to the LaMnO\({}_{3}\) and MnTe systems discussed earlier, the altermagnetic surface states of RuO\({}_{2}\) have different properties, as evident from Fig. 5(c-f). This nature of the altermagnetic surface states originates from the different terminations of the slab presented in Fig. 6, wherein the top surface is occupied with Ru\({}_{\downarrow}\) atoms, while the bottom surface is occupied with Ru\({}_{\uparrow}\) atoms. The sub-surfaces are reversed, with the top subsurface occupied with Ru\({}_{\uparrow}\) and the bottom subsurface with Ru\({}_{\downarrow}\). As a result, the surface states originating from the Ru\({}_{\downarrow}\) atoms on the top surface are altermagnetic partners of the surface states originating from the Ru\({}_{\uparrow}\) atoms on the bottom surface. Differently from RuO\({}_{2}\), in LaMnO\({}_{3}\) and MnTe the terminations of the altermagnetic surface contain both atoms with majority spin-up and spin-down. Figure 4: (a) Symmetries of the Brillouin zone for hexagonal MnTe with the position of the L\({}_{1}\) and L\({}_{2}\) points and their projections on the (110) surface orientation. In magenta, we highlight the \(k\)-path L\({}_{1}\)-\(\Gamma\)-L\({}_{2}\) where the altermagnetic spin-splitting is maximized. (b) Bulk band structure along the \(k\)-path L\({}_{1}\)-\(\Gamma\)-L\({}_{2}\); in blue we represent the spin-up channel and in red the spin-down channel. Altermagnetic surface states on the (110) surface orientation for the (c) spin-up and (d) spin-down channel. The Fermi level is set to zero. In the surface band structure, the red color means large spectral weight while the blue color means zero spectral weight. Figure 5: (a) Symmetries of the Brillouin zone for the tetragonal RuO\({}_{2}\) with the position of the S\({}_{1}\) and S\({}_{2}\) points and their projections on the (001) surface orientation. In magenta, we report the \(k\)-path S\({}_{1}\)-\(\Gamma\)-S\({}_{2}\) where the altermagnetic spin-splitting is maximized. (b) Bulk band structure along the \(k\)-path S\({}_{1}\)-\(\Gamma\)-S\({}_{2}\); in blue we represent the spin-up channel and in red the spin-down channel. Altermagnetic surface states of the (001) surface orientation for the (c,d) spin-up (top and bottom surfaces) and (e,f) spin-down channel (top and bottom surfaces). The AM is sensitive to the surface termination, as evident here from the top and bottom surfaces of the slab. The Fermi level is set to zero. In the surface band structure, the red color means large spectral weight while the blue color means zero spectral weight. ### Electric field control of the surface states Let us consider the surfaces without spin-polarization, as described in Fig. 2(b). We should clarify that the altermagnetic systems do not completely lose all the altermagnetic properties once we create a slab parallel to a blind surface. Indeed, an external electric field perpendicular to the slab can reactivate the altermagnetism. We can understand it from the qualitative behavior of the surface in the presence of an external electric field shown in Fig. 7. Let us consider the case of the LaMnO\({}_{3}\) Brillouin zone: an electric field will break the symmetry that makes the spin-splittings along \(\Gamma\)-R\({}_{1}\) and \(\Gamma\)-R\({}_{2}\) equal and opposite. 
The electric field along the [001] direction makes the R\({}_{1}\) and R\({}_{2}\) points at the bottom of the Brillouin zone inequivalent from their counterparts at the top of the Brillouin zone. Once we project on the (001) surface, we will therefore have two inequivalent points, which we define as R\({}_{3}\) and R\({}_{4}\), with different spin-splittings. Therefore, an electric field perpendicular to the blind surface will generate altermagnetism. This will be relevant for the electric control of antiferromagnetic spintronics in the case of insulators. The interplay between electric field and altermagnetism was investigated recently by Guo et al. [43]. ### Discussion We have proven that there is a strong surface-dependence of the altermagnetic properties. Therefore, every surface or interface study of AM should be performed on a specific surface or interface orientation to observe the effect. Following the recipe given in this work, we can derive the altermagnetic-active surfaces going also beyond the principal surface orientations. For instance, the AM is preserved on the (110) surface of RuO\({}_{2}\), as also shown in recent experiments [44]. In the case of alternate positions of the R\({}_{1}\) and R\({}_{2}\) points along all directions, as in the C-type order of the YVO\({}_{3}\) perovskite [31], the altermagnetic surface states will be observable on the (110), (101), (011) and (111) surfaces, while all principal surface orientations will be blind to the AM. As a drawback of this approach, we have to mention that surfaces and interfaces usually present dangling bonds, a different magnetic order and/or surface reconstruction that can change the properties with respect to the bulk [45]. In all our calculations, we assume that there is no surface reconstruction or change of the magnetic order on the surfaces or interfaces with respect to the bulk. However, if the surface presents just a simple buckling without a change of the magnetic order or of the symmetry relevant for the AM, the predictions about the altermagnetic surface states made in this paper remain valid. The results presented in this paper are essential for future steps toward the interplay between the AM and topological surface states [46]. Figure 6: Slab structure of RuO\({}_{2}\) with (001) surface orientation. We have the equivalent top and bottom surfaces terminating with RuO\({}_{2}\) for the spin-up and spin-down channels, with a slab thickness of 3.11 nm. The red and blue balls represent Ru\({}_{\downarrow}\) and Ru\({}_{\uparrow}\), respectively. The green balls represent the oxygen atoms. Figure 7: (a) In the absence of an external electric field (\(\vec{E}=0\)), AM is annihilated on the (001) surface since \(\overline{R}\) contains the projections of R\({}_{1}\) and R\({}_{2}\). However, in the presence of an external electric field (\(\vec{E}\neq 0\)), the altermagnetic surface states are activated on the (001) surface. We define R\({}_{3}\) as the point that contains the projections of the bottom R\({}_{1}\) and top R\({}_{2}\) points, while R\({}_{4}\) contains the projections of the bottom R\({}_{2}\) and top R\({}_{1}\) points. Additionally, the schematic representation of the symmetries of the Brillouin zone shown in Fig. 1 contains information about the altermagnetic properties for a given space group and magnetic order. Further investigations could lead to a faster and easier evaluation of the Hall vector orientation based on the direction of the Neel vector and on the symmetries of the Brillouin zone, obtained using exclusively DFT results. 
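The merge test sketched in the Introduction can be generalized to arbitrary surface normals, which allows checking orientations beyond the principal ones. Below is a sketch for the C-type-like pattern mentioned in the Discussion; the \(k\)-point positions are illustrative and the axes are treated as orthogonal (a full treatment would use the reciprocal metric):

```python
import itertools
import numpy as np

# C-type-like pattern: R-type points at (+/-r, +/-r, +/-r) whose splitting
# sign alternates along every axis (illustrative positions only).
r = 0.25
pts = [r * np.array(s) for s in itertools.product([1, -1], repeat=3)]
same = [k for k in pts if np.prod(np.sign(k)) > 0]       # splitting sign of R1
opposite = [k for k in pts if np.prod(np.sign(k)) < 0]   # splitting sign of R2

def blind(hkl, same, opposite, tol=1e-6):
    """Blind if every projected point of one set lands on the other set."""
    n = np.asarray(hkl, float)
    n /= np.linalg.norm(n)
    proj = lambda k: k - np.dot(k, n) * n   # projection onto the surface plane
    return all(any(np.allclose(proj(a), proj(b), atol=tol) for b in opposite)
               for a in same)

for hkl in [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), (1,1,1)]:
    print(hkl, "blind to AM:", blind(hkl, same, opposite))
# -> principal surfaces blind; (110), (101), (011) and (111) altermagnetic
```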
## III Conclusions We investigated the surface states of altermagnetic systems considering three representative space groups that host numerous altermagnetic compounds: one tetragonal, one orthorhombic and one hexagonal. We calculate the 2D projected Brillouin zone from the 3D Brillouin zone, and we describe the method to determine the surfaces where the opposite spin-splittings merge, annihilating the altermagnetic properties, and the surfaces where the AM is preserved. For instance, looking at the three principal surface orientations, we find that two surfaces are blind to AM, while the AM survives for one surface orientation in all considered cases. Where it is preserved, the altermagnetic spin-splitting of the surface states remains unchanged with respect to the bulk. Which surface preserves the AM depends on the relative position of the high-symmetry points (for instance, R\({}_{1}\) and R\({}_{2}\) for the orthorhombic case) in the Brillouin zone. Since the position of these high-symmetry points depends on the magnetic order, which surface hosts the altermagnetic surface states also depends on the magnetic order. Using an electric field orthogonal to non spin-polarized blind surfaces, we were able to break the inversion symmetry and create altermagnetic surface states on surface orientations that were blind to altermagnetism without the field. The results obtained in this paper are a necessary step for further investigations that could lead to a faster and easier evaluation of the Hall vector orientation using exclusively DFT results. Future investigations on the spin texture of the topological surface states are needed in the case of altermagnetic systems. Our results predict which surfaces to cleave in order to observe altermagnetic spin-splitting in thin films via spin-resolved ARPES. We open future perspectives for the study of altermagnetic effects on the trivial and topological surface states. ###### Acknowledgements. We thank T. Dietl and V. V. Volobuev for the useful discussions. The work is supported by the Foundation for Polish Science through the International Research Agendas program co-financed by the European Union within the Smart Growth Operational Programme (Grant No. MAB/2017/1). We acknowledge access to the computing facilities of the Interdisciplinary Center of Modeling at the University of Warsaw, Grants g91-1418, g91-1419 and g91-1426, for the availability of high-performance computing resources and support. We acknowledge the CINECA award under the ISCRA initiative IsC99 "SILENTS", IsC105 "SILENTSG" and IsB26 "SHINY" grants for the availability of high-performance computing resources and support. We acknowledge access to the computing facilities of the Poznan Supercomputing and Networking Center, Grant No. 609. ## Appendix A Computational details We performed density functional theory-based _first-principles_ calculations as implemented in Quantum ESPRESSO.[48] We performed the calculations without spin-orbit coupling effects. The antiferromagnetic ground state was obtained using ultrasoft pseudopotentials under the generalized gradient approximation with the Perdew-Burke-Ernzerhof type of exchange-correlation functional.[49; 50] A kinetic energy cut-off of 65 Ry and a charge density cut-off of 780 Ry were used with a Monkhorst-Pack grid (\(k\)-mesh) of 12 \(\times\) 12 \(\times\) 8.[51] A strict electronic self-consistency convergence criterion of at least \(10^{-10}\) was adopted in all the calculations. 
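For concreteness, a minimal Quantum ESPRESSO input sketch consistent with the settings above is given below for hexagonal MnTe. The pseudopotential file names are placeholders, and the two Mn species are the usual device to impose the antiferromagnetic order; this is a sketch, not the input used in this work:

```
&CONTROL
   calculation = 'scf'
   prefix      = 'MnTe'
   pseudo_dir  = './pseudo'          ! placeholder path
/
&SYSTEM
   ibrav = 4, a = 4.107, c = 6.467   ! hexagonal cell of Appendix B
   nat = 4, ntyp = 3                 ! two Mn species impose the AFM order
   ecutwfc = 65.0, ecutrho = 780.0
   nspin = 2
   starting_magnetization(1) =  0.5
   starting_magnetization(2) = -0.5
/
&ELECTRONS
   conv_thr = 1.0d-10
/
ATOMIC_SPECIES
Mn1  54.938  Mn.pbe.UPF
Mn2  54.938  Mn.pbe.UPF
Te  127.600  Te.pbe.UPF
ATOMIC_POSITIONS crystal
Mn1  0.0000000  0.0000000  0.00
Mn2  0.0000000  0.0000000  0.50
Te   0.3333333  0.6666667  0.25
Te   0.6666667  0.3333333  0.75
K_POINTS automatic
12 12 8  0 0 0
```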
The antiferromagnetic ground state of RuO\({}_{2}\) was obtained by implementing Hubbard U within GGA for the Ru atoms, with \(U=2\) eV and \(J_{H}=0.15\,U\). After these calculations, we performed the wannierization for all the systems using the Wannier90 code.[52] The exact tight-binding Hamiltonian generated from the wannierization was then used to calculate the altermagnetic surface states using the semi-infinite Green's function approach implemented in the WannierTools code.[53] ## Appendix B Notations and Structural Details Within this paper, (xyz) is the notation to describe the surface plane, while [xyz] is the notation for the direction orthogonal to the surface plane. We report the crystal symmetries and the structural details of the investigated compounds for a complete understanding of the surface orientations described in the main text. The crystal structures are presented in Fig. 8. Figure 8: Crystal structures for a) hexagonal MnTe (yellow balls are Mn and blue balls are Te), b) orthorhombic LaMnO\({}_{3}\) (orange balls are La, yellow balls are Mn, and green balls are O) and c) tetragonal RuO\({}_{2}\) (purple balls are Ru and green balls are O), obtained from the materials project repository.[47] The Bravais lattice vectors for space group no. 62 are **a\({}_{1}\)**=(a,0,0), **a\({}_{2}\)**=(0,b,0) and **a\({}_{3}\)**=(0,0,c), while the reciprocal lattice vectors are **b\({}_{1}\)**=(\(\frac{2\pi}{a}\),0,0), **b\({}_{2}\)**=(0,\(\frac{2\pi}{b}\),0) and **b\({}_{3}\)**=(0,0,\(\frac{2\pi}{c}\)). The Bravais lattice vectors for space group no. 136 are **a\({}_{1}\)**=(a,0,0), **a\({}_{2}\)**=(0,a,0) and **a\({}_{3}\)**=(0,0,c), while the reciprocal lattice vectors are **b\({}_{1}\)**=(\(\frac{2\pi}{a}\),0,0), **b\({}_{2}\)**=(0,\(\frac{2\pi}{a}\),0) and **b\({}_{3}\)**=(0,0,\(\frac{2\pi}{c}\)). The Bravais lattice vectors for space group no. 194 are **a\({}_{1}\)**=(\(\frac{a}{2}\),\(\frac{a\sqrt{3}}{2}\),0), **a\({}_{2}\)**=(\(\frac{a}{2}\),-\(\frac{a\sqrt{3}}{2}\),0) and **a\({}_{3}\)**=(0,0,c). The reciprocal lattice vectors are **b\({}_{1}\)**=(\(\frac{2\pi}{a}\),\(\frac{2\pi}{a\sqrt{3}}\),0), **b\({}_{2}\)**=(\(\frac{2\pi}{a}\),-\(\frac{2\pi}{a\sqrt{3}}\),0) and **b\({}_{3}\)**=(0,0,\(\frac{2\pi}{c}\)). The [110] \(k\)-space direction is parallel to the vector **b\({}_{1}\)**+**b\({}_{2}\)**=(\(\frac{4\pi}{a}\),0,0). The lattice parameters for the three systems under consideration were obtained from the materials project repository [47]. The optimized lattice parameters used for MnTe were a=4.107 Å and c=6.467 Å. For LaMnO\({}_{3}\) we used the Pbnm setting [33] with a=5.585 Å, b=5.871 Å and c=7.777 Å, while for RuO\({}_{2}\) we performed the calculations with a=b=4.482 Å and c=3.111 Å. MnTe and RuO\({}_{2}\) have two magnetic atoms in the unit cell; therefore, only one antiferromagnetic configuration is possible. LaMnO\({}_{3}\) is an orthorhombic perovskite with the four Mn magnetic atoms in the 4b Wyckoff positions; the A-type magnetic order consists of the 2 Mn atoms at z=0 with spin-up and the 2 Mn atoms at the reduced coordinate z=0.5 with spin-down [31]. ## Appendix C Surface uncompensated magnetism on the (001) surface of MnTe The spin-up channel and spin-down channel surface states of the (001) surface orientation of MnTe slightly differ due to surface uncompensated magnetism. Indeed, the (001) slab is asymmetric, as we can see in Fig. 9. The two terminations of the slab (top and bottom) are inequivalent; therefore, the spin-up and spin-down channels are inequivalent. 
The surface band structures for the spin-up and spin-down channels and for the top and bottom surfaces are reported in Fig. 10. We can observe minor differences among all cases. In the case of DFT simulations with a symmetric slab (which, however, would not preserve the stoichiometry), we would recover the symmetries observed in the RuO\({}_{2}\) case. The same effect of the surface uncompensated magnetism is present on the (001) surface of LaMnO\({}_{3}\); however, the altermagnetic spin-splitting in LaMnO\({}_{3}\) is one order of magnitude smaller than in MnTe, and therefore this uncompensated magnetism effect is not appreciable in LaMnO\({}_{3}\).
2304.06274
EWT: Efficient Wavelet-Transformer for Single Image Denoising
Transformer-based image denoising methods have achieved encouraging results in the past year. However, they must use linear operations to model long-range dependencies, which greatly increases model inference time and consumes GPU memory. Compared with convolutional neural network-based methods, current Transformer-based image denoising methods cannot achieve a balance between performance improvement and resource consumption. In this paper, we propose an Efficient Wavelet Transformer (EWT) for image denoising. Specifically, we use Discrete Wavelet Transform (DWT) and Inverse Wavelet Transform (IWT) for downsampling and upsampling, respectively. This method can fully preserve the image features while reducing the image resolution, thereby greatly reducing the device resource consumption of the Transformer model. Furthermore, we propose a novel Dual-stream Feature Extraction Block (DFEB) to extract image features at different levels, which can further reduce model inference time and GPU memory usage. Experiments show that our method speeds up the original Transformer by more than 80%, reduces GPU memory usage by more than 60%, and achieves excellent denoising results. All code will be public.
Juncheng Li, Bodong Cheng, Ying Chen, Guangwei Gao, Tieyong Zeng
2023-04-13T05:17:54Z
http://arxiv.org/abs/2304.06274v1
# EWT: Efficient Wavelet-Transformer for Single Image Denoising ###### Abstract Transformer-based image denoising methods have achieved encouraging results in the past year. However, they must use linear operations to model long-range dependencies, which greatly increases model inference time and consumes GPU memory. Compared with convolutional neural network-based methods, current Transformer-based image denoising methods cannot achieve a balance between performance improvement and resource consumption. In this paper, we propose an Efficient Wavelet Transformer (EWT) for image denoising. Specifically, we use Discrete Wavelet Transform (DWT) and Inverse Wavelet Transform (IWT) for downsampling and upsampling, respectively. This method can fully preserve the image features while reducing the image resolution, thereby greatly reducing the device resource consumption of the Transformer model. Furthermore, we propose a novel Dual-stream Feature Extraction Block (DFEB) to extract image features at different levels, which can further reduce model inference time and GPU memory usage. Experiments show that our method speeds up the original Transformer by more than 80%, reduces GPU memory usage by more than 60%, and achieves excellent denoising results. All code will be public. Image denoising, vision Transformer, wavelet transform, dual-stream network, efficient model. ## I Introduction Image denoising is a popular topic in image restoration (IR), which aims to reconstruct a clean image from a noisy one. As a key step in many practical applications, the quality of denoised images significantly affects the performance of downstream tasks, such as image classification [1, 2], image segmentation [3, 4], and target detection [5, 6]. However, due to complex noise environments, image denoising is still a challenging inverse problem. In the past few decades, researchers have made many explorations and attempts at single image denoising (SID). SID methods can be divided into traditional denoising methods [7, 8, 9, 10, 11] and learning-based methods. Among them, traditional methods are usually implemented in an iterative manner, which is inefficient; in addition, they require manual design and show poor generalization performance. Learning-based methods aim to learn the mapping between noisy and clean images, thus giving the model its denoising ability. Recently, with the wide application of deep learning in various fields and the excellent performance of convolutional neural networks (CNN) in computer vision, many CNN-based methods [12, 13, 14, 15, 16, 17] have been proposed for SID. Most of them use the powerful feature extraction abilities of CNN to extract image features and use various strategies for modeling, which has achieved gratifying results. Recently, the proposal and wide application of the vision Transformer has provided a new research direction for SID. It has been shown that the ability of Transformer to capture long-range dependencies in images gives it better denoising performance than CNN models. Therefore, some representative image restoration Transformer models [18, 19, 20, 21] have been proposed. However, the mechanism of Transformer relies on matrix operations over the features of every pixel in the image, which causes excessive consumption of time and space. 
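A back-of-the-envelope sketch makes this cost concrete: a single full self-attention map over all pixels grows quadratically with the number of pixels (a sketch assuming fp32 and one attention head):

```python
import numpy as np

def attn_map_bytes(h, w, bytes_per_el=4):
    """A single self-attention map over all pixels is (h*w) x (h*w)."""
    n = h * w
    return n * n * bytes_per_el

for h, w in [(64, 64), (128, 128), (256, 256)]:
    gib = attn_map_bytes(h, w) / 2**30
    print(f"{h}x{w}: {gib:.2f} GiB per attention map")
# 64x64: 0.06 GiB, 128x128: 1.00 GiB, 256x256: 16.00 GiB
```

Note that one level of wavelet downsampling halves \(H\) and \(W\) and therefore shrinks each attention map by a factor of 16, which is the leverage exploited by EWT.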
Although current Transformer-based image restoration methods use patch processing, dividing the image into multiple patches for operation, they still occupy a large amount of GPU memory, resulting in longer inference time. Therefore, it is difficult to balance the performance and resource consumption of the model. The above problems make it difficult for Transformer models to run on devices with low GPU performance, which greatly limits the research of Transformer on SID tasks. Fig. 1: Model performance and size comparison with classic single image denoising methods on CBSD68 (\(\sigma\) = 50). To overcome the Transformer's bottleneck in image denoising, we propose a novel Efficient Wavelet Transformer (EWT). Although both DWT and Transformer are common technologies, as far as we know, this is the first model that introduces the wavelet transform into Transformer and applies it to the image restoration task. It is worth mentioning that we do not forcefully combine them but elegantly integrate them according to their respective advantages and disadvantages. EWT uses the reversible nature of wavelets as the sampling unit for model input and output, which can effectively improve the inference speed of the Transformer model and reduce a large amount of GPU memory usage. In the network backbone, we refer to the shifted-window self-attention mechanism of the Swin Transformer and combine it with the local feature extraction and aggregation capabilities of CNN to construct a Dual-stream Feature Extraction Block (DFEB) that combines the respective advantages of Transformer and CNN. In summary, the main contributions of this work are as follows: * We consider the limitations of Transformer in image restoration tasks and propose a novel Efficient Wavelet-Transformer (EWT) for SID. This is the first attempt of Transformer in the wavelet domain, which increases the speed of the original Transformer by more than **80%** and reduces GPU memory consumption by more than **60%**. * We propose an efficient Multi-level Feature Aggregation Module (MFAM). MFAM is a lightweight feature aggregation module that can make full use of hierarchical features by using local and global residual learning. We also propose an elegant Dual-stream Feature Extraction Block (DFEB), which combines the advantages of CNN and Transformer and can take into account information at different levels to better extract image features. * We fully demonstrate the effectiveness of wavelets in Transformer models and address the drawbacks of slow inference speed and high GPU memory usage of Transformer in image restoration tasks. In other words, EWT is a new attempt to balance model performance and resource consumption, which is helpful for more work in the future. The rest of this paper is organized as follows. Related works are reviewed in Section II. A detailed explanation of the proposed EWT is given in Section III. The experimental results, ablation analysis, and discussion are presented in Sections IV, V, and VI, respectively. Finally, we draw a conclusion in Section VII. ## II Related Works Recently, several Transformer methods for image denoising have been proposed to demonstrate the effectiveness of the Transformer architecture in this task. Although these methods have achieved good performance, they occupy a large amount of GPU memory and prolong the inference time of the network, which is extremely unfavorable for the promotion and application of Transformer in image restoration. 
In this paper, we aim to explore an efficient Transformer model for image denoising that considers both model performance and resource consumption. ### _CNN-based SID Methods_ With the development of deep learning, CNN-based image restoration methods have achieved advanced results and greatly promoted the development of SID. The success of these methods is attributed to their powerful feature extraction ability and well-designed network structures, which can extract coarse- and fine-grained features through different receptive fields. For example, Zhang et al. [12] proposed DnCNN for Gaussian noise removal, which achieved competitive results by taking advantage of batch normalization and residual learning. Yang et al. [13] proposed BM3D-Net, a nonlocal-based network that introduced BM3D into CNN by using wavelet shrinkage. Zhang et al. [14] proposed the flexible FFDNet, which takes the noise level map and the noisy image as the inputs for image denoising. Fang et al. [22] proposed the multi-level edge features guided MLEFGN, which can make full use of edge features to reconstruct noise-free images. Zhang et al. [15] proposed an efficient Residual Dense Network (RDN) to extract abundant local features via densely connected convolutional layers. Most of the aforementioned methods are committed to building efficient modules to extract local features to reconstruct noise-free images. In addition, in order to restore more detailed features, many methods [16, 23] directly increase the depth of the network, which results in a substantial increase in model parameters. To better encode global image information, the goal of current research is to explore more powerful deep learning models. ### _Transformer-based IR Methods_ In order to model the dependency of pixel-level features, researchers began to pay attention to the Transformer in NLP. The self-attention unit in Transformer can model long-distance dependencies in a sequence well. However, due to the particularity of images, directly flattening an image into a sequence as the input of the Transformer causes excessive computational overhead. In order to solve this problem, ViT [24] divides an image into multiple sub-images of the same size. Later, in order to better promote the flow of information between sub-images, the Swin Transformer [25] introduced the idea of window shifting to indirectly model the entire image, and demonstrated excellent performance in high-level vision tasks such as image classification and target detection. Recently, some works have also applied Transformer to image restoration tasks, such as IPT [18] and SwinIR [19]. Among them, IPT draws on the network structure of DETR [26] and uses a 3\(\times\)3 convolution with a stride of 3 to reduce the dimensionality of the image. This method can alleviate the dimensionality problem to a certain extent. However, its demanding requirements for GPU memory, training datasets, and inference time are unacceptable. SwinIR directly migrated the Swin Transformer to the IR task and achieved outstanding results. However, SwinIR stacks a large number of Transformer blocks, so the execution time and GPU memory consumption are still very high. Although Transformer can improve the performance of the model, its mechanism brings large GPU memory consumption and time overhead. In addition, Transformer cannot encode the two-dimensional position information of the image and needs to embed relative or absolute position encoding. 
In this regard, the CNN inherently has the ability to encode the position of the image. Therefore, our goal is to incorporate CNNs and explore a more elegant and efficient Transformer for image restoration.

### _Wavelet-based IR Methods_

Wavelets are widely used in image processing tasks. With the rise of deep learning, some studies have combined wavelets with CNNs and achieved excellent results. For example, Bae et al. [27] found that learning on wavelet sub-bands is more effective and proposed a Wavelet Residual Network (WavResNet) for image restoration. After that, Bae et al. [28] also proposed a deep wavelet super-resolution network to recover the lost details on the wavelet sub-bands. Zhong et al. [29] combined sub-band learning with CliqueNet [30] structures for wavelet-domain super-resolution. Liu et al. [31] proposed a Multi-level Wavelet-CNN (MWCNN) for image restoration, which uses multi-level wavelets to complete related tasks. Inspired by these methods, we intend to explore the performance of the Transformer in the wavelet domain and build a more lightweight Transformer model with wavelets.

## III Efficient Wavelet-Transformer (EWT)

### _Network Architecture_

As shown in Fig. 2, EWT mainly consists of three parts: Discrete Wavelet Transform (DWT), feature processing, and Inverse Wavelet Transform (IWT). Specifically, at the top of the model, we first use the DWT to downsample the image, which can effectively extract the high- and low-frequency information of the image while reducing its resolution. In the middle part of the model, a Multi-level Feature Aggregation Module (MFAM) is introduced for feature processing. This module can significantly improve the inference speed of the model while ensuring effective feature extraction. Finally, we use the IWT to restore the image and reconstruct the corresponding noise-free image. Define \(I_{noisy}\in\mathbb{R}^{H\times W\times C}\) as the original input noisy image; the DWT down-sampling layer \(f_{DWT}\) converts \(I_{noisy}\) into 4 wavelet sub-images: \[I_{LL},I_{LH},I_{HL},I_{HH}=f_{DWT}(I_{noisy}), \tag{1}\] where \(I_{LL},I_{LH},I_{HL},I_{HH}\in\mathbb{R}^{\frac{H}{2}\times\frac{W}{2}\times C}\) are 4 sub-images with different frequencies. We concatenate them as the shallow features \(F_{e}\in\mathbb{R}^{\frac{H}{2}\times\frac{W}{2}\times 4C}\) of EWT, and then use them for feature extraction: \[F_{in}=f_{conv}(F_{e}), \tag{2}\] \[F_{out}=f_{MFAM}(F_{in}), \tag{3}\] where \(f_{conv}(\cdot)\) is a 3 \(\times\) 3 convolutional layer used to extract the basic information of the image as the initial features, and these features are sent to MFAM to further extract more effective features. After that, a 3 \(\times\) 3 convolutional layer is also applied to the output \(F_{out}\) to obtain the merged features \(F^{\prime}_{out}\): \[F^{\prime}_{out}=f_{conv}(F_{out}), \tag{4}\] and the global residual learning strategy is used to aggregate \(F_{e}\) and \(F^{\prime}_{out}\) as the final reconstructed feature \[F_{r}=F_{e}+F^{\prime}_{out}. \tag{5}\] Finally, the IWT operation is used to transform the features back to the original resolution and reconstruct the noise-free image \[I^{\prime}_{clean}=f_{IWT}(F_{r}), \tag{6}\] where \(f_{IWT}(\cdot)\) denotes the inverse wavelet transform and \(I^{\prime}_{clean}\) is the reconstructed clean image. During training, EWT is optimized with the \(L1\) loss function. Given a training dataset \(\left\{I^{i}_{noisy},I^{i}_{clean}\right\}_{i=1}^{S}\), we solve \[\hat{\theta}=\arg\min_{\theta}\frac{1}{S}\sum_{i=1}^{S}\left\|F_{\theta}(I^{i}_{noisy})-I^{i}_{clean}\right\|_{1}, \tag{7}\] where \(\theta\) denotes the parameter set of our EWT and \(F_{\theta}(I_{noisy})=I^{\prime}_{clean}\) is the reconstructed noise-free image.

Fig. 2: The complete architecture of the proposed Efficient Wavelet-Transformer (EWT). Among them, MFAM is used for feature processing.
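To make the data flow of Eqs. (1)-(6) concrete, the following is a minimal PyTorch sketch of the pipeline; `dwt`, `iwt`, and `mfam` are placeholders for the wavelet sampling (Section III-B) and the MFAM (Section III-C), and all names here are our illustrative assumptions, not the official implementation.

```python
import torch
import torch.nn as nn

class EWTSketch(nn.Module):
    """Minimal sketch of Eqs. (1)-(6): DWT -> conv -> MFAM -> conv -> residual -> IWT.
    `dwt`/`iwt` are callables implementing the wavelet pair; `mfam` is the feature module."""
    def __init__(self, dwt, iwt, mfam=None, in_ch=3, dim=180):
        super().__init__()
        self.dwt, self.iwt = dwt, iwt                          # sampling units (Sec. III-B)
        self.head = nn.Conv2d(4 * in_ch, dim, 3, padding=1)    # f_conv in Eq. (2)
        self.mfam = mfam if mfam is not None else nn.Identity()  # f_MFAM in Eq. (3)
        self.tail = nn.Conv2d(dim, 4 * in_ch, 3, padding=1)    # f_conv in Eq. (4)

    def forward(self, noisy):                                  # noisy: (B, C, H, W)
        ll, lh, hl, hh = self.dwt(noisy)                       # Eq. (1): four (B, C, H/2, W/2) sub-bands
        f_e = torch.cat([ll, lh, hl, hh], dim=1)               # shallow features, (B, 4C, H/2, W/2)
        f_out = self.tail(self.mfam(self.head(f_e)))           # Eqs. (2)-(4)
        f_r = f_e + f_out                                      # Eq. (5): global residual learning
        return self.iwt(f_r)                                   # Eq. (6): back to (B, C, H, W)
```

With the Haar pair sketched in the next subsection plugged in as `dwt`/`iwt`, training reduces to minimizing the L1 objective of Eq. (7), e.g., `nn.L1Loss()(model(noisy), clean)`.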
### _Wavelet-based Image Sampling_

Effective sampling of an image is a problem that must be considered in image restoration tasks, since the resolution of the input image is usually very large, which means that processing it incurs a high computational cost. Although the image size can be reduced by cropping, this makes it impossible to capture the global information of the image. To solve this problem, many methods have been proposed to reduce the image resolution, such as pooling or convolution operations. For image restoration tasks, however, the final output needs to be restored to the original image size, and the aforementioned operations cause an irreversible loss of information. To address this issue, we introduce wavelets to replace the downsampling operation and thus reduce the image resolution. As shown in Fig. 3, when the Discrete Wavelet Transform (DWT) is applied to the image, the original image is decomposed into four sub-images. Plenty of previous works have pointed out that these sub-bands have different frequencies, which mainly reflect the color of the filled area and the edges of objects. Specifically, \(I_{LL}\) is the low-frequency sub-band of the image, which is an approximation of the original image. \(I_{LH}\) and \(I_{HL}\) are the horizontal and vertical sub-bands of the image, reflecting the edge characteristics of these two directions. \(I_{HH}\) is the diagonal sub-band of the image, reflecting the diagonal edge features. Taking these sub-images as inputs of the model can guide the model to pay attention to frequency information and help restore texture details. Meanwhile, the connections between the sub-images can be established by the deep neural network, so that the model can extract deeper information. **Moreover, the wavelet is reversible and will not cause any loss of information, which is conducive to image restoration.** Therefore, we use the Discrete Wavelet Transform (DWT) as the down-sampling module and the Inverse Wavelet Transform (IWT) as the up-sampling module in our EWT. In summary, the advantages of this method are: (1). The wavelet is reversible, so all information can be preserved through this sampling method; (2). Wavelets can capture the frequency and position information of the image, which is beneficial for restoring its detailed features; (3). Using wavelets can reduce the image resolution and thus the GPU memory consumption. Meanwhile, this process does not produce redundant parameters and can speed up the inference of the model, which benefits efficient model building; (4). Wavelets relatively enlarge the effective receptive field, so that the model can obtain richer features, which benefits image restoration.
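For reference, below is a minimal sketch of a one-level 2D Haar DWT/IWT pair implemented with strided slicing; the sub-band formulas follow the standard orthonormal Haar convention, so the pair is exactly invertible, matching the lossless property emphasized above. The function names are ours.

```python
import torch

def haar_dwt(x):
    """One-level 2D Haar DWT (cf. Fig. 3): (B, C, H, W) -> four (B, C, H/2, W/2) sub-bands."""
    a = x[:, :, 0::2, 0::2]   # even rows, even columns
    b = x[:, :, 1::2, 0::2]   # odd rows,  even columns
    c = x[:, :, 0::2, 1::2]   # even rows, odd columns
    d = x[:, :, 1::2, 1::2]   # odd rows,  odd columns
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (-a - b + c + d) / 2 # horizontal detail
    hl = (-a + b - c + d) / 2 # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

def haar_iwt(f):
    """Inverse of haar_dwt on channel-concatenated sub-bands:
    (B, 4C, H/2, W/2) -> (B, C, H, W), lossless by construction."""
    B, C4, H2, W2 = f.shape
    C = C4 // 4
    ll, lh, hl, hh = f[:, :C], f[:, C:2*C], f[:, 2*C:3*C], f[:, 3*C:]
    x = f.new_zeros(B, C, 2 * H2, 2 * W2)
    x[:, :, 0::2, 0::2] = (ll - lh - hl + hh) / 2
    x[:, :, 1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[:, :, 0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[:, :, 1::2, 1::2] = (ll + lh + hl + hh) / 2
    return x
```

A quick check such as `torch.allclose(haar_iwt(torch.cat(haar_dwt(x), dim=1)), x, atol=1e-6)` confirms the reversibility discussed above.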
### _Multi-level Feature Aggregation Module_

As the core component of the entire model, the Multi-level Feature Aggregation Module (MFAM) is specially designed for feature extraction and aggregation in the wavelet domain. As shown in Fig. 2, MFAM consists of a series of DFEBs and a ConvBlock, which are responsible for the extraction and aggregation of features at different levels of the image, respectively. Different from current methods that simply stack Transformer layers, we carefully design a dual-branch structural unit (DFEB) and adopt a dense connection to combine the outputs of each DFEB. In this way, the hierarchical features of the model can be better aggregated to enhance the feature representation. Then, a ConvBlock is applied to incorporate these features: \[F_{d}=\sum_{i=1}^{N}F_{i}, \tag{8}\] \[F_{d}^{\prime}=f_{ConvB}(F_{d}), \tag{9}\] where \(F_{i}\) represents the output of the \(i\)-th DFEB, \(f_{ConvB}\) denotes the ConvBlock, and \(F_{d}^{\prime}\) denotes the aggregated features. Finally, the global residual learning strategy is applied: \[F_{out}=F_{in}+F_{d}^{\prime}. \tag{10}\]

**Dual-stream Feature Extraction Block (DFEB)**: Most Transformer-based methods limit the use of convolutional layers and only use them for feature aggregation or downsampling. However, we found that if the proportion of Transformer layers is too high, the model performance and resource consumption become seriously unbalanced. This is because the Transformer performs matrix operations on large tensors, which consume a huge amount of GPU computing and storage resources: \[Attention(Q,K,V)=Softmax(Norm(QK^{T}))V. \tag{11}\] Our experiments also show that stacking a large number of Transformer blocks does not significantly improve model performance. On the contrary, it greatly increases the computation time and GPU memory consumption of the model. Meanwhile, we find that CNN-based methods are significantly faster than Transformer-based methods. Moreover, as the most widely used neural network in computer vision, the CNN has been well proven to have a natural ability to capture image information. In particular, CNNs can extract the positional information of images without additional positional encoding embeddings, while the Transformer does not have the ability to encode location information. Although most visual Transformers embed a position-coding operation, most of these operations are designed by human intuition; compared with the CNN's ability to automatically learn location information, this is far from enough. Therefore, directly replacing the CNN with the Transformer is a sub-optimal solution. In this work, we focus on elegantly combining the CNN and the Transformer to find a better solution. Inspired by the idea of multi-scale feature extraction, we find that a multi-branch structure can better guide the model to learn information at different scales. In addition, the parallelism of the multi-branch structure allows each branch to extract different features without interfering with the others, reducing the information dilution caused by excessive stacking of neural network modules.

Fig. 3: The schematic diagram of the Discrete Wavelet Transform (DWT).

In a multi-scale CNN network, each branch is usually assigned a convolution kernel of a different size to obtain different features under multiple receptive fields. In this work, we use the Transformer as an alternative to multiple receptive fields. Specifically, we use the Transformer and the CNN as two branches to extract different features, because the CNN has strong local feature extraction ability and the Transformer has better global encoding ability. Based on the above ideas, we designed a Dual-stream Feature Extraction Block (DFEB).
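For reference, the toy sketch below illustrates the windowed self-attention of Eq. (11) as adopted from the Swin Transformer: the attention matrix is computed per win\(\times\)win window rather than over the whole feature map, which is where the GPU memory savings come from. The learned Q/K/V projections, the relative position bias, and the window shift are omitted for brevity, so this is an illustration rather than the paper's exact block.

```python
import torch

def window_attention(x, win=8, heads=6):
    """Toy (shift-free) windowed self-attention: the attention matrix is
    (win^2 x win^2) per window instead of (HW x HW) for the whole map."""
    B, H, W, C = x.shape
    assert H % win == 0 and W % win == 0 and C % heads == 0
    # partition into non-overlapping windows: (B * num_windows, win*win, C)
    x = x.view(B, H // win, win, W // win, win, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, win * win, C)
    # Q = K = V = x for brevity; real blocks use learned linear projections
    q = k = v = x.view(-1, win * win, heads, C // heads).transpose(1, 2)
    attn = torch.softmax(q @ k.transpose(-2, -1) / (C // heads) ** 0.5, dim=-1)
    out = (attn @ v).transpose(1, 2).reshape(-1, win * win, C)
    # reverse the window partition back to (B, H, W, C)
    out = out.view(B, H // win, W // win, win, win, C)
    return out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
```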
DFEB is the most important component of MFAM; it is a dual-branch feature extraction module. The purpose of DFEB is to extract different levels of information and aggregate them to improve the expressive ability of the model. As shown in Fig. 4, DFEB contains two branches: a surface information extraction branch and a fine-grained information branch. When features are sent to DFEB, they are divided into two groups: one group is used to extract rough features, and the other is used to model the relationships among pixels and learn global information. Specifically, the surface information extraction branch only contains a ConvBlock (Fig. 5), which is a simple module composed of two convolutional layers and a ReLU activation function. This structure benefits image restoration, especially surface information extraction. At the output, the weighted result of the convolution part is added to the input to enhance the expression of shallow information. In the fine-grained information branch, we introduce the visual Transformer to extract fine-grained information. Many methods have proved that the Transformer can better model the pixel-level features of the image. However, since the image is two-dimensional data, processing it in a serialized manner destroys its location information. Meanwhile, due to the huge overhead of the Transformer, it is unsuitable for directly modeling an entire feature map. Therefore, we borrow the idea of the Swin Transformer [25] to decompose the feature map into smaller windows. Meanwhile, the window displacement mechanism is also applied to enhance the information flow and interaction between windows. As shown in Fig. 5, (SW-)MSA denotes the (Shifted Window) Multi-Head Self-Attention mechanism proposed by the Swin Transformer. Considering that the working mechanisms of the CNN and the Transformer are different, adding the output features of these two branches directly would lead to information confusion. Therefore, we concatenate the outputs of the CNN and the Transformer to obtain rich features, and then a convolutional layer is used to weight and fuse the different features, guiding the module to learn useful features adaptively.
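Putting the pieces of this section together, here is a minimal sketch of ConvBlock, DFEB, and MFAM following Figs. 4-5 and Eqs. (8)-(10); the Transformer branch is left as a pluggable stand-in, and the \(\lambda\)-weighted residual in ConvBlock follows the setting \(\lambda=0.1\) reported in the implementation details. Structural choices beyond the paper's description are our assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Fig. 5: two 3x3 convolutions with ReLU and a lambda-weighted residual."""
    def __init__(self, dim, lam=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1))
        self.lam = lam

    def forward(self, x):
        return x + self.lam * self.body(x)

class DFEB(nn.Module):
    """Fig. 4: a ConvBlock branch for surface features and a Transformer branch
    (stand-in) for fine-grained global features, fused by concat + 1x1 conv."""
    def __init__(self, dim, transformer=None):
        super().__init__()
        self.cnn = ConvBlock(dim)
        self.trans = transformer if transformer is not None else nn.Identity()
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.cnn(x), self.trans(x)], dim=1))

class MFAM(nn.Module):
    """Eqs. (8)-(10): dense sum of DFEB outputs, a ConvBlock, global residual."""
    def __init__(self, dim=180, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(DFEB(dim) for _ in range(n_blocks))
        self.conv = ConvBlock(dim)

    def forward(self, x):
        feats, h = [], x
        for blk in self.blocks:
            h = blk(h)
            feats.append(h)                                   # F_i in Eq. (8)
        return x + self.conv(torch.stack(feats).sum(0))       # Eqs. (9)-(10)
```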
## IV Experiments

### _Datasets_

In this paper, we use the 800 training images of DIV2K [32] as the training set. For evaluation, we choose six benchmark test sets, including Set12 [33], BSD68 [34], Kodak24 [35], CBSD68 [36], and Urban100 [37]. In addition, we choose additive white Gaussian noise (AWGN) as our research object, since AWGN is the best approximation of real mixed noise and can simulate the disturbance of real noise on the image. Following previous works, we use Set12, BSD68, and Urban100 to evaluate the performance of EWT on grayscale images, and Kodak24, CBSD68, and Urban100 to evaluate the denoising effect of the model on color images. Meanwhile, to further verify the effectiveness and robustness of EWT, we utilize SIDD [38] and RNI15 [39] to evaluate the denoising performance of the model on the real image denoising task.

### _Implementation Details_

Before training, we generate noisy images by adding AWGN with different noise levels. To verify the effectiveness of the model, we set the noise level \(\sigma\) = 15, 25, and 50 for grayscale images and \(\sigma\) = 10, 30, and 50 for color images. During training, we randomly choose 16 noisy patches as inputs, and these patches are randomly rotated and flipped to augment the data. In addition, EWT is implemented in the PyTorch framework and updated with the Adam optimizer. In the final model, we use a single-scale wavelet to sample the image. The size of all convolution kernels in the model is \(3\times 3\), the \(\lambda\) in the ConvBlock is set to \(0.1\), and the embedding dimension of MFAM is set to \(180\). In addition, we use \(4\) DFEBs in MFAM, and each DFEB contains \(1\) ConvBlock and \(6\) Transformer blocks. In the Transformer, the window size is \(8\), the number of attention heads is \(6\), and the MLP dimension is twice the embedding dimension.

Fig. 4: The complete architecture of the Dual-stream Feature Extraction Block.

Fig. 5: The complete architecture of ConvBlock and Transformer.

### _Comparisons with State-of-the-art Methods_

**Gray-scale Image Denoising:** In Table I, we report the PSNR results of different SID methods on three benchmark test sets. Obviously, EWT achieves competitive results and the best average results on these test sets with different noise levels. It is worth noting that MWCNN is also a wavelet-based SID model, which achieved slightly better results than EWT on BSD68 (\(\sigma\) = 25 and 50). However, it cannot be ignored that the results of MWCNN on the other test sets are all worse than our EWT, and its average result is 0.14dB worse than EWT. Meanwhile, MWCNN uses multiple training sets to train the model, containing 5744 images (7 times our number of training images). Despite this disparity, EWT still achieves close or better results, which fully demonstrates its effectiveness. In Fig. 6, we provide the visual comparison of the denoised images with noise level \(\sigma\) = 50. In this part, we choose the three most representative CNN-based image denoising methods for comparison, including DnCNN [12], FFDNet [14], and MLEFGN [22]. Among them, DnCNN and FFDNet are the two most classic CNN-based image denoising models. According to the figure, we can clearly observe that the images reconstructed by DnCNN and FFDNet are too smooth and have lost texture details and edge information.

Fig. 6: Visual comparison on grayscale images with \(\sigma=50\). Obviously, our EWT can reconstruct high-quality noise-free images with clear edges.

As for MLEFGN, it can reconstruct clearer noise-free images, but the edges of the image are not accurate and complete enough. In contrast, our EWT can reconstruct high-quality images with clear and accurate texture details and edges. This further illustrates the effectiveness and excellence of EWT.

**Color Image Denoising:** For color image denoising, we use Kodak24, CBSD68, and Urban100 to verify performance. In this part, we choose the three most representative CNN-based image denoising methods for comparison, including DnCNN [12], ADNet [45], and MLEFGN [22]. According to TABLE II, we can clearly observe that our EWT still achieves excellent results on color images, especially on Urban100. Notably, RDN is recognized as one of the most advanced SID models and is specially designed for color image denoising. Compared with it, our EWT achieved close results on Kodak24 and better results on CBSD68 and Urban100. It is worth noting that our EWT achieves a better average result than RDN with only half the parameters (EWT: 11M vs RDN: 22M). These results fully demonstrate the denoising ability of EWT on color images, further validating its effectiveness. In Fig. 7, we provide the visual comparisons of the denoised images with \(\sigma\) = 50 on CBSD68.
In this part, we also choose the three most representative CNN-based image denoising methods for comparison, including DnCNN [12], ADNet [45], and MLEFGN [22]. Obviously, our EWT can reconstruct high-quality noise-free images with sharper and more accurate edges. Taking the human face as an example, our EWT can reconstruct clearer and more accurate contours. This is due to the fact that the Transformer introduced in EWT can capture the global information of the face, thereby reconstructing a high-quality face. All these results further illustrate the effectiveness of the proposed EWT.

**Restoration of Other Synthetic Noise:** The noise encountered in practical applications is usually more than Gaussian noise; other noises are also very common, such as Poisson noise and Speckle noise. Since these noises have more complex distributions, they also need to be considered carefully. In order to verify the general applicability of our method, TABLE III compares EWT with three classic image restoration Transformer methods. The results show that EWT also performs well on other noisy images. This is due to the idea of incorporating the wavelet transform, which ensures that the Transformer always maintains attention on image texture details during feature extraction. This further validates the effectiveness of the proposed EWT, and also reflects the generality of EWT on different noisy images.

**Real Image Denoising:** Real image denoising is a more difficult task since real image noise comes from multiple sources. In this part, real noisy images are used to further assess the practicability of the proposed EWT. In TABLE IV, we provide PSNR comparisons of EWT with other models specially designed for real image denoising. Among them, * denotes models trained with additional training sets. Obviously, our model still achieves the best results even without using additional training sets. This further validates the effectiveness and versatility of our EWT. In addition, we also provide the visual comparison on the SIDD [38] and RNI15 [39] sets in Figs. 8 and 9, respectively. Obviously, our EWT can still reconstruct high-quality noise-free images. This shows that EWT also performs well on the real image denoising task.

## V Ablation Studies

### _Wavelet Investigations_

In our method, the wavelet plays a vital role in reducing the execution time and GPU memory consumption. To verify this statement, we compare our model with SwinIR [19].

Fig. 7: Visual comparison on color images with \(\sigma=50\). Obviously, our EWT can reconstruct high-quality noise-free images with clear edges.

SwinIR is a famous Transformer-based image restoration model, which does not use wavelets or other operations to change the image resolution. It is worth noting that SwinIR uses additional training sets and the GPU memory required for it exceeds the maximum limit of our device. For a fair comparison, the embedding dimension of MFAM in SwinIR and EWT is reduced from 180 to 120 for both, and the two models are retrained under the same dataset and settings. In addition, we also consider the combination of DWT and SwinIR to further illustrate the effectiveness of EWT and the rationality of its structure. We label these two modified models SwinIR* and EWT*, respectively. According to TABLE V, we can clearly observe that EWT* and SwinIR* have a similar number of parameters, and EWT* achieves PSNR results close to SwinIR* with only 1/6 of the running time and 1/3 of the GPU memory.
In addition, we also noticed that directly combining DWT with SwinIR degrades the performance of the model, since it does not optimize the structural design of the network. In contrast, our EWT achieves better results due to its well-designed network structure and effective DFEB. This huge improvement fully demonstrates the advantages of the wavelet and further verifies the advancement and effectiveness of EWT. In order to further verify the influence of multi-level wavelets on model performance, we designed a series of studies in TABLE VI. Among them, cases 1, 2, and 3 denote different wavelet levels with a fixed patch size.

Fig. 8: Visual comparison on real-noise images (SIDD [38]). Obviously, EWT can reconstruct high-quality noise-free images.

Fig. 9: Visual comparison on real-noise images (RNI15 [39]). Obviously, EWT can reconstruct high-quality noise-free images.

According to these results, we find that as the wavelet level increases, the required execution time and GPU memory consumption are greatly reduced, but it cannot be ignored that the performance of the model also decreases. This is because multiple downsampling operations gradually decrease the resolution of the image, so GPU memory consumption is also greatly reduced. However, low resolution also causes the loss of local image information, making it difficult to reconstruct high-quality images. Therefore, multi-level wavelet-based models can be applied to mobile devices, which have strict restrictions on memory and execution time. In summary, the wavelet is effective for balancing model performance and resource consumption, and multi-level wavelets can be considered according to actual needs.

### _DFEB Investigations_

As the most important part of EWT, the Dual-stream Feature Extraction Block (DFEB) is designed for feature extraction while reducing the model size and shortening the running time. This benefits from the dual-branch structure of DFEB, which elegantly combines the CNN and the Transformer. In order to verify the effectiveness of this strategy, we designed a series of experiments in TABLE VII. Among them, all models use only two DFEBs and are trained with a patch size of 64 for quick verification. According to the table, we can observe that the use of convolutional layers leads to an increase in the number of parameters and FLOPs, while the use of the Transformer leads to more GPU memory consumption and longer execution time. Therefore, the model using our proposed strategy achieves intermediate results across multiple metrics. However, it is worth mentioning that our method achieves the best PSNR result and strikes a good balance between the performance, execution time, GPU memory consumption, FLOPs, and size of the model. All these results fully validate the necessity and effectiveness of combining the CNN and the Transformer. In addition, we also study the impact of the number of DFEBs on model performance, execution time, and GPU usage in TABLE VIII. In this part, we set the patch size to 64 to speed up training. Obviously, when the number of DFEBs is increased from 1 to 2, the model performance improves by 0.17dB. Continuing to increase the number of DFEBs can further improve the performance of the model, but the growth rate gradually decreases. At the same time, it cannot be ignored that as the number of DFEBs increases, the GPU memory consumption and execution time of the model greatly increase.
Therefore, to ensure the efficiency of the model, we use 4 DFEBs in the final version of EWT.

### _Comparison with SwinIR_

In the previous subsection, we compared EWT with SwinIR [19] to verify the positive effect of the wavelet on the model. Here we use more datasets and methods (Uformer [20] and Restormer [21]) to further verify the effectiveness of EWT. All models are retrained under the same dataset and training settings. In TABLE IX, we provide the number of model parameters, the GPU memory used for training, the PSNR results, and the average execution time on different test sets. As can be seen from the table, EWT achieves better results than Uformer and Restormer with less GPU memory and execution time, maintaining a good balance between performance and operating efficiency. It is worth noting that Uformer does improve efficiency through multi-level downsampling, but this seriously affects the performance of the model. This is why we introduced the wavelet transform to replace the downsampling operation, since downsampling causes a large number of features to be lost. On the whole, our EWT is a promising method for image denoising and provides a new solution for image restoration. In Figs. 10 and 11, we provide the visual comparisons with SwinIR [19] on grayscale and color images, respectively. It is worth noting that the SwinIR results used here are the denoised images reconstructed by the pre-trained model provided by the original paper, which uses DIV2K [32] (800 training images), Flickr2K [54] (2650 images), BSD500 [36] (400 training and testing images), and the Waterloo Exploration Database [55] (4744 images) for training. In contrast, our EWT only uses the 800 training images from DIV2K, which is 1/10 of the SwinIR training set. According to the results, we can clearly observe that although SwinIR achieves slightly better PSNR results than our EWT, its reconstructed denoised images are also smoother and lack texture details. In contrast, our EWT can reconstruct sharper and more accurate image edges. This is because the introduced wavelet can capture the frequency and position information of the image, which is beneficial for restoring its detailed features. Therefore, we can draw the following conclusions: (1). Compared with SwinIR, our EWT can achieve close results with less GPU memory consumption and faster inference time; (2). Compared with SwinIR, our reconstructed denoised images have richer texture details and more accurate edges. All these results further validate the effectiveness of EWT. To sum up, our method has more advantages than previous Transformer-based models and achieves a good balance between model performance and efficiency.

### _Comparison with MWCNN_

As mentioned above, EWT is the first attempt to apply the Transformer in the wavelet domain, and it was inspired by MWCNN [31]. Therefore, we give a detailed comparison with MWCNN in TABLE XI. According to the table, we can clearly observe that our EWT achieves better results on the vast majority of datasets and noise levels with fewer parameters. This fully demonstrates the effectiveness of the proposed EWT. Meanwhile, it also means that it is meaningful and feasible to combine the wavelet and the Transformer, which further promotes the development of the wavelet in SID.
### _Model Size Investigations_

Increasing the depth of the model is the easiest way to improve model performance. However, it cannot be ignored that such models [15, 16, 23] are also accompanied by a large number of parameters. In Fig. 1, we provide the performance and parameter comparisons of EWT with other SID models, including IRCNN [42], DnCNN [12], FFDNet [14], ADNet [45], BRDNet [56], MLEFGN [22], RNAN [44], RDN [15], DIDN [16], and IPT [18]. Among them, the red star represents EWT. Obviously, EWT achieves competitive results with few parameters, which strikes a good balance between the performance and size of the model. Moreover, we provide a detailed comparison with DHDN [23] and DIDN [16] in TABLE X. **Obviously, EWT achieves the best results on CBSD68 and close results on Kodak24 with only 1/14 of the parameters of DHDN and DIDN.** All these results validate that EWT is an efficient and accurate SID model. However, this does not mean that it is only suitable for SID. EWT is a general model that can be applied to other image restoration tasks, such as image super-resolution, image dehazing, and image deraining. In future work, we will further explore its effectiveness on other image restoration tasks and optimize the model according to different tasks.

## VII Conclusion

In this paper, a novel Efficient Wavelet-Transformer (EWT) is proposed for single image denoising. Specifically, we introduced the Discrete Wavelet Transform (DWT) and the Inverse Wavelet Transform (IWT) for the downsampling and upsampling operations, respectively. This method can greatly reduce the resolution of the image, thereby reducing GPU memory consumption, without causing any loss of information. Meanwhile, an efficient Multi-level Feature Aggregation Module (MFAM) is proposed to make full use of hierarchical features through local and global residual learning. In addition, a novel Dual-stream Feature Extraction Block (DFEB) is specially designed for local and global feature extraction, which combines the advantages of the CNN and the Transformer and can take into account information at different levels. Extensive experiments show that our EWT achieves the best balance between the performance, size, execution time, and GPU memory consumption of the model.
2305.11421
PastNet: Introducing Physical Inductive Biases for Spatio-temporal Video Prediction
In this paper, we investigate the challenge of spatio-temporal video prediction, which involves generating future videos based on historical data streams. Existing approaches typically utilize external information such as semantic maps to enhance video prediction, which often neglect the inherent physical knowledge embedded within videos. Furthermore, their high computational demands could impede their applications for high-resolution videos. To address these constraints, we introduce a novel approach called Physics-assisted Spatio-temporal Network (PastNet) for generating high-quality video predictions. The core of our PastNet lies in incorporating a spectral convolution operator in the Fourier domain, which efficiently introduces inductive biases from the underlying physical laws. Additionally, we employ a memory bank with the estimated intrinsic dimensionality to discretize local features during the processing of complex spatio-temporal signals, thereby reducing computational costs and facilitating efficient high-resolution video prediction. Extensive experiments on various widely-used datasets demonstrate the effectiveness and efficiency of the proposed PastNet compared with state-of-the-art methods, particularly in high-resolution scenarios. Our code is available at https://github.com/easylearningscores/PastNet.
Hao Wu, Wei Xiong, Fan Xu, Xiao Luo, Chong Chen, Xian-Sheng Hua, Haixin Wang
2023-05-19T04:16:50Z
http://arxiv.org/abs/2305.11421v2
# PastNet: Introducing Physical Inductive Biases for Spatio-temporal Video Prediction

###### Abstract.

In this paper, we investigate the challenge of spatio-temporal video prediction, which involves generating future videos based on historical data streams. Existing approaches typically utilize external information such as semantic maps to enhance video prediction, which often neglect the inherent physical knowledge embedded within videos. Furthermore, their high computational demands could impede their applications for high-resolution videos. To address these constraints, we introduce a novel approach called Physics-assisted Spatio-temporal Network (PastNet) for generating high-quality video prediction. The core of our PastNet lies in incorporating a spectral convolution operator in the Fourier domain, which efficiently introduces inductive biases from the underlying physical laws. Additionally, we employ a memory bank with the estimated intrinsic dimensionality to discretize local features during the processing of complex spatio-temporal signals, thereby reducing computational costs and facilitating efficient high-resolution video prediction. Extensive experiments on various widely-used datasets demonstrate the effectiveness and efficiency of the proposed PastNet compared with a range of state-of-the-art methods, particularly in high-resolution scenarios. Our code is available at [https://github.com/easylearningscores/PastNet](https://github.com/easylearningscores/PastNet).

Spatiotemporal predictive learning, Physical Inductive Biases, high-resolution video prediction
## 1. Introduction

Spatio-temporal video prediction aims to generate future videos based on historical frames (Wang et al., 2017; Wang et al., 2018). This problem bears considerable relevance to an array of applications, including human motion prediction (Beng et al., 2017), climate change analysis (Wang et al., 2018), and traffic flow forecasting (Wang et al., 2018). In the literature, a multitude of methods have been devised for efficacious video prediction, integrating deep neural networks to capture complex correlations within spatio-temporal signals (Chen et al., 2017). Early approaches (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) amalgamate convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to extract features from RGB frames and predict future trends, respectively. Several techniques also employ deep stochastic models to generate video predictions while taking into account diverse potential outcomes (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). More recently, a range of algorithms has sought to enhance video prediction by incorporating external information such as optical flow, semantic maps, and human posture data (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). For example, SADM (Chen et al., 2017) combines semantic maps with flow fields to supply contextual information exhibiting superior compatibility. Nevertheless, these external inputs may not be readily available in practical situations (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). In light of this consideration, a recent study (Wang et al., 2018) demonstrates that a basic CNN-based model can achieve state-of-the-art performance through end-to-end optimization.

Despite their remarkable achievements, the performance of existing approaches remains far from satisfactory for the following reasons: (1) **Neglect of Underlying Physical Principles.** Current methods typically employ deep neural networks to extract information from the spatial space and the associated visual domains (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). However, video frames could be governed by underlying physical principles, such as partial differential equations (PDEs) (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). For instance, climate videos are typically dominated by high-order equations. Consequently, it is desirable to exploit these physical principles for effective video prediction. (2) **Low Efficiency.** As a dense prediction problem (Wang et al., 2018), the scalability of neural network models is critical for high-resolution video prediction (Chen et al., 2017). Regrettably, the majority of existing models rely on complex neural networks, such as deep CNNs and Vision Transformers (Wang et al., 2018; Wang et al., 2018), which entail significant computational costs, rendering them unsuitable for large-scale high-resolution videos.

To address these concerns, this paper introduces a novel approach called Physics-assisted Spatio-temporal Network (PastNet) for high-quality video prediction. The core of our PastNet is to introduce physical inductive biases using the data itself, which holds the potential to solve the underlying PDEs.
Specifically, we introduce a convolution operator in the spectral space that initially transfers video frames into the Fourier domain, followed by efficient parallelizable channel fusion (Wang et al., 2018). Subsequently, we employ an inverse Fourier transform to generate the outputs. Furthermore, to enhance efficiency for high-resolution videos, our PastNet introduces a discrete spatio-temporal module, which not only estimates the intrinsic dimensionality but also introduces memory banks to discretize local features during the processing of complex spatio-temporal signals, replacing local features with their nearest queries from the memory bank. Finally, a deconvolution decoder is incorporated to output the predictions, which are combined with the outputs from the spectral space. Comprehensive experiments on various benchmark datasets substantiate the effectiveness and efficiency of our proposed PastNet. A glimpse of the results of the compared approaches is provided in Figure 1, where we can observe the clear superiority of our PastNet on MovingMNIST. Our main contributions can be summarized as follows:

* New Perspective. We open up a new perspective to connect spatio-temporal video prediction with physical inductive biases, thereby enhancing the model with the data itself.
* Novel Methodology. Our PastNet not only employs a convolution operator in the spectral space to incorporate physical priors but also discretizes local features using a memory bank with the estimated intrinsic dimensionality to boost efficiency for high-resolution video prediction.
* High Performance and Efficiency. Comprehensive experiments on a variety of datasets demonstrate that PastNet exhibits competitive performance in terms of both effectiveness and efficiency.

## 2. Related Work

### Spatio-temporal Video Prediction

Video prediction has emerged as an essential topic within the multimedia research community, and numerous methods have been proposed to address this challenge. Initial studies frequently examine spatio-temporal signals extracted from RGB frames (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). For instance, ConvLSTM (Wang et al., 2018) employs convolutional neural networks (CNNs) to encode spatial data, which is subsequently integrated with an LSTM model to capture temporal dependencies. PredNet (Wang et al., 2018) draws inspiration from neuroscience, enabling each layer to make local predictions for video sequences. MCnet (Wang et al., 2018) introduces multiple pathways to encode motion and content independently, which are combined into an end-to-end framework. Various approaches strive to merge video prediction with external information such as optical flow, semantic maps, and human posture data (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). As a representative unsupervised method, DVF (Wang et al., 2018) predicts missing frames using masked ones. HVP (Wang et al., 2018) treats this problem as video-to-video translation from semantic structures, inspired by hierarchical models. SADM (Chen et al., 2017) combines semantic maps and flow fields to provide more compatible contextual information. However, this external information could be inaccessible in real-world applications (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Furthermore, the efficiency and effectiveness of current solutions remain suboptimal for high-resolution videos.
To surmount these obstacles, we propose a novel method that incorporates both physical inductive biases and quantization operations for high-quality video prediction.

### Physics-Informed Machine Learning

Various machine learning problems can benefit from the incorporation of physical knowledge (Wang et al., 2018). Modern physics-informed machine learning approaches can leverage knowledge from three aspects, i.e., observational biases, inductive biases, and learning biases. Observational biases primarily arise from the data itself (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), offering a range of data augmentation strategies to expand datasets. Inductive biases guide the specific design of neural networks such as graph neural networks (Chen et al., 2017) and equivariant networks (Wang et al., 2018), which possess the property of respecting additional symmetry groups (Chen et al., 2017). Learning biases pertain to imposing distinct constraints by incorporating loss objectives during optimization (Wang et al., 2018), adhering to the principles of multi-task learning. For instance, physics-informed neural networks (PINNs) (Wang et al., 2017) typically include constraints related to derivatives from PDEs, resulting in superior performance in modeling dynamical systems and predicting molecular properties. To bolster predictive performance, our PastNet introduces inductive biases from the underlying PDEs of video frames by integrating learnable neural networks in the Fourier domain.

### Data Compression

Compressing large-scale data is essential for enhancing the efficiency of both training and inference (Wang et al., 2017). Learning to hash is a widely employed technique that accomplishes this by mapping continuous vectors to compact binary codes while preserving similarity relationships, and it has achieved extensive progress in approximate nearest neighbor search (Beng et al., 2015). Another line of work addressing this challenge is neural quantization. For instance, multi-codebook quantization (Wang et al., 2017) is akin to the process of k-means clustering, storing centroids and assignments in the codebooks. Recently, VQ-VAE (Wang et al., 2017) has integrated neural quantization with auto-encoders, using discrete codes to reconstruct input images and leading to efficient models for large-scale applications. VQ-VAE has been successfully applied in various scenarios, including video generation (Wang et al., 2017), image inpainting (Wang et al., 2017), and semantic communication (Kang et al., 2017). In this paper, our DST module is inspired by VQ-VAE, discretizing local features to improve efficiency in high-resolution video prediction.

## 3. Methodology

### Overview

This paper studies the problem of spatio-temporal video prediction; existing solutions usually neglect underlying physical principles and suffer from large computational costs. To tackle this, our proposed PastNet introduces both a _Fourier-based physics-guided (FPG) module_ and a _discrete spatio-temporal (DST) module_ for effective and efficient video prediction, as shown in Figure 2. In particular, our FPG module first divides each frame into non-overlapping patches and then introduces Fourier-based a _priori_ spectral filters that incorporate physical inductive biases. Then, our DST module not only estimates the intrinsic dimensionality but also introduces a discrete memory bank to effectively and efficiently capture spatio-temporal signals.
In the following, we give the problem definition and a detailed description of these key components of the proposed PastNet.

**Problem Definition.** To enhance clarity, we offer a comprehensive explanation of the relevant concepts. Assume a video trajectory represents a dynamic physical system in the temporal domain, consisting of \(T\) time steps, denoted as \(\mathbf{V}_{1:T}=\{\mathbf{V}_{1},\cdots,\mathbf{V}_{T}\}\). Each snapshot captures \(C\) color space measurements over time at all locations within a spatial region, represented by an \(H\times W\) grid. From a spatial viewpoint, the observation of these \(C\) measurements at any specific time step \(i\) can be depicted as a tensor, \(\mathbf{V}_{i}\in\mathbb{R}^{C\times H\times W}\). Our objective is to leverage spatio-temporal data to deduce underlying physical priors, integrate feature representation learning in both the spatial and temporal dimensions, and predict the most probable future sequence of length \(T_{f}\), denoted as \(\mathbf{V}_{T+1:T+T_{f}}^{\prime}=\{\mathbf{V}_{T+1},\cdots,\mathbf{V}_{T+T_{f}}\}\).

### Fourier-based Physics-Guided (FPG) Module

Our primary insight is to utilize prior physical signals to achieve effective spatio-temporal video prediction. In fact, spatio-temporal data is often subject to complex, high-dimensional non-linear physical equations (e.g., the Navier-Stokes equation), which are challenging to capture. Inspired by previous studies (Wang et al., 2017; Wang et al., 2017), we employ spectral methods and develop an algorithm that seamlessly combines trainable neural networks with Fourier-based a _priori_ spectral filters. It has been shown that features transformed into the frequency domain through the Fourier transform correspond precisely with the coefficients of the underlying physical partial differential equation (Wang et al., 2017; Wang et al., 2017). As a result, we can utilize neural networks to approximate the analytical solution of the latent PDE. To be specific, we initially divide video frames into non-overlapping patches with initialized embeddings and subsequently transform them into a spectral space. The features within the frequency domain are fused, followed by an inverse transformation returning them to the spatial domain. This innovation supports a physical inductive bias derived from the data, demonstrating significant potential for solving PDEs. We now introduce our FPG module in detail.

**Embedding Initialization.** Given the input video \(\mathbf{V}\in\mathbb{R}^{T\times C\times H\times W}\), we extract high-level learnable representations following ViT (Wang et al., 2017). In particular, we divide each frame into \(N=HW/hw\) non-overlapping patches of size \(h\times w\) and then project them into patch embeddings \(\mathbf{E}^{pat}\in\mathbb{R}^{T\times h\times w\times d}\), where \(d\) denotes the embedding dimension. Position embeddings \(E^{pos}\in\mathbb{R}^{h\times w\times d}\) are also applied to get the initial token representation matrix \(\hat{\mathbf{V}}\in\mathbb{R}^{T\times h\times w\times d}\). In formulation, \[\hat{\mathbf{V}}_{t}=\mathbf{E}_{t}^{pat}+\mathbf{E}^{pos}, \tag{1}\] where \(\hat{\mathbf{V}}_{t}=\hat{\mathbf{V}}[t,:]\in\mathbb{R}^{h\times w\times d}\) makes up the matrix \(\hat{\mathbf{V}}\).
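A minimal sketch of the patch and position embedding in Eq. (1) is given below; the strided convolution, patch size, and grid dimensions are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Eq. (1): split each frame into non-overlapping patches with a strided
    convolution and add a learnable position embedding E^{pos}."""
    def __init__(self, in_ch=3, dim=128, patch=4, h=16, w=16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(h, w, dim))   # E^{pos}, (h, w, d)

    def forward(self, v):                                 # v: (T, C, H, W)
        e = self.proj(v).permute(0, 2, 3, 1)              # E^{pat}: (T, h, w, d)
        return e + self.pos                               # \hat{V} in Eq. (1)
```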
Firstly, a 2D fast Fourier transform (FFT) is leveraged to generate the frequency domain token \(\mathcal{K}_{t}\in\mathbb{R}^{h\times w\times d}\) at time step \(t\):
\[\mathcal{K}_{t}(u,v)=\sum_{x=0}^{h-1}\sum_{y=0}^{w-1}\hat{\mathbf{V}}_{t}(x,y)e^{-2\pi i(\frac{u}{h}x+\frac{v}{w}y)}, \tag{2}\]
where \(i\) is the imaginary unit, and \(u\) and \(v\) are the indices of rows and columns in the frequency domain, respectively. Secondly, the complex-valued token \(\mathcal{K}=\mathcal{K}_{1:T}\) is split into its real and imaginary parts, which are concatenated along the channel dimension. To enhance the integration of feature information, we utilize token mixing across different channels, which allows richer Fourier mode representations to emerge through greater fusion of channel-wise signals. It is implemented with separate MLPs for the real and imaginary parts as follows:
\[\Re\hat{\mathcal{K}}_{t}(u,v)=\mathrm{MLP}_{\theta_{1}}(\Re\mathcal{K}_{t}(u,v)),\quad\Im\hat{\mathcal{K}}_{t}(u,v)=\mathrm{MLP}_{\theta_{2}}(\Im\mathcal{K}_{t}(u,v)), \tag{3}\]
where \(\Re\) and \(\Im\) denote the operators that take the real part and the imaginary part, respectively. Token mixing blends different modes across the Fourier domain; since the Fourier domain possesses global attributes, it further explores long-range relationships among the underlying physical features. Lastly, the mixed tokens are transformed back to the spatial domain using the 2D inverse Fourier transform to obtain the output of the spectral filter layers \(\mathbf{Y}\in\mathbb{R}^{T\times h\times w\times d}\) as follows:
\[\begin{split} Y_{t}(x,y)&=\frac{1}{hw}\sum_{u=0}^{h-1}\sum_{v=0}^{\frac{w}{2}}\hat{\mathcal{K}}_{t}(u,v)e^{2\pi i(\frac{u}{h}x+\frac{v}{w}y)}\\ &+\frac{1}{hw}\sum_{u=0}^{h-1}\sum_{v=\frac{w}{2}+1}^{w-1}\hat{\mathcal{K}}_{t}(u,v)e^{2\pi i(\frac{u}{h}x+\frac{v-w}{w}y)},\end{split} \tag{4}\]
where \(Y_{t}=\mathbf{Y}[t,:,:]\in\mathbb{R}^{h\times w\times d}\) makes up the matrix \(\mathbf{Y}\).

**Spatial Extraction.** To better extract the latent spatial information, we introduce classic convolutional neural networks as a supplement to the a _priori_ spectral filters, which can be formulated as follows:
\[\hat{Y}_{t}^{FPG}=\text{Tanh}(\text{Conv2d}(\text{MLP}(Y_{t})+Y_{t})). \tag{5}\]
The convolutional layer is known for its ability to extract features in the spatial domain through its local receptive fields. By leveraging these learned features as filters in the frequency domain, the convolutional layer can effectively mine potential physical information from the input data. The extracted physical information can significantly enhance the performance of spatio-temporal prediction. Overall, the Fourier-based physics-guided module transforms latent physical information from the spatial domain into the frequency domain using Fourier transforms, and learns the analytical solutions of PDEs using neural networks. This approach allows PastNet to handle complex geometries and high-dimensional problems.

### Discrete Spatio-temporal (DST) Module

Our discrete spatio-temporal (DST) module aims to explore spatio-temporal signals in video frames in an efficient manner. To achieve this, we not only estimate the intrinsic dimensionality of the hidden space, but also introduce a memory bank for vector quantization. In particular, the module consists of five components, i.e., _encoder_, _intrinsic dimensionality estimation_, _discrete quantization_, _dynamic propagation_, and _decoder_. We now introduce them in detail.
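Before detailing the DST components, the spectral filtering of Eqs. (2)-(4) can be made concrete with a short sketch. The code below is our own minimal PyTorch rendering, not the authors' released implementation; the hidden width and tensor sizes are illustrative assumptions.

```python
# Minimal sketch of one Fourier-based spectral filter layer (Eqs. (2)-(4)).
# Tokens have shape (T, h, w, d); the FFT runs over the spatial grid (h, w).
import torch
import torch.nn as nn


class SpectralFilterLayer(nn.Module):
    def __init__(self, d: int, hidden: int = 128):
        super().__init__()
        # Separate MLPs for the real and imaginary parts, as in Eq. (3).
        self.mlp_re = nn.Sequential(nn.Linear(d, hidden), nn.GELU(), nn.Linear(hidden, d))
        self.mlp_im = nn.Sequential(nn.Linear(d, hidden), nn.GELU(), nn.Linear(hidden, d))

    def forward(self, v_hat: torch.Tensor) -> torch.Tensor:
        # 2D FFT over the token grid, Eq. (2).
        k = torch.fft.fft2(v_hat, dim=(1, 2))
        # Channel-wise token mixing of the real and imaginary parts, Eq. (3).
        k_mixed = torch.complex(self.mlp_re(k.real), self.mlp_im(k.imag))
        # 2D inverse FFT back to the spatial domain, Eq. (4).
        return torch.fft.ifft2(k_mixed, dim=(1, 2)).real


tokens = torch.randn(10, 8, 8, 64)       # (T, h, w, d), illustrative sizes
out = SpectralFilterLayer(d=64)(tokens)  # same shape as the input
print(out.shape)                          # torch.Size([10, 8, 8, 64])
```

The design point is that the learnable part (the two MLPs) acts on globally supported Fourier modes, which is what lets the layer mimic the coefficients of a latent PDE rather than purely local filters.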
Figure 2. An overview of the proposed PastNet, which consists of a Fourier-based Physics-Guided (FPG) module and a Discrete Spatio-temporal (DST) module. The FPG module first divides video frames into non-overlapping patches and introduces a Fourier-based spectral filter with physical inductive biases; it also extracts spatial signals with convolutional neural networks. The DST module is an encoder-decoder architecture, which introduces a memory bank to discretize local features at the estimated intrinsic dimensionality.

**Encoder.** The encoder contains \(K_{e}\) ConvNormReLU blocks to capture spatial signals. Given \(\mathbf{Z}^{(0)}=\mathbf{V}\) and an activation function \(\sigma\), we have:
\[\mathbf{Z}^{(i)}=\sigma\left(\text{LayerNorm}\left(\text{Conv2d}\left(\mathbf{Z}^{(i-1)}\right)\right)\right),\quad 1\leq i\leq K_{e}, \tag{6}\]
where \(\mathbf{Z}^{(i-1)}\) and \(\mathbf{Z}^{(i)}\) denote the input and output of the \(i\)-th block with the shapes \((T,C,H,W)\) and \((T,\hat{C},\hat{H},\hat{W})\), respectively.

**Intrinsic Dimensionality Estimation.** How to decide the dimensionality of the hidden space remains a challenging problem (Dosovitskiy et al., 2017). In particular, too large a dimensionality brings in redundant computation and potential overfitting, while too small a dimensionality underfits the data. Here, we turn to the Levina-Bickel algorithm (Levina and Bickel, 2013) to acquire the intrinsic dimensionality. In particular, we start with a large dimensionality, map \(\mathbf{Z}^{(K_{e})}\) back to the input using a decoder, and minimize the reconstruction loss objective \(L_{rec}=||\mathbf{V}-\hat{\mathbf{V}}||\), where \(\hat{\mathbf{V}}\) is the reconstructed frame. Then, we identify the \(R\) nearest neighbours of each vector \(\mathbf{h}_{j}\in\mathbf{Z}^{(K_{e})}\), i.e., \(\{\mathbf{h}_{j_{1}},\cdots,\mathbf{h}_{j_{R}}\}\), and calculate the local estimator for the vector as:
\[D_{j}=\frac{1}{R-2}\sum_{m=1}^{R-1}\log\frac{d(\mathbf{h}_{j},\mathbf{h}_{j_{R}})}{d(\mathbf{h}_{j},\mathbf{h}_{j_{m}})}, \tag{7}\]
where \(d(\cdot,\cdot)\) denotes the cosine distance between two vectors. Finally, we take the average of all local estimators to generate the final estimator:
\[D=\Big{\lceil}\frac{1}{J}\sum_{j=1}^{J}D_{j}\Big{\rceil}, \tag{8}\]
where \(J\) is the number of vectors in \(\mathbf{Z}^{(K_{e})}\) and \(\lceil\cdot\rceil\) denotes the ceiling function. After generating the final estimated optimal dimension, we use \(D\) as the hidden embedding dimension; a runnable sketch of this estimator is given below.

**Discrete Quantization.** Previous methods usually process video features directly using spatio-temporal convolution modules. However, directly feeding video features into these modules brings in a huge computational cost. Therefore, we introduce a discrete memory bank to discretize feature vectors, constructed with a variational autoencoder (Wang et al., 2017; Wang et al., 2018). In this way, computational costs can be largely reduced to fit large-scale video prediction. In detail, we initialize the memory bank with the variational autoencoder. Here, each embedding vector \(\mathbf{z}\) in \(\mathbf{Z}^{(K_{e})}\), the output of the encoder, is mapped to the nearest point in the memory bank. The number of embeddings in the memory is set to \(D^{2}\) empirically.
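The local estimator in Eqs. (7)-(8) is straightforward to prototype. The sketch below is our own minimal NumPy rendering, using the cosine distance stated above; the neighbourhood size \(R\) and the random input matrix are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the Levina-Bickel intrinsic-dimensionality estimator,
# Eqs. (7)-(8), with cosine distance d(h_i, h_j) = 1 - cos(h_i, h_j).
import numpy as np


def intrinsic_dimension(H: np.ndarray, R: int = 10) -> int:
    """H: (J, d) feature vectors; returns the ceiling of the mean local MLE."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    dist = 1.0 - Hn @ Hn.T
    np.fill_diagonal(dist, np.inf)            # exclude self-distances

    estimates = []
    for j in range(H.shape[0]):
        nn = np.sort(dist[j])[:R]             # R nearest-neighbour distances
        # Eq. (7): D_j = (1/(R-2)) * sum_{m=1}^{R-1} log(d_R / d_m).
        D_j = np.log(nn[R - 1] / nn[: R - 1]).sum() / (R - 2)
        estimates.append(D_j)
    # Eq. (8): ceiling of the average over all local estimators.
    return int(np.ceil(np.mean(estimates)))


Z = np.random.randn(500, 32)                  # stand-in for encoder features
print(intrinsic_dimension(Z))
```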
Given the memory bank with \(D^{2}\) embedding vectors, \(\{\mathbf{m}_{1},\cdots,\mathbf{m}_{D^{2}}\}\), we construct a mapping \(VQ\):
\[VQ(\mathbf{z})=\mathbf{m}_{\hat{k}},\quad\text{where}\quad\hat{k}=\operatorname{argmin}_{k}\|\mathbf{z}-\mathbf{m}_{k}\|_{2}\,, \tag{9}\]
where each quantized embedding is concatenated to generate the matrix \(\tilde{\mathbf{Z}}=VQ(\mathbf{Z}^{(K_{e})})\). The mapping connects continuous vectors with fixed vectors in the memory bank, saving computational cost. Then, to minimize the information loss, we map the concatenated matrix back to the input using a new decoder, i.e., \(\tilde{\mathbf{V}}=Dec(\tilde{\mathbf{Z}})\). The whole framework is optimized using the following objective:
\[\hat{\mathcal{L}}=\|\mathbf{V}-\tilde{\mathbf{V}}\|+\left\|\text{sg}\left[\tilde{\mathbf{Z}}\right]-\mathbf{Z}^{(K_{e})}\right\|_{2}^{2}+\beta\left\|\tilde{\mathbf{Z}}-\text{sg}[\mathbf{Z}^{(K_{e})}]\right\|_{2}^{2}, \tag{10}\]
where \(\beta\) denotes a parameter to balance these objectives and \(\text{sg}(\cdot)\) is the stop-gradient operator that cuts off the gradient computation during back-propagation. Here, the first term denotes the reconstruction loss, and the last two terms minimize the quantization loss between continuous embedding vectors and their nearest entries in the memory bank.

**Dynamic Propagation.** After training the variational autoencoder, we remove the decoder, and then feed the quantized vectors into temporal convolutions over \(T\times D\) channels. In particular, each temporal convolution block involves a bottleneck followed by a group convolution operator:
\[\mathbf{Z}^{(i)}=\text{GroupConv2d}(\text{Bottleneck}(\mathbf{Z}^{(i-1)})),\quad K_{e}<i\leq K_{e}+K_{t}, \tag{11}\]
where Bottleneck denotes a 2D convolutional layer with a \(1\times 1\) kernel and \(K_{t}\) is the number of blocks. The shapes of the input \(\mathbf{Z}^{(i-1)}\) and output \(\mathbf{Z}^{(i)}\) are \((T,D,\hat{H},\hat{W})\) and \((\hat{T},D,\hat{H},\hat{W})\), respectively.

**Decoder.** Finally, our decoder contains \(K_{d}\) unConvNormReLU blocks to output the final predictions \(\hat{\mathbf{Y}}^{DST}=\mathbf{Z}^{(K_{e}+K_{t}+K_{d})}\). In formulation, we have:
\[\begin{split}\mathbf{Z}^{(i)}=&\sigma\left(\text{LayerNorm}\left(\text{unConv2d}\left(\mathbf{Z}^{(i-1)}\right)\right)\right),\\ &K_{e}+K_{t}+1\leq i\leq K_{e}+K_{t}+K_{d},\end{split} \tag{12}\]
where unConv2d is implemented using ConvTranspose2d (Kang et al., 2017). The shapes of the input \(\mathbf{Z}^{(i-1)}\) and output \(\mathbf{Z}^{(i)}\) are \((\hat{T},D,\hat{H},\hat{W})\) and \((T,C,H,W)\), respectively.

### Framework Summarization

Finally, we combine the outputs of both the FPG and DST modules, which results in the final prediction:
\[\hat{\mathbf{Y}}^{final}=\hat{\mathbf{Y}}^{\text{FPG}}\oplus\hat{\mathbf{Y}}^{DST}, \tag{13}\]
where \(\oplus\) represents element-wise addition. The whole framework is optimized by minimizing the vanilla MSE loss between the predictions and the target frames.

## 4. Experiment

### Experimental Setups

**Datasets.** In this paper, the datasets studied can be classified into two categories from the perspective of PDE modeling or physics equation description: **Non-natural Phenomenon** datasets and **Natural Phenomenon** datasets. The former includes the **MovingMNIST** (Wang et al., 2017), **TrafficBJ** (Wang et al., 2017), and **KTH** datasets (Wang et al., 2017). Although they do not correspond to natural phenomena, the dynamic evolutionary processes expressed in these datasets can still be described by PDEs.
The latter includes the **Storm EVent ImagRy (SEVIR)** (Wang et al., 2017), **Reaction Diffusion System (RDS)**, **Elastic Double Pendulum System (EDPS)**, and **Fire System (FS)** datasets (Dai et al., 2018), which correspond to natural phenomena such as meteorology, chemical reactions, mechanical vibrations, and fire. These datasets are often used in the study of PDE modeling and the description of physics equations to better understand and predict the evolution of natural phenomena. We conduct experiments on seven datasets for evaluation; the statistics are summarized in Table 1.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Dataset** & \(N\_train\) & \(N\_test\) & \((C,H,W)\) & \(T\) & \(K\) \\ \hline MovingMNIST & 9000 & 1000 & \((1,64,64)\) & 10 & 10 \\ TrafficBJ & 19627 & 1334 & \((2,32,32)\) & 4 & 4 \\ KTH & 108717 & 4086 & \((1,128,128)\) & 10 & 20 \\ SEVIR & 4158 & 500 & \((1,384,384)\) & 10 & 10 \\ RDS & 2000 & 500 & \((3,128,128)\) & 2 & 2 \\ EDPS & 2000 & 500 & \((3,128,128)\) & 2 & 2 \\ FS & 2000 & 500 & \((3,128,128)\) & 2 & 2 \\ \hline \hline \end{tabular} \end{table} Table 1. Statistics of the datasets used in the experiments. The numbers of training and test samples are \(N\_train\) and \(N\_test\), respectively; the size of each image frame is \((C,H,W)\), and the lengths of the input and prediction sequences are \(T\) and \(K\), respectively.

**Evaluation metrics.** We adopt Mean Squared Error (MSE), Mean Absolute Error (MAE), Multi-Scale Structural Similarity (MS-SSIM), Peak Signal-to-Noise Ratio (PSNR), and Learned Perceptual Image Patch Similarity (LPIPS) to evaluate the quality of the predictions. Lower values of MSE, MAE, and LPIPS and higher values of MS-SSIM and PSNR imply better performance.

**Implementation details.** The PastNet model features a consistent backbone architecture for all datasets, in which the FPG component is composed of an 8-layer _Fourier-based Spectral Filter_. The DST encoder incorporates 3 _convolution blocks_ and 3 _residual blocks_, while the decoder utilizes 1 _convolution block_, 4 _residual blocks_, and 2 _deconvolution blocks_. All experiments in this paper were conducted on an NVIDIA A100-PCIE-40GB.

### Performance Comparison

We conduct a thorough evaluation of PastNet by comparing it with several baseline models on both non-natural and natural phenomena datasets. This includes competitive RNN architectures such as ConvLSTM (Wang et al., 2017), PredRNN-V1-2 (Wang et al., 2017), E3D LSTM (Wang et al., 2017), SA-ConvLSTM (Wang et al., 2017), PhyDnet (Wang et al., 2017), and MIM (Zhang et al., 2017). We also evaluate the state-of-the-art CNN architecture SimVP (Liu et al., 2017) for non-natural phenomena datasets. For natural phenomenon datasets, we evaluate models that incorporate physical information, such as DLP (Wang et al., 2017), which uses an advection-diffusion flow model, achieves state-of-the-art performance on the SST dataset, and is commonly used for generic physical processes. We also evaluate NLDM (Wang et al., 2017), which combines manifold learning theory and autoencoders to discover fundamental variables hidden in physical experimental data for spatio-temporal prediction, as well as PhyDnet and SimVP. Our evaluation is meticulous to ensure the validity of the results. Table 2 demonstrates that PastNet outperforms other models on non-natural phenomena datasets.
Specifically, PastNet achieves the best MSE and MAE metrics on MovingMNIST, with values that are 20% and 10% lower than SimVP, respectively. Moreover, PastNet also achieves higher MS-SSIM and PSNR metrics than other models, indicating better prediction ability for dynamic changes in videos. On TrafficBJ, while PhyDnet has the best MSE and PSNR performance, PastNet comes in second place and achieves top spot for MAE and MS-SSIM. On KTH, PastNet achieves the best MSE and MAE performance, with values that are 33.6% and 35.8% lower than the second-place MIM model, respectively. Overall, PastNet Figure 4. Example of prediction results on the TrafficBJ dataset. **Top: input Traffic flow; Middle: future real Traffic flow; Bottom: PastNet predicted Traffic flow.** Figure 5. Example of prediction results on the SEVIR dataset. **Top: input weather sequence; Middle: future real weather sequence; Bottom: PastNet predicted weather sequence.** Figure 3. Example of prediction results for the Moving MNIST dataset. **Top: input motion digital sequence; Middle: future real motion digital sequence; Bottom: PastNet predicted motion digital sequence.** shows significant advantages over other baseline methods in all evaluation metrics. Results in Table 3 show that PastNet achieves the best performance in most evaluation metrics for all natural phenomena datasets. On the SEVIR dataset, PastNet achieves the lowest MSE and MAE scores, significantly better than other methods such as DLP, PhyDnet, and NLDM. On the RDS dataset, PastNet achieves the lowest MSE \(\times\) 100 score, significantly better than other methods. On the EDPS dataset, PastNet achieves the highest MS-SSIM score, significantly better than SimVP. Overall, these results demonstrate that PastNet is highly effective in accurately predicting physical quantities and preserving structural information in various physical datasets, outperforming other state-of-the-art methods such as DLP, PhyDnet, NLDM, and SimVP. We present the qualitative prediction results of PastNet for various datasets, highlighting its capability to accurately predict future images. Our findings demonstrate that PastNet can accurately predict numerical motions in MovingMNIST as depicted in Figure 3. Additionally, Figure 4 shows that PastNet reliably predicts traffic flow changes in the TrafficBJ dataset. Furthermore, PastNet performs well in the SEVIR weather dataset, as shown in Figure 5, where it accurately predicts high-resolution satellite images and infers future weather changes. The visualization results for other datasets are available in the Appendix A. ### Ablation Study In this section, we demonstrate PastNet's competitive performance on a wide range of datasets through a series of ablation studies. Table 4 and Figure 6 present the quantitative and qualitative results of these studies for different model structures, respectively. Specifically, **PastNet w/o FPG** removes the FPG module from the PastNet model. **PastNet w/o FPG + UNet** removes the FPG module from the PastNet model and uses UNet as an alternative module. **PastNet w/o FPG + ViT** removes the FPG module from the PastNet model and uses ViT as an alternative module. **PastNet w/o FPG + SwinT** removes the FPG module from the PastNet model and uses SwinT as an alternative module. **PastNet w/o DST** removes the DST module from the PastNet model. **PastNet** is the base PastNet model, which includes both the DST and FPG modules. 
* PastNet outperforms other models with the lowest MSE, MAE, and highest SSIM, indicating its superior performance in video prediction. * The inclusion of the DST module in PastNet improves the model's training speed, making it a more efficient option. \begin{table} \begin{tabular}{l|c c c c|c c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{**MovingMNIST**} & \multicolumn{3}{c|}{**TrafficBJ**} & \multicolumn{3}{c}{**KTH**} \\ \cline{2-13} & MSE & MAE & MS-SSIM & PSNR & MSE \(\times\) 100 & MAE & MS-SSIM & PSNR & MSE & MAE / 10 & MS-SSIM & PSNR \\ \hline ConvLSTM & 105.41 & 188.96 & 0.7498 & 26.27 & 48.45 & 18.921 & 0.9782 & 37.72 & 126.15 & 128.32 & 0.7123 & 23.58 \\ PredRNN-V1 & 81.87 & 147.31 & 0.8697 & 29.99 & 46.49 & 17.784 & 0.9789 & 38.11 & 101.52 & 99.21 & 0.8292 & 25.55 \\ E3D LSTM & 67.25 & 136.99 & 0.8907 & 31.02 & 44.89 & 17.219 & 0.9731 & 38.71 & 86.17 & 85.55 & 0.8663 & 27.92 \\ SA-ConvLSTM & 79.76 & 142.21 & 0.8692 & 30.21 & 43.99 & 17.093 & 0.9672 & 38.98 & 89.35 & 87.20 & 0.8372 & 27.25 \\ PhyDnet & 58.22 & 145.76 & 0.9012 & 32.13 & **42.21** & 16.975 & 0.9821 & **39.94** & 66.95 & 56.73 & 0.8831 & 28.02 \\ MIM & 66.28 & 119.87 & 0.9017 & 31.12 & 43.98 & 16.645 & 0.9712 & 38.99 & 56.59 & 54.86 & 0.8666 & 28.97 \\ PredRNN-V2 & 48.42 & 126.18 & 0.8912 & 33.19 & 43.89 & 16.982 & 0.9723 & 39.02 & 51.15 & 50.64 & 0.8919 & 29.92 \\ SimVP & 32.22 & 90.12 & 0.9371 & 37.17 & 43.32 & 16.897 & 0.9822 & 39.29 & 40.99 & 43.39 & 0.9061 & 33.72 \\ PastNet & **31.77** & **89.33** & **0.9447** & **38.38** & 42.93 & **16.405** & **0.9876** & 39.42 & **33.83** & **35.26** & **0.9279** & **35.28** \\ \hline \hline \end{tabular} \end{table} Table 2. Quantitative prediction results of PastNet compared to Baselines on various Non-natural Phenomenon datasets. The evaluation metrics selected for this study are MSE \(\downarrow\), MAE \(\downarrow\), MS-SSIM \(\uparrow\), and PSNR \(\uparrow\), with a lower value (\(\downarrow\)) indicating better performance for MSE and MAE, and a higher value (\(\uparrow\)) indicating better performance for MS-SSIM and PSNR. The best result is indicated in boldface, while the second-best result is indicated with an underline in the table caption. Figure 7. PastNet outperforms other models in terms of efficiency and convergence rate on the MovingMNIST dataset. Specifically, it achieves the lowest LPIPS score in the shortest training time, as shown on the _Left_ side of the figure. In addition, it achieves the highest MS-SSIM and PSNR scores within the same epochs, as depicted in the Middle and _Right_ sides of the figure, respectively. * The FPG module is crucial for improving PastNet's performance on the Natural Phenomenon dataset, with Unet as a potential alternative. This demonstrates PastNet's versatility and ability to adapt to different datasets and tasks. ### Efficiency and Convergence Rate Analysis Figure 7 on the left side clearly demonstrates the advantages of PastNet in terms of both time and LPIPS metrics. Notably, PastNet's training time for 100 epochs is only 4.16 hours, considerably faster than other models. Additionally, it achieves outstanding results in terms of LPIPS scores, indicating that PastNet can complete training more efficiently in a shorter amount of time while generating higher-quality images. The middle and right sides of Figure 7 illustrate the rapid improvement of PastNet in SSIM and PSNR metrics. 
Within approximately 60 epochs, PastNet achieves an SSIM metric of around 0.92, while other models remain below 0.85 at the same point. After 100 epochs, PastNet reaches a PSNR metric of approximately 38, which is far superior to other models. These results highlight PastNet's superior convergence rate during training, enabling it to rapidly produce high-quality image results.

\begin{table} \begin{tabular}{l|l|l|l|l} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**MSE**} & \multirow{2}{*}{**MAE**} & \multirow{2}{*}{**SSIM**} & \multirow{2}{*}{**Time (h)**} \\ \cline{3-3} \cline{5-5} & & & & \\ \hline PastNet w/o FPG & 133.8 & 103.6 & 0.67 & 3.19 \\ PastNet w/o FPG + UNet & 115.4 & 99.35 & 0.61 & 16.54 \\ PastNet w/o FPG + ViT & 287.3 & 139.5 & 0.49 & 23.87 \\ PastNet w/o FPG + SwinT & 321.8 & 158.7 & 0.54 & 24.12 \\ PastNet w/o DST & 192.3 & 118.5 & 0.64 & 4.92 \\ \hline PastNet & **73.21** & **52.31** & **0.74** & **4.16** \\ \hline \hline \end{tabular} \end{table} Table 4. Experimental results of the ablation with different model structures on the SEVIR dataset. The evaluation metrics selected for this study are MSE \(\downarrow\), MAE \(\downarrow\), MS-SSIM \(\uparrow\), and Time(h) \(\downarrow\), with a lower value (\(\downarrow\)) indicating better performance for MSE, MAE, and Time(h), and a higher value (\(\uparrow\)) indicating better performance for MS-SSIM.

\begin{table} \begin{tabular}{l|l l l l l} \hline \hline Dataset & Model & MSE & MAE & MS-SSIM & PSNR \\ \hline \multirow{4}{*}{SEVIR} & DLP & 300.42 & 140.82 & 0.6772 & 36.59 \\ & PhyDnet & 97.70 & 72.22 & 0.7137 & 43.03 \\ & NLDM & 295.93 & 170.73 & 0.6982 & 36.71 \\ & SimVP & 68.68 & 47.71 & 0.7231 & 49.09 \\ & PastNet & **66.13** & **44.84** & **0.7568** & **49.78** \\ \hline \multirow{4}{*}{RDS} & DLP & 1.38 & 2.08 & 0.9763 & 44.52 \\ & PhyDnet & 0.51 & 1.25 & 0.9874 & 47.01 \\ & NLDM & 1.03 & 1.78 & 0.9594 & 46.56 \\ & SimVP & 0.15 & 0.67 & 0.9896 & 51.06 \\ & PastNet & **0.13** & **0.63** & **0.9997** & **51.79** \\ \hline \multirow{4}{*}{EDPS} & DLP & 3.53 & 281.82 & 0.9337 & 42.53 \\ & PhyDnet & 1.08 & 167.23 & 0.9983 & 45.92 \\ & NLDM & 2.51 & 237.43 & 0.9455 & 42.83 \\ & SimVP & **0.93** & **150.11** & 0.9882 & **46.25** \\ & PastNet & 0.94 & 168.53 & **0.9991** & 45.98 \\ \hline \multirow{4}{*}{FS} & DLP & 7.78 & 426.12 & 0.9266 & 38.38 \\ & PhyDnet & 4.41 & 327.32 & 0.9423 & 40.47 \\ \cline{1-1} & NLDM & 7.21 & 411.14 & 0.9392 & 38.62 \\ \cline{1-1} & SimVP & 3.01 & 261.19 & 0.9647 & 42.03 \\ \cline{1-1} & PastNet & **2.19** & **222.08** & **0.9861** & **43.24** \\ \hline \hline \end{tabular} \end{table} Table 3. Quantitative prediction results of PastNet compared to baselines on various Natural Phenomenon datasets. The evaluation metrics selected for this study are MSE \(\downarrow\), MAE \(\downarrow\), MS-SSIM \(\uparrow\), and PSNR \(\uparrow\), with a lower value (\(\downarrow\)) indicating better performance for MSE and MAE, and a higher value (\(\uparrow\)) indicating better performance for MS-SSIM and PSNR. The best result is indicated in boldface, while the second-best result is underlined.

Figure 8. (a) presents qualitative visualization results for NSE, while (b) displays the qualitative visualization results for SWE. Each panel shows examples of the input PDE flow, examples of the future (ground-truth) flow, the predicted flow, and the error between the predicted and true results, which is measured using the relative L2 error metric.
In summary, PastNet is an accurate and efficient model for video prediction, with fast convergence and high-quality results. It represents a promising direction for future research and practical applications.

### Potential for Solving PDE Equations

To investigate the potential of PastNet for solving physical problems, we consider the Navier-Stokes equations, which represent viscous incompressible fluids in the form of vorticity on the unit torus, and the Shallow-Water equations, which are obtained from the general Navier-Stokes equations. We focus on their 2D forms, and more detailed descriptions of them are given in Appendix B. We use PastNet to solve the Navier-Stokes equations with viscosity \(\nu=10^{-3}\) and a resolution of \(64\times 64\) for training and testing. We use the flow field at 10 input time steps to predict the flow field at 10 future time steps (\(10_{timesteps}\mapsto 10_{timesteps}\)). For the Shallow-Water equations, we fix the resolution to \(128\times 128\) for training and testing and use the flow field at 50 input time steps to predict the flow field at 50 future time steps (\(50_{timesteps}\mapsto 50_{timesteps}\)). We train for 200 epochs and record the metrics MSE, MAE, and the time per epoch. The quantitative and qualitative results are presented in Table 5 and Figure 8, respectively. Specifically, the relative L2 errors are calculated to be 0.0072 and 0.0007 for NSE and SWE, respectively. The relative L2 error is given by \(\text{Relative }L2\text{ Error}=\frac{\|\mathbf{y}-\hat{\mathbf{y}}\|_{2}}{\|\mathbf{y}-\bar{\mathbf{y}}\|_{2}}\), where \(\mathbf{y}\) represents the true values, \(\hat{\mathbf{y}}\) the predicted values, \(\bar{\mathbf{y}}\) the mean of the true values, and \(\|\cdot\|_{2}\) the L2 norm. These results suggest that the model is able to make accurate predictions, with a particularly low relative L2 error for SWE. The results in Table 5 indicate that PastNet has potential for solving PDE equations, especially the SWE equation, as it achieved lower MSE and MAE values for the SWE equation than for the NSE equation, while the time per epoch was similar for both equations.

## 5. Conclusion

In this paper, we investigate the problem of spatio-temporal video prediction and propose a novel method named PastNet to tackle it. The key insight of our PastNet is to incorporate a spectral convolution operator in the Fourier domain, which effectively introduces inductive biases from the underlying physical laws. Moreover, we introduce both local feature discretization and intrinsic dimensionality estimation to reduce the computational costs while retaining accuracy. Extensive experiments on a range of popular datasets show that the proposed PastNet is more effective and efficient than state-of-the-art techniques. In future work, we plan to develop more effective video prediction techniques by introducing high-level physical domain knowledge in various fields.
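As a small reproducibility aid, the relative L2 error used in the PDE experiments above can be computed as follows. This is a sketch under the definition just given (denominator centered at the mean of the true values); the array shapes are illustrative.

```python
# Relative L2 error: ||y - y_hat||_2 / ||y - y_bar||_2,
# where y_bar is the mean of the true values.
import numpy as np


def relative_l2_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    y_bar = y_true.mean()
    return float(np.linalg.norm(y_true - y_pred) / np.linalg.norm(y_true - y_bar))


y = np.random.rand(10, 64, 64)                       # stand-in flow field
print(relative_l2_error(y, y + 0.01 * np.random.randn(*y.shape)))
```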
2307.00355
Comparing Mobile Testing Tools Using Documentary Analysis
Due to the high demand for mobile applications, given the exponential growth of users of this type of technology, testing professionals are frequently required to invest time in studying testing tools, in particular, because nowadays, several different tools are available. A variety of tools makes it difficult for testing professionals to choose the one that best fits their goals and supports them in their work. In this sense, we conducted a comparative analysis among five open-source tools for mobile testing: Appium, Robotium, Espresso, Frank, and EarlGrey. We used the documentary analysis method to explore the official documentation of each above-cited tool and developed various comparisons based on technical criteria reported in the literature about characteristics that mobile testing tools should have. Our findings are expected to help practitioners understand several aspects of mobile testing tools.
Gustavo da Silva, Ronnie de Souza Santos
2023-07-01T14:52:27Z
http://arxiv.org/abs/2307.00355v1
# Comparing Mobile Testing Tools Using Documentary Analysis

###### Abstract

Due to the high demand for mobile applications, given the exponential growth of users of this type of technology, testing professionals are frequently required to invest time in studying testing tools, in particular, because nowadays, several different tools are available. A variety of tools makes it difficult for testing professionals to choose the one that best fits their goals and supports them in their work. In this sense, we conducted a comparative analysis among five open-source tools for mobile testing: Appium, Robotium, Espresso, Frank, and EarlGrey. We used the documentary analysis method to explore the official documentation of each above-cited tool and developed various comparisons based on technical criteria reported in the literature about characteristics that mobile testing tools should have. Our findings are expected to help practitioners understand several aspects of mobile testing tools.

Index Terms: software testing, mobile testing, testing tools.

## I Introduction

Our modern society strongly relies on technology. Nowadays, software products play an important role in people's lives [1, 2], as they are responsible for supporting and automating complex processes and activities in the most diverse types of organizations and industries. Among the various software products currently available, mobile applications are one of the most used by people worldwide; therefore, they are constantly in high demand [3, 4]. Recent studies demonstrate that there are roughly 6.84 billion smartphones in the world right now, and mobile applications consume over 90% of the time spent by smartphone users daily [5]. Mobile technology, such as smartphones, has become a common platform for managing everyday tasks and activities, which makes it crucial for many essential activities, including work, leisure, and education [1]. This means that failures, malfunctions, or unexpected behavior in mobile apps can produce several adverse outcomes resulting in significant losses for companies and industries that rely on these applications. This scenario demonstrates the relevance of mobile testing in software development to ensure the quality of mobile applications [3, 6]. In general, software testing is defined as the process of evaluating a software program aiming to identify differences among its requirements, the obtained outcome, and what users expect it to do [7]. In particular, mobile testing is the process through which applications for modern mobile devices are checked for functionality, usability, performance, and other quality aspects on different devices, platforms, and networks, aiming to provide a similar experience to the billions of users of this technology [8]. Testing professionals who work with mobile testing have many resources available to support them in their activities, including several different tools [4]. Each of these tools has its own characteristics, including [9, 10]: a) support for different platforms (e.g., iOS, Android); b) test automation resources; c) test integration; d) test coverage; e) community support. However, such variety requires these professionals to understand the features of each tool to identify the most suitable for their needs.
Therefore, in this study, we explore different mobile testing tools to answer the following research question: _Research question: What are the main characteristics of some popular mobile testing tools used by software testing professionals?_ From this introduction, our study is organized as follows. In Section II, we present a review of mobile testing. In Section III, we describe how we conducted the bibliographic documentary analysis. In Section IV, we present the comparison among tools. Finally, Section V summarizes the contributions of this study. ## II Background The quality of the mobile applications is paramount in deciding whether or not a project (i.e., a software application for mobile devices) will be successful. This quality can only be attested by an adequate and efficient testing process that considers the particularities of technologies and the needs of a diverse group of users composed of individuals from different countries, cultures, backgrounds, needs, and expectations [11, 12]. In this sense, the user experience is one primary key when testing a mobile application because failures and bugs can make users avoid reusing the application, especially considering that these users rely on the apps on a daily basis. Previous research reported that about 48% of users would not try an app again if they experienced failures, leading to fewer downloads and reduced revenue for software companies [13]. Software testing for mobile applications is not trivial as it involves exhausting verification of usability, connectivity, security, privacy, and other features that this type of software typically requires. In addition, professionals need to consider the heterogeneity of the technologies used to implement mobile applications and the diverse context in which the software will be used [11, 12]. Previous studies demonstrated that to achieve improvements in the testing process, testing professionals need support from testing tools [6, 9]. When adequate tools are not used during the testing activities, the susceptibility to errors in the software increases, and often the potential success of the software reduces [14]. Over the years, some testing tools have become popular among mobile testing professionals. Here are some examples of popular testing tools considering their analysis in previous studies [3, 4], discussions with practitioners [9, 10], [14], and the grey literature: * _Appium_: a software testing automation tool for mobile apps that allows developers and testers to create and run automated tests and simulate user interactions such as screen taps, swipes, and text entries. This tool is popular for mobile app testing due to its flexibility. * _Robotium_: a software testing automation tool for mobile applications that allows developers and testers to create unit and integration tests in addition to simulating user interactions (e.g., screen taps, swipes, and text entry). * _Espresso_: a software testing automation developed by Google. This tool is designed to facilitate the creation of automated tests for Android applications, providing APIs to simulate user interactions, especially user interface (UI). * _Frank_: a software testing automation tool for iOS mobile apps that supports several programming languages and can be integrated with other test automation tools. * _EarlGrey_: a software testing tool developed by Google to support iOS operating systems. It is focused on user interface (UI) testing and allows developers and testers to write automated tests. 
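To give a flavour of the kind of test the tools listed above automate, the snippet below sketches a minimal Android test using the Appium Python client. It is illustrative only: the capability values, server URL, and element id are placeholders, and the exact client API (for example, whether capabilities are passed directly or through an options object) depends on the Appium server and client versions in use.

```python
# Illustrative sketch of a minimal Appium test (older-style client API).
# All capability values and ids below are placeholders, not real settings.
from appium import webdriver
from appium.webdriver.common.appiumby import AppiumBy

caps = {
    "platformName": "Android",
    "appium:automationName": "UiAutomator2",
    "appium:deviceName": "emulator-5554",          # placeholder device
    "appium:app": "/path/to/app-under-test.apk",   # placeholder path
}

driver = webdriver.Remote("http://127.0.0.1:4723", caps)  # local Appium server
try:
    # Simulate a user tapping a (hypothetical) "Login" button.
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "Login").click()
finally:
    driver.quit()
```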
Several other tools are available for software testing professionals and can be used depending on their needs. Selecting the right tool is a crucial decision for these professionals due to the impact on quality activities in the software development process [4, 7, 9, 15]. ## III Method Previous studies have explored testing tools and presented a comparison among them [3, 4, 15]. However, it is important to highlight that these tools are in constant evolution; features are frequently updated, and new functions are released to improve the support offered to software testing professionals. Our study updates the published results regarding some tools and includes new ones in the comparison. In addition, we employed a bibliographic documentary analysis, which is a different method from those used in the previous studies identified in the literature. In this section, we describe our method. ### _Documentary Analysis_ Documentary analysis is a qualitative method focused on reviewing documents to explore and discuss a research problem. This strategy requires researchers to locate, interpret, integrate, and draw conclusions about the evidence obtained from valid documents (e.g., guidelines and official reports) [16]. The bibliographic documentary analysis method supports the elaboration of discussions through the articulation of indicators based on concepts identified in documents. The primary outcome produced in this process is integrating documentary material into summarized data that can guide decisions [17]. The information and insights derived from documents are valuable additions to a knowledge base, in particular, because the documents can be analyzed to verify findings or corroborate evidence from other sources or even to track updates and the evolution of discussions [18, 16]. The main advantage of the documentary analysis method is its simplicity, which allows researchers to provide practitioners with well-structured research exploring available documents. This characteristic makes this method less costly than other qualitative research methods. Finally, the method provides broad coverage, allowing the investigation of different periods of time, various events, and many settings [18]. In this study, we developed a documentary analysis following the steps presented in Figure 1 ### _Analysis Criteria_ Based on the literature about mobile testing [3, 4, 9, 11, 12], we chose six criteria to be used in the evaluation of the five mobile testing tools selected to be explored using documentary analysis. These criteria are: * _Multiple Platforms Support_: Support for multiple mobile platforms is crucial for a mobile testing tool as it allows professionals to create and run tests on different operating systems and mobile devices, that is, testing on a variety of devices used by individuals around the world. This ensures that the app is cross-platform tested, helping to identify issues or inconsistencies that could affect the user experience. * _Test Automation_: Test automation is one essential feature of a mobile testing tool, as it supports practitioners in running tests with no or little human intervention. This helps increase test efficiency, reduce the time required in the development of features, and minimize the risk of human error. * _Device Emulation_: Similar to multiple platform support, mobile device emulation is used to execute tests on different types of devices, allowing testing applications in a wide variety of scenarios and configurations. 
This feature reduces the need for having a significant number of devices physically available to everyone in the project to complete their tests and increases the team's capability of running tests simultaneously for multiple devices. * _Debugging_: Debugging is the strategy that supports practitioners in identifying and fixing issues in the application's code. A mobile testing tool with debugging features helps professionals solve problems more quickly and effectively. * _Tool Documentations_: The availability of documentation is essential to have professionals understand the tool and get used to the features. Practitioners expect the official tool documentation to be complete and organized, with clear instructions about all available resources. Thus, the quality of the documentation is critical to ensure that the testing team can make the most of the tool's features and obtain the best possible outcomes from its usage. * _CI/CD Pipeline_: Continuous Integration and Continuous Delivery (CI/CD) are fundamental in software development nowadays. In most agile environments, tests are expected to run automatically as soon as the code is integrated into the repository. A CI/CD pipeline allows tests to run continuously and identify bugs in the early development stages. Therefore, CI/CD features are essential for mobile testing tools.

### _Data Analysis_

We used thematic analysis [19] to explore the documentation of the mobile testing tools and draw comparisons among them. Thematic analysis is a comprehensive strategy that supports researchers in the identification of cross-references among sources of qualitative data [20]. This method has been successfully used in software engineering [19]. We kept the thematic analysis focused on the documentary analysis criteria presented above. In this process, we downloaded the documentation of each tool considered in this study and highlighted relevant characteristics associated with each criterion. Finally, we built a table summarizing the information and described the outcomes to inform practitioners.

Fig. 1: Method

## IV Findings

Below, we present the summarized data obtained from the comparison of the five mobile testing tools. In summary, Appium was the tool that presented the best outcome among the analyzed tools since it offered higher coverage for different platforms, and it included characteristics of all criteria that guided the analysis.

### _Multiple Platform Support_

Appium is the tool that offers the most comprehensive support for multiple mobile platforms, including iOS, Android, and Windows Phone. This tool uses a WebDriver-based approach to interact with apps, which increases its effectiveness for cross-platform testing. Robotium and Espresso are tools created only for Android; therefore, they support testing for multiple Android versions. Although Robotium is not compatible with many mobile platforms, our analysis indicates that it can be effective for testing Android applications. As for Espresso, it offers a set of APIs for executing UI testing, and it integrates well with developer tools, such as Android Studio. Frank and EarlGrey are tools for testing iOS devices; therefore, they can only be used to test apps on this platform. Both these tools support testing in multiple iOS versions and allow practitioners to code in different programming languages, such as Ruby and Swift. Overall, considering this criterion, choosing the ideal tool will depend on the target mobile operating system and the specific needs of each test.
In general, Appium would be a choice for those seeking broad platform support coverage. On the other hand, Robotium and Espresso are reasonable options for testing across Android versions, while Frank and EarlGrey are good options for testing on iOS platforms.

### _Test Automation_

Considering test automation, all analyzed tools are open-source and provide practitioners with a similar set of automation features (e.g., writing and running tests, importing and exporting testing results, and integration with development/coding IDEs). However, each of them includes specific characteristics that professionals can consider when choosing the tool: * Appium supports both native and hybrid mobile apps for different platforms. * Robotium has a simple syntax and can automate multiple applications simultaneously, supporting complex test cases and providing higher test coverage. * Espresso is fast with the execution of multiple applications at the same time. It can also be used with the JUnit testing framework. * Frank allows testers to write automated tests in natural language (English) or Ruby, in addition to allowing real-time interaction with the app to simulate touch events and screen capture. * EarlGrey supports the automation of functional and integration tests in Objective-C, Swift, or JavaScript. In summary, each of these tools has unique features that make them ideal for automating different testing scenarios for mobile applications. Appium is ideal for cross-platform testing, while Espresso and Robotium are solid options for UI testing on Android apps. Frank and EarlGrey are iOS application-specific automation testing tools that provide robust functional and integration testing capabilities.

### _Device Emulation_

Appium lets practitioners emulate apps on real devices or emulators. It supports emulators like Android Emulator and iOS Simulator. This means that tests can be run in a controlled environment without the need for an actual device. The tool also allows the emulation of gestures, multi-touch, taps, and swipes to simulate user interaction with the application. Robotium and Espresso allow emulating applications on real or emulated devices. Android Emulator is the default emulator to run tests on both tools. Frank and EarlGrey have a similar emulation process, but in this case, they are dependent on an iOS simulator to emulate various iOS devices and simulate different settings such as screen orientation and language. EarlGrey stands out over Frank with the ability to support physical devices for UI testing, which generates more accurate results than emulation. In general, all the mentioned tools support the emulation of applications on real or emulated devices. However, some tools such as Frank, EarlGrey, Espresso, and Robotium are dependent on the platform used in the device, while Appium stands out for offering support for various types of devices.

### _Debugging_

All analyzed tools offer standard debugging features such as pausing test execution and viewing test logs. However, some of them include unique features, such as: * Appium supports debugging of native and hybrid applications and offers real-time debugging features such as viewing the device's screen and interacting with elements on the screen while running the test. * Espresso and EarlGrey include advanced debugging features such as debugging failed tests, viewing the application hierarchy, and real-time debugging. * Frank is highly configurable and allows customization of debugging according to the testing needs.

Fig. 2: Comparison among Mobile Testing Tools

In summary, since all analyzed tools offer similar debugging features for different test and development scenarios, we understand that this criterion should be evaluated based on the specific needs of the project and the characteristics of the application under development.

### _Documentation_

Appium includes comprehensive and well-organized official documentation covering many aspects of the tool, such as configuration, main features, and customization options. In addition, the documentation is available in multiple languages, including English, Chinese, Spanish, Japanese, and Russian. The documentation also includes recommended patterns to be used in the testing process. Robotium has extensive documentation with detailed information about available features and methods. The documentation provides guidelines on how to use the tool to write automated tests, including code samples and step-by-step instructions. However, Robotium documentation is only available in English. Espresso released complete and comprehensive documentation with information about features, methods, recommended patterns, guidelines on how to use the tool, code examples, and tutorials. Espresso documentation is available in multiple languages, including English, Chinese, Korean, and Japanese. Frank differs from the other tools in not having such comprehensive documentation. The document contains information on installing, configuring, and using the tool; however, the presentation is not intuitive. Yet, there are code examples and tutorials included in it. Frank's documentation is available only in English. EarlGrey provides practitioners with well-structured documentation. The document starts with an overview of the tool, followed by its features and how each one of them works. The documentation includes detailed guidelines on how to set up the development environment. Similar to Robotium and Frank, EarlGrey's documentation is only available in English. In summary, most analyzed tools include comprehensive and well-organized documentation that provides detailed information to guide practitioners toward using their features and processes. However, language coverage is a limitation for most of the tools. Finally, we highlight the importance of supplementary sources of information included in some documentation, e.g., tutorials and code examples, which is relevant for some practitioners to improve their understanding of the tool usage.

### _CI/CD Pipeline_

All analyzed tools offer CI/CD support to facilitate testing in agile environments. Yet, there are some differences among them that might be useful to practitioners when selecting the best tool for their work, such as: * Appium offers a relatively straightforward CI/CD integration with Jenkins, Bamboo, and other CI/CD services. Appium generates detailed reports of the tests performed, making it easier to identify problems. * Robotium can be easily integrated with Jenkins and also generates detailed test reports. However, this tool offers limited features when compared to Espresso or Appium. * Espresso can be easily integrated with Gradle, the Android project building system, and other CI/CD systems such as Jenkins. It also generates detailed reports. * Frank can be integrated with CI/CD tools like Jenkins and offers detailed reports, but considering iOS-focused tools, it offers fewer features than EarlGrey.
* EarlGrey can be integrated with CI/CD tools such as Jenkins and CircleCI, and similar to the previous tools, it also provides professionals with features to generate detailed reports. In summary, our analysis demonstrates that all five tools support integration with CI/CD tools and offer options to generate detailed reports. However, tools like Appium, Espresso, and EarlGrey offer a greater set of advanced features (e.g., integration tools, types of reports, and visualization options). Robotium and Frank are simpler considering this criterion but still provide features that meet the need of practitioners depending on their project. ## V Conclusion In this study, we applied a documentary analysis method to explore the official documentation of five mobile testing tools, namely, Appium, Espresso, Robotium, Frank, and EarlGrey, and established a comparative synthesis among them. For this analysis, we considered six criteria: platform support, test automation, device emulation, debugging, tool documentation, and CI/CD pipeline. Our findings demonstrated that Appium is the most complete tool for mobile testing among those evaluated, as it supports multiple platforms, and it obtained a positive and robust outcome in all aspects evaluated in the study. In addition, Espresso and Robotium are potential alternatives for professionals who need to focus on testing for the Android environment, while EarlyGrey is a good alternative for iOS testing. Frank can also be used in an iOS environment; however, this is the tool that presented more limitations among the ones analyzed. Selecting a testing tool that is a good fit to support the quality process in a software project is crucial for software professionals. We expect that this paper supports practitioners in their first steps toward making this decision, as we conducted this analysis with popular mobile testing tools and used a strategy that can be easily consumed by those working in the software industry. As a qualitative study relying only on the documentary analysis method, we understand that our findings have some limitations related to the method itself [18], including a) biased selectivity resulting from the limited number of documents analyzed, since we decided to focus only on the official documentation of the tools; and b) insufficient details, since the analysis did not include other data sources, e.g., testers' experiences. However, this paper is designed for practitioners, so we opted for a method that is simple to follow and effective in producing results that can effectively and straightforwardly inform software professionals. The obtained results triggered opportunities for future work. Following this study, we plan to design and perform an experimental analysis using test scripts to check the behavior of each tool, considering the same criteria and additional aspects, e.g., performance, usability, and help support. We also plan to recruit a sample of professionals that have experience with these tools to improve the comparison obtained from the documentary analysis with real-work inputs coming from different software practitioners and projects.
2310.13094
A secondary index for non-Fredholm operators associated with quantum walks
We study an analogue of chirality operators associated with quantum walks on the binary tree. For those operators we introduce a K-theoretic invariant, an analogue of the index of Fredholm operators, and compute its values in the ring of dyadic integers.
Toshikazu Natsume, Ryszard Nest
2023-10-19T18:51:06Z
http://arxiv.org/abs/2310.13094v1
# A secondary index for non-Fredholm operators associated with quantum walks

###### Abstract.

We study an analogue of chirality operators associated with quantum walks on the binary tree. For those operators we introduce a K-theoretic invariant, an analogue of the index of Fredholm operators, and compute its values in the ring of dyadic integers.

## 1. Introduction

A self-adjoint operator \(\Gamma\) is called a _symmetry_ if \(\Gamma^{2}=1\). Notice that \(\frac{1\pm\Gamma}{2}\) are projections. Suppose that a pair of symmetries \(\Gamma_{1},\Gamma_{2}\) on a Hilbert space \(H\) is given. The operator \(U=\Gamma_{1}\Gamma_{2}\) is a unitary, and \(\Gamma_{2}U\Gamma_{2}=U^{*}\). Set \(Q=U-U^{*}\). Then \(\Gamma_{2}Q+Q\Gamma_{2}=0\). This means that with respect to the decomposition \(H=\operatorname{Ran}(\frac{1+\Gamma_{2}}{2})\oplus\operatorname{Ran}(\frac{1-\Gamma_{2}}{2})\) we have
\[Q=\begin{pmatrix}0&-Q_{+}^{*}\\ Q_{+}&0\end{pmatrix}.\]
The target of the investigation is the operator \(Q_{+}\) for specific symmetries arising in the study of quantum walks.

### One-dimensional case

Let us review classical one-dimensional quantum walks. The Hilbert space \(\ell^{2}(\mathbb{Z})\) is equipped with a canonical orthonormal basis \(\{e_{n}\}_{n\in\mathbb{Z}}\). Let \(L\) be the forward shift \(Le_{j}=e_{j+1}\). The first symmetry is the operator \(\Gamma\) on \(\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{2}\) defined by
\[\Gamma=\frac{1}{\sqrt{2}}\begin{pmatrix}1&L^{*}\\ L&-1\end{pmatrix}.\]
The two-sphere \(S^{2}\) can be described as the set \(\{(x,\zeta)\in\mathbb{R}\times\mathbb{C}:x^{2}+|\zeta|^{2}=1\}\). We call a sequence \((a,b)=(a(n),b(n))\in S^{2}\) a _walk_. A walk \((a,b)\) is _non-wandering_ if the limits
\[a(\pm\infty)=\lim_{n\to\pm\infty}a(n),\;b(\pm\infty)=\lim_{n\to\pm\infty}b(n)\]
exist. The two sequences \(a,b\) define diagonal operators on \(\ell^{2}(\mathbb{Z})\) in a canonical way. Then the non-wandering walk \((a,b)\) defines an operator \(C\), called a _coin operator_, by
\[C=\begin{pmatrix}a&b^{*}\\ b&-a\end{pmatrix}.\]
The coin operator is the second symmetry we use. The operator
\[Q_{+}=\tfrac{1-C}{2}Q\tfrac{1+C}{2}:\operatorname{Ran}\bigl{(}\tfrac{1+C}{2}\bigr{)}\to\operatorname{Ran}\bigl{(}\tfrac{1-C}{2}\bigr{)}\]
is Fredholm in this classical setting, and its Fredholm index provides the \(\mathbb{Z}\)-valued invariant of the walk.

## 2. Shift operators on trees

Here \(T\) denotes a rooted tree with root \(r\) and vertex set \(v(T)\); for a vertex \(v\neq r\), \(par(v)\) denotes its parent, and \(chi(u)\) denotes the set of children of \(u\).

**Definition 2.1** ([2, Def. 5.1]).: The shift operator \(S\) on \(\ell^{2}(T)\) is defined by
\[(Sf)(v)=\left\{\begin{array}{cc}f(par(v))\;,&v\neq r\\ 0\;,&v=r\end{array}\right..\]

The shift operators defined for general trees are not necessarily bounded.

**Theorem 2.2**.: _The shift operator \(S\) on a rooted tree \(T\) is bounded if and only if there exists a \(B>0\) such that \(\sharp(chi(u))\leq B\) for all \(u\in v(T)\)._

The following sections describe the C*-algebras involved in the construction of the index and the final results. For the binary tree the shift operator is bounded.

**Proposition 2.3**.: The adjoint of \(S\) is
\[S^{*}f(u)=\sum_{v\in chi(u)}f(v).\]

Proof.: Straightforward computations.

We have that
\[(S^{*}S)(f)(u)=\sum_{v\in chi(u)}(Sf)(v)=\sum_{v\in chi(u)}f(par(v))=2f(u).\]
So, if we set \(L=\frac{1}{\sqrt{2}}S\), then \(L\) is an isometry. The structure of isometries on Hilbert spaces is known.
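The computation \(S^{*}S=2\) above is easy to check numerically on a finite truncation of the binary tree. The sketch below is illustrative (NumPy, depth 6, heap-style vertex numbering are our own choices); since truncation removes the children of the last level, the identity is verified on interior vertices only.

```python
# Numerical sanity check of S*S = 2I on a depth-n truncated binary tree.
# Heap indexing: the children of vertex k are 2k and 2k+1, the root is 1.
import numpy as np

depth = 6
n_vertices = 2 ** (depth + 1) - 1            # vertices 1 .. 2^(depth+1)-1

S = np.zeros((n_vertices, n_vertices))
for u in range(1, n_vertices + 1):
    for v in (2 * u, 2 * u + 1):             # children of u
        if v <= n_vertices:
            S[v - 1, u - 1] = 1.0             # (S f)(v) = f(par(v))

SS = S.T @ S                                  # S* S
interior = 2 ** depth - 1                     # vertices whose children survive
assert np.allclose(SS[:interior, :interior], 2 * np.eye(interior))
print("S*S = 2I away from the boundary; L = S/sqrt(2) is an isometry there.")
```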
**Theorem 2.4** (Wold-von Neumann decomposition).: _If \(U\) is an isometry on a Hilbert space, then \(U\) is a unitary, or a direct sum of copies of the unilateral shift, or a direct sum of a unitary and copies of the unilateral shift._

A consequence of the Wold-von Neumann Theorem is that every non-unitary isometry has the closed unit disk as its spectrum. The unitary part can be described as follows. For an isometry \(U\) on a Hilbert space, denote by \(Z\) the orthogonal complement of \(\cup_{n}\ker((U^{*})^{n})\). Then \(U_{|Z}\) is a unitary. When \(Z=0\), the isometry is called _proper_. A proper isometry is unitarily equivalent to an isometry of the form \(V\otimes I\), where \(V\) is the unilateral shift on \(\ell^{2}(\mathbb{N})\).

**Lemma 2.5**.: The isometry \(L=\frac{1}{\sqrt{2}}S\) is proper.

Proof.: Recall that \(L^{*}e_{u}=(1/\sqrt{2})e_{par(u)},u\neq r\), and \(L^{*}e_{r}=0\). Then for any \(u\in v(T)\) there exists an \(n\in\mathbb{N}\) such that \((L^{*})^{n}e_{u}=0\). This implies that \(Z=(\cup_{n}\ker((L^{*})^{n}))^{\perp}=0\).

**Corollary 2.6**.: The \(C^{*}\)-algebra \(C^{*}(L)\) generated by \(L\) on \(\ell^{2}(T)\) is isomorphic to the \(C^{*}\)-algebra \(C^{*}(V)\) generated by \(V\) on \(\ell^{2}(\mathbb{Z}_{+})\).

We want to construct an operator analogous to \(\Gamma\) in the classical setting. Set \(E=[L^{*},L]=1-LL^{*}\), and
\[\Gamma=\begin{pmatrix}0&L^{*}\\ L&E\end{pmatrix}.\]
Then \(\Gamma\) is a symmetry on \(\ell^{2}(T)\otimes\mathbb{C}^{2}\). We want to construct the coin operator \(C\). It is known that there exists a compactification \(\overline{T}\) of \(T\) such that \(\overline{T}\setminus T=K\), the ternary Cantor set. We consider the closed subset \(T_{0}=v(T)\cup K\) of \(\overline{T}\). Let \(C\) be as above with \(a,b\in C(T_{0})\). Recall that \(f\in C(T_{0})\) acts on \(\ell^{2}(T)\) as a pointwise multiplication operator. We follow the same line of construction, with \(U=\Gamma C\) and \(Q=U-U^{*}\). Set
\[\varepsilon=\frac{1}{\sqrt{2}}\begin{pmatrix}\sqrt{1+a}&-\sqrt{1-a}\\ b/\sqrt{1+a}&b/\sqrt{1-a}\end{pmatrix}.\]
Then \(\varepsilon\) is a unitary, and \(\varepsilon^{*}C\varepsilon=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\).

**Lemma 2.7**.: We have \(EL=0\).

Proof.: Since \(L\) is an isometry, \(EL=(1-LL^{*})L=L-L(L^{*}L)=L-L=0\).

Using Lemma 2.7 repeatedly we can show that
\[\varepsilon^{*}Q\varepsilon=\begin{pmatrix}0&-Q_{+}^{*}\\ Q_{+}&0\end{pmatrix},\]
where
\[Q_{+}=\frac{\overline{b}}{\sqrt{1-a}}\,L\sqrt{1+a}-\sqrt{1-a}\,L^{*}\frac{b}{\sqrt{1+a}}+\frac{\overline{b}}{\sqrt{1-a}}E\frac{b}{\sqrt{1+a}}.\]
We want to determine if \(Q_{+}\) is Fredholm.

## 3. Noncommutative geometric approach

Denote by \(A\) the \(C^{*}\)-algebra generated by \(S,C(T_{0})\) on \(\ell^{2}(T)\). Then

**Proposition 3.1**.: The \(C^{*}\)-algebra \(A\) contains the ideal \(\mathcal{K}(\ell^{2}(T))\).

Proof.: It is enough to show that for any \(u,v\in v(T)\), the rank one operator \(\theta_{u,v}:\mathbb{C}e_{v}\to\mathbb{C}e_{u}\) belongs to \(A\). There exists a unique path \(\{u_{j}\}\) such that \(u_{0}=r,u_{n}=u\) and that \(u_{j-1}=par(u_{j})\). Then \((S^{*})^{n}e_{u}=e_{r}\). Denote by \(\chi_{u}\in C(\overline{T})\) the characteristic function for the set \(\{u\}\). Then \(\theta_{r,u}=(S^{*})^{n}\chi_{u}\). Therefore \(\theta_{r,u}\in A\). Obviously \(\theta_{u,r}=\theta_{r,u}^{*}\in A\). Consequently \(\theta_{u,v}=\theta_{u,r}\theta_{r,v}\in A\).
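Before passing to the quotient algebra, the stated properties of \(\varepsilon\) (unitarity and \(\varepsilon^{*}C\varepsilon=\mathrm{diag}(1,-1)\)) can be spot-checked numerically at a single point of \(S^{2}\). The values \(a=0.3\) and the phase of \(b\) below are illustrative choices, subject to \(a^{2}+|b|^{2}=1\).

```python
# Numerical check at one point (a, b) on S^2 that epsilon is unitary
# and diagonalizes the coin: eps* C eps = diag(1, -1).
import numpy as np

a = 0.3
b = np.sqrt(1 - a**2) * np.exp(1j * 0.7)      # any phase; |b|^2 = 1 - a^2

eps = (1 / np.sqrt(2)) * np.array(
    [[np.sqrt(1 + a), -np.sqrt(1 - a)],
     [b / np.sqrt(1 + a), b / np.sqrt(1 - a)]]
)
C = np.array([[a, np.conj(b)], [b, -a]])

assert np.allclose(eps.conj().T @ eps, np.eye(2))                 # unitary
assert np.allclose(eps.conj().T @ C @ eps, np.diag([1.0, -1.0]))  # diagonal form
print("epsilon is unitary and eps* C eps = diag(1, -1)")
```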
Consider the short exact sequence: \[0\to\mathcal{K}(H)\to A\xrightarrow{\sigma}A/\mathcal{K}(H)\to 0.\] Notice that the quotient algebra \(A/\mathcal{K}(H)\) is not abelian. The following proposition is the key. **Proposition 3.2**.: We have that \(A/\mathcal{K}\cong C^{*}(L)\otimes C(K)\). Proof.: We want to construct a linear map \(s:C^{*}(L)\otimes C(K)\to A\) such that \(\sigma\circ s\) is bijective. The set \(K_{0}\) of endpoints of the removed open sets in the construction is dense in \(K\). Therefore continuous functions are determined by the restrictions on \(K_{0}\). For any \(x\in K\) there exists a unique sequence of vertexes \(\{u_{j}\}\) such that \(u_{0}(x)=r,u_{j}(x)=\operatorname{par}(u_{j+1}(x))\), and \(\lim_{j\to\infty}u_{j}(x)=x\) in \(\overline{T}\). We use this picture to construct \(s\). Let \(f\in C^{*}(L)\otimes C(K)=C(K,C^{*}(L))\). We proceed by induction to create \(\tilde{f}\in A\). The first step is to take care of two points \(0,1\). Set \(\tilde{f}(u_{j}(0))=f(0)\in C^{*}(L)\), and \(\tilde{f}(u_{j}(1))=f(1)\in C^{*}(L)\). The second step is to take care of the points \(1/3,2/3\). We have the sequence \(\{u_{j}(1/3)\}\). Now we know that \(u_{1}(0)=u_{1}(1/3)\). So we set \(C^{*}(V)ildef(u_{j}(1/3))=f(1/3),j\geq 2\), and \(\tilde{f}(u_{j}(1/3))=f(0),j=0,1\). Similarly we set \(\tilde{f}(u_{j}(2/3))=f(2/3),j\geq 2\), and \(\tilde{f}(u_{j}(2/3))=f(1),j=0,1\). The next step is to take care of \(1/9,2/9,7/9,8/9\). Then \(u_{2}(1/9)=u_{2}(0)\). So set \(\tilde{f}(u_{j}(1/9))=f(1/9),j\geq 3\), and \(\tilde{f}(u_{j}(1/9))=f(0),j\leq 2\). We proceed by induction to define \(\tilde{f}(u)\). Notice that any \(u\in v(T)\) is on a path to some \(x\in K_{0}\). Consequently we can define \(\tilde{f}\in A\). By the construction above it is obvious that the map \(s(f)=\tilde{f}\) is linear, and \(s(f)s(g)-s(fg)\in\mathcal{K}(\ell^{2}(T))\). So \(\sigma\circ s\) is a \(*\)-homomorphism \(C(K)\otimes C^{*}(L)\to A/\mathcal{K}(\ell^{2}(T))\). We need to show \(\sigma\circ s\) is bijective. Let \(f\cdot w,f\in C(T_{0}),w\in C^{*}(L)\). Then \(s(f_{|K})-f\in\mathcal{K}(\ell^{2}(T))\). This implies that \(\sigma\circ s\) is surjective. It is easy to see that if \(s(f)\in\mathcal{K}(\ell^{2}(T))\), then \(f=s(f)_{|K}=0\). By Corollary 2.6, \[A/\mathcal{K}(\ell^{2}(T))\cong C^{*}(V)\otimes C(K).\] _Observation_.: The operatot \(Q_{+}\) is not Fredholm. In order to see this, we only have to show that \(\sigma(Q_{+})\) is not invertible in \(C^{*}(V)\otimes C(K)\). We have \[\sigma(Q_{+})=\frac{\sqrt{1+a}}{\sqrt{1-a}}\tilde{b}V-\frac{\sqrt{1-a}}{\sqrt{ 1+a}}bV^{*}+|b|(1-VV^{*}).\] We claim that, for each fixed values of \(a(x),b(x),x\in K\), as an operator on \(\ell^{2}(\mathbb{Z}_{+})\), the operator \(\sigma(Q_{+})\) is not invertible. For this we show that \(\ker(\sigma(Q_{+})^{*})\) is non-trivial. Let \(\xi=\sum_{n=0}^{\infty}C_{n}e_{n}\in\ell^{2}(\mathbb{Z}_{+})\). Suppose that \(a(x)>0\), then it is straightforward to see that \(\xi\in\ker(\sigma(Q_{+})^{*})\) if and only if \[C_{1} =-\frac{\sqrt{1-a(x)}}{\sqrt{1+a(x)}}\frac{|b(x)|}{b(x)}C_{0}\] \[C_{n+1} =-\frac{1-a(x)}{1+a(x)}\frac{|b(x)|}{b(x)}C_{n-1},n\geq 1.\] Notice that \(0<\frac{1-a(x)}{1+a(x)}<1\). It follows that \(\dim\ker(\sigma(Q_{+})^{*})=1\). By a similar argument, if \(a(x)<0\) then \(\dim\ker(\sigma(Q_{+}))=1\) What we have seen so far is that we cannot extract any numerical invariant as long as we stick to the classical Fredholm index. We still want to get numerical information from the operatoe \(Q_{+}\). 
Thus we need to extend the notion of indices. Denote by \([A,A]\) the commutator ideal of \(A\). For nonzero \(f\in C(T_{0})\), the commutator \([S,f]\) is nonzero, and belongs to \(\mathcal{K}(H)\). This means that \([A,A]\cap\mathcal{K}(\ell^{2}(T))\neq 0\). Hence \([A,A]\cap\mathcal{K}(\ell^{2}(T))\) is a non-trivial ideal of \(\mathcal{K}(\ell^{2}(T))\). Since \(\mathcal{K}(\ell^{2}(T))\) is simple, we have \([A,A]\cap\mathcal{K}(\ell^{2}(T))=\mathcal{K}(\ell^{2}(T))\). Hence \([A,A]\supset\mathcal{K}(\ell^{2}(T))\). Now we have the following commutative diagram: \[\begin{array}{ccccccccc}0&\to&\mathcal{K}&\to&A&\to&A/\mathcal{K}&\to&0\\ &&\cap&&\|&&\downarrow&&\\ 0&\to&[A,A]&\to&A&\to&A/[A,A]&\to&0\end{array},\] where \(\mathcal{K}=\mathcal{K}(\ell^{2}(T))\). We know \[A/[A,A]\cong C^{*}(L)/[C^{*}(L),C^{*}(L)]\otimes C(K),\] and \(C^{*}(L)/[C^{*}(L),C^{*}(L)]\cong C^{*}(V)/[C^{*}(V),C^{*}(V)]\cong C(S^{1})\). Therefore we get the exact sequences: \[\begin{array}{ccccccccc}0&\to&\mathcal{K}&\to&A&\to&C^{*}(V)\otimes C(K)&\to&0 \\ &&\cap&&\|&&\downarrow&&\\ 0&\to&[A,A]&\to&A&\to&C(S^{1})\otimes C(K)&\to&0\end{array}.\] Since \(K_{1}(\mathcal{K})=K_{1}(C^{*}(V)\otimes C(K))=0\), a part of the six-term exact sequence is: \[0\to K_{0}(\mathcal{K})\to K_{0}(A)\to K_{0}(C^{*}(V)\otimes C(K))\to 0. \tag{3.1}\] Also we get \(K_{1}(A)=0\). Since \(K_{0}(C^{*}(V))=\mathbb{Z}\) is generated by the class of unit, \(K_{0}(C^{*}(V)\otimes C(K))\cong K_{0}(C(K))\). A well-known fact is that \[K_{0}(C(K))\cong\varinjlim\mathbb{Z}^{2^{n}}.\] This implies \[K_{0}(A)\cong\mathbb{Z}\oplus\varinjlim\mathbb{Z}^{2^{n}}.\] Apply the six-term exact sequence to the second short exact sequence we get \[\begin{array}{ccccccccc}K_{0}([A,A])&\to&Z\oplus\varinjlim\mathbb{Z}^{2^{n} }&\to&\varinjlim\mathbb{Z}^{2^{n}}\\ \uparrow&&&&\downarrow\\ \varinjlim\mathbb{Z}^{2^{n}}&\leftarrow&&0&\leftarrow&K_{1}([A,A])\end{array}.\] Call the quotient map \(\hat{\sigma}:A\to A/[A,A]=C(S^{1})\otimes C(K)\)_the secondary symbol map._ Recall that the connecting map \(\delta_{1}:K_{1}(A/[A,A]))\to K_{0}([A,A])\) is called the index map. For an \(a\in A\) if the element \(\hat{\sigma}(a)\) is invertible, then its analytic index \(\operatorname{ind}a\in K_{0}([A,A])=\mathbb{Z}\oplus\varinjlim\mathbb{Z}^{2^{ n}}\) is defined to be \(\delta_{1}([\hat{\sigma}(a)]))\in\varinjlim\mathbb{Z}^{2^{n}}\subset K_{0}([A,A])\). There exists a surjective homomorphism from the group \(\varinjlim\mathbb{Z}^{2^{n}}\) to the following group \(G\) of di-adic integers: \[G=\Big{\{}\frac{m}{2^{n}}:m\in\mathbb{Z},n=0,1,2,\cdots\Big{\}}.\] Let us go back to \(Q_{+}\). Denote by \(z\) the canonical generator of \(C(S^{1})\), and \(a_{K},b_{K}\) the restrictions of \(a,b\) onto \(K\), respectively. Then \[\hat{\sigma}(Q_{+})=\frac{\overline{b_{K}}}{\sqrt{1-a_{K}}}\,z\sqrt{1+a_{K}}- \sqrt{1-a_{K}}\,\overline{z}\frac{b_{K}}{\sqrt{1+a_{K}}},\] because \(E=1-LL^{*}\in[A,A]\). **Lemma 3.3**.: The symbol \(\hat{\sigma}(Q_{+})\) is invertible in \(C(S^{1})\otimes C(K)\) if and only if \(a_{K},b_{K}\) are invertible in \(C(K)\). Proof.: If there exists \(x\in K\) such that \(b_{K}(x)=0\), then for all \(z\in S^{1}\) we have \(\hat{\sigma}(Q_{+})(x,z)=0\). Thus \(\hat{\sigma}(Q_{+})\) is not invertible. Now suppose that there exists an \(x\in K\) such that \(a_{K}(x)=0\). From this it follows \(|b_{K}(x)|=1\). We have \(\hat{\sigma}(Q_{+})=\overline{b_{K}(x)}z-b_{K}(x)\overline{z}\). As \(|b_{K}(x)|=1\), we have \(\hat{\sigma}(Q_{+})(x,b_{K}(x))=0\). 
Thus \(\hat{\sigma}(Q_{+})\) is not invertible. Conversely, suppose that \(a_{K}(x)\neq 0,b_{K}(x)\neq 0\) for all \(x\in K\). Then \[\Big{|}\frac{\sqrt{1+a_{K}(x)}}{\sqrt{1-a_{K}(x)}}b_{K}(x)\Big{|}\neq\Big{|} \frac{\sqrt{1-a_{K}(x)}}{\sqrt{1+a_{K}(x)}}b_{K}(x)\Big{|}.\] This implies that \(\hat{\sigma}(Q_{+})(x,z)\neq 0\) for all \(z\in S^{1},x\in K\). **Definition 3.4**.: The class \(\delta_{1}([\hat{\sigma}(Q_{+})])\in\varinjlim\mathbb{Z}^{2^{n}}\subset K_{0} ([A,A])\) is called the _secondary index_ of the chirality operator \(Q_{+}\). Our goal is to extract a numerical invariant from \(\delta_{1}([\hat{\sigma}(Q_{+})])\in\varinjlim\mathbb{Z}^{2^{n}}\). ## 4. The index theorem We need a measure on the Cantor set in order to construct a trace on \(C(K)\). Recall that \(K\) is identified with the infinite product \(\prod_{i=1}^{\infty}X_{i}\) with \(X_{i}=\{0,1\}\) is the two point space. Let \(\mu_{0}\) be the discrete measure on \(\{0,1\}\) given by \(\mu_{0}(\{0\})=\mu_{0}(\{1\})=\frac{1}{2}\), and \(\mu\) be the product measure on \(K\). Then \(\mu\) is a probability measure on \(K\) whose cumulative distribution function is the Cantor function (Devil's staircase). Denote by \(\tau\) the trace on \(C(K)\) defined by \(\mu\). Let \(\varepsilon\) be the canonical densely defined cyclic \(1\)-cocycle on \(C(S^{1})\). We consider the cup product \(\varepsilon\sharp\tau\) on \(C(S^{1})\otimes C(K)\). **Theorem 4.1**.: _There exists a \(*\)-homomorphism \(\pi:[A,A]\to\mathcal{K}(\ell^{2}(\mathbb{Z}_{+}))\otimes C(K)\) such that the sequence_ \[0\to\mathcal{K}(H)\to[A,A]\to\mathcal{K}(\ell^{2}(\mathbb{Z}_{+}))\otimes C(K)\to 0\] _is exact._ Proof.: Let \(B\) be the \(C^{*}\)-subalgebra of \(A\) generated by \(L,L^{*}\). Then for any \(F\in C(T_{0})\) and \(\gamma\in B\) the commutator \([\gamma,F]\) belongs to \(\mathcal{K}(H)\). So, what we need to analyze is the commutator ideal \([B,B]\) of \(B\). It is straightforward to see that \([B,B]\) is generated by \(E=[L^{*},L]=1-LL^{*}\) The commutator ideal \([C^{*}(V),C^{*}(V)]\) of \(C^{*}(V)\) is exactly \(\mathcal{K}(\ell^{2}(\mathbb{Z}_{+}))\). Then the conclusion follows from the diagram: \[\begin{array}{ccccccccccc}0&\to&\mathcal{K}&\to&[A,A]&\to&[C^{*}(V),C^{*}(V)] \otimes C(K)&\to&0\\ &&\parallel&&\cap&&\cap&&\\ 0&\to&\mathcal{K}&\to&A&\to&C^{*}(V)\otimes C(K)&\to&0\end{array}.\] Set \(\omega=\pi^{*}(\mathrm{Tr}\sharp\tau)\), where \(\mathrm{Tr}\) is the canonical trace on \(\mathcal{K}(\ell^{2}(\mathbb{Z}_{+}))\). Let \(\varphi:K_{0}([A,A])\to\mathbb{R}\) be the map obtained by the coupling with \(\omega\), and let \(\psi:K_{1}(C(S^{1})\otimes C(K)))\to\mathbb{R}\) be the map obtained by the coupling with \(\varepsilon\sharp\tau\). **Theorem 4.2** (The Index Theorem).: _The following diagram is commutative:_ **Definition 4.3**.: The _numerical secondary index_ of the chirality operator is \[\mathrm{s\text{-}ind}Q_{+}=\varphi(\delta_{1}(\hat{\sigma}(Q_{+}))\in\mathbb{ R}.\] Recall that since \(\hat{\sigma}(Q_{+})\) is invertible, \(a_{K}\) is nonzero. Set \(U_{+}=\{x\in K:a_{K}(x)>0\},U_{-}=\{x\in K:a_{K}(x)<0\}\). Then \(K=U_{+}\cup U_{-}\) a disjoint union. Denote by \(\chi_{+},\chi_{-}\) the characteristic functions of \(U_{+},U_{-}\), respectively. Obviously The functions \(\chi_{\pm}\) are continuous on \(K\). **Corollary 4.4**.: We have that \(\mathrm{s\text{-}ind}\,Q_{+}=\tau(\chi_{+})-\tau(\chi_{-})\). For the proofs of Theorem 4.2 and Corollary 4.4 we need some facts about the \(K\)-theory of the Cantor set. 
The n-th step of the construction of the Cantor set removes \(2^{n-1}\) open intervals \((x_{i},y_{i}),i=1,\cdots,2^{n-1},x_{1}<y_{1}<\cdots,x_{2^{n-1}}<y_{2^{n-1}}\). Set \(y_{0}=0,x_{2^{n-1}+1}\). Denote by \(f_{j}^{(n)}\) the characteristic function of \(K\cap[y_{j},x_{j+1}]\). Then the \(K_{0}\)-classes of \(f_{j}^{(n)}\)'s generate \(K_{0}(C(K))\). As above \(z\) is the canonical generator of \(C(S^{1})\). Then the classes \([z]\otimes[f_{j}^{(n)}]\in K_{1}(C(S^{1}))\otimes K_{0}(C(K))\) generate \(K_{1}(C(S^{1})\otimes C(K))\). This class is represented by the unitary \(u=zf_{j}^{(n)}+(1-f_{j}^{(n)})\in C(S^{1})\otimes C(K)\). We need to compute \(\delta_{1}([u])\in K_{0}([A,A])\). We review a very useful formula to describe the index map in the six-term exact sequence of K-groups. Let \(0\to I\to A\xrightarrow{\pi}A/I\to 0\) be a short exact sequence of \(C^{*}\)-algebras with \(A\) being unital. We give a description of the index map \(\delta_{0}:K_{1}(A/I)\to K_{0}(I)\). For an invertible \(u\in A/I\), let \(M,N\in A\) be such that \(\pi(M)=u,\pi(N)=u^{-1}\). Then \[\Omega=\begin{pmatrix}1-MN&M\\ 2N-NMN&-(1-NM)\end{pmatrix}\] satisfies \(\Omega^{2}=1\). Hence \(\Omega\) is invertible. Now \(g=\Omega\big{(}\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\big{)}\) is invertible in \(M_{2}(A)\) and \(\pi(g)=\big{(}\begin{smallmatrix}u&0\\ 0&u^{-1}\end{smallmatrix}\big{)}\). We have \[g=\begin{pmatrix}M&1-MN\\ -(1-NM)&2N-NMN\end{pmatrix}.\] Following the standard recipe \[\delta_{0}([u])=\big{[}g\big{(}\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\big{)}g^{-1}\big{]}-\big{[}\big{(}\begin{smallmatrix}1&0 \\ 0&0\end{smallmatrix}\big{)}\big{]}.\] Recall \(e=g\big{(}\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\big{)}g^{-1}\in M_{2}(I^{\sim})\), and \(\hat{e}=e-\big{(}\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\big{)}\in M_{2}(I)\). By straightforward computations we get \[\hat{e}=\begin{pmatrix}-(1-MN)^{2}&**\\ *&(1-NM)^{2}\end{pmatrix},\] where \(*,**\) are nontrivial terms. We are interested in diagonal entries to compute a pairing with a cyclic cocycle. In general we look for a cyclic even-cocycle \(\omega\) on \(I\), and we extract a numerical invariant: \[\langle\omega,[e]-\big{[}\big{(}\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\big{)}\big{]}\rangle=\omega(\hat{e},\cdots,\hat{e}).\] Proof of Theorem 4.2.: Let \([u]\) be one of generators with \(u=zf_{j}^{(n)}+(1-f_{j}^{(n)})\). First, we compute \(\psi([u])=\langle\varepsilon\sharp\tau,u\rangle=\frac{1}{2\pi i}\varepsilon \sharp\tau(u^{-1},u)\). As \(u^{-1}=u^{*}=\overline{z}f+(1-f)\) where \(f=f_{j}^{(n)}\), we have \[\frac{1}{2\pi i}\varepsilon\sharp\tau(u^{-1},u)=\frac{1}{2\pi i}\varepsilon \sharp\tau(\overline{z}f+(1-f),zf+(1-f))=\frac{1}{2\pi i}\varepsilon( \overline{z},z)C^{*}(V)au(f^{2})=\tau(f).\] In order to compute \(\langle\omega,\delta_{1}([u])\rangle\) we apply the Falk Index Formula. Let \(F\in C(T_{0})\) be an extension of \(f=f_{j}^{(n)}\). Set \(M=LF+(1-F),N=FL^{*}+(1-F)\in A\). Then \(\pi(M)=u,\pi(N)=u^{*}\). For \(a,b\in B(H)\), if \(a-b\in\mathcal{K}(H)\) we denote by \(a\equiv b\). We have \([L,F]\equiv 0,F(1-F)\equiv 0\). From this \[1-MN\equiv F(1-LL^{*}),1-NM\equiv 0.\] Hence \((1-MN)^{2}\equiv F(1-LL^{*})^{2},(1-NM)^{2}\equiv 0\). Therefore \[\pi(\hat{e})=\begin{pmatrix}f(1-MM^{*})^{2}&**\\ *&0\end{pmatrix}.\] Notice that \(1-MM^{*}\) is a rank one projection on \(\ell^{2}(\mathbb{Z}_{+})\). 
This implies that \[\langle\omega,\delta_{1}([u])\rangle=\omega(\hat{e})=\operatorname{Tr}(1-MM^{ *})\tau(f)=\tau(f).\] Proof of Corollary 4.4.: By Theorem 4.2 it is sufficient to compute \(\langle\;\varepsilon\sharp\tau,[\hat{\sigma}(Q_{+})]\;\rangle\). For any given \(x\in K\) the function \(\hat{\sigma}(Q_{+})(x,\cdot)\) is invertible in \(C(S^{1})\). We compute \(\langle\varepsilon,[\hat{\sigma}(Q_{+})(x,\cdot)]\rangle\). Set \(\alpha=\frac{\sqrt{1+a(x)}}{\sqrt{1-a(x)}}\overline{b}(x),\beta=\frac{\sqrt{1- a(x)}}{\sqrt{1+a(x)}}b(x)\), and set \(g(z)=\hat{\sigma}(Q_{+})(x,z)=\alpha z-\beta\overline{z}\). Suppose that \(a(x)>0\). Then \(|\frac{\beta}{\alpha}|=\frac{1-a(x)}{1+a(x)}<1\). Then \[g^{-1} =\frac{1}{\alpha z-\beta\overline{z}}=\frac{\overline{z}}{\alpha- \beta\overline{z}^{2}}=\frac{\overline{z}}{\alpha}\cdot\frac{1}{1-(\beta/ \alpha)\overline{z}^{2}}\] \[=\frac{\overline{z}}{\alpha}\sum_{n=0}^{\infty}\bigl{(}\tfrac{ \beta}{\alpha}\overline{z}^{2}\bigr{)}^{n}.\] Then \[g^{-1}dg=\frac{\overline{z}}{\alpha}\sum_{n=0}^{\infty}\bigl{(}\tfrac{\beta}{ \alpha}\overline{z}^{2}\bigr{)}^{n}(\alpha dz-\beta d\overline{z}).\] From this we have \[\int_{S^{1}}g^{-1}dg=\int_{S^{1}}\tfrac{\overline{z}}{\alpha}\sum_{n=0}^{ \infty}\bigl{(}\tfrac{\beta}{\alpha}\overline{z}^{2}\bigr{)}^{n}\cdot\alpha dz =\int_{S^{1}}\tfrac{\overline{z}}{\alpha}\cdot\alpha dz=2\pi i.\] When \(a(x)<0\), we have \(|\frac{\alpha}{\beta}|<1\). By arguments similar to the above, we have \[\int_{S^{1}}g^{-1}dg=\int_{S^{1}}\tfrac{z}{\beta}\sum_{n=0}^{\infty}\bigl{(} \tfrac{\alpha}{\beta}z^{2}\bigr{)}^{n}\cdot(-\beta)dz=\int_{S^{1}}\tfrac{z}{ \beta}(-\beta d\overline{z})=-2\pi i.\] Therefore \[\langle\varepsilon,[\hat{\sigma}(Q_{+})(x,\cdot)]\rangle=\left\{\begin{array}[] {ll}1&\text{if }a(x)>0\\ -1&\text{if }a(x)<0\end{array}\right..\] From this the conclusion follows. ## 5. a generalization As in the classical setting we can introduce further parameters. Fix \((p,q)\in S^{2}\subset\mathbb{R}\times\mathbb{C}\). Then \[\Gamma=\begin{pmatrix}p&\overline{q}L^{*}\\ qL&E-p\end{pmatrix}\] is a symmetry. We use the coin operator \(C\) defined above. Then the chirality operator is \[Q_{+}=\frac{\overline{b}}{\sqrt{1-a}}\,qL\sqrt{1+a}-\sqrt{1-a}\,\overline{q}L^ {*}\frac{b}{\sqrt{1+a}}+\frac{\overline{b}}{\sqrt{1-a}}E\frac{b}{\sqrt{1+a}}-2 p|b|.\] For each \(x\in K,z\in S^{1}\) we have \[\hat{\sigma}(Q_{+})(x,z) =\frac{q\overline{b(x)}(1+a(x))}{|b(x)|}z-\frac{\overline{q}b(x)( 1-a(x))}{\sqrt{1-a(x)^{2}}}\overline{z}-2p|b(x)|\] \[=q(1+a(x))w-\overline{q}(1-a(x))\overline{w}-2p|b(x)|\] \[=g(x,w),\] where \(b(x)=|b(x)|e^{i\theta}\) and \(w=e^{-i\theta}z\). For a fixed \(x\in K\) consider a linear equation \(q(1+a(x))w-\overline{q}(1-a(x))\overline{w}-2p|b(x)|=0\) in \(w\). The unique solution is \[w_{0}=\frac{\overline{q}p|b(x)|}{a(x)|q|^{2}}.\] It is easy to show that \(|w_{0}|=1\) if and only if \(p^{2}=a(x)^{2}\). Thus **Proposition 5.1**.: The symbol \(\hat{\sigma}(Q_{+})\) is invertible in \(C(S^{1})\otimes C(K)\) if and only if \(|p|\neq|a(x)|\) for all \(x\in K\). For a fixed \(x\in K\) we want to compute \[\langle\varepsilon,[\hat{\sigma}(Q_{+})(x,\cdot)]\rangle =\frac{1}{2\pi i}\int_{S^{1}}\frac{1}{\hat{\sigma}(Q_{+})(x,z)}d \hat{\sigma}(Q)(x,z)\] \[=\frac{1}{2\pi i}\int_{S^{1}}\frac{1}{g(x,w)}dg(x,w).\] We have \[g^{-1}dg=\frac{w^{2}q(1+a(x))+\overline{q}(1-a(x))}{w\big{(}q(1+a(x))w^{2}-2p| b(x)|w-\overline{q}(1-a(x))\big{)}}dw.\] The method we used above does not seem plausible to be applied to our new situation. 
So we apply a classical method, _i.e._ the residue formula. The integrand is a meromorphic function with three poles \(0,\alpha,\beta\) where \[\alpha=\frac{|b(x)|(p+1)}{q(1+a(x))},\;\beta=\frac{|b(x)|(p-1)}{q(1+a(x))}.\] By straightforward computations we get the residues at \(0,\alpha,\beta\) as follows: \[Res(0)=-1,Res(\alpha)=1,Res(\beta)=1.\] It is easy to see that (1) \(\ |\alpha|=1\) if and only if \(a(x)=p\), (2) \(\ |\alpha|<1\) if and only if \(a(x)>p\), and (3) \(\ |\alpha|>1\) if and only if \(a(x)<p\). Similarly (1) \(\ |\beta|=1\) if and only if \(a(x)=-p\), (2) \(\ |\beta|<1\) if and only if \(a(x)>-p\), and (3) \(\ |\beta|>1\) if and only if \(a(x)<-p\). Under the assumption \(p>0\), if \(a(x)>p\), then \(\alpha,\beta\in\mathbb{D}\). It follows that \[I=\frac{1}{2\pi i}\int_{S^{1}}\frac{1}{g(x,w)}dg(x,w)=Res(0)+Res(\alpha)+Res( \beta)=1.\] If \(-p<a(x)<p\), then \(\alpha\notin\mathbb{D},\beta\in\mathbb{D}\). Hence \[I=Res(0)+Res(\beta)=0.\] If \(a(x)<-p\), we have \(\alpha,\beta\notin\mathbb{D}\). Then \[I=Res(0)=-1.\] Let \(p<0\). If \(a(x)>-p\), then \(\alpha,\beta\in\mathbb{D}\). It follows that \[I=\frac{1}{2\pi i}\int_{S^{1}}\frac{1}{g(x,w)}dg(x,w)=Res(0)+Res(\alpha)+Res( \beta)=1.\] If \(p<a(x)<-p\), then \(\alpha\in\mathbb{D},\beta\notin\mathbb{D}\). So \[I=Res(0)+Res(\alpha)=0.\] If \(a(x)<p\), then \(\alpha,\beta\notin\mathbb{D}\). Consequently \[I=Res(0)=-1.\] Set \(K_{+}=\{x\in K:a(x)>|p|\},K_{0}=\{x\in K:-|p|<a(x)<|p|\}\), and \(K_{-}=\{x\in K:a(x)<-|p|\}\). Denote by \(\chi_{+},\chi_{0},\chi_{-}\) the characteristic functions of \(K_{+}.K_{0},K_{-}\), respectively. Summarizing the arguments above we have **Theorem 5.2**.: _We have \(\operatorname{s-ind}Q_{+}=\tau(\chi_{+})-\tau(\chi_{-})\)._
2308.09286
Power spectrum with $k^6$ growth for primordial black holes
The decrease of both the rolling speed of the inflaton and the sound speed of the curvature perturbations can amplify the curvature perturbations during inflation so as to generate a sizable amount of primordial black holes. In the ultraslow-roll inflation scenario, it has been found that the power spectrum of curvature perturbations has a $k^4$ growth. In this paper, we find that when the speed of sound decreases suddenly, the curvature perturbations becomes scale dependent in the infrared limit and the power spectrum of the curvature perturbation only has a $k^2$ growth. Furthermore, by studying the evolution of the power spectrum in the inflation model, in which both the sound speed of the curvature perturbations and the rolling speed of the inflaton are reduced, we find that the power spectrum is nearly scale invariant at the large scales to satisfy the constraint from the cosmic microwave background radiation observations, and at the same time can be enhanced at the small scales to result in an abundant formation of primordial black holes. In the cases of the simultaneous changes of the sound speed and the slow-roll parameter $\eta$ and the change of the sound speed preceding that of the slow-roll parameter $\eta$, the power spectrum can possess a $k^6$ growth under certain conditions, which is the steepest growth of the power spectrum reported so far.
Rongrong Zhai, Hongwei Yu, Puxun Wu
2023-08-18T04:05:19Z
http://arxiv.org/abs/2308.09286v2
# Power spectrum with \(k^{6}\) growth for primordial black holes ###### Abstract The decrease of both the rolling speed of the inflaton and the sound speed of the curvature perturbations can amplify the curvature perturbations during inflation so as to generate a sizable amount of primordial black holes. In the ultraslow-roll inflation scenario, it has been found that the power spectrum of curvature perturbations has a \(k^{4}\) growth. In this paper, we find that when the speed of sound decreases suddenly, the curvature perturbations becomes scale dependent in the infrared limit and the power spectrum of the curvature perturbation only has a \(k^{2}\) growth. Furthermore, by studying the evolution of the power spectrum in the inflation model, in which both the sound speed of the curvature perturbations and the rolling speed of the inflaton are reduced, we find that the power spectrum is nearly scale invariant at the large scales to satisfy the constraint from the cosmic microwave background radiation observations, and at the same time can be enhanced at the small scales to result in an abundant formation of primordial black holes. In the cases of the simultaneous changes of the sound speed and the slow-roll parameter \(\eta\) and the change of the sound speed preceding that of the slow-roll parameter \(\eta\), the power spectrum can possess a \(k^{6}\) growth under certain conditions, which is the steepest growth of the power spectrum reported so far. Introduction During the standard slow-roll inflation, the solution of the Sasaki-Mukhanov equation for the evolution of the curvature perturbations \(\mathcal{R}\) contains, in the infrared limit, a constant term and a time-decaying one, and this solution results in a nearly scale-invariant power spectrum of the curvature perturbations [1; 2], which is well-consistent with the cosmic microwave background (CMB) radiation observations. The CMB observations have limited the amplitude of the power spectrum to the order of \(\mathcal{O}(10^{-9})\) at the CMB scale [3; 4; 5; 6; 7]. It has been found that, if the amplitude of the power spectrum of the curvature perturbations can be enhanced for about seven orders at the scales smaller than the CMB one [8; 9; 10; 11], a sizable amount of primordial black holes can be generated when these enhanced perturbations reenter the horizon during the radiation- or matter-dominated era [12; 13; 14; 15; 16; 17; 18; 19]. The amplitude of the power spectrum of the curvature perturbations in the standard slow-roll inflation can be expressed as \(\mathcal{P}_{\mathcal{R}}=\frac{H^{2}}{8\pi^{2}\epsilon c_{s}}\) when the mode exits the horizon during inflation, where \(H\) is the Hubble parameter which is approximately constant during inflation, \(\epsilon\) the slow-roll parameter and \(c_{s}\) the sound speed of the curvature perturbations. Thus, a natural way to amplify the curvature perturbations is to reduce the rolling speed of the inflaton which is proportional to \(\epsilon\) or to suppress the sound speed. Decreasing the inflaton's rolling speed can be realized in the ultraslow-roll inflation [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72], in which the slow-roll parameter \(\eta\), which is defined to be \(\eta=\frac{\dot{\epsilon}}{\epsilon H}\) with an overdot denoting a derivative with respect to the time \(t\), equals approximately \(-6\). 
During the transition of \(\eta\) from \(\eta\simeq 0\), which corresponds to the slow-roll inflation, to \(-6\), the Israel junction conditions [73; 74] are used to obtain the solution of the curvature perturbations in the ultraslow-roll phase. Expanding this solution in the infrared limit, one can see that the decaying term in the solution of the Sasaki-Mukhanov equation for the evolution of the curvature perturbations becomes a growing one. This growing term gradually dominates if the ultraslow-roll inflation persists a sufficiently long time and it results in the enhancement of the curvature perturbations to meet the requirement of formation of a sizable amount of primordial black holes. It has been found that the power spectrum of the curvature perturbations displays a \(k^{4}\) growth and has a dip preceding the \(k^{4}\) dependence [75; 76; 77]. The steeper \(k^{5}(\log k)^{2}\) growth of the power spectrum can be obtained if an \(\eta=-1\) middle phase between the slow- and ultraslow-roll inflations [76; 78] is added. When the amplification of the curvature perturbations results from the decrease of the sound speed [79; 80; 81; 82; 83], which equals 1 in the canonical scalar field inflation model, the solution of the curvature perturbations, although does not contain growing terms, still has a constant component and a decay part. However, the constant component becomes scale variant at small scales, which makes the power spectrum become enhanced. If the Israel junction conditions are utilized to match the curvature perturbation and its derivative at the time when the sound speed decreases suddenly, the power spectrum has a \(k^{4}\) growth [84; 83]. However, this junction conditions is inapplicable in the case that the sound speed suddenly decreases due to the appearance of the square of the delta function [85]. Therefore, in this paper we will employ an improved junction conditions [81] to restudy the evolution of the power spectrum when the sound speed decreases suddenly and find that the growth of the power spectrum is \(k^{2}\) rather than \(k^{4}\). Furthermore, in the Dirac-Born-Infeld-inspired nonminimal kinetic coupling inflation model [86], both \(\epsilon\) and \(c_{s}^{2}\) are closely related to the concrete form of the inflationary potential, which indicates that this inflation model may accommodate both a small sound speed of the curvature perturbations and a small rolling speed of the inflaton at the same time. When both the inflaton's rolling speed and the sound speed of the curvature perturbations are suppressed during inflation, will the growth of the power spectrum of the curvature perturbations be steeper than \(k^{4}\)? This is an interesting problem, which we are also going to address in this paper. The rest of this paper is organized as follows: In Sec. II, the evolution of the power spectrum is investigated when the speed of sound decreases suddenly. In Sec. III, we study the evolution of the power spectrum in the case that both the sound speed and the slow-roll parameter are suppressed and present our conclusions in Sec. IV. Throughout this paper, we set \(c=\hbar=M_{\rm Pl}=1\). 
Growth of power spectrum from the sudden decrease of the sound speed The evolution of the curvature perturbations \({\cal R}\) satisfies the Sasaki-Mukhanov equation, which in the Fourier space takes the form \[v_{k}^{\prime\prime}+\left(c_{s}^{2}k^{2}-\frac{z^{\prime\prime}}{z}\right)v_{k }=0\;, \tag{1}\] where \(v_{k}=z{\cal R}_{k}\), a prime indicates a derivative with respect to the conformal time \(\tau\), and \(z\) is defined as \[z^{2}\equiv\frac{2a^{2}\epsilon}{c_{s}^{2}} \tag{2}\] with \(a\) being the cosmic scale factor. From the definition of \(z\), one can obtain \[\frac{z^{\prime\prime}}{z}=(aH)^{2}\left(2-\epsilon+\frac{3}{2}\eta-3s+s^{2}+ s\epsilon-s\eta+\frac{1}{4}\eta^{2}-\frac{1}{2}\eta\epsilon\right)\;. \tag{3}\] Here \(s=\frac{\dot{c}_{s}}{c_{s}H}\). During the (ultra)slow-roll inflation, one has \(\epsilon\ll 1\), and thus \(aH\simeq-\frac{1}{\tau}\). Then Eq. (1) can be rewritten as \[v_{k}^{\prime\prime}+\left(c_{s}^{2}k^{2}-\frac{\nu^{2}-1/4}{(-\tau)^{2}} \right)v_{k}=0\;, \tag{4}\] where \[\nu\simeq\frac{3}{2}+\frac{1}{2}\eta-s\;. \tag{5}\] If \(\eta\) and \(c_{s}\) are constants, Eq. (4) has a general solution \[v_{k}(\tau)=\alpha\sqrt{-\tau}H_{\nu}^{(1)}(-c_{s}k\tau)+\beta\sqrt{-\tau}H_{ \nu}^{(2)}(-c_{s}k\tau)\;. \tag{6}\] Here \(H_{\nu}^{(1)}\) and \(H_{\nu}^{(2)}\) are the first and second Hankel functions, respectively, and \(\alpha\) and \(\beta\) are two constants. Now, we discuss the scenario of a sudden decrease of the sound speed, as shown in Fig. 1. In the first stage (\(\tau<\tau_{1}\)), which corresponds to the canonical slow-roll inflation, the sound speed \(c_{s}\) is equal to one and the slow-roll parameter \(\eta\) is near zero. Thus, one has \(\nu\simeq 3/2\) and \(\epsilon=\epsilon_{0}(\frac{\tau}{\tau_{0}})^{-\eta}\simeq\epsilon_{0}\). Imposing that the solution of the Sasaki-Mukhanov equation matches the plane-wave form in the ultraviolet regime (\(-k\tau\gg 1\)), we can derive the evolution of the curvature perturbations \[\mathcal{R}_{k}^{(0)}(\tau)=i\frac{H}{2\sqrt{\epsilon_{0}k^{3}}}e^{-ik\tau}(1+ ik\tau)\;. \tag{7}\] Apparently, the solution of the curvature perturbations contains a constant term and a time-decaying one since \(|\tau|\) decreases with the cosmic expansion during inflation, which results in a nearly scale-invariant power spectrum of the curvature perturbations in the superhorizon scales (\(-k\tau\to 0\)) with the amplitude of the power spectrum being \[\mathcal{P}_{0}=\frac{H^{2}}{8\pi^{2}\epsilon_{0}}\;. \tag{8}\] The CMB observations have limited \(\mathcal{P}_{0}\) to be \(\sim 10^{-9}\)[3]. At the moment \(\tau_{1}\), we assume that the sound speed decreases suddenly from 1 to a very tiny constant value \(c_{s_{1}}\) and the inflation enters the second stage. In this stage, the general solution of the curvature perturbations has the form \[\mathcal{R}_{k}^{(1)}(\tau)=\frac{ic_{s_{1}}H\left(-\tau\right)^{3/2}}{\sqrt{2 \epsilon_{0}}}\left[\alpha_{1}H_{3/2}^{(1)}(c_{s_{1}}k\tau)+\beta_{1}H_{3/2}^{ (2)}(c_{s_{1}}k\tau)\right]\;, \tag{9}\] where \(\alpha_{1}\) and \(\beta_{1}\) are two constants. Usually, the Israel junction conditions \(\mathcal{R}_{k}^{(0)}(\tau_{1})=\mathcal{R}_{k}^{(1)}(\tau_{1})\) and \(\mathcal{R}_{k}^{\prime(0)}(\tau_{1})=\mathcal{R}_{k}^{\prime(1)}(\tau_{1})\) are used to determine the values of \(\alpha_{1}\) and \(\beta_{1}\)[83; 84]. However, it has been found, in Ref. 
[85], that there is a square term of the delta function \(\delta(\tau-\tau_{1})\) arising form \((c_{s}^{\prime}/c_{s})^{2}\) in \(z^{\prime\prime}/z\) when a sudden variation of the sound velocity occurs, which makes the analysis impossible, and thus the Israel junction conditions need to be modified or a new variable has to be introduced. In [85], a new variable is defined, which satisfies the Israel junction conditions at \(\tau_{1}\). Here, we do not use this new variable [85] but consider an improved junction conditions, i.e. \({\cal R}_{k}\) and its conjugate momentum \(A{\cal R}_{k}^{\prime}\) are continuous at \(\tau_{1}\), where \(A\equiv\frac{2\epsilon}{c_{s}^{2}}\)[81]. We have checked that the introduction of the new viable and the improved junction conditions can give the same results. Considering the improved junction conditions \[{\cal R}_{k}^{(0)}(\tau_{1})={\cal R}_{k}^{(1)}(\tau_{1})\;,\quad A_{0}{\cal R }_{k}^{\prime(0)}(\tau_{1})=A_{1}{\cal R}_{k}^{\prime(1)}(\tau_{1}) \tag{10}\] with \[A_{0}=2\epsilon_{0}\;,\quad A_{1}=\frac{2\epsilon_{0}}{c_{s_{1}}^{2}}\;, \tag{11}\] we can obtain that \[\alpha_{1} = -\frac{(1-c_{s_{1}})\,\sqrt{\pi}e^{-i\left(1+c_{s_{1}}\right)k \tau_{1}}}{4\sqrt{c_{s_{1}}}},\] \[\beta_{1} = -\frac{(1+c_{s_{1}})\,\sqrt{\pi}e^{-i\left(1-c_{s_{1}}\right)k \tau_{1}}}{4\sqrt{c_{s_{1}}}}\;. \tag{12}\] Substituting \(\alpha_{1}\) and \(\beta_{1}\) into Eq. (9), we find the expression of the curvature perturbation in the second phase, and then we can derive the corresponding power spectrum, which is shown in Fig. 2. From it, one can find that the power spectrum only has a \(k^{2}\) growth. Thus, the result of a \(k^{4}\) growth, which is obtained from the standard Israel junction conditions, should be incorrect. To figure out the physical reason behind the \(k^{2}\) growth of the power spectrum, we expand the expression of the curvature perturbations (Eq. (9)) in the infrared limits: \(-c_{s_{1}}k\tau\to 0\) and \(-c_{s_{1}}k\tau_{1}\to 0\), and obtain \[{\cal R}_{k}^{(1)}(\tau) = \frac{iHe^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k^{3}}}-\frac{H\tau_ {1}e^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k}}-\frac{ic_{s_{1}}^{2}H\tau_{1}^{2}k ^{1/2}e^{-ik\tau_{1}}}{4\sqrt{\epsilon_{0}}}+\frac{ic_{s_{1}}^{2}Hk^{1/2}e^{- ik\tau_{1}}}{4\sqrt{\epsilon_{0}}}(-\tau)^{2}+\ldots \tag{13}\] Apparently, the wave number \(k\) in Eq. (13) must satisfy the condition \(k\ll k_{c}\equiv-1/(c_{s_{1}}\tau_{1})\). It is easy to see that in the infrared limit the leading part of the curvature perturbations is independent of \(\tau\), which contains three different \(k\)-dependent terms, and the subleading part decays with time since \(|\tau|\) decreases during inflation. These characters are different from that in the case of the transition from the slow-roll inflation to the ultraslow-roll one, where there is an appearance of the growing term. From Eq. (13), we obtain the power spectrum of the curvature perturbations \[\frac{\mathcal{P}_{\mathcal{R}_{k}^{(1)}}}{\mathcal{P}_{0}}\simeq 1+\left(1-c_{s_ {1}}^{2}\right)\tau_{1}^{2}k^{2}+\frac{1}{4}c_{s_{1}}^{4}\tau_{1}^{4}k^{4} \tag{14}\] after neglecting all decaying terms. If the \(k^{2}\) term becomes comparable to the constant one, the wave number needs to be equal to about \[k_{1}\simeq-\frac{1}{\sqrt{1-c_{s_{1}}^{2}}\tau_{1}}\simeq c_{s_ {1}}k_{c}\;. 
\tag{15}\] The wave number at which the \(k^{4}\) term becomes comparable with the \(k^{2}\) one is \[k_{2}\simeq-\frac{2\sqrt{1-c_{s_{1}}^{2}}}{c_{s_{1}}^{2}\tau_{1}}\simeq\frac{ 2}{c_{s_{1}}}k_{c}\;. \tag{16}\] It is obvious that \(k_{1}\ll k_{c}\), but \(k_{2}\gg k_{c}\) since \(c_{s_{1}}\ll 1\). Thus, \(k_{2}\) is beyond the infrared condition \(k\ll k_{c}\), which means that the power spectrum has no \(k^{4}\) growth, and the steepest growth of the power spectrum is only \(k^{2}\). At the CMB scale, the first term in Eq. (14) dominates, which leads to a scale-invariant spectrum consistent with the CMB observations. Going to the scales which are smaller than the CMB one, the second term begins to play a dominant role. The power spectrum becomes scale dependent and has a \(k^{2}\) growth. These results are shown clearly in Fig. 2, in which the approximate result given in Eq. (14) is very consistent with the numerical one. There is no dip in the power spectrum since no term cancels the constant one, which is different from the case of the ultraslow-roll inflation. III Growth behavior of power spectrum when both the sound speed and the slow-roll parameter \(\eta\) are changed suddenly We have known that the enhancement of the power spectrum can be realized by decreasing the sound speed \(c_{s}\) or reducing the slow-roll parameter \(\epsilon\). In the following, we will study the growth of the power spectrum when both \(\epsilon\) and \(c_{s}\) are suppressed. For simplicity, we will consider that the sound speed changes suddenly from \(1\) to a constant much less than one, and the slow-roll parameter \(\eta\) suddenly from a constant near zero to a negative constant. A negative \(\eta\) will lead to the decrease of \(\epsilon\) since \(\epsilon\propto\tau^{-\eta}\). We first consider the case that the variations of \(c_{s}\) and \(\eta\) occur simultaneously. ### Simultaneous changes of sound speed and slow-roll parameter \(\eta\) The scenario considered in this subsection is shown in Fig. 3. Initially, the Universe undergoes a standard slow-roll inflation, in which \(c_{s}=1\), \(\epsilon\simeq\epsilon_{0}\ll 1\) and \(\eta\sim 0\). At time \(\tau_{1}\), the sound speed \(c_{s}\) and the slow-roll parameter \(\eta\) change suddenly from \(1\) and \(\sim 0\) to a small value \(c_{s_{1}}\) and a negative constant \(\eta_{1}\), respectively. Thus, during the second phase, the slow-roll parameter \(\epsilon\) decays as \(\epsilon(\tau)=\epsilon_{0}(\tau/\tau_{1})^{-\eta_{1}}\) and the sound speed is a small constant Figure 2: The evolution of the power spectrum as a function of wave number \(k\). The solid-gray and dashed-blue lines represent the numerical and approximate results, respectively. The dashed-red line indicates the \(k^{2}\) growth. From Eq. (6), we obtain the general solution of the curvature perturbations \[\overline{\mathcal{R}}_{k}^{(1)}(\tau)=-\frac{c_{s_{1}}H\tau_{1}^{-\frac{\eta_{1} }{2}}\tau^{\frac{3+\eta_{1}}{2}}}{\sqrt{2\epsilon_{0}}}\left[\alpha_{2}H_{\nu}^ {(1)}(c_{s_{1}}k\tau)+\beta_{2}H_{\nu}^{(2)}(c_{s_{1}}k\tau)\right] \tag{17}\] in the second phase, where \(\alpha_{2}\) and \(\beta_{2}\) are two constants, and \(\nu=(3+\eta_{1})/2\). 
Matching \(\mathcal{R}_{k}^{(0)}\) and \(\overline{\mathcal{R}}_{k}^{(1)}\) at \(\tau=\tau_{1}\) by using the improved junction conditions, \(\mathcal{R}_{k}^{(0)}(\tau_{1})=\overline{\mathcal{R}}_{k}^{(1)}(\tau_{1})\) and \(A_{0}\mathcal{R}_{k}^{\prime(0)}(\tau_{1})=A_{1}\overline{\mathcal{R}}_{k}^{ \prime(1)}(\tau_{1})\), one can achieve that \[\alpha_{2} = \frac{\pi e^{-ik\tau_{1}}}{4\sqrt{2c_{s_{1}}^{2}\tau_{1}^{3}k^{3} }}\bigg{[}\left((3+\eta_{1})\left(1+ik\tau_{1}\right)-(c_{s_{1}}k\tau_{1})^{2} \right)H_{\nu}^{(2)}(c_{s_{1}}k\tau_{1})\] \[-c_{s_{1}}k\tau_{1}\left(1+ik\tau_{1}\right)H_{\nu+1}^{(2)}(c_{s_{ 1}}k\tau_{1})\bigg{]}\;,\] and \[\beta_{2} = -\frac{\pi e^{-ik\tau_{1}}}{4\sqrt{2c_{s_{1}}^{2}\tau_{1}^{3}k^{3} }}\bigg{[}\left((3+\eta_{1})\left(1+ik\tau_{1}\right)-(c_{s_{1}}k\tau_{1})^{2} \right)H_{\nu}^{(1)}(c_{s_{1}}k\tau_{1})\] \[-c_{s_{1}}k\tau_{1}\left(1+ik\tau_{1}\right)H_{\nu+1}^{(1)}(c_{s_ {1}}k\tau_{1})\bigg{]}\;.\] Substituting \(\alpha_{2}\) and \(\beta_{2}\) into Eq. (17) gives the expression of the curvature perturbations during the second phase. Expanding this expression in the infrared limit (\(-c_{s_{1}}k\tau\to 0\) and \(-c_{s_{1}}k\tau_{1}\to 0\)), we arrive at \[\overline{\mathcal{R}}_{k}^{(1)}(\tau) = \frac{iHe^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k^{3}}}-\frac{H\tau_{ 1}e^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k}}-\left(\frac{3ic_{s_{1}}^{2}H\tau_{1} ^{2}e^{-ik\tau_{1}}}{4\sqrt{\epsilon_{0}}(3+\eta_{1})}+\frac{ic_{s_{1}}^{2}H \eta_{1}\left(-\tau_{1}\right)^{-1-\eta_{1}}e^{-ik\tau_{1}}}{2\sqrt{\epsilon_{ 0}}(1+\eta_{1})(3+\eta_{1})}(-\tau)^{3+\eta_{1}}\right)k^{1/2}\] (18) \[-\frac{i}{2}\left(\frac{3ic_{s_{1}}^{2}H\tau_{1}^{2}e^{-ik\tau_{1 }}}{4\sqrt{\epsilon_{0}}(3+\eta_{1})}+\frac{ic_{s_{1}}^{2}H\tau_{1}^{2}e^{-ik \tau_{1}}}{2\sqrt{\epsilon_{0}}(3+\eta_{1})}+\frac{ic_{s_{1}}^{2}H\tau_{1}^{2} e^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}}(1+\eta_{1})(3+\eta_{1})}(-\tau)^{3+\eta_{1}} \right)k^{1/2}\] \[-\frac{i}{2}\left(\frac{3ic_{s_{1}}^{2}H\tau_{1}^{2}e^{-ik\tau_{1 }}}{4\sqrt{\epsilon_{0}}(3+\eta_{1})}+\frac{ic_{s_{1}} \[k_{1}\simeq-\frac{1}{\tau_{1}}=c_{s_{1}}k_{c}\;, \tag{20}\] and \[k_{2}\simeq-\frac{\sqrt{3+\eta_{1}}}{c_{s_{1}}\tau_{1}}=\sqrt{3+\eta_{1}}k_{c}\;. \tag{21}\] The condition \(k_{2}\ll k_{c}\) for a \(k^{4}\) growth requires that \(\sqrt{3+\eta_{1}}\ll 1\). This is hard to satisfy since \(\eta_{1}\) must be fine-tuned to be very close to \(-3\). Thus, usually the highest growth of the power spectrum can only reach \(k^{2}\). Furthermore, since the power spectrum only has the \(k^{2}\) growth, the dip phenomenon does not appear. These characters can be found clearly in Fig. 4, where the power spectrums from numerical and approximate analyses are plotted in the \(\eta_{1}=-2\) case. When \(\eta_{1}<-3\), since the coefficient of the \(k^{n}\) term is very tedious, we only consider three special cases (\(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}\ll 1\), \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}=1\) and \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}\gg 1\)), as shown in Table 1 in which the predominant terms of \(k^{n}\) are given, to investigate analytically the growth of the power spectrum. 
In the case of \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}\ll 1\), we find that \[k_{1} \simeq -\frac{1}{\tau_{1}}=c_{s_{1}}k_{c}\;,\] \[k_{2} \simeq -\frac{\sqrt{(1+\eta_{1})(3+\eta_{1})}}{\sqrt{2}c_{s_{1}}\tau_{1 }}e^{\frac{(3+\eta_{1})N_{2}}{2}}=\frac{\sqrt{(1+\eta_{1})(3+\eta_{1})}}{\sqrt {2}}e^{\frac{(3+\eta_{1})N_{2}}{2}}k_{c}\;,\] \[k_{3} \simeq -\frac{\sqrt{2(1+\eta_{1})(3+\eta_{1})}}{c_{s_{1}}\tau_{1}}e^{ \frac{(3+\eta_{1})N_{2}}{2}}=\sqrt{2(1+\eta_{1})(3+\eta_{1})}e^{\frac{(3+\eta _{1})N_{2}}{2}}k_{c}\;. \tag{22}\] \begin{table} \begin{tabular}{c|c|c|c} & \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}\ll 1\) & \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}=1\) & \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}\gg 1\) \\ \hline \(\tau_{1}^{2}k^{2}\) & \(1\) & \(1+\frac{2\eta_{1}}{(1+\eta_{1})(3+\eta_{1})}\) & \(\frac{2c_{s_{1}}^{2}\eta_{1}e^{-(3+\eta_{1})N_{2}}}{(1+\eta_{1})(3+\eta_{1})}\) \\ \hline \(c_{s_{1}}^{2}\tau_{1}^{4}k^{4}\) & \(-\frac{2e^{-(3+\eta_{1})N_{2}}}{(1+\eta_{1})(3+\eta_{1})}\) & \(-\frac{(\eta_{1}^{2}+8\eta_{1}+6)e^{-(3+\eta_{1})N_{2}}}{(1+\eta_{1})^{2}(3+ \eta_{1})^{2}}\) & \(\frac{c_{s_{1}}^{2}\eta_{1}^{2}e^{-2(3+\eta_{1})N_{2}}}{(1+\eta_{1})^{2}(3+ \eta_{1})^{2}}\) \\ \hline \(c_{s_{1}}^{4}\tau_{1}^{6}k^{6}\) & \(\frac{e^{-2(3+\eta_{1})N_{2}}}{(1+\eta_{1})^{2}(3+\eta_{1})^{2}}\) & \(\frac{e^{-2(3+\eta_{1})N_{2}}}{(1+\eta_{1})^{2}(3+\eta_{1})^{2}}\) & \(\frac{e^{-2(3+\eta_{1})N_{2}}}{(1+\eta_{1})^{2}(3+\eta_{1})^{2}}\) \\ \end{tabular} \end{table} Table 1: The predominant terms of \(k^{n}\) in three different situations. Figure 4: The power spectrum in the \(\eta_{1}=-2\) case. The gray-solid and blue-dashed lines represent the numerical and approximate results, respectively. The dashed-red line indicates the \(k^{2}\) growth. Apparently, the condition \(k_{3}<k_{c}\) is easy to be satisfied since \(\eta_{1}<-3\). Thus the power spectrum can have a \(k^{6}\) growth. Since the coefficient of the \(k_{4}\) term is negative, the power spectrum has a dip preceding the \(k^{6}\) growth. When \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}=1\), we have \[k_{1} \simeq \sqrt{\frac{\eta_{1}^{2}+4\eta_{1}+3}{(\eta_{1}^{2}+6\eta_{1}+3)} }c_{s_{1}}k_{c}\;,\] \[k_{2} \simeq \sqrt{\frac{(\eta_{1}^{2}+4\eta_{1}+3)(\eta_{1}^{2}+6\eta_{1}+3)} {(\eta_{1}^{2}+8\eta_{1}+6)}}c_{s_{1}}k_{c}\;,\] \[k_{3} \simeq \sqrt{\eta_{1}^{2}+8\eta_{1}+6}\;c_{s_{1}}k_{c}\;. \tag{23}\] We find that \(k_{1}\simeq k_{2}\simeq k_{3}<k_{c}\), which means that the power spectrum will go directly to the \(k^{6}\) growth after the scale-invariant spectrum. In the case of \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}\gg 1\), one can obtain \[k_{1} \simeq \frac{\sqrt{(1+\eta_{1})(3+\eta_{1})}}{\sqrt{-2\eta_{1}}}e^{ \frac{1}{2}(3+\eta_{1})N_{2}}k_{c}\;,\] \[k_{2} \simeq \frac{\sqrt{(1+\eta_{1})(3+\eta_{1})}}{\sqrt{-\eta_{1}}}e^{ \frac{1}{2}(3+\eta_{1})N_{2}}k_{c}\;,\] \[k_{3} \simeq |\eta_{1}|c_{s_{1}}k_{c}\;. \tag{24}\] Since \(k_{2}\ll k_{c}\), the power spectrum will grow with \(k^{4}\) when the scales become smaller than the CMB one. It eventually goes into \(k^{6}\) growth due to \(k_{3}<k_{c}\). A negative \(\eta_{1}\) leads to a negative coefficient of the \(k^{2}\) term, which indicates that the power spectrum has a dip preceding the \(k^{4}\) growth. These characters can be seen in Fig. 5, where the numerical (gray -solid lines) and approximate (blue-dashed lines) results of the power spectrum with \(\eta_{1}=-4,-5,-6\) are plotted. One can see that the approximate results are consistent with the numerical ones. Moreover, Eq. 
(18) is clearly inapplicable for the cases of \(\eta_{1}=-1\) and \(\eta_{1}=-3\) due to the appearance of singularity. These two cases need to be treated separately. * \(\eta_{1}=-1\) We find that in the infrared limit, the solution of the curvature perturbations [Eq. (17)] has the form \[\overline{\mathcal{R}}_{k}^{(1)}(\tau)=\frac{iHe^{-ik\tau_{1}}}{2\sqrt{\epsilon _{0}k^{3}}}-\frac{H\tau_{1}e^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k}}-\frac{ic_{ s_{1}}^{2}Hk^{1/2}e^{-ik\tau_{1}}}{8\sqrt{\epsilon_{0}}}\left[3\tau_{1}^{2}-(- \tau)^{2}\right]+\ldots\;. \tag{25}\] The solution of the curvature perturbations consists of the constant part and the decaying one, which is similar to the solution in the case of \(\eta_{1}>-3\). The power spectrum of the curvature perturbations has the form \[\frac{\mathcal{P}_{\overline{\mathcal{R}}_{k}^{(1)}}}{\mathcal{P}_{0}}\simeq 1 +\tau_{1}^{2}k^{2}+\frac{9}{16}c_{s_{1}}^{4}\tau_{1}^{4}k^{4}\;. \tag{26}\] Figure 5: The power spectrums in the \(\eta_{1}=-4\), \(-5\), and \(-6\) cases. The gray-solid and blue-dashed lines represent the numerical and approximate results, respectively. The first, middle and last columns correspond to the \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}\ll 1\), \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}=1\), and \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{2}}\gg 1\) cases, respectively. We can see that the \(k^{2}\) and \(k^{4}\) terms become comparable at \[k_{2}\approx-\frac{4}{3c_{s_{1}}^{2}\tau_{1}}=\frac{4}{3c_{s_{1}}}k_{c}\;. \tag{27}\] Since \(k_{2}>k_{c}\), the power spectrum has no \(k^{4}\) growth. The corresponding numerical and approximate results of the power spectrum are shown in Fig. 6. Apparently at the small scales the power spectrum grows with a \(k^{2}\) slope. * \(\eta_{1}=-3\) In the infrared limit, Eq. (17) can be simplified to be \[\overline{\cal R}_{k}^{(1)}(\tau) = \frac{iHe^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k^{3}}}-\frac{H\tau_{ 1}e^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k}}+\frac{ic_{s_{1}}^{2}H\tau_{1}^{2}k^{ 1/2}e^{-ik\tau_{1}}}{8\sqrt{\epsilon_{0}}}\left[1+6\log\left(\frac{\tau}{\tau_ {1}}\right)\right] \tag{28}\] \[- \frac{ic_{s_{1}}^{2}H\tau_{1}^{3}k^{3/2}e^{-ik\tau_{1}}}{8\sqrt{ \epsilon_{0}}}\left[1+2\log\left(\frac{\tau}{\tau_{1}}\right)\right]+\ldots\;.\] The solution for the curvature perturbation consists of the constant term and the logarithmically growing one. This characteristic is different from the power-law growth in the \(\eta_{1}<-3\) case. Thus, we obtain that the power spectrum has the expression \[\frac{{\cal P}_{\overline{\cal R}_{k}^{(1)}}}{{\cal P}_{0}} \simeq 1+\frac{1}{2}\left(2-c_{s_{1}}^{2}\left(6N_{2}-1\right)\right) \tau_{1}^{2}k^{2}+\frac{1}{16}c_{s_{1}}^{2}\left(8+c_{s_{1}}^{2}\left(6N_{2}- 1\right)^{2}-16N_{2}\right)\tau_{1}^{4}k^{4} \tag{29}\] \[+ \frac{1}{16}c_{s_{1}}^{4}\left(2N_{2}-1\right)^{2}\tau_{1}^{6}k^ {6}\;.\] Figure 6: The power spectrum in the \(\eta_{1}=-1\) case.The gray-solid and blue-dashed lines represent the numerical and approximate results, respectively. The dashed-red line indicates the \(k^{2}\) growth. Since the maximum value of \(N_{2}\) is about \(30-40\) and \(c_{s_{1}}\sim{\cal O}(10^{-4})\), \(N_{2}c_{s_{1}}^{2}\) is significantly less than 1. Therefore, the expression of the power spectrum given in Eq. (29) can be simplified to be \[\frac{{\cal P}_{\overline{R}_{k}^{(1)}}}{{\cal P}_{0}}\simeq 1+\tau_{1}^{2}k^{2 }+\frac{1}{2}(1-2N_{2})c_{s_{1}}^{2}\tau_{1}^{4}k^{4}+\frac{1}{16}(2N_{2}-1)^ {2}c_{s_{1}}^{4}\tau_{1}^{6}k^{6}\;. 
\tag{30}\] From the above expression, we obtain \[k_{1} \simeq c_{s_{1}}k_{c}\;,\] \[k_{2} \simeq \frac{1}{\sqrt{N_{2}-1/2}}k_{c}\;,\] \[k_{3} \simeq \frac{2}{\sqrt{N_{2}-1/2}}k_{c}\;. \tag{31}\] Obviously, \(k_{1}\ll k_{c}\) and \(k_{2}\approx k_{3}\), which are less than \(k_{c}\) if \(N_{2}>9/2\). Thus, the power spectrum will have an era with a \(k^{2}\) growth and eventually with a \(k^{6}\) one at scales smaller than the CMB one when \(N_{2}>9/2\). Since the coefficient of the \(k^{4}\) term is negative, there is a dip preceding the \(k^{6}\) growth. These characters can be seen clearly in Fig. 7. Figure 7: The power spectrum in the \(\eta_{1}=-3\) case. The gray-solid and blue-dashed lines represent the numerical and approximate results, respectively. ### Changes of sound speed previous to slow-roll parameter \(\eta\) The simultaneous change of \(c_{s}\) and \(\eta\) is a harsh requirement. In the following, we abandon it, and first consider that the change of the sound speed is followed by that of \(\eta\). We assume that the sound speed changes suddenly from 1 to a small constant \(c_{s_{1}}\) at time \(\tau_{1}\), and \(\eta\) varies from \(\sim 0\) to a negative constant \(\eta_{1}\) at \(\tau_{2}\) (\(|\tau_{2}|<|\tau_{1}|\)), as is shown in Fig. 8. The \(|\tau|>|\tau_{1}|\) era, which represents the first phase, is the standard slow-roll inflation. In the second phase (\(|\tau_{2}|<|\tau|<|\tau_{1}|\)), the sound speed of the curvature perturbations is a small value \(c_{s_{1}}\). When \(|\tau|<|\tau_{2}|\), which corresponds to the third phase, the slow-roll parameter \(\epsilon\) decreases with the power-law: \(\epsilon(\tau)=\epsilon_{0}(\tau/\tau_{2})^{-\eta_{1}}\) and the sound speed keeps the small value \(c_{s_{1}}\). When \(|\tau|>|\tau_{1}|\), the solution of \({\cal R}_{k}^{(0)}\) is given in Eq. (7). In the second phase, the solution of the curvature perturbations has the form \[\widetilde{\cal R}_{k}^{(1)}(\tau)=\frac{ic_{s_{1}}H\left(-\tau \right)^{3/2}}{\sqrt{2\epsilon_{0}}}\left[\alpha_{3}H_{3/2}^{(1)}(c_{s_{1}}k \tau)+\beta_{3}H_{3/2}^{(2)}(c_{s_{1}}k\tau)\right]\, \tag{32}\] where \(\alpha_{3}\) and \(\beta_{3}\) are two constants. Using the matching condition: \({\cal R}_{k}^{(0)}(\tau_{1})=\widetilde{\cal R}_{k}^{(1)}(\tau_{1})\) and \(A_{0}{\cal R}_{k}^{\prime(0)}(\tau_{1})=A_{1}\widetilde{\cal R}_{k}^{\prime(1) }(\tau_{1})\), one can obtain that \[\alpha_{3} = -\frac{\left(1-c_{s_{1}}\right)\sqrt{\pi}e^{-i\left(1+c_{s_{1}} \right)k\tau_{1}}}{4\sqrt{c_{s_{1}}}},\] \[\beta_{3} = -\frac{\left(1+c_{s_{1}}\right)\sqrt{\pi}e^{-i\left(1-c_{s_{1}} \right)k\tau_{1}}}{4\sqrt{c_{s_{1}}}}. \tag{33}\] If \(|\tau|<|\tau_{2}|\), the solution of the curvature perturbations becomes \[\widetilde{\mathcal{R}}_{k}^{(2)}(\tau)=-\frac{c_{s_{1}}H\tau_{2}^{-\frac{\eta_{ 1}}{2}}\tau^{\frac{3+\eta_{1}}{2}}}{\sqrt{2\epsilon_{0}}}\left[\alpha_{4}H_{ \nu}^{(1)}(c_{s_{1}}k\tau)+\beta_{4}H_{\nu}^{(2)}(c_{s_{1}}k\tau)\right], \tag{34}\] where \(\alpha_{4}\) and \(\beta_{4}\) are two constants and \(\nu=(3+\eta_{1})/2\). 
Using the matching condition: \(\widetilde{\mathcal{R}}_{k}^{(1)}(\tau_{2})=\widetilde{\mathcal{R}}_{k}^{(2) }(\tau_{2})\) and \(\widetilde{\mathcal{R}}_{k}^{\prime(1)}(\tau_{2})=\widetilde{\mathcal{R}}_{k }^{\prime(2)}(\tau_{2})\), one finds \[\alpha_{4} = \frac{i\pi^{3/2}c_{s_{1}}^{1/2}\tau_{2}k}{16}e^{-i(1+c_{s_{1}})k \tau_{1}}\left[\begin{array}{c}\xi H_{\frac{3+\eta_{1}}{2}}^{(2)}(c_{s_{1}} k\tau_{2})-\lambda H_{\frac{1+\eta_{1}}{2}}^{(2)}(c_{s_{1}}k\tau_{2})\end{array} \right], \tag{35}\] \[\beta_{4} = -\frac{i\pi^{3/2}c_{s_{1}}^{1/2}\tau_{2}k}{16}e^{-i(1+c_{s_{1}}) k\tau_{1}}\left[\begin{array}{c}\xi H_{\frac{3+\eta_{1}}{2}}^{(1)}(c_{s_{1}}k \tau_{2})-\lambda H_{\frac{1+\eta_{1}}{2}}^{(1)}(c_{s_{1}}k\tau_{2})\end{array} \right]\, \tag{36}\] where \[\xi=(1-c_{s_{1}})H_{\frac{1}{2}}^{(1)}(c_{s_{1}}k\tau_{2})+(1+c_{s _{1}})e^{2ic_{s_{1}}k\tau_{1}}H_{\frac{1}{2}}^{(2)}(c_{s_{1}}k\tau_{2})\,\] \[\lambda=(1-c_{s_{1}})H_{\frac{3}{2}}^{(1)}(c_{s_{1}}k\tau_{2})+(1+c _{s_{1}})e^{2ic_{s_{1}}k\tau_{1}}H_{\frac{3}{2}}^{(2)}(c_{s_{1}}k\tau_{2}). \tag{37}\] In the infrared region (\(-c_{s_{1}}k\tau\to 0\), \(-c_{s_{1}}k\tau_{1}\to 0\) and \(-c_{s_{1}}k\tau_{2}\to 0\) ), we obtain that \[\widetilde{\mathcal{R}}_{k}^{(2)}(\tau) = \frac{iHe^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k^{3}}}-\frac{H\tau_{ 1}e^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k}} \tag{38}\] \[- \left(\frac{ic_{s_{1}}^{2}H\left((3+\eta_{1})\tau_{1}^{2}-\eta_{ 1}\tau_{2}^{2}\right)e^{-ik\tau_{1}}}{4\sqrt{\epsilon_{0}}(3+\eta_{1})}+\frac {ic_{s_{1}}^{2}H\eta_{1}\tau_{2}^{-1-\eta_{1}}e^{i\pi\eta_{1}-ik\tau_{1}}}{2 \sqrt{\epsilon_{0}}(1+\eta_{1})(3+\eta_{1})}(-\tau)^{3+\eta_{1}}\right)k^{1/2}\] \[+ \left(\begin{array}{c}\frac{c_{s_{1}}^{2}H\left((3+\eta_{1}) \tau_{1}^{3}-3\eta_{1}\tau_{1}\tau_{2}^{2}+2\eta_{1}\tau_{2}^{3}\right)e^{-ik \tau_{1}}}{12\sqrt{\epsilon_{0}}(3+\eta_{1})}\\ +\frac{c_{s_{1}}^{2}H\left(\eta_{1}\tau_{1}-\tau_{2}-\eta_{1}\tau_{2}\right) \tau_{2}^{-1-\eta_{1}}e^{i\pi\eta_{1}-ik\tau_{1}}}{2\sqrt{\epsilon_{0}}(1+\eta _{1})(3+\eta_{1})}(-\tau)^{3+\eta_{1}}\end{array}\right)k^{3/2}\] \[+ \ldots\.\] Clearly, the solution given in Eq. (38) contains a time-independent part and a time-dependent one, which will grow with time when \(\eta_{1}<-3\). Using \(N_{2}\) and \(N_{3}\) to denote the number of \(e\)-folds during the second and third phases, respectively, i.e. \(\tau_{2}=\tau_{1}e^{-N_{2}}\) and \(\tau=\tau_{1}e^{-N_{2}-N_{3}}\), we obtain, from Eq. (38), the expression of the power spectrum of the curvature perturbations \[\frac{\mathcal{P}_{\widetilde{\mathcal{R}}_{k}^{(2)}}}{\mathcal{P }_{0}} \simeq 1+\left(1-c_{s_{1}}^{2}+\frac{c_{s_{1}}^{2}\eta_{1}}{3+\eta_{1}} e^{-2N_{2}}+\frac{2c_{s_{1}}^{2}\eta_{1}}{(1+\eta_{1})(3+\eta_{1})}e^{-2N_{2}-(3+ \eta_{1})N_{3}}\right)\tau_{1}^{2}k^{2} \tag{39}\] \[- \left(\begin{array}{c}\frac{1}{3}+\frac{2\eta_{1}e^{-3N_{2}}} {9+3\eta_{1}}-\frac{3\eta_{1}e^{-2N_{2}}}{9+3\eta_{1}}+\frac{2+2\eta_{1}-2\eta _{1}e^{N_{2}}}{(1+\eta_{1})(3+\eta_{1})}e^{-3N_{2}-(3+\eta_{1})N_{3}}\\ \\ -c_{s_{1}}^{2}\left(\frac{1}{2}-\frac{\eta_{1}}{6+2\eta_{1}}e^{-2N_{2}}-\frac{ \eta_{1}}{(1+\eta_{1})(3+\eta_{1})}e^{-2N_{2}-(3+\eta_{1})N_{3}}\right)^{2} \end{array}\right)c_{s_{1}}^{2}\tau_{1}^{4}k^{4}\] \[+ \left(\frac{1}{6}+\frac{\eta_{1}e^{-3N_{2}}}{9+3\eta_{1}}-\frac{ \eta_{1}e^{-2N_{2}}}{6+2\eta_{1}}+\frac{1+\eta_{1}-\eta_{1}e^{N_{2}}}{(1+\eta_{1 })(3+\eta_{1})}e^{-3N_{2}-(3+\eta_{1})N_{3}}\right)^{2}c_{s_{1}}^{4}\tau_{1}^{6 }k^{6}\;. 
\tag{39}\] We first study the \(\eta_{1}>-3\) case, which means that there are no growing terms in the solution of the curvature perturbations and all time-dependent terms in Eq. (38) decay with the cosmic expansion. Thus, all terms containing \(N_{2}\) and \(N_{3}\) in Eq. (39) can be neglected and the power spectrum can be simplified as \[\frac{{\cal P}_{\widehat{\cal R}_{k}^{(2)}}}{{\cal P}_{0}} \simeq 1+\tau_{1}^{2}k^{2}-\frac{1}{3}c_{s_{1}}^{2}\tau_{1}^{4}k^{4}+\frac{1}{36 }c_{s_{1}}^{4}\tau_{1}^{6}k^{6}\;. \tag{40}\] It can be found that the wave numbers at which the scale-invariant term is comparable to the \(k^{2}\) term and the \(k^{2}\) term is comparable to \(k^{4}\) one happen, respectively, at \[k_{1} \simeq c_{s_{1}}k_{c}\;,\] \[k_{2} \simeq \sqrt{3}k_{c}\;. \tag{41}\] The power spectrum has a growth rate of \(k^{2}\) since \(k_{2}>k_{c}\). These results can be found clearly in Fig. 9, where the evolutions of the power spectrum from numerical and approximate analyses are plotted in the \(\eta_{1}=-2\) case. At the CMB scale, the power spectrum is scale invariant which is consistent with the CMB observations. At scales smaller than the CMB scale, the power spectrum becomes scale-dependent with a \(k^{2}\) growth. When \(\eta_{1}<-3\), since the coefficient of \(k^{n}\) is very tedious, we only consider three special cases (\(N_{2}\gg N_{3}\), \(N_{2}\ll N_{3}\), and \(N_{2}=N_{3}\)) to investigate analytically the growth of the power spectrum. Figure 9: The power spectrum in the \(\eta_{1}=-2\) case. The gray-solid and blue-dashed lines represent the numerical and approximate results, respectively. The red-dashed line indicates the \(k^{2}\) growth. First of all, when \(N_{2}\gg N_{3}\), all terms containing \(N_{2}\) in Eq. (39) can be neglected. Thus, the dominant term of \(k^{n}\) is the same as the one in the case of \(\eta_{1}>-3\). Therefore, the evolution of the power spectrum is the same as in the case of \(\eta_{1}>-3\), and only has the \(k^{2}\) growth. When \(N_{2}=N_{3}\), the expression for the power spectrum given in Eq. (39) can be simplified to be \[\frac{{\cal P}_{\widetilde{\cal R}_{k}^{(2)}}}{{\cal P}_{0}} \simeq 1+\tau_{1}^{2}k^{2}-\left(\frac{1}{3}-\frac{2\eta_{1}}{(1+\eta_ {1})(3+\eta_{1})}e^{-(5+\eta_{1})N_{3}}\right)c_{s_{1}}^{2}\tau_{1}^{4}k^{4} \tag{42}\] \[+ \left(\frac{1}{36}-\frac{\eta_{1}}{3(1+\eta_{1})(3+\eta_{1})}e^{ -(5+\eta_{1})N_{3}}+\frac{\eta_{1}^{2}}{(1+\eta_{1})^{2}(3+\eta_{1})^{2}}e^{- 2(5+\eta_{1})N_{3}}\right)c_{s_{1}}^{4}\tau_{1}^{6}k^{6}\;.\] Since different values of \(\eta_{1}\) will give different results, we will discuss this situation by considering following different cases: \[\frac{{\cal P}_{\widetilde{\cal R}_{k}^{(2)}}}{{\cal P}_{0}} \simeq \left\{\begin{array}{ll}1+\tau_{1}^{2}k^{2}-\frac{1}{3}c_{s_{1} }^{2}\tau_{1}^{4}k^{4}+\frac{1}{36}c_{s_{1}}^{4}\tau_{1}^{6}k^{6}&-5<\eta_{1}< -3\\ 1+\tau_{1}^{2}k^{2}-\frac{19}{12}c_{s_{1}}^{2}\tau_{1}^{4}k^{4}+ \frac{361}{576}c_{s_{1}}^{4}\tau_{1}^{6}k^{6}&\eta_{1}=-5\\ 1+\tau_{1}^{2}k^{2}+\frac{2\eta_{1}e^{-(5+\eta_{1})N_{3}}}{(1+\eta_{1})(3+\eta _{1})}c_{s_{1}}^{2}\tau_{1}^{4}k^{4}+\frac{\eta_{1}^{2}e^{-2(5+\eta_{1})N_{3} }}{(1+\eta_{1})^{2}(3+\eta_{1})^{2}}c_{s_{1}}^{4}\tau_{1}^{6}k^{6}&\eta_{1}<-5 \;.\end{array}\right.\] When \(-5\leq\eta_{1}<-3\), we find that \(k_{2}\simeq k_{c}\), which means that the power spectrum only has the \(k^{2}\) growth. 
In the \(\eta_{1}<-5\) case, we obtain \[k_{1} \simeq c_{s_{1}}k_{c}\;,\] \[k_{2} \simeq \frac{\sqrt{(1+\eta_{1})(3+\eta_{1})}}{\sqrt{-2\eta_{1}}}e^{\frac{1}{2}(5+\eta_{1})N_{3}}k_{c}\;,\] \[k_{3} \simeq \frac{\sqrt{2(1+\eta_{1})(3+\eta_{1})}}{\sqrt{-\eta_{1}}}e^{\frac{1}{2}(5+\eta_{1})N_{3}}k_{c}\;. \tag{43}\] Apparently, \(k_{3}\) is less than \(k_{c}\) if \(\frac{\sqrt{2(1+\eta_{1})(3+\eta_{1})}}{\sqrt{-\eta_{1}}}e^{\frac{1}{2}(5+\eta_{1})N_{3}}\ll 1\), which can easily be realized since \(\eta_{1}<-5\). Thus, the power spectrum can have the \(k^{6}\) growth. Since there is a negative coefficient in the \(k^{4}\) term, the power spectrum has a dip preceding the \(k^{6}\) growth. In Fig. 10, the power spectra in the cases of \(\eta_{1}=-4,-5\) and \(-6\) for the \(N_{2}\gg N_{3}\) and \(N_{2}=N_{3}\) cases are plotted. We can see that when \(N_{2}\gg N_{3}\), the highest growth slope of the power spectrum is \(k^{2}\). In the case of \(N_{2}=N_{3}\), the highest growth rate for \(\eta_{1}=-4\) and \(-5\) is still \(k^{2}\), while for \(\eta_{1}=-6\), the highest growth rate can reach up to \(k^{6}\). Figure 10: The power spectra in the \(\eta_{1}=-4\), \(-5\) and \(-6\) cases. The gray-solid and blue-dashed lines represent the numerical and approximate results, respectively. When \(N_{2}\ll N_{3}\), the power spectrum can be approximated as \[\frac{{\cal P}_{\widetilde{\cal R}_{k}^{(2)}}}{{\cal P}_{0}} \simeq 1+\left(1+\frac{2c_{s_{1}}^{2}\eta_{1}}{(1+\eta_{1})(3+\eta_{1})}e^{-(3+\eta_{1})N_{3}}\right)\tau_{1}^{2}k^{2} \tag{44}\] \[+ \left(\frac{2\eta_{1}}{(1+\eta_{1})(3+\eta_{1})}e^{-(3+\eta_{1})N_{3}}+\frac{c_{s_{1}}^{2}\eta_{1}^{2}}{(1+\eta_{1})^{2}(3+\eta_{1})^{2}}e^{-2(3+\eta_{1})N_{3}}\right)c_{s_{1}}^{2}\tau_{1}^{4}k^{4}\] \[+ \frac{\eta_{1}^{2}}{(1+\eta_{1})^{2}(3+\eta_{1})^{2}}e^{-2(3+\eta_{1})N_{3}}c_{s_{1}}^{4}\tau_{1}^{6}k^{6}\;.\] If \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{3}}\ll 1\), the wave numbers at which the \(k^{n}\) and \(k^{n-2}\) terms become comparable occur at \[k_{1} \simeq c_{s_{1}}k_{c}\;,\] \[k_{2} \simeq \frac{\sqrt{(1+\eta_{1})(3+\eta_{1})}}{\sqrt{-2\eta_{1}}}e^{\frac{1}{2}(3+\eta_{1})N_{3}}k_{c}\;,\] \[k_{3} \simeq \frac{\sqrt{2(1+\eta_{1})(3+\eta_{1})}}{\sqrt{-\eta_{1}}}e^{\frac{1}{2}(3+\eta_{1})N_{3}}k_{c}\;. \tag{45}\] Apparently, \(k_{3}<k_{c}\) can be satisfied easily since \(\eta_{1}<-3\), and thus the power spectrum can have a \(k^{6}\) growth. The dip will appear preceding the \(k^{6}\) growth since the \(k^{4}\) term has a negative coefficient. If \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{3}}\gg 1\), we obtain \[k_{1} \simeq \frac{\sqrt{(1+\eta_{1})(3+\eta_{1})}}{\sqrt{-2\eta_{1}}}e^{\frac{1}{2}(3+\eta_{1})N_{3}}k_{c}\;,\] \[k_{2} \simeq \frac{\sqrt{2(1+\eta_{1})(3+\eta_{1})}}{\sqrt{-\eta_{1}}}e^{\frac{1}{2}(3+\eta_{1})N_{3}}k_{c}\;,\] \[k_{3}\simeq c_{s_{1}}k_{c}\;. \tag{46}\] It can be found that \(k_{3}<k_{c}\) and \(k_{1}\approx k_{2}\), which means that the power spectrum enters the \(k^{4}\) growth directly after the scale-invariant spectrum, and finally enters the \(k^{6}\) growth. The power spectrum has a dip preceding the \(k^{4}\) growth due to the negative coefficient in the \(k^{2}\) term. These features can be seen in Fig. 11, where the power spectra in the cases of \(\eta_{1}=-4,-5\) and \(-6\) for \(N_{2}\ll N_{3}\) are plotted. Furthermore, when \(\eta_{1}=-1\) and \(-3\) there are singularities in Eq. (38), and thus these cases need to be studied separately.
To avoid this problem, we first set the value of \(\eta_{1}\), and then expand Eq. (34) in the infrared limit to obtain \[\widetilde{\mathcal{R}}_{k}^{(2)}(\tau)=\left\{\begin{array}{ll}\frac{iHe^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k^{3}}}-\frac{H\tau_{1}e^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k}}-\frac{iHc_{s_{1}}^{2}e^{-ik\tau_{1}}k^{1/2}}{8\sqrt{\epsilon_{0}}}\left[2\tau_{1}^{2}+\tau_{2}^{2}-(-\tau)^{2}\right]+\ldots&\eta_{1}=-1\\ \frac{iHe^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k^{3}}}-\frac{H\tau_{1}e^{-ik\tau_{1}}}{2\sqrt{\epsilon_{0}k}}-\frac{iHc_{s_{1}}^{2}e^{-ik\tau_{1}}k^{1/2}}{8\sqrt{\epsilon_{0}}}\left[2\tau_{1}^{2}-3\tau_{2}^{2}-6\tau_{2}^{2}\log\left(\frac{\tau}{\tau_{2}}\right)\right]+\ldots&\eta_{1}=-3\;.\end{array}\right.\] In the case of \(\eta_{1}=-1\), the solution consists of constant and decaying terms, and thus it is dominated by the constant terms. When \(\eta_{1}=-3\), in addition to the constant terms, the solution of the curvature perturbations contains a logarithmically growing term. This behavior differs from that in the \(\eta_{1}<-3\) case, where the growth of the solution is power law. Since the coefficient of the logarithmically growing term depends on \(c_{s_{1}}^{2}\), which is much less than one in our analysis, the contribution of the growing term to the solution is negligible. Figure 11: The power spectra for \(N_{2}\ll N_{3}\) in the \(\eta_{1}=-4,-5\) and \(-6\) cases. The gray-solid and blue-dashed lines represent the numerical and approximate results, respectively. The first row corresponds to the \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{3}}\ll 1\) case and the second row to the \(c_{s_{1}}^{2}e^{-(3+\eta_{1})N_{3}}\gg 1\) case. Thus, from the above expression, we obtain that the power spectrum has the same form \[\frac{\mathcal{P}_{\widehat{\mathcal{R}}_{k}^{(2)}}}{\mathcal{P}_{0}}\simeq 1+\tau_{1}^{2}k^{2}+\frac{1}{4}c_{s_{1}}^{4}\tau_{1}^{4}k^{4} \tag{47}\] for \(\eta_{1}=-1\) and \(-3\). The steepest growth is apparently \(k^{2}\). The corresponding numerical and approximate results of the power spectrum are shown in Fig. 12. ### Change of the slow-roll parameter \(\eta\) prior to the change of the sound speed This scenario is shown in Fig. 13. The slow-roll parameter \(\eta\) changes from \(\sim 0\) to a negative constant \(\eta_{1}\) at \(\tau_{1}\), and the sound speed decreases from \(1\) to a small constant \(c_{s_{1}}\) at \(\tau_{2}\) (\(|\tau_{2}|<|\tau_{1}|\)). Figure 12: The power spectra in the \(\eta_{1}=-1\) and \(\eta_{1}=-3\) cases. The gray-solid and blue-dashed lines represent the numerical and approximate results, respectively. Since the method for the analytical solution in this case is similar to that of the preceding subsection, we do not give the details here. The general expression of the curvature perturbations in the infrared region \((-c_{s_{1}}k\tau\to 0,\,-c_{s_{1}}k\tau_{2}\to 0\) and \(-k\tau_{1}\to 0)\) is too cumbersome to be shown here. We only consider the integer \(\eta_{1}\) cases. Different from the results obtained in the preceding subsection, we find that the analytical results do not coincide with the numerical ones when \(\eta_{1}=-1\) and \(-2\).
Therefore, we only give the infrared expressions \((-k\tau_{1}\to 0,-c_{s_{1}}k\tau_{2}\to 0\) and \(-c_{s_{1}}k\tau\to 0)\) of the curvature perturbations in the \(\eta_{1}\leq-3\) case: \[\left\{\begin{array}{ll}\frac{iH}{2\sqrt{\epsilon_{0}k^{3}}}\\ +\left(\frac{iH\left(3\tau_{1}^{2}-\left(1-c_{s_{1}}^{2}\right)\tau_{2}^{2} \right)}{8\sqrt{\epsilon_{0}}}+\frac{3iH\tau_{1}^{2}}{4\sqrt{\epsilon_{0}}} \left(c_{s_{1}}^{2}\log\left[\frac{\tau}{\tau_{2}}\right]-\log\left[\frac{ \tau_{1}}{\tau_{2}}\right]\right)\right)k^{1/2}\\ +\left(\frac{H\tau_{1}^{3}}{6\sqrt{\epsilon_{0}}}-\frac{H\tau_{1}^{3}}{2 \sqrt{\epsilon_{0}}}\log\left[\frac{\tau_{1}}{\tau_{2}}\right]+\frac{c_{s_{1 }}^{2}H\tau_{1}^{3}}{2\sqrt{\epsilon_{0}}}\log\left[\frac{\tau}{\tau_{2}} \right]\right)k^{3/2}+\ldots&\eta_{1}=-3\\ \frac{iH}{2\sqrt{\epsilon_{0}k^{3}}}\\ -\left(\frac{iH\left(\left(1-c_{s_{1}}^{2}\right)\left(8\tau_{1}^{3}+\tau_{2} ^{3}\right)-12\tau_{1}^{2}\tau_{2}\right)}{12\sqrt{\epsilon_{0}}\tau_{2}}- \frac{2ic_{s_{1}}^{2}H\tau_{1}^{3}}{3\sqrt{\epsilon_{0}}}(-\tau)^{-1}\right)k ^{1/2}.\\ -\left(\frac{H\tau_{1}^{3}\left(3\left(1-c_{s_{1}}^{2}\right)\tau_{1}-4\tau_{ 2}\right)}{6\sqrt{\epsilon_{0}}\tau_{2}}-\frac{c_{s_{1}}^{2}H\tau_{1}^{4}}{2 \sqrt{\epsilon_{0}}}\left(-\tau\right)^{-1}\right)k^{3/2}+\ldots&\eta_{1}=-4\\ \frac{iH}{2\sqrt{\epsilon_{0}k^{3}}}\\ -\left(\frac{iH\left(\left(1-c_{s_{1}}^{2}\right)\left(5\tau_{1}^{4}+\tau_{2} ^{4}\right)-10\tau_{1}^{2}\tau_{2}^{2}\right)}{16\sqrt{\epsilon_{0}}\tau_{2} ^{2}}+\frac{5ic_{s_{1}}^{2}H\tau_{1}^{4}}{16\sqrt{\epsilon_{0}}}(-\tau)^{-2} \right)k^{1/2}\\ -\left(\frac{H\tau_{1}^{3}\left(3\left(1-c_{s_{1}}^{2}\right)\tau_{1}^{2}-5 \tau_{2}^{2}\right)}{12\sqrt{\epsilon_{0}}\tau_{2}^{2}}+\frac{c_{s_{1}}^{2}H \tau_{1}^{5}}{4\sqrt{\epsilon_{0}}}\left(-\tau\right)^{-2}\right)k^{3/2}+ \ldots&\eta_{1}=-5\\ \frac{iH}{2\sqrt{\epsilon_{0}k^{3}}}\\ -\left(\frac{iH\left(\left(1-c_{s_{1}}^{2}\right)\left(4\tau_{1}^{5}+\tau_{2} ^{5}\right)-10\tau_{1}^{2}\tau_{2}^{3}\right)}{20\sqrt{\epsilon_{0}}\tau_{2} ^{3}}-\frac{5ic_{s_{1}}^{2}H\tau_{1}^{5}}{5\sqrt{\epsilon_{0}}}(-\tau)^{-3} \right)k^{1/2}\\ -\left(\frac{H\tau_{1}^{3}\left(\left(1-c_{s_{1}}^{2}\right)\tau_{1}^{3}-2 \tau_{2}^{3}\right)}{6\sqrt{\epsilon_{0}}\tau_{2}^{3}}-\frac{c_{s_{1}}^{2}H \tau_{1}^{6}}{6\sqrt{\epsilon_{0}}}\left(-\tau\right)^{-3}\right)k^{3/2}+ \ldots&\eta_{1}=-6\\ \end{array}\right.\] From the above expression, one can obtain the power spectrum: \[\frac{\mathcal{P}_{\bar{\mathbb{R}}_{k}^{(2)}}}{\mathcal{P}_{0}}\simeq\left\{ \begin{array}{ll}1+\frac{1}{2}\left(3-e^{-2N_{2}}-6N_{2}-6c_{s_{1}}^{2}N_{3} \right)\tau_{1}^{2}k^{2}&\\ \quad+\frac{1}{16}\left(3-e^{-2N_{2}}-6N_{2}-6c_{s_{1}}^{2}N_{3}\right)^{2} \tau_{1}^{4}k^{4}&\\ \quad+\frac{1}{9}\left(1-3N_{2}-3c_{s_{1}}^{2}N_{3}\right)^{2}\tau_{1}^{6}k^{ 6}&\eta_{1}=-3\\ 1+\frac{1}{3}\left(12-e^{-2N_{2}}-8e^{N_{2}}+8c_{s_{1}}^{2}e^{N_{2}}-8c_{s_{1}} ^{2}e^{N_{2}+N_{3}}\right)\tau_{1}^{2}k^{2}&\\ \quad+\frac{1}{36}\left(12-e^{-2N_{2}}-8e^{N_{2}}+8c_{s_{1}}^{2}e^{N_{2}}-8c_{ s_{1}}^{2}e^{N_{2}+N_{3}}\right)^{2}\tau_{1}^{4}k^{4}&\\ \quad+\frac{1}{9}\left(4-3e^{N_{2}}+3c_{s_{1}}^{2}e^{N_{2}}-3c_{s_{1}}^{2}e^{N _{2}+N_{3}}\right)^{2}\tau_{1}^{6}k^{6}&\eta_{1}=-4\\ 1+\frac{1}{4}\left(10-e^{-2N_{2}}-5e^{2N_{2}}+5c_{s_{1}}^{2}e^{2N_{2}}-5c_{s_{1 }}^{2}e^{2N_{2}+2N_{3}}\right)\tau_{1}^{2}k^{2}&\\ \quad+\frac{1}{64}\left(10-e^{-2N_{2}}-5e^{2N_{2}}+5c_{s_{1}}^{2}e^{2N_{2}}-5c_ {s_{1}}^{2}e^{2N_{2}+2N_{3}}\right)^{2}\tau_{1}^{4}k^{4}&\\ 
\quad+\frac{1}{36}\left(5-3e^{2N_{2}}+3c_{s_{1}}^{2}e^{2N_{2}}-3c_{s_{1}}^{2}e^{2N_{2}+2N_{3}}\right)^{2}\tau_{1}^{6}k^{6}&\eta_{1}=-5\\ 1+\frac{1}{5}\left(10-e^{-2N_{2}}-4e^{3N_{2}}+4c_{s_{1}}^{2}e^{3N_{2}}-4c_{s_{1}}^{2}e^{3N_{2}+3N_{3}}\right)\tau_{1}^{2}k^{2}&\\ \quad+\frac{1}{100}\left(10-e^{-2N_{2}}-4e^{3N_{2}}+4c_{s_{1}}^{2}e^{3N_{2}}-4c_{s_{1}}^{2}e^{3N_{2}+3N_{3}}\right)^{2}\tau_{1}^{4}k^{4}&\\ \quad+\frac{1}{9}\left(2-e^{3N_{2}}+c_{s_{1}}^{2}e^{3N_{2}}-c_{s_{1}}^{2}e^{3N_{2}+3N_{3}}\right)^{2}\tau_{1}^{6}k^{6}&\eta_{1}=-6\;.\end{array}\right. \tag{48}\] Here \(N_{2}\) and \(N_{3}\) are the number of \(e\)-folds during the second and third phases, respectively. The infrared limit requires \(-k\tau_{1}\) to remain small, so in Eq. (48) the wave number must satisfy \(k\ll\bar{k}_{c}\equiv-1/\tau_{1}\). When the dominant \(k^{6}\) term becomes comparable to the \(k^{4}\) one, the wave number is approximately \[k_{3}\simeq\left\{\begin{array}{ll}-\frac{3}{2\tau_{1}}=\frac{3}{2}\bar{k}_{c}&\eta_{1}=-3\\ -\frac{4}{3\tau_{1}}=\frac{4}{3}\bar{k}_{c}&\eta_{1}=-4\\ -\frac{5}{4\tau_{1}}=\frac{5}{4}\bar{k}_{c}&\eta_{1}=-5\\ -\frac{6}{5\tau_{1}}=\frac{6}{5}\bar{k}_{c}&\eta_{1}=-6\;.\end{array}\right. \tag{49}\] It is obvious that all wave numbers \(k_{3}\) are larger than \(\bar{k}_{c}\) and thus are beyond the infrared region, which means that the steepest growth of the power spectrum is \(k^{4}\). Furthermore, we find that the power spectrum will dip before the \(k^{4}\) growth. The corresponding numerical and approximate results are shown in Fig. 14. Figure 14: The power spectra for different constant values of \(\eta_{1}\). The gray-solid and blue-dashed lines represent the numerical and approximate results, respectively. ## IV Conclusions The generation of a significant amount of primordial black holes requires a sufficiently large power spectrum of the curvature perturbations, of the order of about \({\cal O}(10^{-2})\), at scales smaller than the CMB one. There are two natural ways to amplify the curvature perturbations. One is to reduce the rolling speed of the inflaton and the other is to suppress the sound speed \(c_{s}\) of the curvature perturbations. In the ultraslow-roll inflation scenario, it has been found that the power spectrum of the curvature perturbations has the \(k^{4}\) growth. In this paper, we use the improved junction conditions to find that the power spectrum of the curvature perturbations has a \(k^{2}\) growth when the speed of sound decreases suddenly. Furthermore, by investigating the evolution of the power spectrum in the inflation model, which can realize a decrease of both the sound speed and the rolling speed of the inflaton, we find that the power spectrum at large scales is nearly scale invariant, satisfying the constraint from the CMB observations, and at the same time it is enhanced at small scales to achieve an abundant formation of primordial black holes. In the case where the change of the slow-roll parameter \(\eta\) precedes that of the sound speed \(c_{s}\), the power spectrum of the curvature perturbations only has a \(k^{4}\) growth, while if \(\eta\) and \(c_{s}\) change simultaneously, or the change of \(c_{s}\) precedes that of \(\eta\), the power spectrum can possess a \(k^{6}\) growth under certain conditions, which is the steepest growth of the power spectrum reported so far. ###### Acknowledgements. We appreciate very much the insightful comments and helpful suggestions by the anonymous referee.
This work is supported by the National Key Research and Development Program of China Grant No. 2020YFC2201502, and by the National Natural Science Foundation of China under Grants No. 12275080 and No. 12075084.
2309.00835
GENDIRECT: a GENeralized DIRECT-type algorithmic framework for derivative-free global optimization
Over the past three decades, numerous articles have been published discussing the renowned DIRECT algorithm (DIviding RECTangles). These articles present innovative ideas to enhance its performance and adapt it to various types of optimization problems. A comprehensive collection of deterministic, derivative-free algorithmic implementations based on the DIRECT framework has recently been introduced as part of the DIRECTGO project. DIRECTGO empowers users to conveniently employ diverse DIRECT-type algorithms, enabling efficient solutions to practical optimization problems. Despite their variations, DIRECT-type algorithms share a common algorithmic structure and typically differ only at certain steps. Therefore, in this paper we propose GENDIRECT, a GENeralized DIRECT-type framework that encompasses and unifies DIRECT-type algorithms into a single, generalized framework. GENDIRECT offers a practical alternative to the creation of yet another "new" DIRECT-type algorithm that closely resembles existing ones. Instead, GENDIRECT allows the efficient generation of known or novel DIRECT-type optimization algorithms by assembling different algorithmic components. This approach provides considerably more flexibility compared to both the DIRECTGO toolbox and individual DIRECT-type algorithms. A few hundred thousand DIRECT-type algorithms can be combined using GENDIRECT, facilitating users' easy customization and the addition of new algorithmic components. The significant potential of GENDIRECT has been demonstrated by modifying specific components of five highly promising DIRECT-type algorithms found in the existing literature. The resulting improved approaches exhibit greater efficiency and enhanced robustness in dealing with problems of varying complexity.
Linas Stripinis, Remigijus Paulavičius
2023-09-02T06:08:42Z
http://arxiv.org/abs/2309.00835v1
# GENDIRECT: a GENeralized DIRECT-type algorithmic framework for derivative-free global optimization

###### Abstract

Over the past three decades, numerous articles have been published discussing the renowned DIRECT algorithm (DIviding RECTangles). These articles present innovative ideas to enhance its performance and adapt it to various types of optimization problems. To consolidate and summarize this progress, we have recently introduced DIRECTGO, a comprehensive collection featuring more than fifty deterministic, derivative-free algorithmic implementations based on the DIRECT framework. DIRECTGO empowers users to conveniently employ diverse DIRECT-type algorithms, enabling efficient solutions to practical optimization problems. Despite their variations, DIRECT-type algorithms share a common algorithmic structure and typically differ only at certain steps. Recognizing this, we take further steps in generalization within this paper and propose GENDIRECT, a GENeralized DIRECT-type framework that encompasses and unifies DIRECT-type algorithms under a single generalized approach. GENDIRECT offers a practical alternative to the creation of yet another "new" DIRECT-type algorithm that closely resembles existing ones. Instead, GENDIRECT allows the efficient generation of known or novel DIRECT-type optimization algorithms by assembling different algorithmic components. This approach provides considerably more flexibility compared to both the DIRECTGO toolbox and individual DIRECT-type algorithms. In general, GENDIRECT allows the creation of approximately a few hundred thousand combinations of DIRECT-type algorithms, facilitating user-friendly customization and the incorporation of new algorithmic components for further advancements. By modifying specific components of five highly promising DIRECT-type algorithms found in the existing literature using GENDIRECT, the significant potential of GENDIRECT has been demonstrated. The resulting newly developed improved approaches exhibit greater efficiency and enhanced robustness in dealing with problems of varying complexity.

Keywords: Derivative-free global optimization, DIRECT-type algorithms, Optimization software, Numerical benchmarking. MSC: 90C26, 65K10

## 1 Introduction

Optimization problems encountered in scientific and engineering domains often involve objective functions that can only be obtained through "black-box" methods or simulations, lacking derivative information. For example, Google's internal services frequently employ black-box optimization techniques with automated parameter tuning engines [10]. Furthermore, objective function evaluations are becoming more computationally expensive as applications grow in size and complexity [22]. Consequently, calculating derivatives is often infeasible or impractical. As a result, there is a growing emphasis on the development of derivative-free global optimization (DFGO) methods. These methods are specifically designed to address the growing complexity and diversity of optimization problems, where derivative information is neither available nor practical to compute. This active development of DFGO methods addresses the need for efficient optimization techniques in scenarios where derivatives cannot be utilized.
This paper considers a box-constrained single-objective optimization problem \[\min_{\mathbf{x}\in D}\quad f(\mathbf{x}), \tag{1}\] where \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a potentially "black-box" Lipschitz-continuous objective function with an unknown Lipschitz constant, and \(\mathbf{x}\in\mathbb{R}^{n}\) is the input vector of control variables. Moreover, \(f\) can be non-linear, multi-modal, non-convex, and non-differentiable. We assume that \(f\) can only be computed at any point of the feasible region, which is a \(n\)-dimensional hyper-rectangle \[D=[\mathbf{a},\mathbf{b}]=\{\mathbf{x}\in\mathbb{R}^{n}:a_{j}\leq x_{j}\leq b _{j},j=1,\ldots,n\}.\] However, there is no access to additional information on the objective function \(f(\mathbf{x})\), such as gradients and the Hessian, as is typical for a "black-box" case. Among the solution techniques available for a given problem (1), population-based meta-heuristic methods have gained widespread popularity. Numerous approaches have been proposed and developed within this category [2]. For global optimization problems that involve costly evaluations, model-based optimization algorithms are commonly employed. Among these algorithms, Bayesian optimization [18] and various surrogate models [21] stand out as the leading state-of-the-art methods for optimizing expensive "black-box" functions. DIRECT[17] presents an alternative specifically tailored for "black-box" global optimization by extending the classical Lipschitz optimization [36; 38; 39; 41], eliminating the requirement of knowing the Lipschitz constant. In contrast to the stochastic methods discussed above, the DIRECT-type algorithms adhere to a deterministic pattern. A recent comprehensive numerical benchmark study involving various derivative-free global optimization solvers [45] highlighted that particularly for problems with lower dimensions, DIRECT-type algorithms can significantly outperform stochastic approaches. Furthermore, certain combinations of hybrid local search algorithms based on DIRECT-type methods and finite differences [42] demonstrated exceptional efficiency in solving high-dimensional problems. Consequently, designing and developing efficient DIRECT-type algorithms is crucial and driven by practical needs. Inspired by these observations, we have recently introduced DIRECTGO, a MATLAB toolbox dedicated to DFGO. The latest release of DIRECTGO includes a comprehensive collection of 52 distinct algorithmic implementations based on the DIRECT framework. However, recent empirical studies [50; 51] have highlighted that even more efficient DIRECT-type algorithms can be achieved by innovatively combining existing algorithmic steps. It seems that many authors may not spend enough time exploring the most suitable algorithmic framework when developing and publishing new algorithms of type DIRECT. Therefore, this study introduces a novel framework called GENDIRECT, which offers a GENeralized DIRECT-type approach to derivative-free global optimization. GENDIRECT enables the construction of any known or previously unexplored DIRECT-type algorithm. Instead of developing yet another "new" DIRECT-type algorithm, GENDIRECT provides a rapid and effective way of combining different components to create customized DIRECT-type algorithms. Using GENDIRECT, users can identify and utilize the most suitable DIRECT-type algorithm for a given optimization problem based on the latest advances in the field. 
Compared to the DIRECTGO toolbox and individual DIRECT-type algorithms, the GENDIRECT framework offers a significantly higher level of flexibility. In fact, GENDIRECT allows the design of a few hundred thousand combinations of DIRECT-type algorithms and facilitates user-friendly experimentation with new algorithmic components. GENDIRECT is implemented as a separate extension of DIRECTGO, complemented by a dedicated graphical user interface (GUI). This GUI provides easy access to all the features and capabilities of GENDIRECT, ensuring a seamless user experience. The capability of GENDIRECT is showcased by selecting five highly promising DIRECT-type algorithms from the existing literature, as identified in [45; 52]. By leveraging GENDIRECT, specific components that were identified as weaknesses in these algorithms are modified. As a result, some of these algorithms demonstrate significantly improved efficiency, showcasing the potential of GENDIRECT in optimizing and refining DIRECT-type algorithms using the most recent DIRECTGOLib v2.0.

This work makes several significant contributions, including:

1. Introduction of a novel framework called GENDIRECT, which represents a GENeralized DIRECT-type algorithmic framework.
2. GENDIRECT provides an efficient and innovative approach to generate DIRECT-type optimization algorithms, whether they are existing algorithms or entirely novel ones, by combining different algorithmic components.
3. GENDIRECT allows for the creation of a few hundred thousand combinations of DIRECT-type algorithms, facilitating user-friendly experimentation and enabling new developments in optimization.
4. Description of the implementation of GENDIRECT as an evolution of DIRECTGO, complete with a separate graphical user interface (GUI) that ensures easy access to all its features. This implementation is free and open for anyone to use.
5. Demonstration of the potential of GENDIRECT by enhancing the efficiency of five chosen DIRECT-type algorithms through modifications. These modifications showcase the ability of GENDIRECT to improve algorithmic performance further.

In summary, this work contributes to the derivative-free global optimization field by introducing GENDIRECT, a versatile framework that enables efficient algorithm generation, offers extensive customization options, and shows improved efficiency in established DIRECT-type algorithms. The remainder of the paper is organized as follows. Section 2 presents a concise overview of key advancements in the realm of DIRECT-type algorithms. Section 3 introduces and elaborates on the GENDIRECT framework. The experimental results of the newly developed algorithms and performance evaluation utilizing GENDIRECT are analyzed in Section 4. Lastly, Section 5 offers concluding remarks and outlines potential avenues for future exploration in this field.

## 2 Background for GENDIRECT

### General structure of DIRECT-type algorithms

The DIRECT algorithm was originally designed to solve global optimization problems with box constraints (1). Despite numerous proposals, most follow a similar algorithmic structure and involve three primary steps: selection, sampling, and partitioning (see Algorithm 1). First, however, DIRECT-type algorithms typically transform the feasible region \(D=[\mathbf{a},\mathbf{b}]\) into a unit hyper-rectangle \(\bar{D}=[0,1]^{n}\), referring to the original space \(D\) solely to evaluate the objective function \(f\) (as depicted in Algorithm 1, Lines 1-5).
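As a minimal illustration of Lines 1-5 (our sketch; the variable names and the choice of the Branin test function are illustrative and not part of Algorithm 1), sampling proceeds in the unit hyper-rectangle while \(f\) is always evaluated in the original space:

```
% Map a sample point xbar from the unit hyper-rectangle Dbar = [0, 1]^n
% back to the original domain D = [a, b] before evaluating f.
a = [-5; 0];  b = [10; 15];              % original bounds (Branin domain)
f = @(x) (x(2) - 5.1/(4*pi^2)*x(1)^2 + 5/pi*x(1) - 6)^2 ...
    + 10*(1 - 1/(8*pi))*cos(x(1)) + 10;  % Branin test function

xbar = [0.5; 0.5];        % midpoint of the unit hyper-rectangle
x = a + xbar .* (b - a);  % corresponding point in the original space D
fval = f(x);              % objective evaluated in D, never in Dbar
```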
The selection, partitioning, and sampling operations are executed within a normalized search domain \(\bar{D}\). During each iteration, specific regions are identified as potentially optimal candidates (POC) and chosen for further investigation (see Algorithm 1, Line 7). In DIRECT-type algorithms, the objective function is sampled and evaluated at various points within each POC, which are then subdivided into smaller sub-regions (see Algorithm 1, Lines 9 and 10). This selection, sampling, and subdivision process continues until a predefined limit is reached. The subsequent subsections provide an overview of the primary techniques proposed for each step. Although the selection step precedes sampling and partitioning, we will initially focus on the latter because the selection step relies directly on the strategies employed in sampling and partitioning.

### Summary of sampling and partitioning schemes

In this section, we present a brief summary of seven primary sampling and partitioning approaches that have been proposed in the existing literature [16; 17; 33; 37; 40; 52] and implemented within the GENDIRECT framework. Table 1 provides an overview of these schemes, including illustrative examples from the initial iterations. Blue-colored sub-regions indicate the POCs in the current partition. Although each of the seven schemes possesses distinct characteristics, they demonstrate significant similarities. In particular, these schemes involve sampling new points and subdividing larger regions into smaller, non-overlapping sub-regions. In cases where there is more than one longest side, two primary strategies for division emerge:

* Subdivision along all dimensions with the maximum side length.
* Subdivision along a single dimension with the maximum side length.

It is worth mentioning that the original DIRECT algorithm proposed subdividing along all dimensions. However, extensive experimentation has indicated that this approach does not consistently yield effective results.

### Selection schemes

Initially, selecting POCs is straightforward since only one candidate is available, the entire feasible region. However, to introduce selection schemes in subsequent iterations, we must first establish the concept of the current partition (\(\mathcal{P}_{k}\)), which, in iteration \(k\), is defined as \[\mathcal{P}_{k}=\{\bar{D}_{k}^{i}:i\in\mathbb{I}_{k}\},\] where \(\bar{D}_{k}^{i}\) are hyper-rectangles (or simplices) and \(\mathbb{I}_{k}\) is the index set identifying the current partition \(\mathcal{P}_{k}\). Then, the next partition, \(\mathcal{P}_{k+1}\), is obtained by subdividing the selected POCs from the current partition \(\mathcal{P}_{k}\). \begin{table} \begin{tabular}{p{85.4pt} p{113.8pt} p{113.8pt}} \hline **Notation \& Source** & Partitioning and sampling scheme & An example of the initialization and two subsequent iterations \\ \hline **DTC**[16] & A hyper-rectangular partition based on one-**D**imensional **T**risection, and sampling points located at **C**enters. & \\ \hline **DTDV**[40] & A hyper-rectangular partition based on one-**D**imensional **T**risection, and sampling points located at two **D**iagonal **V**ertices. & \\ \hline **DTCS**[37] & A simplicial partition based on one-**D**imensional **T**risection, and sampling points located at **C**enters of **S**implices. & \\ \hline **DBVS**[37] & A simplicial partition based on one-**D**imensional **B**isection, and sampling points located at **V**ertices of **S**implices. & \\ \hline
**DBDP**[33] & A hyper-rectangular partition based on one-**D**imensional **B**isection, and sampling points located at two **D**iagonal **P**oints equidistant between themselves and a diagonal’s vertices. & \\ \hline **DBVD**[4] & A hyper-rectangular partition based on one-**D**imensional **B**isection, and sampling points located at one **V**ertex and one **D**iagonal point with a 2:3 diagonal distance from the sampling vertex. & \\ \hline **DBC**[52] & A hyper-rectangular partition based on one-**D**imensional **B**isection, and sampling points located at **C**enter points. & \\ \hline \end{tabular} \end{table} Table 1: Summary of sampling and partitioning schemes commonly utilized in DIRECT-type algorithms implemented within GENDIRECT (in ascending order of the year of publication).

When identifying POCs, two crucial aspects come into play: the \(\bar{D}_{k}^{i}\) measure (\(\bar{\delta}_{k}^{i}\)) and the general quality based on the function values attained at the sample points.

#### 2.3.1 Evaluating goodness of candidates

The values of the objective function obtained from the sampled points \(\mathbb{H}_{k}^{i}\) are utilized to assess the overall quality of the candidate. We refer to this value as the aggregated function value (\(\mathcal{F}_{k}^{i}\)), which represents the goodness of \(\bar{D}_{k}^{i}\). In summary, four strategies have been presented to evaluate \(\mathcal{F}_{k}^{i}\) in the literature [52], as defined in Definition 1.

Definition 1 (Aggregated function values): Let:

* \(\bar{\delta}_{k}^{i}\) be a measure of \(\bar{D}_{k}^{i}\);
* \(\mathbf{x}_{\text{m}}^{i}\) be a midpoint of \(\bar{D}_{k}^{i}\);
* \(\mathbf{x}^{\text{min}}\) be the currently best found minimum point;
* \(\mathbb{H}_{k}^{i}\) be a representative sampling index set of all sample points within \(\bar{D}_{k}^{i}\);
* \(\text{card}(\mathbb{H}_{k}^{i})\) be the cardinality of the set \(\mathbb{H}_{k}^{i}\).

Then:

* _Midpoint value based aggregated function value_: \[\mathcal{F}_{k}^{i}=f(\mathbf{x}_{\text{m}}^{i}),\] (2)
* _Minimum value based aggregated function value_: \[\mathcal{F}_{k}^{i}=\min_{j\in\mathbb{H}_{k}^{i}}f(\mathbf{x}^{j})\] (3)
* _Mean value based aggregated function value_: \[\mathcal{F}_{k}^{i}=\frac{1}{\text{card}(\mathbb{H}_{k}^{i})}\sum_{j=1}^{\text{card}(\mathbb{H}_{k}^{i})}f(\mathbf{x}^{j})\] (4)
* _Midpoint and minimum values based aggregated function value_: \[\mathcal{F}_{k}^{i}=\frac{1}{2}\left(\min_{j\in\mathbb{H}_{k}^{i}}f(\mathbf{x}^{j})+f(\mathbf{x}_{\text{m}}^{i})\right)\] (5)

The choice of the \(\mathcal{F}_{k}^{i}\) evaluation strategy depends on the specific sampling strategy being utilized. For example, in the case of the DTC and DTCS schemes, the midpoint value-based \(\mathcal{F}_{k}^{i}\) is adopted since sampling is performed solely at a single midpoint. However, when there are multiple sampling points per candidate, alternative strategies have been shown to have a significant influence, as shown in previous work [47].

#### 2.3.2 Measuring candidates

Depending on the sampling strategy, essentially two ways have been proposed to measure POCs: \[\bar{\delta}_{k}^{i} =\lambda d_{k}^{i}, \tag{6}\] \[\bar{\delta}_{k}^{i} =\max_{j,l\in\mathbb{H}_{k}^{i}}\|\mathbf{x}^{j}-\mathbf{x}^{l}\|_{2}, \tag{7}\] where \(\lambda\in[0,1]\) and \(d_{k}^{i}\) represents the Euclidean length of the diagonal of \(\bar{D}_{k}^{i}\). A couple of significant points should be emphasized in this regard.
First, instead of relying solely on the Euclidean norm, alternative norms (e.g., \(\|\cdot\|_{\infty}\)) have been observed to yield favorable results, as noted in the work [8]. Second, different partitioning schemes have employed various values for \(\lambda\). For example, some schemes use \(\lambda=1\), which corresponds to the full length of the diagonal, as seen in [52], while others adopt \(\lambda=2/3\), as demonstrated in [33]. However, since \(\lambda\) applies uniformly to all partition elements \(\bar{D}_{k}^{i}\) and serves solely to counterbalance the selection of POC, the choice of \(\lambda\) does not affect the performance of DIRECT-type algorithms.

#### 2.3.3 Summary of POC selection schemes

To address the identified limitations of DIRECT-type algorithms, various POC selection schemes have been proposed. Definition 2 defines the four most widely used and implemented selection schemes in GENDIRECT, while a summary of them is given in Table 2.

Definition 2 (Selection schemes): Let:

* \(\mathcal{F}_{k}^{i}\) denote the aggregated function value for \(\bar{D}_{k}^{i}\);
* \(\bar{\delta}_{k}^{i}\) be a measure of \(\bar{D}_{k}^{i}\);
* \(\mathbb{I}_{k}^{i}\subseteq\mathbb{I}_{k}\) represent a subset of indices that correspond to elements of \(\mathcal{P}_{k}\) sharing the same measure (\(\bar{\delta}_{k}^{i}\)). Additionally, \(\mathbb{I}_{k}^{\min}\) contains the indices of elements with the smallest measure, \(\bar{\delta}_{k}^{\min}\), while \(\mathbb{I}_{k}^{\max}\) contains those with the largest.

Then:

* _Original selection_: A candidate \(\bar{D}_{k}^{j},j\in\mathbb{I}_{k}\) is said to be potentially optimal if there exists some rate-of-change (Lipschitz) constant \(\tilde{L}>0\) such that \[\mathcal{F}_{k}^{j}-\tilde{L}\delta_{k}^{j}\leq\mathcal{F}_{k}^{i}-\tilde{L}\delta_{k}^{i},\quad\forall i\in\mathbb{I}_{k},\] (8)
* _Aggressive selection_: For each \(\mathbb{I}_{k}^{i}\) (\(\min\leq i\leq\max\)) select \(\bar{D}_{k}^{j},j\in\mathbb{I}_{k}^{i}\) with the lowest \(\mathcal{F}_{k}^{j}\), i.e., \[\mathcal{F}_{k}^{j}\leq\mathcal{F}_{k}^{l},\quad\forall l\in\mathbb{I}_{k}^{i}.\] (9)
* _Pareto selection_: Select all candidates \(\bar{D}_{k}^{i},i\in\mathbb{I}_{k}\) that are not dominated, which means that there is no other candidate \(\bar{D}_{k}^{j},j\in\mathbb{I}_{k}\) that satisfies the condition: \[(\delta_{k}^{j}\geq\delta_{k}^{i}\wedge\mathcal{F}_{k}^{j}<\mathcal{F}_{k}^{i})\vee(\delta_{k}^{j}>\delta_{k}^{i}\wedge\mathcal{F}_{k}^{j}\leq\mathcal{F}_{k}^{i}).\] (10)
* _Reduced Pareto selection_: Select \(\bar{D}_{k}^{i}\) with the lowest \(\mathcal{F}_{k}^{i}\) and \(\bar{D}_{k}^{j}\) with the most extensive measure \(\delta_{k}^{j}\), breaking ties in favor of a lower value of the aggregated function.

In summary, aggressive selection aims to choose a comprehensive set of candidates, ensuring that at least one candidate is selected from each group with different diameters (\(\delta_{k}^{i}\)) while prioritizing candidates with the lowest aggregated function value. The number of candidates selected through Pareto-based criteria, in turn, tends to exceed that of the original selection strategy. However, this approach, which emphasizes exploring candidates of intermediate sizes, can lead to slower convergence, particularly when dealing with less complex optimization problems. Therefore, the primary motivation behind introducing a reduced set of Pareto-optimal candidates was to address this issue.
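To make the Pareto condition in Eq. (10) concrete, the following minimal MATLAB sketch (ours, not the internal GENDIRECT code; the data values are illustrative) extracts the non-dominated candidates from vectors of measures \(\delta_{k}^{i}\) and aggregated values \(\mathcal{F}_{k}^{i}\):

```
% Pareto selection over candidate measures (delta) and aggregated
% function values (F): larger delta and smaller F are both preferred.
delta = [0.40 0.40 0.25 0.25 0.10];   % candidate measures (example data)
F     = [1.2  0.8  0.5  0.9  0.3 ];   % aggregated function values

POC = false(size(delta));
for i = 1:numel(delta)
    % Candidate i is dominated if some j is no smaller and strictly
    % better, or strictly larger and no worse -- the test of Eq. (10).
    dominated = any((delta >= delta(i) & F < F(i)) | ...
                    (delta >  delta(i) & F <= F(i)));
    POC(i) = ~dominated;
end
find(POC)   % indices of potentially optimal candidates
```

For the example data the selected indices are 2, 3, and 5, i.e., the lower-right boundary of the (\(\delta\), \(\mathcal{F}\)) cloud.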
It is important to note that when multiple equally good POC exist with the same \(\delta_{k}^{i}\) and \(\mathcal{F}_{k}^{i}\), two distinct selection strategies are available:

* Select all equally good POC;
* Select only one with the highest index number.

Furthermore, selection schemes can integrate additional conditions to enhance the balance between local and global directions. The subsequent subsection provides a detailed examination of these conditions.

#### 2.3.4 Additional approaches for improved local and global POC selection

**Excessive local refinement reduction techniques.** To protect the algorithm against excessive refinement around current local minima \(f^{\min}\), the authors in the DIRECT literature [6; 17; 25] proposed incorporating one of the following conditions along with Eq. (8) in the original selection scheme: \[\mathcal{F}_{k}^{j}-\tilde{L}\delta_{k}^{j} \leq f^{\min}-\varepsilon|f^{\min}|, \tag{11}\] \[\mathcal{F}_{k}^{j}-\tilde{L}\delta_{k}^{j} \leq f^{\min}-\varepsilon|f^{\min}-f^{\mathrm{median}}|, \tag{12}\] \[\mathcal{F}_{k}^{j}-\tilde{L}\delta_{k}^{j} \leq f^{\min}-\varepsilon|f^{\min}-f^{\mathrm{average}}|. \tag{13}\] Thus, the lower Lipschitz bound of the POC must be lower than the current minimum value (\(f^{\min}\)) to at least some extent. The parameter \(\varepsilon\) plays a crucial role in determining the adjustment of the lower Lipschitz bound. In the study conducted by [17], favorable results were achieved using values of \(\varepsilon\) ranging from \(10^{-3}\) to \(10^{-7}\), and a default value of \(\varepsilon=10^{-4}\) is suggested. To reduce the sensitivity of the objective function to additive scaling, subtraction of the median value (\(f^{\mathrm{median}}\)) or the average value (\(f^{\mathrm{average}}\)) (as shown in Eqs. (12) and (13)) was proposed.

**Restart technique for the \(\varepsilon\) parameter**. In [5], an adaptive scheme is introduced for the parameter \(\varepsilon\) to prevent wasteful function evaluations in minor regions \(\bar{D}_{k}^{i}\) where negligible improvements are expected. The restart technique begins with \(\varepsilon=0\), which is maintained while improvements are observed. If there is no improvement for five consecutive iterations, suggesting a potential stagnation at a local optimum, the algorithm switches to \(\varepsilon=0.01\). The technique returns to \(\varepsilon=0\) if an improvement is found within 50 iterations or if no progress is made during them. If another 50 iterations pass without improvement, this indicates a possible discovery of the global minimum, requiring further refinement.

**Multi-level candidate selection using different \(\varepsilon\) values**. In [24; 26], two alternative multi-level techniques are proposed for the candidate selection procedure, involving three different levels:

* Level 2: The DIRECT-type algorithm is executed with the usual settings, employing \(\varepsilon=10^{-5}\).
* Level 1: The selection is limited to 90% of \(\bar{D}_{i}^{k}\in\mathcal{P}^{k}\), excluding 10% of the candidates with the largest measure. In this level, \(\varepsilon=10^{-7}\) is used.
* Level 0: The selection is limited to 10% of the candidates with the largest measure, disregarding those excluded at level 1. Here, \(\varepsilon=0\) is used.

Both strategies cycle through these levels following the "W-cycle" pattern: 21011012.
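A minimal sketch of how such a level schedule can be driven by the iteration counter (our illustration, not the exact code of [24; 26]; the level-to-\(\varepsilon\) mapping follows the three levels listed above, and the candidate-pool restrictions are left as a comment):

```
wcycle = [2 1 0 1 1 0 1 2];          % the "W-cycle": 21011012
epsilons = [0, 1e-7, 1e-5];          % eps for levels 0, 1, 2
for k = 1:16
    level = wcycle(mod(k - 1, numel(wcycle)) + 1);
    ep = epsilons(level + 1);        % level 0 -> 0, 1 -> 1e-7, 2 -> 1e-5
    % ... run one selection/sampling/partitioning iteration at this level,
    % restricting the candidate pool as described for levels 1 and 0 ...
end
```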
One of the methods [26] employs a fixed \(\varepsilon=10^{-4}\) value at all levels, while the other [24] adheres to the rules mentioned above. \begin{table} \begin{tabular}{p{14.2pt} p{142.3pt} p{142.3pt}} \hline **Notation \& Source** & Description & Illustration of POCs selection (blue points) \\ \hline **Original**[17] & _Original selection strategy._ Selects POCs based on the lower Lipschitz bound estimates over all possible Lipschitz constant values. & \\ \hline **Aggressive**[3] & _Aggressive selection strategy._ Selects at least one candidate from each group of different diameters. & \\ \hline **Pareto**[29] & _Pareto selection strategy._ Selects all candidates that are non-dominated on size (the higher, the better) and aggregated function value (the lower, the better). & \\ \hline **Reduced Pareto**[30] & _Reduced Pareto selection strategy._ Selects only two candidates, the first and the last point on the Pareto front. & \\ \hline \end{tabular} \end{table} Table 2: Summary of selection schemes implemented in GENDIRECT

**Globally-biased selection**. In the works [34; 35], a two-phase approach with global bias was introduced. The algorithm effectively determines the adequacy of exploring a local optimum by employing the globally biased scheme. It terminates the local phase (referred to as the "usual" phase) to prevent wasteful function evaluations caused by excessive local refinement. Upon stopping the usual phase, the algorithm seamlessly transitions into a global phase, wherein the hyper-rectangles chosen for further exploration must meet a minimum size requirement. This globally biased phase continues until a better minimum point is discovered or a maximum number of "global iterations" is reached. Subsequently, the algorithm reverts to the usual phase. The search process alternates between these two phases, namely, the usual phase and the globally-biased phase, until a specified stopping condition is fulfilled.

**Two-phase (Global-Local) selection**. In the work [46], a two-phase selection approach has been introduced. This approach expands the set of previously obtained POCs by incorporating additional candidates based on their proximity to the current best minimum point \(\mathbf{x}^{\min}\). This expansion is performed by conducting a selection process using calculated distances (instead of aggregated function values) between the current best minimum point and all other candidates: \[\mathcal{F}_{k}^{i}=\|\mathbf{x}_{\mathrm{m}}^{i}-\mathbf{x}^{\min}\|_{2}. \tag{14}\] By including candidates that are closer to the current minimum point, this step facilitates faster and more extensive exploration around the current minimum point.

### Acceleration through hybridization techniques

To our knowledge, three hybridization strategies have been proposed for DIRECT-type algorithms [14; 16; 27; 35]. The first strategy, originally suggested by the author of the original DIRECT[16], was later refined and improved in a work [35]. The concept behind this strategy involves performing a local search only when the algorithm achieves an improvement in the best current solution value, denoted \(f^{\min}\). The best current solution \(f^{\min}\) can be updated using a local search method or a more suitable DIRECT-type algorithm that enables faster local refinement. The second strategy [14] operates similarly to the first one.
However, instead of performing a local search from a single starting point, this strategy employs a clustering algorithm to identify multiple appropriate starting points. The following steps are executed within this suggested method:

* The DIRECT-type algorithm is run for a fixed number of function evaluations, typically set at \(100n+1\) as the default.
* The sampled points are analyzed using an adaptive clustering algorithm to determine the optimal number of clusters. Subsequently, a local search is performed from the best point within each cluster.
* Additionally, the DIRECT-type algorithm is run again.
* If the DIRECT-type algorithm improves \(f^{\min}\), a final local search is performed from the best point.

The third, aggressive strategy [27] initiates a local search from the midpoint of each POC. However, this approach has faced significant criticism for potentially generating excessive local searches, as many starting points may converge to the same local optimum.

## 3 GENDIRECT optimization software

This section describes the generalized algorithmic framework GENDIRECT. Fig. 1 illustrates the main architecture of the developed GENDIRECT. Specifically, there are three large boxes in Fig. 1, which represent the construction of the main DIRECT-type algorithmic steps within GENDIRECT:

1. The construction of the partitioning and sampling scheme.
2. The construction of the selection scheme.
3. The construction of a hybridization scheme.

Figure 1: A flowchart for constructing a DIRECT-type algorithm in GENDIRECT.

The following subsections will provide a detailed exploration of how to effectively utilize GENDIRECT using the MATLAB command line interface and the dedicated graphical user interface (GUI).

### Utilizing GENDIRECT through the command line interface

With GENDIRECT, users can swiftly and effectively establish and solve global optimization problems by constructing a DIRECT-type algorithm via the MATLAB command line interface. All relevant problem information is consolidated into a unified MATLAB structure, which is then passed to the solver to extract the required data. For the GENDIRECT format, the solution process begins by generating the following structure:

```
alg = GENDIRECT();
```

The algorithm takes in a structured input that includes the optimization problem, dimension, lower and upper bounds, and a target value (if applicable). Here is an example code snippet illustrating how these parameters can be set:

```
alg.Problem.f = 'objfun';      % Objective function
alg.Problem.n = n;             % Dimension
alg.Problem.x_L = zeros(n, 1); % Lower bounds
alg.Problem.x_U = ones(n, 1);  % Upper bounds
alg.Problem.fgoal = 0.01;      % Optimal value set as target
alg.Problem.info = false;      % Extract info from problem
```

If the alg.Problem.info parameter is set to 'true', the algorithm retrieves all the relevant information about the objective function from the 'objfun' problem. As we utilize test problems provided by DIRECTGOLib v2.0 [49], the stored information encompasses both the problem structure and the objective function. Consequently, the algorithms automatically extract all essential details from the given problem, including:

* The dimensionality of the problem;
* The lower and upper bounds for each variable;
* The objective function value of the known solution;
* The solution point.

For further guidance on the utilization of DIRECTGOLib v2.0, additional information can be found in references [44; 47].
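For readers without DIRECTGOLib v2.0 at hand, a hypothetical stand-in for 'objfun' that is consistent with the bounds and target above (this is not a DIRECTGOLib problem; with Problem.info set to false, only the function value is required):

```
function y = objfun(x)
% Hypothetical test objective on [0, 1]^n: a shifted sphere function
% whose minimum value 0 is attained at x = 0.25, so the target
% fgoal = 0.01 used above is reachable.
y = sum((x - 0.25).^2);
end
```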
Users who want to customize the default algorithmic settings should utilize the optParam structure:

```
alg.optParam.maxevals = 100;  % Maximal number of evaluations
alg.optParam.maxits = 100;    % Maximal number of iterations
alg.optParam.showits = true;  % Show iteration status
```

The next step involves constructing the algorithm using the procedures described in Table 3. After completing these steps, the algorithm is ready to solve the given problem using the following line of code:

```
Results = alg.solve;
```

Once the algorithm completes its computations, it returns the Results structure, which contains the optimization outcomes. The subsequent subsections will outline the process of constructing DIRECT-type algorithmic steps.

#### 3.1.1 Designing partitioning and sampling scheme

To create a combination of DIRECT-type algorithms, the user needs to integrate components that determine the division and sampling strategy of the optimization domain. The core framework for constructing the partitioning and sampling strategy is illustrated in the top block of Fig. 1. The subsequent command lines illustrate how to configure the partitioning strategy of the original DIRECT algorithm:

```
alg.Partitioning.Strategy = 'DTC';
alg.Partitioning.SubSides = 'All';
```

As Fig. 1 shows, the partitioning and sampling step admits 14 possible combinations in GENDIRECT.

#### 3.1.2 Designing the selection scheme

Once the partitioning and sampling strategy has been established, the subsequent task is to determine the POC selection scheme. Here is an example that illustrates the parameter values required for performing the POC selection introduced in the original DIRECT algorithm:

```
alg.Selection.AggrFuncVal = 'Midpoint';
alg.Selection.CandMeasure = 'Diagonal';
alg.Selection.Strategy = 'Original';
alg.Selection.EqualCand = 'All';
alg.Selection.SolRefin = 'Min';
alg.Selection.Ep = 0.0001;
alg.Selection.ControlEp = 'Off';
alg.Selection.GloballyBiased = 'Off';
alg.Selection.TwoPhase = 'Off';
```

\begin{table} \begin{tabular}{p{56.9pt} p{142.3pt} p{142.3pt}} \hline \hline Step & Parameter & Description \\ \hline \multirow{2}{*}{Partitioning} & Strategy & Specify partitioning and sampling scheme (see Table 1): DTC, DTDV, DTCS, DBVS, DBDP, DBVD, or DBC. \\ \cline{2-3} & SubSides & Specify subdivision strategy for multiple longest sides (see Section 2.2): One or All. \\ \hline \multirow{9}{*}{Selection} & AggrFuncVal & Specify strategy for an aggregated function value: Midpoint (Eq. (2)), Minimum (Eq. (3)), Mean (Eq. (4)) or MidMin (Eq. (5)). \\ \cline{2-3} & CandMeasure & Specify strategy for a measure: Diagonal (Eq. (6)) or LongSide (Eq. (7)). \\ \cline{2-3} & Strategy & Specify selection scheme (see Table 2): Original, Aggressive, Pareto, or RedPareto. \\ \cline{2-3} & EqualCand & Specify behavior for equally good POC: All or One. \\ \cline{2-3} & SolRefin & Specify excessive local refinement reduction technique: Min (Eq. (11)), Median (Eq. (12)), Average (Eq. (13)) or Off. \\ \cline{2-3} & Ep & Specify the value for \(\varepsilon\) (Eqs. (11), (12), (13)): \(10^{-4}\). \\ \cline{2-3} & ControlEp & Specify control technique for \(\varepsilon\) (see Section 2.3.4): Off, Restart, MultiLevel1 or MultiLevel2. \\ \cline{2-3} & GloballyBiased & Enable globally-biased POC selection (see Section 2.3.4): Off or On. \\ \cline{2-3} & TwoPhase & Enable two-phase selection of POC using Distances (Eq. (14)) (see Section 2.3.4): Off or On. \\ \hline \multirow{4}{*}{Hybridization} & Strategy & Specify hybridization strategy (see Section 2.4): Off, Single, Clustering or Aggressive. \\ \cline{2-3} & LocalSearch & Specify derivative-free local search subroutine: interior-point, sqp, sqp-legacy or active-set. \\ \cline{2-3} & MaxIterations & Specify the maximum iteration limit for a single local search subroutine call: 1000. \\ \cline{2-3} & MaxEvaluations & Specify the maximum function evaluation limit for a single local search subroutine call: 3000. \\ \hline \hline \end{tabular} \end{table} Table 3: The parameters of GENDIRECT used to construct DIRECT-type algorithms, with default values highlighted in blue.
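When the two-phase selection step is enabled, as demonstrated in the following code snippet (a minimal illustration using the TwoPhase switch from Table 3):

```
alg.Selection.TwoPhase = 'On';
```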
With this setting, the algorithm uses the designed selection scheme ('alg.Selection') to expand the set of promising candidate solutions (POC) based on the calculated distances obtained using Eq. (14). As is easy to calculate from Fig. 1, there are 4096 different combinations for the POC selection step in GENDIRECT.

#### 3.1.3 Designing hybridization scheme

In the third block of Fig. 1, users are required to select the desired hybridization technique. There are only 13 possible combinations available in this block. For example, to specify a hybridization scheme that utilizes a strategy calling an SQP local search (parameter sqp) subroutine only when an improvement in the best current solution is achieved (parameter Single), the following code can be used:

```
alg.Hybridization.Strategy = 'Single';
alg.Hybridization.LocalSearch = 'sqp';
```

### Utilizing GENDIRECT through the graphical user interface

GENDIRECT is also accessible through the graphical user interface (GUI) of DIRECTGO. This GUI enables users to use GENDIRECT without requiring prior programming or algorithmic knowledge. To access the GENDIRECT tool, users can navigate to the MATLAB APPS menu on the toolbar. Within DIRECTGO, the generalized DIRECT algorithm (GENDIRECT) can be selected from the algorithm drop-down menu. The graphical interface of the main toolbox window of DIRECTGO is depicted in Fig. 2. The GENDIRECT window is centrally located in the GUI and facilitates the construction of the DIRECT-type algorithm by providing user-friendly functionalities. For more comprehensive details of DIRECTGO, see [47].

Figure 2: A snapshot of the graphical user interface (GUI) of GENDIRECT in the DIRECTGO software package.

### Remarks regarding the extension of GENDIRECT

GENDIRECT comprises two primary components, as illustrated in Fig. 3. Firstly, a function block encompasses various implementations of the steps involved in DIRECT-type algorithms. Secondly, the control structure ensures the seamless connection of algorithm components, facilitating the execution of the algorithm. If a researcher intends to integrate a newly proposed step into GENDIRECT, the function should be added to the function block. It is important to ensure that the implemented function adheres to the existing code's style. Subsequently, the newly created function should be incorporated into the control function of GENDIRECT accordingly, allowing GENDIRECT to utilize it effectively.

Figure 3: The framework of the generalized DIRECT algorithm system (GENDIRECT).

## 4 Simulation results and in-depth analysis

This section presents an analysis of the experimental results for the newly developed improved algorithms and their performance evaluation using GENDIRECT.

### An overview of benchmark test problems

We employed a comprehensive set of 324 benchmark test functions to thoroughly evaluate the newly proposed GENDIRECT framework. These test problems were sourced from the latest version of the DIRECTGOLib v2.0 library [43], which is built within the MATLAB environment.
The DIRECTGOLib v2.0 integrates ten libraries and collections of well-established and recently developed test problems. In Table 4, we present a summary of DIRECTGOLib v2.0 and its constituent libraries. The table provides essential details, including references, publication years, the pool of problems, and the counts of scalable, separable, and multi-modal problems. Specifically, the table comprises 136 test problems with fixed dimensions and 188 test benchmarks that can be adjusted to any dimension size \((n)\). For these test problems, we consider instances with variables set at \(n=2,5,\) and \(10\). However, it is worth noting that some functions, such as certain CEC functions [23; 55], are not applicable in all dimensions. In our study, we thoroughly examined a total of 634 test problems available in DIRECTGOLib v2.0 to ensure comprehensive and robust evaluations of the proposed algorithmic framework GENDIRECT.

In order to ensure that the global minimum point does not coincide with the initial sampling point in any tested algorithm, we employ shift operations. In other words, we randomly shift the solutions in the \(X\)-space. This involves transforming a given point \(\mathbf{x}\) into \(\hat{\mathbf{x}}\) using the following equation: \[\hat{x}_{j}=\min\left\{\max\left\{x_{j}-\rho_{j}\lambda\vec{x}_{j},a_{j}\right\},b_{j}\right\},\ j=1,...,n. \tag{15}\] Here, \(\vec{\mathbf{x}}\) is a random direction vector generated using the Mersenne-Twister pseudorandom generator, and \(\lambda\) is a step size that serves two important purposes:

* It prevents the global optima from moving outside of the feasible region.
* It allows for a more efficient placement of the solution within the problem domain, considering that different problems may have significantly different domain sizes.

The value of \(\lambda\) is calculated by solving the following linear programming problem: \[\begin{split}&\max\ \lambda\\ &\text{s.t.}\ \mathbf{x}^{*}+\lambda\vec{\mathbf{x}}\geq\mathbf{a}\\ &\mathbf{x}^{*}+\lambda\vec{\mathbf{x}}\leq\mathbf{b}\end{split} \tag{16}\] The shift operation introduces the possibility of regions outside the original feasible range \([\mathbf{a},\mathbf{b}]\) where, in certain instances, points with lower function values than the global optimum within the original feasible range may exist. To tackle this issue, the transformed vector \(\hat{\mathbf{x}}\) (15) is restricted to lie within the range \([\mathbf{a},\mathbf{b}]\) using min-max functions.
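A minimal sketch of Eqs. (15)-(16) follows (our illustration, assuming column vectors xstar, a, and b are given; because Eq. (16) is a box-constrained linear program in the scalar \(\lambda\), its solution has a simple closed form and no LP solver is needed):

```
% Sketch of Eqs. (15)-(16): shift a known solution xstar inside [a, b].
rng(0, 'twister');                  % Mersenne-Twister generator
n = numel(xstar);
xdir = randn(n, 1);                 % random direction vector
rho  = 0.1 * rand(n, 1);            % multiplication rates in [0, 0.1]

% Eq. (16): the largest lambda keeping xstar + lambda*xdir inside [a, b]
% is the tightest per-coordinate bound slack along the direction.
slack = inf(n, 1);
up = xdir > 0;  dn = xdir < 0;
slack(up) = (b(up) - xstar(up)) ./ xdir(up);
slack(dn) = (a(dn) - xstar(dn)) ./ xdir(dn);
lambda = min(slack);

% Eq. (15): shifted point, clamped back into [a, b] by min-max functions.
xhat = min(max(xstar - rho .* lambda .* xdir, a), b);
```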
\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Source} & \multirow{2}{*}{Year} & \multicolumn{4}{c}{Problems} \\ \cline{3-6} & & Total & Scalable & Separable & Multi-modal \\ \hline Hedar, [12] & 2005 & 31 & 17 & 8 & 23 \\ Hansen et al., [11] & 2009 & 24 & 24 & 5 & 14 \\ Jamil et al., [15] & 2013 & 167 & 69 & 49 & 127 \\ Gavana, [9] & 2013 & 193 & 76 & 64 & 156 \\ Surjanovic et al., [53] & 2013 & 50 & 23 & 11 & 40 \\ Liang et al., [23] & 2014 & 27 & 27 & 5 & 24 \\ Wu et al., [55] & 2017 & 20 & 20 & 0 & 19 \\ Oldenhuis, [32] & 2020 & 41 & 12 & 5 & 33 \\ Layeb, [1] & 2022 & 18 & 18 & 2 & 16 \\ Kudela et al., [20] & 2022 & 8 & 8 & 8 & 8 \\ \hline Stripinis et al., [43] & 2023 & 324 & 188 & 97 & 261 \\ \hline \hline \end{tabular} \end{table} Table 4: Compilation of test problems from various libraries in the latest version of the DIRECTGOLib v2.0 for box-constrained global optimization.

Nevertheless, one drawback of this approach is that the functions become "flat" in areas where the min-max restriction is applied. These flat regions increase in size as the value of \(\lambda\) increases. To address this concern, we opted to limit the range of the randomly generated shift vector by assigning a uniformly distributed random multiplication rate \(\rho_{j}\in[0,0.1]\) to each dimension \(j=1,...,n\). For convenient access to all test problems utilized in this paper and to replicate the random shift vectors, we created a dedicated MATLAB script in the "Scripts/MPC" directory of the GitHub repository ([https://github.com/blockchain-group/DIRECTGO](https://github.com/blockchain-group/DIRECTGO)). These scripts serve as valuable tools for reproducing the findings presented in this investigation and for comparing and evaluating newly developed algorithms.

### Setup and fundamental basis for algorithm comparison

All computations were executed on an Intel(R) Core\({}^{TM}\) i5-10400 @ 2.90GHz processor running MATLAB R2023a. The algorithms' solutions were compared with the globally optimal solution for each problem, and we considered the solver successful when the objective function value of a solution was within 0.01% of the global optimum. For all analytical test cases with a known global optimum \(f^{*}\), we employed a stopping criterion based on the percent error (\(pe\)), as defined below: \[pe=100\times\begin{cases}\frac{f(\mathbf{x})-f^{*}}{|f^{*}|},&f^{*}\neq 0\\ f(\mathbf{x}),&f^{*}=0\end{cases} \tag{17}\] The algorithms were terminated under the following conditions:

* When \(pe\) became smaller than \(\varepsilon_{\text{pe}}=0.01\);
* When the number of function evaluations exceeded the prescribed limit \(M_{\max}=n\times 10^{5}\);
* When the execution time exceeded \(T_{\max}=30\) CPU minutes.

In the latter two cases, the final result was set to \(n\times 10^{5}\) to facilitate further processing of the result.

### Algorithm design in GENDIRECT

Considering that the developed GENDIRECT software allows for a large number of combinations, identifying the most effective ones may require a substantial amount of time and effort. Therefore, we cannot guarantee that the algorithms presented are the most efficient within GENDIRECT. Furthermore, the benchmark set includes numerous distinct problems, such as discontinuous, non-differentiable, multi-modal, non-symmetric, and plateau functions. It is improbable that a single combination will be the most efficient for all of these diverse problem types.
### Algorithm design in GENDIRECT

Considering that the developed GENDIRECT software allows for a large number of combinations, identifying the most effective ones may require a substantial amount of time and effort. Therefore, we cannot guarantee that the algorithms presented are the most efficient within GENDIRECT. Furthermore, the benchmark set includes numerous distinct problems, such as discontinuous, non-differentiable, multi-modal, non-symmetric, and plateau functions. It is improbable that a single combination will be the most efficient for all of these diverse problem types.

According to the no-free-lunch theorem for optimization [54], there exists no universal optimization algorithm that performs optimally on all types of optimization problems. As a result, certain modifications and additions to specific algorithms may not enhance performance on all problems and could even lead to a decline in performance in certain cases. Therefore, the most promising approach would involve leveraging machine learning-enhanced automated algorithm selection techniques [19] to generate algorithms tailored to specific problems. However, this avenue remains a part of our future work and has yet to be explored.

To showcase the benefits of the GENDIRECT software, we conducted an experiment involving five existing DIRECT-type algorithms: 1-DTC-GL [51], HALRECT-IA [45], MrDIRECT [26], BIRMIN [35], and DIRMIN [28]. Our aim was to improve their average performance across a designated set of test problems by introducing new algorithmic steps or substituting existing ones. In Table 5, we present the parameter configuration of each of the five selected algorithms together with its improved version. For pure DIRECT-type algorithms, which are characterized by slow solution refinement, enhanced performance was achieved by incorporating local search techniques. On the other hand, for hybrid methods, we made different adjustments to improve their performance. Specifically, for the BIRMIN algorithm, our goal was to increase the number of evaluations per iteration through enhancements, while for the DIRMIN algorithm, we pursued the opposite approach.

It is essential to note that the construction of the original algorithms within GENDIRECT may not always produce identical results to the implementations provided in DIRECTGO [48]. The discrepancy in the results can be attributed to the numerical tolerances used in the implementations, which play a critical role in the outcome. For instance, authors might employ rounding on hyper-rectangle measure sizes, enabling them to group extremely small hyper-rectangles together. Additionally, they might consider two function values identical if their difference is below a certain threshold. These variations in the implementations can significantly impact the selection of POCs.

| Original algorithm parameters | 1-DTC-GL | HALRECT-IA | MrDIRECT | BIRMIN | DIRMIN |
|---|---|---|---|---|---|
| Partitioning_Strategy | 'DTC' | 'BBC' | 'DTC' | 'BBP' | 'DTC' |
| Partitioning_SubSide | 'One' | 'All' | 'All' | 'One' | 'All' |
| Selection_AggreProCar'al | 'Midpoint' | 'MidBin' | 'Midpoint' | 'Midpoint' | 'Midpoint' |
| Selection_CandMeasure | 'Diagonal' | 'Diagonal' | 'Diagonal' | 'Diagonal' | 'Diagonal' |
| Selection_Strategy | 'Pareto' | 'Aggressive' | 'Original' | 'Original' | 'Original' |
| Selection_EqualCand | 'One' | 'One' | 'All' | 'One' | 'All' |
| Selection_SolRefin | 'Off' | 'Off' | 'Mis' | 'Mis' | 'Mis' |
| Selection_Ep | – | – | 0.0001 | 0.0001 | 0.0001 |
| Selection_ControlEP | 'Off' | 'Off' | 'Off' | 'Midin' | 'Off' |
| Selection_GloballyBiased | 'Off' | 'Off' | 'Off' | 'Off' | |
| Selection_TwoPhase | 'On' | 'Off' | 'Off' | 'Off' | 'Off' |
| Hybridization_Strategy | 'Off' | 'Off' | 'Off' | 'Single' | 'Aggressive' |
| Hybridization_LocalSearch | – | – | – | 'interior-point' | 'interior-point' |
| Hybridization_MaxIterations | – | – | – | 1000 | 1000 |
| Hybridization_MaxIterations | – | – | – | 3000 | 3000 |

Table 5: Description of used parameters for each selected algorithm and their improved versions in GENDIRECT. The blue color indicates the parameter that has been substituted or has been added.
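Read column-wise, each column of Table 5 is essentially a key-value configuration. Purely as an illustration, such a configuration could be expressed as a dictionary; GENDIRECT itself is a MATLAB package, and the keys below simply follow the (partly garbled) labels of Table 5 rather than any official API.

```python
# Hypothetical configuration mirroring the 1-DTC-GL column of Table 5.
one_dtc_gl = {
    "Partitioning_Strategy": "DTC",
    "Partitioning_SubSide": "One",
    "Selection_CandMeasure": "Diagonal",
    "Selection_Strategy": "Pareto",
    "Selection_EqualCand": "One",
    "Selection_SolRefin": "Off",
    "Selection_TwoPhase": "On",
    "Hybridization_Strategy": "Off",
}
```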
### Results and discussions

In this section, we conduct a performance evaluation of ten DIRECT-type algorithms, five of which are newly generated with GENDIRECT. The experimental results presented in this evaluation can also be accessed digitally in the "Results/MPC" directory of the GitHub repository, available at [https://github.com/blockchain-group/DIRECTGO](https://github.com/blockchain-group/DIRECTGO).

#### 4.4.1 Comparison of success rates and function evaluations utilization

Table 6 provides an overview of the success rates achieved by the ten DIRECT-type approaches considered on various subsets of the DIRECTGOLib v2.0 test problems. In particular, enhancements that improve the performance of the original algorithm are highlighted in green, while those that lead to deteriorating results are marked in red. The most remarkable enhancement in success rates was observed for the algorithm that performed worst in this study (MrDIRECT) after applying the improvements: its enhanced version yielded a remarkable increase in the success rate of 15.78%. Moreover, the most significant improvements were evident in the resolution of uni-modal problems, where the enhanced version failed to locate the desired solutions within the allocated evaluation budget in 32.41% fewer instances than the pure MrDIRECT version. Among the pure DIRECT-type algorithms, the 1-DTC-GL algorithm exhibited the lowest increase in success rates. When considering the allocated budget for function evaluations, the improved 1-DTC-GL algorithm failed to provide a solution to 113 problems, while the original version struggled with 123 problems.

| Algorithm | Overall | Separable | Non-separable | Multi-modal | Uni-modal | Scalable | Non-scalable |
|---|---|---|---|---|---|---|---|
| Impr. 1-DTC-GL | 82.18 | 91.71 | 77.62 | 77.30 | 98.62 | 78.92 | 94.12 |
| Orig. 1-DTC-GL | 80.60 | 91.71 | 75.29 | 75.66 | 97.24 | 76.91 | 94.12 |
| Impr. HALRECT-IA | 78.08 | 87.32 | 73.66 | 72.19 | 97.93 | 73.69 | 94.12 |
| Orig. HALRECT-IA | 67.35 | 76.59 | 62.94 | 62.58 | 83.45 | 62.45 | 85.29 |
| Impr. MrDIRECT | 64.20 | 78.05 | 57.58 | 55.42 | 93.79 | 59.84 | 80.15 |
| Orig. MrDIRECT | 48.42 | 65.85 | 40.09 | 44.58 | 61.38 | 43.17 | 67.65 |
| Impr. BIRMIN | 75.08 | 83.90 | 70.86 | 68.30 | 97.93 | 70.68 | 91.17 |
| Orig. BIRMIN | 70.66 | 82.43 | 65.03 | 63.60 | 94.48 | 65.06 | 91.17 |
| Impr. DIRMIN | 76.34 | 84.39 | 72.49 | 70.34 | 96.55 | 71.28 | 94.85 |
| Orig. DIRMIN | 77.76 | 84.88 | 74.36 | 72.19 | 96.55 | 73.09 | 94.85 |

Table 6: Comparison of the success rates (percentage of solved problems) of different algorithms in solving test problems with various characteristics.

An important observation is that the original 1-DTC-GL algorithm performed quite well, surpassing the overall performance of the improved versions of other less efficient algorithms. The enhancements in the hybrid algorithms resulted in increased success rates only for the BIRMIN algorithm, whereas the success rates for the DIRMIN algorithms exhibited a slight deterioration in most of the subsets considered.
Despite the improvement achieved in the BIRMIN algorithm, it still remained outperformed by both versions of the DIRMIN algorithm in almost all cases.

Fig. 4 presents a box plot that compares the algorithms based on function evaluations per dimension on all test problems. An important distinction between pure and hybrid algorithms is that pure algorithms generally require more function evaluations, even for relatively simple optimization problems. On the contrary, hybrid algorithms demonstrate the ability to solve such problems quickly and efficiently. Almost all hybrid algorithms achieved similarly low first-quartile values, indicating that these methods could solve at least 25% of the test problems faster than pure algorithms. Specifically, four algorithms (original and improved DIRMIN, improved BIRMIN, and improved 1-DTC-GL) shared the lowest first quartile. On the contrary, the original HALRECT-IA and original 1-DTC-GL algorithms exhibited the worst first-quartile performance, each requiring approximately nine and six times more function evaluations, respectively, than the best-performing algorithm, DIRMIN.

Figure 4: Box plot graphical comparison of algorithms' performance based on function evaluations per dimension across all test problems.

The improved 1-DTC-GL algorithm demonstrated the best median value, while its pure counterpart, the original 1-DTC-GL version, had the third worst median value in these studies. The addition of the local search procedure to the 1-DTC-GL algorithm reduced the median value by nearly eight times, resulting in a significant improvement in its performance. Interestingly, the median value of the original MrDIRECT algorithm is equal to the maximum function evaluation budget (\(M_{\max}\)), indicating that the algorithm could not solve more than half of the test problems. However, its improved version exhibited a significantly lower median value. When comparing the third-quartile values, four algorithms reached the \(M_{\max}\) value in the third quartile, indicating that these algorithms left at least 25% of the problems unsolved. Only six algorithms achieved values lower than the maximum evaluation budget. Among these, the improved 1-DTC-GL algorithm achieved the lowest third-quartile value, approximately half that of the second-best pure algorithm, the original 1-DTC-GL algorithm.

#### 4.4.2 Analysis of results across different subsets of problems

The data profiles [31] depicted in Fig. 5 showcase how all algorithms perform on test problems with various properties from DIRECTGOLib v2.0. These profiles provide a comprehensive view of algorithm performance across different types of problems. Meanwhile, the data profiles in Fig. 6 offer an overall ranking of the algorithms on all test problems, providing a more focused perspective on their performance in a broader context.

Hybridization of pure DIRECT-type algorithms significantly impacts the results, particularly when dealing with straightforward uni-modal or separable test problems. The inclusion of a local search procedure proves to be particularly advantageous for uni-modal problems, as it accelerates the convergence speed to reach optimal solutions more efficiently. On the other hand, pure DIRECT-type algorithms might prioritize the global search and exhaust the evaluation budget without locating the solution within the prescribed accuracy.
As a result, the curves of the improved versions of 1-DTC-GL, HALRECT-IA, and MrDIRECT demonstrate significantly better performance than the original versions, especially for small evaluation budgets (\(\leq 1000\times n\)). However, it is worth noting that the most successful pure DIRECT-type algorithm, 1-DTC-GL, eventually achieves nearly identical performance within the maximum evaluation budget, regardless of whether the problems are separable or uni-modal.

The improved hybrid algorithm BIRMIN exhibits slightly lower performance within a small evaluation budget (\(M_{\max}\leq n\times 10^{2}\)). However, as the evaluation budget increases (\(M_{\max}\geq n\times 10^{4}\)), the improved version outperforms the original version. This difference in performance becomes particularly evident when the algorithm is applied to non-separable or multi-modal test problems. On the other hand, the curves of the two versions of the hybrid algorithm DIRMIN are almost indistinguishable within a smaller evaluation budget (\(M_{\max}\leq 2n\times 10^{4}\)). However, within a larger evaluation budget, the original DIRMIN algorithm exhibits slightly better performance.

Based on the four graphs in Fig. 5 and the overall ranking of the algorithms in Fig. 6, a consistent conclusion can be drawn: the performance of the improved and original 1-DTC-GL algorithms is the most efficient or at least comparable to the best-performing algorithm. Analyzing the curves, it is evident that the improved 1-DTC-GL algorithm exhibits the highest efficiency rates across all graphs compared to other algorithms within any evaluation budget in \([0,n\times 10^{5}]\). Although the original 1-DTC-GL becomes competitive, it requires a significant budget of function evaluations (\(M_{\max}\geq n\times 10^{4}\)). Overall, the improved 1-DTC-GL algorithm remains competitive, requiring fewer function evaluations to achieve the desired optimal value on most test functions.

#### 4.4.3 Statistical analysis of the results

To validate the results and comparisons between algorithms, as well as to evaluate the significance of improvements achieved by GENDIRECT, we conducted the Friedman mean rank test [7] and the non-parametric Wilcoxon signed-rank test [13] at a significance level of 5%. A \(p\)-value greater than 0.05 indicates that the difference in results between methods is statistically insignificant.

Figure 5: Data profiles on problem subsets: the horizontal axis represents the number of function evaluations per dimension, while the vertical axis represents the fraction of solved problems.

Table 7 displays the Friedman mean rank values for the considered algorithms, computed from the best solution values found within different evaluation budgets on all test problems. The results reveal that the improved versions consistently outperform their original counterparts in all budgets, except for the DIRMIN algorithm, where the original version obtained a better (lower) mean rank in one specific evaluation budget (\(M_{\max}=n\times 10^{5}\)). The improvements made to the algorithms have resulted in performance gains ranging from small to significant, as indicated by the lower mean rank values. Table 8 presents the \(p\)-values obtained by comparing the improved algorithms with their original counterparts, using the solutions found within four evaluation budgets on all test problems.
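For readers who want to replicate this testing protocol, the following is a small SciPy sketch of both tests. The array contents here are random stand-ins; the actual per-problem results live in the repository's "Results/MPC" directory.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata, wilcoxon

rng = np.random.default_rng(1)
results = rng.random((634, 10))  # placeholder: best value found per (problem, algorithm)

# Friedman test over all ten algorithms, plus mean ranks as reported in Table 7
stat, p_friedman = friedmanchisquare(*(results[:, j] for j in range(results.shape[1])))
mean_ranks = np.apply_along_axis(rankdata, 1, results).mean(axis=0)

# Wilcoxon signed-rank test: improved (column 0) vs. original (column 1) version
stat_w, p_wilcoxon = wilcoxon(results[:, 0], results[:, 1])
print(p_friedman, mean_ranks.round(4), p_wilcoxon < 0.05)
```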
| Algorithm | \(M_{\max}=n\times 10^{2}\) | \(n\times 10^{3}\) | \(n\times 10^{4}\) | \(n\times 10^{5}\) |
|---|---|---|---|---|
| Impr. 1-DTC-GL | 4.7610 | 4.6128 | 4.6128 | 4.6601 |
| Orig. 1-DTC-GL | 6.0095 | 5.3375 | 5.3375 | 4.7831 |
| Impr. HALRECT-IA | 5.5670 | 5.5229 | 5.5229 | 5.3249 |
| Orig. HALRECT-IA | 7.2752 | 7.1372 | 7.1372 | 6.1672 |
| Impr. MrDIRECT | 5.3730 | 5.2500 | 5.2500 | 5.8028 |
| Orig. MrDIRECT | 5.3730 | 6.8099 | 6.8099 | 7.1435 |
| Impr. BIRMIN | 4.6151 | 4.6562 | 4.6562 | 5.1333 |
| Orig. BIRMIN | 5.0103 | 5.0765 | 5.0765 | 5.6356 |
| Impr. DIRMIN | 5.4219 | 5.2886 | 5.2886 | 5.2492 |
| Orig. DIRMIN | 5.5938 | 5.3084 | 5.3084 | 5.1002 |

Table 7: Friedman mean rank values with different objective function evaluation budgets.

Figure 6: Overall data profiles: the horizontal axis represents the number of function evaluations per dimension, while the vertical axis represents the fraction of solved problems.

For the improved 1-DTC-GL algorithm, there is strong statistical evidence that the improved version significantly outperforms the original version within a small evaluation budget (\(M_{\rm max}\leq n\times 10^{3}\)). However, as the evaluation budget increases (\(M_{\rm max}>n\times 10^{3}\)), the higher \(p\)-values suggest that the significance of the improvement decreases and the difference between the improved and original versions becomes less statistically significant. In contrast, the situation is different for the other two pure DIRECT-type algorithms. The improved versions of HALRECT-IA and MrDIRECT show no significant improvement compared to the original versions at \(M_{\rm max}=n\times 10^{2}\). However, for larger evaluation budgets, the \(p\)-values are low, indicating that the improvements are statistically significant. For the BIRMIN algorithm, the \(p\)-values are low in all these budgets, indicating that the improvement of the improved version of BIRMIN compared to the original version is statistically significant. Regarding the DIRMIN algorithm, we can conclude that the improved version of the algorithm is statistically better if the evaluation budgets are \(M_{\rm max}=n\times 10^{2}\) and \(M_{\rm max}=n\times 10^{4}\).

## 5 Conclusions and future works

This study introduces a novel generalized DIRECT-type algorithmic framework, known as GENDIRECT, for derivative-free global optimization. The proposed framework empowers users to construct a wide range of DIRECT-type algorithms. Such innovative work can foster the development of new DIRECT-type algorithms and help identify the most suitable algorithm for various practical applications. To demonstrate the efficiency of GENDIRECT, we enhanced five selected DIRECT-type algorithms with the goal of improving their performance and solving global optimization problems more effectively. Evaluation of these constructed algorithms was carried out using benchmark test functions from DIRECTGOLib v2.0. The results were analyzed both graphically and statistically to gain insight into the algorithms' performance. The findings concluded that the newly developed versions of the DIRECT-type algorithms significantly outperformed their original counterparts in most cases.
In conclusion, this paper has focused on box-constrained global optimization problems, but the generalized DIRECT-type algorithmic framework (GENDIRECT) could potentially be extended to handle constrained cases as well. Furthermore, due to the numerous combinations of algorithms within GENDIRECT, manually testing all of them becomes impractical. Therefore, future research should explore the automation of these processes using advanced machine-learning techniques. By automating the selection of algorithmic components, optimization can become more efficient and effective.

| Algorithm | \(M_{\max}=n\times 10^{2}\) | \(n\times 10^{3}\) | \(n\times 10^{4}\) | \(n\times 10^{5}\) |
|---|---|---|---|---|
| 1-DTC-GL | \(6.3325\times 10^{-3}\) | \(2.4241\times 10^{-8}\) | \(2.6994\times 10^{-1}\) | \(5.8529\times 10^{-1}\) |
| HALRECT-IA | \(2.3254\times 10^{-1}\) | \(4.0565\times 10^{-13}\) | \(5.3640\times 10^{-10}\) | \(3.3265\times 10^{-13}\) |
| MrDIRECT | \(1.0000\times 10^{0}\) | \(2.0141\times 10^{-34}\) | \(4.5374\times 10^{-40}\) | \(5.9712\times 10^{-34}\) |
| BIRMIN | \(2.5863\times 10^{-2}\) | \(2.8668\times 10^{-9}\) | \(1.0548\times 10^{-12}\) | \(1.5552\times 10^{-15}\) |
| DIRMIN | \(3.0296\times 10^{-8}\) | \(4.6683\times 10^{-1}\) | \(8.2874\times 10^{-3}\) | \(4.4600\times 10^{-4}\) |

Table 8: Wilcoxon signed-rank test \(p\)-values at the 5% significance level, comparing improved vs. original algorithms across various objective function evaluation budgets.

## Data statement

**DIRECTGOLib** (**DIRECT** **G**lobal **O**ptimization test problems **Lib**rary) is designed as a continuously growing open-source GitHub repository to which anyone can easily contribute. The exact data underlying this article from DIRECTGOLib v2.0 can be accessed on GitHub:

* [https://github.com/blockchain-group/DIRECTGOLib](https://github.com/blockchain-group/DIRECTGOLib), and used under the MIT license. We welcome contributions and corrections to it.
2306.01376
DSHGT: Dual-Supervisors Heterogeneous Graph Transformer -- A pioneer study of using heterogeneous graph learning for detecting software vulnerabilities
Vulnerability detection is a critical problem in software security and attracts growing attention both from academia and industry. Traditionally, software security is safeguarded by designated rule-based detectors that heavily rely on empirical expertise, requiring tremendous effort from software experts to generate rule repositories for large code corpus. Recent advances in deep learning, especially Graph Neural Networks (GNN), have uncovered the feasibility of automatic detection of a wide range of software vulnerabilities. However, prior learning-based works only break programs down into a sequence of word tokens for extracting contextual features of codes, or apply GNN largely on homogeneous graph representation (e.g., AST) without discerning complex types of underlying program entities (e.g., methods, variables). In this work, we are one of the first to explore heterogeneous graph representation in the form of Code Property Graph and adapt a well-known heterogeneous graph network with a dual-supervisor structure for the corresponding graph learning task. Using the prototype built, we have conducted extensive experiments on both synthetic datasets and real-world projects. Compared with the state-of-the-art baselines, the results demonstrate promising effectiveness in this research direction in terms of vulnerability detection performance (average F1 improvements over 10% in real-world projects) and transferability from C/C++ to other programming languages (average F1 improvements over 11%).
Tiehua Zhang, Rui Xu, Jianping Zhang, Yuze Liu, Xin Chen, Jun Yin, Xi Zheng
2023-06-02T08:57:13Z
http://arxiv.org/abs/2306.01376v3
# _DSHGT_: Dual-Supervisors Heterogeneous Graph Transformer

## 1. Introduction

Software vulnerabilities are considered a major threat to system robustness and operability. The number of vulnerabilities reported and registered has been increasing significantly over the last decade owing to the growth of software practitioners and complex codebases (Kumar et al., 2018). As a result, numerous methods and techniques have been developed to identify software vulnerabilities, especially at the early stage of development. Many vulnerability detection tools have also been developed by big tech companies such as Meta (Getafix (2018)) and Google (Tricorder (2018)). The underlying techniques can be broadly divided into two categories. Rule-based methods (Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018) take a set of expert-written rules to capture undesired code behaviours. Learning-based methods (Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018), on the other hand, intend to learn the underlying latent semantic and syntactic information and use the abnormal code corpus as the training samples. It has been shown in these studies (_e.g._, (Kumar et al., 2018), (Kumar et al., 2018) and (Kumar et al., 2018)) that the learning-based methods excel over expert-crafted rules in detecting common code vulnerabilities or bugs, especially with the recent advancements in deep learning techniques.

Many deep learning-based approaches model the source codes by capturing the shallow, contextual code token structures, mainly using the recurrent neural network (RNN) and its variant language models (Kumar et al., 2018). These models are designed to split the code base and its abstract syntax tree into the sequence format, and are thus ill-suited for encompassing the well-structured control dependency and data flows of programs. The use of graph neural networks (GNN) has recently emerged for solving code understanding tasks, owing to its potential to generalize both semantic and syntactic information. For instance, a gated GNN is first used in (Kumar et al., 2018) to represent the syntactic and semantic structure of a program snippet, which tends to solve the variable misuse and renaming problems rather than detecting general code vulnerabilities. Following that, a line of research started to explore the feasibility of using GNN to detect bugs in programs. While it is tempting to unleash the power of GNNs to accomplish vulnerability detection tasks, encoding the code logic into a reasonable graph structure is non-trivial. Many works exploit only a single relationship, such as syntax hierarchy, data flow, or control dependency, without considering the heterogeneous attributes in the generated code graph, losing the generality of complex node and relation types in the graph (Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018). The GNN models trained in this way have been shown to perform undesirably across different tasks. Apart from that, another drawback of most GNN works is that the trained program model is constrained to only one type of programming language, thus failing to verify how transferable the model is to other programming languages.
Also, auxiliary information such as method-level code comments in the program often provides an extra dimension of code features, which is rarely explored and could be helpful for improving the expressiveness of the model.

In this paper, we conduct a pioneer study to explore whether using a heterogeneous graph as the code representation, and adapting a promising graph neural network for the representation learning, can improve the performance of detecting software vulnerabilities, especially across different language platforms and targeting real-world software projects. For this purpose, we implement a novel heterogeneous graph learning-based framework, namely _DSHGT_. _DSHGT_ uses and adapts a Heterogeneous Graph Transformer (HGT) (Kumar et al., 2018), which reports state-of-the-art performance on modelling heterogeneous graphs. _DSHGT_ also uses the Code Property Graph (CPG) to represent software programs, which was first proposed in (Steintein et al., 2017) as a static vulnerability detection tool. CPG merges elements of abstract syntax trees, control flow graphs and program dependence graphs into a joint graph structure. The rich intra-program relations and logical syntactic flows in the CPG make it an ideal candidate for the heterogeneous graph representation in our study. Using _DSHGT_ for the intended heterogeneous graph learning, edge-driven weight matrices (e.g., for relationships) and node-driven attentions (e.g., for entities) derived from the initial CPG node embeddings can be parameterized specifically for the underlying heterogeneity. In such a way, nodes and edges of different types in the CPG are able to maintain their specific representation, and _DSHGT_ is able to generate diverse embedding representations of the program suitable for the vulnerability detection task. Additionally, we leverage the annotation information in the code to enhance the embedding capability of _DSHGT_. The word tokens in human-written code comments often contain supplementary semantic information about the program apart from code graph representations. To incorporate such information, _DSHGT_ introduces a multi-task learning mechanism, in which the trainable parameters in the model are updated by gradients with respect to both the vulnerability detection loss and the code comment generation loss; we name this design dual-supervisors.

In summary, the contributions of this paper are summarized as follows:

**Pioneer Study:** We present a pioneer study of heterogeneous graph learning for vulnerability detection by proposing and implementing _DSHGT_, which embeds both semantic and heterogeneity properties of code representations (CPG) for improved vulnerability detection.

**Dual-Supervisors Learning:** We design a multi-task learning framework with dual supervisors to utilize annotation information of codes to enhance the encoding capability of _DSHGT_, which enables _DSHGT_ to generalize well to diverse programming languages and real-world software projects.

**Extensive Experiments:** We conduct extensive experiments on both synthetic vulnerability datasets across different programming languages and real-world projects to verify our hypothesis that using heterogeneous graph learning, especially with the dual-supervisor architecture, can improve the state-of-the-art in software vulnerability detection, and we point out some interesting research directions for the community to follow.

The remainder of this paper is structured as follows. We first review the related work in Section 2.
In Section 3, we provide prerequisite backgrounds for our proposed _DSHGT_. In Section 4, we introduce the detailed methodology of _DSHGT_. In Section 5, we present the empirical study results and our discussion. In Section 6, we discuss the validity of our proposed method. We draw the conclusion and point out our future research direction in Section 7.

## 2. Related Work

We take an overview of related works in software vulnerability detection from three different categories: the traditional rule-based approach, the deep learning-based approach, and the graph learning-based approach.

For the traditional approach, early works on vulnerability detection are heavily reliant on human-crafted rules from domain experts. The work in (Dwork et al., 2018) is the first of this kind to implement rules to identify software bugs and vulnerabilities automatically. Following that, many static analysis tools (Beng et al., 2016; Chen et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019) were developed to cover some well-known security issues, all of which share the same principle that if the scanned code base fails to conform to the pre-defined rules, relevant vulnerabilities could occur. It is infeasible to craft rules that cover all possible code vulnerabilities, not to mention the required efforts to cope with the ever-changing code bases.

The rapid development of machine learning, especially deep learning techniques, unleashes great potential in enabling the automated learning of implicit vulnerable programming patterns. Many early works focus on extracting the features from lines of codes to facilitate vulnerability detection/prediction [5, 6, 29]. For instance, VulDeePecker [22] is the first deep learning-based binary vulnerability detector, which slices the program into code gadgets and utilizes BiLSTM to capture the semantic relations in the data dependence within the code gadgets. Similarly, \(\mu\)VulDeePecker [37] uses both BiLSTM and code attentions to capture more "localized" information within a code statement, and control dependence among method calls. LIN _et al._ [25] design a framework that uses data sources of different types for learning unified high-level representations of code snippets; it also uses BiLSTM as the core component of the learning process. DeepBugs [27] uses a feedforward network as the classifier for name-based bug detection, which reasons about names based on semantic representations. However, only the natural code sequences are considered in these works, and the intra-program flow logic and dependency information are omitted.

Neural networks on graphs have drawn increasing attention in recent years, focusing on learning the model based on graph-structured input [16, 19, 35]. Researchers have put efforts into exploring the feasibility of using code graph representations such as Abstract Syntax Trees (AST), Program Dependency Graphs (PDG), and Control Flow Graphs (CFG) for vulnerability detection tasks. The work in [1] first presents how to construct graphs from source code using AST and additional control and data flows, and then uses Gated Graph Neural Networks (GGNN) to detect variable misuse bugs. Afterwards, Devign [36] started to apply GNN on CPG. It extracts the AST, program control and data dependency from CPG to create a joint graph as the composite code representation, from which a GGNN is designed to learn the graph-level embeddings for the downstream vulnerability detection task.
FUNDED [32] integrates data and control flow into the AST as the code graph representation and starts to distinguish multiple code relationships when training the GGNN, which is achieved by representing the input program as multiple relation graphs. DeepWukong [4] combines the CFG and PDG to generate a refined subgraph called XFG for the program, and adopts three different GNNs to test the performance for bug prediction. Existing research mainly relies on adopting homogeneous graph learning techniques, in which the types of nodes and edges are discarded, making it infeasible to represent heterogeneous structures. However, we argue that the graph representations of codes convey rich semantic and logical information reflected in a variety of node/edge types and are intrinsic to the characteristics of heterogeneous graphs [33]. This motivates us to conduct this pioneering study of heterogeneous graph learning, which is shown later to improve over these state-of-the-art methods.

Figure 1: Sample code and corresponding generated CPG.

## 3. Preliminary

### Heterogeneous Graph

In this section, we provide a formal definition of the heterogeneous graph (HG).

### Heterogeneous Graph Transformer

In HGT, a triplet \(<\tau(s),\phi(e),\tau(t)>\) can be used to denote the relationships among a source node (\(s\)), a directed edge (\(e\)), and a target node (\(t\)). The function \(\tau(\cdot)\) is the node type mapping, which outputs the type of the input node. Similarly, the function \(\phi(\cdot)\) denotes the edge type mapping, which outputs the type of the input edge. This triplet and the original node embeddings for the source and target nodes are the input for HGT. The embedding of nodes is updated through multiple iterations, each of which includes three steps: _1)_ Attention calculation (Section 4.2.2); _2)_ Message calculation (Section 4.2.3); and _3)_ Aggregation of attention and message (Section 4.2.4). We will detail our adaptation of HGT in Section 4.2.

## 4. Heterogeneous Graph Learning

In our proposed heterogeneous graph learning procedure, we have the following two main steps. The first step is Graph Construction (shown in **Fig. 2**). We analyze all the source code and generate the initial CPG (**Fig. 2A** & **Fig. 2B**). As we focus on method-level vulnerability analysis in this pioneering study, we extract method-level CPGs from the initial CPG (**Fig. 2C**). Meanwhile, symbolization is also performed on method-level CPGs to reduce the noise introduced by personalized function/variable naming conventions (**Fig. 2D**). We then perform the embedding method for each node within the method-level CPGs for the next step (**Fig. 2E**). The second step is our adaptation of HGT - Dual-Supervisors HGT Learning (shown in **Fig. 3**). In Dual-Supervisors HGT learning, we use initial node features as the input of HGT to learn and extract the graph-level information. HGT can effectively encode the heterogeneity of CPG, which helps improve the generalization ability of the model. We then leverage dual-supervisors learning for both vulnerability prediction and code comment generation. We introduce the code comment as the second supervisor to align the latent features learned from the HGT with the underlying semantic meaning through the back-propagation process.

### 4.1 Graph Construction

We first analyze all the source code files and generate the initial corresponding Code Property Graphs (CPGs). In our case, all the related files (_e.g._ source code files and dependency library files) are within the same directory.
By inputting this directory to an open-source code parser _Joern_\({}^{1}\), these source code files are then iterated automatically to generate the corresponding CPG.

Footnote 1: [https://github.com/joernio/joern](https://github.com/joernio/joern)

In this paper, we concentrate on method-level vulnerability analysis, and Algorithm 1 demonstrates how we construct the method-level CPGs. As indicated in the pseudo-code, the generated CPG is denoted as \(c\), which contains all relationships of source codes within one leaf directory. The set of all directory-level CPGs is denoted as \(C\). Instead of using the original CPG \(c\), which contains much redundant information, we perform forward and backward traversal to generate the method-level CPG \(m\). Specifically, both traversals are based on Depth-First Search (DFS) for each _Method_ node within \(c\), and the set of all method-level CPGs is denoted as \(M\). Taking **Fig 2C** as an example, node \(3\) is a method node, from which we traverse forward through nodes \(6,7,8,11,13,14,15\), while traversing backward through node \(1\). Thus, all the traversed nodes are \(1,3,6,7,8,11,13,14,15\), including node \(3\) itself, and the corresponding method-level CPG (Method_CPG) can be generated by slicing this traversed set out of the original CPG. In each CPG (**Fig 2B**), we construct the heterogeneous graph by mapping the original entities (e.g., method name) and relationships (e.g., method call) to different types of nodes (e.g., _METHOD_) and edges (e.g., _CALL_). For a full list of nodes and edges we generated for CPG, please refer to Table 8 and Table 9.

```
Require: Source code root directory \(S\), \(Joern\) parser \(\mathcal{J}\)
Ensure: Method-level CPGs set \(M\)
 1: \(C\leftarrow\emptyset\)
 2: for each leaf directory \(l\in\operatorname{dir}(S)\) do
 3:   generate CPG \(c\) through \(\mathcal{J}(l)\), and add to set \(C\)
 4: end for
 5: \(M\leftarrow\emptyset\)
 6: for each \(c\in C\) do
 7:   \(N\leftarrow\) all \(method\) type nodes within \(c\)
 8:   for each \(n\in N\) do
 9:     start at \(n\), perform DFS forward traverse
10:     start at \(n\), perform DFS backward traverse
11:     generate method-level CPG \(m\) for method \(n\)
12:     add \(m\) to set \(M\)
13:   end for
14: end for
```
**Algorithm 1** Generating Method-level Code Property Graph

Meanwhile, to alleviate the noise introduced by personalized naming conventions for functions and variables and to better preserve the original code semantics (Beng et al., 2017), we then perform symbolization on method-level CPGs (shown in **Fig 2D**). Following that, different function and variable names defined by users are unified to _METHOD\(N\)( )_ and _VAR\(N\)_, where \(N\in\mathbb{Z}^{+}\). For example, the function names _readData( )_ and _writeData( )_ and variable names \(x\) and \(y\) will be mapped to _METHOD1( )_, _METHOD2( )_, _VAR1_, and _VAR2_, respectively. The actual numbers \(N\) used in the symbolization may vary. As shown in **Fig 2E**, we then perform Doc2Vec embedding (Grover et al., 2017) for each node within the method-level CPGs. This embedding serves as the initial node feature and will be refined during the Dual-Supervisors HGT learning.

### 4.2 Dual-Supervisors HGT Learning

The overall architecture of the Dual-Supervisors HGT (_DSHGT_) is shown in **Fig 3**. For each target node in a given CPG, we consider all its connected neighbors as source nodes, and for each source node/target node pair we define the \(<\tau(s),\phi(e),\tau(t)>\) triplet as the relationship of this pair (shown in **Fig 3A**); a sketch of this triplet construction, together with the DFS slicing of Algorithm 1, is given below.
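The following is a minimal Python sketch of both steps, assuming the CPG is held in a networkx MultiDiGraph whose nodes carry an "ntype" attribute and whose edges carry an "etype" attribute; these attribute names and the function names are our own illustrative choices, not part of the Joern output format.

```python
import networkx as nx

def method_cpg(cpg: nx.MultiDiGraph, method_node):
    """Algorithm 1 slicing: forward + backward DFS from a METHOD node."""
    keep = {method_node}
    keep |= set(nx.dfs_preorder_nodes(cpg, source=method_node))                       # forward
    keep |= set(nx.dfs_preorder_nodes(cpg.reverse(copy=False), source=method_node))   # backward
    return cpg.subgraph(keep).copy()

def triplets_for(cpg: nx.MultiDiGraph, t):
    """Enumerate the <tau(s), phi(e), tau(t)> triplets of one target node (Fig 3A)."""
    return [(cpg.nodes[s]["ntype"], data["etype"], cpg.nodes[t]["ntype"])
            for s, _, data in cpg.in_edges(t, data=True)]
```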
For each triplet, we then calculate the attention score between the source node and the target node (shown in **Fig 3B**), calculate the message passing score of each source node (shown in **Fig 3C**), and aggregate the information from the above two steps to update the target node embedding (shown in **Fig 3D**). To improve the robustness of the learned features (the aggregated target node embedding), we use the existing code comment for each method as the additional supervisor in the multi-task learning (shown in **Fig 3E**). We will walk through each step in more detail below. Note that we use \(H^{\alpha}[\beta]\) to denote node \(\beta\)'s embedding in the \(\alpha\)-th layer, and \(H^{\alpha-1}[\beta]\) to denote node \(\beta\)'s embedding in the \((\alpha-1)\)-th layer, throughout the whole section.

Figure 3. _DSHGT_ Learning

#### 4.2.1. Constructing DSHGT Input Triplet (Fig 3A)

For each method CPG, we iteratively walk through the CPG using the depth-first search algorithm and construct the triplets from the root level all the way to the leaf nodes. For instance, when we walk to node \(11\) (_i.e._ node \(t\)) in **Fig 3A**, we treat it as the current target node. We find that its neighbor nodes are \(8\), \(15\), \(14\) and \(13\) (_i.e._ nodes \(s_{1}\), \(s_{2}\), \(s_{3}\) and \(s_{4}\), respectively). Then we construct the following triplets: \(<\tau(s_{1}),\phi(e),\tau(t)>\), \(<\tau(s_{2}),\phi(e),\tau(t)>\), \(<\tau(s_{3}),\phi(e),\tau(t)>\) and \(<\tau(s_{4}),\phi(e),\tau(t)>\). We then feed them all to _DSHGT_. Note that, to simplify the figure, we only present the embeddings for nodes \(t\), \(s_{1}\) and \(s_{2}\) in **Fig 3**.

#### 4.2.2. Heterogeneous Attention Calculation (Fig 3B)

Firstly, we calculate the attention between \(s_{1}\) and \(t\), where \(s_{1}\) is one of the neighbor nodes of \(t\). The calculation involves five equations (Eq. 1 to Eq. 5).

\[Q^{i}(t)=H^{(l-1)}[t]\cdot Q\text{-}\mathit{Linear}^{i}_{\tau(t)} \tag{1}\]

where the dimension of \(H^{(l-1)}[t]\) (_i.e._ the embedding of \(t\)) is \(R^{1\times d}\), the dimension of \(Q\text{-}\mathit{Linear}^{i}_{\tau(t)}\) is \(R^{d\times\frac{d}{h}}\), \(i\) (\(i\in[1,h]\)) represents the \(i\)-th attention head, and the dimension of \(Q^{i}(t)\) is \(R^{1\times\frac{d}{h}}\). We project \(H^{(l-1)}[t]\) to \(Q^{i}(t)\) through \(Q\text{-}\mathit{Linear}^{i}_{\tau(t)}\) (_i.e._ the query matrix), and each \(\tau(\cdot)\) has its own matrix \(Q\text{-}\mathit{Linear}^{i}_{\tau(t)}\) on the \(i\)-th head.

\[K^{i}(s_{1})=H^{(l-1)}[s_{1}]\cdot K\text{-}\mathit{Linear}^{i}_{\tau(s_{1})} \tag{2}\]

In Eq. 2, the dimension of \(H^{(l-1)}[s_{1}]\) (_i.e._ the embedding of \(s_{1}\)) is \(R^{1\times d}\), the dimension of \(K\text{-}\mathit{Linear}^{i}_{\tau(s_{1})}\) is \(R^{d\times\frac{d}{h}}\), where \(i\) (\(i\in[1,h]\)) represents the \(i\)-th attention head, and the dimension of \(K^{i}(s_{1})\) is \(R^{1\times\frac{d}{h}}\). We project \(H^{(l-1)}[s_{1}]\) to \(K^{i}(s_{1})\) through \(K\text{-}\mathit{Linear}^{i}_{\tau(s_{1})}\) (_i.e._ the key matrix), and each \(\tau(\cdot)\) has its own matrix \(K\text{-}\mathit{Linear}^{i}_{\tau(s_{1})}\) on the \(i\)-th head.

\[\mathit{ATT\text{-}head}^{i}(s_{1},e,t)=(K^{i}(s_{1})\times W^{ATT}_{\phi(e)}\times Q^{i}(t)^{T})\times\frac{\mu_{<\tau(s_{1}),\phi(e),\tau(t)>}}{\sqrt{d}} \tag{3}\]

Eq. 3 calculates the attention value of the \(i\)-th head from \(s_{1}\) to \(t\).
The \(W^{ATT}_{\phi(e)}\in R^{\frac{d}{h}\times\frac{d}{h}}\) stands for a learnable parameter matrix for edge type \(\phi(e)\), which represents the learnable semantic information for each edge type. The term \((K^{i}(s_{1})\times W^{ATT}_{\phi(e)}\times Q^{i}(t)^{T})\) is the raw attention value of the \(i\)-th head. Its dimension is \(R^{1\times 1}\) (_i.e._ \(R^{1\times\frac{d}{h}}\times R^{\frac{d}{h}\times\frac{d}{h}}\times R^{\frac{d}{h}\times 1}\to R^{1\times 1}\)). The \(\mu\) is a tensor associated with the triplet \(<\tau(s_{1}),\phi(e),\tau(t)>\), which acts as a scaling factor for this triplet relationship. Its dimension is \(R^{A\times E\times A}\), where \(A=|\tau(\cdot)|\) and \(E=|\phi(\cdot)|\). It is worth noting that the magnitude of the \(K\)-\(Q\) dot product grows significantly with the dimension, which would push the _Softmax_ function into regions of small gradient values. Thus, we divide the raw value by \(\sqrt{d}\) to maintain healthy gradient values after _Softmax_, which helps the training.

\[\mathbf{Attention}_{DSHGT}(s_{1},e,t)=\underset{i\in[1,h]}{\parallel}\mathit{ATT\text{-}head}^{i}(s_{1},e,t) \tag{4}\]

Eq. 4 calculates the attention value from \(s_{1}\) to \(t\). The \(h\) head attention values from Eq. 3 are concatenated together into a vector with dimension \(R^{h\times 1}\). Note that the attention calculation is the same for all the triplets, \(<\tau(s_{2}),\phi(e),\tau(t)>\), \(<\tau(s_{3}),\phi(e),\tau(t)>\), _etc_. To yield the final attention value for each head, we gather the attention vectors from all neighbors \(N(t)\) and conduct a _Softmax_, as shown in Eq. 5.

\[\mathbf{Attention}_{DSHGT}(s,e,t)=\underset{\forall s\in N(t)}{Softmax}\left(\mathbf{Attention}_{DSHGT}(s,e,t)\right) \tag{5}\]

#### 4.2.3. Heterogeneous Message Calculation (Fig 3C)

Secondly, we show how to calculate the message from \(s_{1}\) to \(t\), which involves three equations (Eq. 6 to Eq. 8).

\[V^{i}(s_{1})=H^{(l-1)}[s_{1}]\cdot V\text{-}Linear^{i}_{\tau(s_{1})} \tag{6}\]

In Eq. 6, the dimension of \(H^{(l-1)}[s_{1}]\) is \(R^{1\times d}\), the dimension of \(V\text{-}Linear^{i}_{\tau(s_{1})}\) is \(R^{d\times\frac{d}{h}}\), where \(i\) (\(i\in[1,h]\)) represents the \(i\)-th message head, and the dimension of \(V^{i}(s_{1})\) is \(R^{1\times\frac{d}{h}}\). We project \(H^{(l-1)}[s_{1}]\) to \(V^{i}(s_{1})\) through \(V\text{-}Linear^{i}_{\tau(s_{1})}\) (_i.e._ the value matrix), and each \(\tau(\cdot)\) has its own parameter \(V\text{-}Linear^{i}_{\tau(s_{1})}\) on the \(i\)-th head.

\[\text{\emph{MSG-head}}^{i}(s_{1},e,t)=V^{i}(s_{1})\times W^{\text{\emph{MSG}}}_{\phi(e)} \tag{7}\]

Eq. 7 calculates the message value of the \(i\)-th head from \(s_{1}\) to \(t\). The \(W^{\text{\emph{MSG}}}_{\phi(e)}\in R^{\frac{d}{h}\times\frac{d}{h}}\) stands for a learnable parameter matrix for edge type \(\phi(e)\), and each \(\phi(\cdot)\) has its own \(W^{\text{\emph{MSG}}}_{\phi(e)}\) matrix. The dimension of \(\text{\emph{MSG-head}}^{i}(s_{1},e,t)\) is \(R^{1\times\frac{d}{h}}\) (_i.e._ \(R^{1\times\frac{d}{h}}\times R^{\frac{d}{h}\times\frac{d}{h}}\to R^{1\times\frac{d}{h}}\)).

\[\mathbf{Message}_{DSHGT}(s_{1},e,t)=\underset{i\in[1,h]}{\parallel}\text{\emph{MSG-head}}^{i}(s_{1},e,t) \tag{8}\]

The message value from \(s_{1}\) to \(t\) is calculated in Eq. 8, in which the \(h\) head message values from Eq. 7 are concatenated together into a matrix with dimensions \(R^{h\times\frac{d}{h}}\). Note that the message calculation is the same for all the triplets, \(<\tau(s_{2}),\phi(e),\tau(t)>\), \(<\tau(s_{3}),\phi(e),\tau(t)>\), _etc_.
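Condensing Eqs. (1)-(8) into code, the following is a simplified single-head PyTorch sketch; the \(\mu\) prior tensor and the multi-head concatenation are omitted for brevity, and the class and argument names are ours, not the authors' released implementation. The final weighted sum previews the aggregation described in the next subsection.

```python
import torch
import torch.nn as nn

class HeteroAttentionLayer(nn.Module):
    """Single-head sketch of Eqs. (1)-(8): per-node-type Q/K/V projections
    and per-edge-type W_ATT / W_MSG matrices."""
    def __init__(self, node_types, edge_types, d):
        super().__init__()
        self.q = nn.ModuleDict({t: nn.Linear(d, d, bias=False) for t in node_types})
        self.k = nn.ModuleDict({t: nn.Linear(d, d, bias=False) for t in node_types})
        self.v = nn.ModuleDict({t: nn.Linear(d, d, bias=False) for t in node_types})
        self.w_att = nn.ParameterDict({e: nn.Parameter(torch.eye(d)) for e in edge_types})
        self.w_msg = nn.ParameterDict({e: nn.Parameter(torch.eye(d)) for e in edge_types})
        self.d = d

    def forward(self, h_t, tau_t, neighbours):
        # neighbours: list of (h_s, tau_s, phi_e) tuples for the sources of t
        q = self.q[tau_t](h_t)                                        # Eq. (1)
        scores, msgs = [], []
        for h_s, tau_s, phi_e in neighbours:
            k = self.k[tau_s](h_s)                                    # Eq. (2)
            scores.append(k @ self.w_att[phi_e] @ q / self.d ** 0.5)  # Eq. (3), mu omitted
            msgs.append(self.v[tau_s](h_s) @ self.w_msg[phi_e])       # Eqs. (6)-(7)
        att = torch.softmax(torch.stack(scores), dim=0)               # Eq. (5)
        # attention-weighted sum of messages (the aggregation of Section 4.2.4)
        return (att.unsqueeze(-1) * torch.stack(msgs)).sum(dim=0)
```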
#### 4.2.4. Heterogeneous Node Embedding Aggregation (Fig 3D)

Thirdly, we calculate the aggregation of the attention and message from \(s_{1}\) to \(t\).

\[\tilde{H}^{l}[t_{s_{1}}]=\mathbf{Attention}_{DSHGT}(s_{1},e,t)\otimes\mathbf{Message}_{DSHGT}(s_{1},e,t) \tag{9}\]

The weighted message from \(s_{1}\) is shown in Eq. 9, where \(\otimes\) is element-wise multiplication. The dimension of \(\mathbf{Attention}_{DSHGT}(s_{1},e,t)\) is \(R^{h\times 1}\), and the dimension of \(\mathbf{Message}_{DSHGT}(s_{1},e,t)\) is \(R^{h\times\frac{d}{h}}\). Note that the weighted message calculation is the same for all the triplets. After calculating the weighted messages for all neighbors \(N(t)\), we can update the target node \(t\) embedding based on the messages from its neighbors.

\[\tilde{H}^{l}[t]=\underset{\forall s\in N(t)}{\oplus}\tilde{H}^{l}[t_{s}] \tag{10}\]

We then reshape the vector to \(\tilde{H}^{l}[t]\in R^{1\times d}\).

\[H^{l}[t]=\alpha\left(\tilde{H}^{l}[t]\cdot A\text{-}Linear_{\tau(t)}\right)+H^{l-1}[t] \tag{11}\]

In Eq. 11, \(\tilde{H}^{l}[t]\) stands for the node \(t\) embedding for the current layer, \(H^{l-1}[t]\) stands for the node \(t\) embedding from the previous layer, and \(\alpha\) is the activation function (_i.e._ ReLU). We project \(\tilde{H}^{l}[t]\) through \(A\text{-}Linear_{\tau(t)}\in R^{d\times d}\). Note that each \(\tau(\cdot)\) has its own parameters in \(A\text{-}Linear_{\tau(t)}\). The projection (_i.e._ \(\tilde{H}^{l}[t]\cdot A\text{-}Linear_{\tau(t)}\)) then goes through the activation function before adding the node \(t\) embedding from the previous layer as a residual, yielding the final node \(t\) embedding for the current layer. In one iteration, we update the embeddings of all the nodes within the heterogeneous graph following the same procedure. We iterate this calculation for every node within the method-level CPG for \(L\) layers. The \(L\)-th layer output \(H^{L}[t]\) (**Fig 3D**) will be used for downstream tasks.

#### 4.2.5. Dual-Supervisors Learning (Fig 3E)

The _DSHGT_ node embedding procedure goes through \(L\) iterations (_i.e._ \(L\) layers of _DSHGT_), and each layer uses the previous layer's embedding as input (the initial layer's input is based on the CPG embedding; details in Section 4.1). In our experiments, we perform an empirical study and set \(L=3\) (details analyzed in Section 5.4). As the output of _DSHGT_ (_i.e._ \(H^{L}[t]\)) is node-based embedding, we construct a _Readout_ layer for the graph-level embedding output:

\[z^{\mathcal{G}}=\mathit{MEAN}\left(\mathit{MLP}\left(\mathcal{X}\oplus H^{L}\right)\right) \tag{12}\]

Instead of directly taking out the embeddings, we concatenate them with the initial node embeddings, pass them through a shallow multi-layer perceptron (MLP), and follow with a row-wise _MEAN_ operation. \(\mathcal{X}\) is defined in Section 3.1 and represents the initial node embeddings. This output then goes through a dual-supervisors structure (_i.e._ an MLP and a Decoder) for multi-task purposes (**Fig 3E**). For detecting vulnerabilities within the source code (**Fig 3E(b)**), we use a 1-layer MLP for **0/1** classification, where **0** stands for no vulnerability while **1** means the source code segment contains vulnerabilities.
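A minimal PyTorch sketch of the readout in Eq. (12) together with this classification head might look as follows; the layer sizes and names are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ReadoutClassifier(nn.Module):
    def __init__(self, d_init, d_hgt, d_hidden=128):
        super().__init__()
        # shallow MLP applied to [X ; H^L] before the row-wise mean, Eq. (12)
        self.mlp = nn.Sequential(nn.Linear(d_init + d_hgt, d_hidden), nn.ReLU())
        self.cls = nn.Linear(d_hidden, 2)   # 1-layer MLP head for 0/1 classification

    def forward(self, x0, h_last):
        # x0: initial node embeddings X, h_last: L-th layer HGT output H^L
        z_g = self.mlp(torch.cat([x0, h_last], dim=-1)).mean(dim=0)  # graph embedding z^G
        return self.cls(z_g)                # logits: 0 = benign, 1 = vulnerable
```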
We only test whether the code snippets contain vulnerabilities, and do not classify them into a specific type of CWE. On the other hand, we consider the graph-level embedding as an _Encoder_ output for the source code and design the corresponding _Decoder_ (a 1-layer LSTM) to summarize the corresponding source code comments in a sequence-to-sequence fashion (**Fig 3E(c)**). Then we compare the generated code comments with the comments within the source code (_i.e._ the ground truth), which yields the cross-entropy loss for the multi-task objective. To leverage the _loss_ from the two supervisors, we implement the following equation for _loss_ fusion:

\[loss=(1-\lambda)\times loss_{main}+\lambda\times loss_{sup} \tag{13}\]

In Eq. 13, \(loss_{main}\) is the _loss_ of the \(0/1\) classification and \(loss_{sup}\) is the _loss_ of the code comment prediction. The \(\lambda\) is the parameter for adjusting the weight of \(loss_{sup}\) in the total \(loss\).

## 5. Experiment

We evaluate the performance of our framework on different datasets against a number of state-of-the-art graph-based or traditional vulnerability detection models. We aim to answer the following research questions.

**RQ1**: How well does our proposed framework perform compared with other baselines on public C/C++ vulnerability datasets?

**RQ2**: Can the framework achieve a consistently higher vulnerability detection capability when applied to other programming languages?

**RQ3**: How should the contributions from the two supervisors (i.e., vulnerability and code comment oracles) be balanced to improve the performance?

**RQ4**: How much can the CPG input representation and the HGT backbone improve the performance?

**RQ5**: How effective is our proposed method when applied to detect vulnerabilities in real-world open-source projects?

### 5.1 Experimental Setup

We describe the experimental setup in this section, including the environment, baselines, evaluation metrics, and the preparation of the dataset. The parameter statistics of the models are shown in Table 1.

#### 5.1.1. Environment

We implemented our heterogeneous graph-based vulnerability detection model using Python v3.7 and PyTorch v1.11.0. As mentioned in Section 4.1, we leveraged _Joern_ to generate the initial CPGs of different programming languages. We trained and tested our model on a computer with an 8-core 3.8 GHz Intel Xeon CPU and an NVIDIA 3080Ti GPU. The hyperparameter setup can be found in Table 2.

#### 5.1.2. Baselines

We compared our _DSHGT_ with the following state-of-the-art baselines and report the comparison statistics.

**LIN** _et al._ [25] design a framework that uses data sources of different types for learning unified high-level representations of code snippets. It uses BiLSTM as the core component of the learning process.

**DEVIGN** [36] combines AST, program control and data dependency as the joint graph to represent the composite code representation, from which a gated graph neural network model is designed to learn the graph-level embeddings for the vulnerability detection task.

**FUNDED** [32] integrates data and control flow into the AST as the code graph representation, which is then used as the input for the gated graph neural network (GGNN) to train the vulnerability detection model. It uses Word2Vec [26] to generate the initial node embeddings.

**DeepWukong** [4] is also a graph learning-based approach that encodes both textual and structured information of code into code representations. It is designed specifically to detect C/C++ vulnerabilities. It uses Doc2Vec [18] to generate initial node embeddings from the PDG.
Note that for all baseline models, we used the default hyperparameters as reported in the respective literature.

| Model Information | HGT | Decoder | MLP | Total |
|---|---|---|---|---|
| Number of Parameters | 5,091,894 | 46,280 | 12,456 | 5,150,720 |
| Size | 5.09 MB | 0.05 MB | 0.01 MB | 5.15 MB |

Table 1. Information of model parameters.

#### 5.1.3. Evaluation Metrics

We used **Accuracy**, **Precision**, **Recall** and **F1** scores to evaluate the vulnerabilities detected by a model, which are widely used in the machine learning community to verify the generalization ability of a predictive model [24].

#### 5.1.4. Dataset Preparation

We used several vulnerability datasets to verify our model and compared the performance with the baseline models. For **RQ1** to **RQ4**, we chose the _Software Assurance Reference Dataset_ (**SARD**), which is a widely used vulnerability database with a large set of synthetic programs [4, 24, 32]. In **SARD**, a program is labelled as good (not vulnerable), bad (vulnerable) or mixed (vulnerable with patched updates). For vulnerable programs, **SARD** describes the vulnerability and the vulnerability type in the **CWE ID** (**C**ommon **W**eakness **E**numeration **ID**entifier) format. It also contains the human-crafted annotations in the programs as supplementary information of the codes. We used a total of 22 categories for C/C++, Java and PHP, of which 10 are the most common types in the 2022 **CWE** Top 25 Most Dangerous Software Weaknesses\({}^{2}\) and the remaining 12 are most typical of other types. Regarding the sample numbers of each type of **CWE**, we present and explain them in detail in Table 10. In each type, we selected 80% of the sample size as the training dataset and 20% as the test dataset. These categories are harvested from **SARD**, which is comprehensive and covers most of the vulnerability types.

For **RQ5**, we leveraged two real-world open-source projects, **FFmpeg**\({}^{3}\) and **QEMU**\({}^{4}\). These two large open-source projects are written in C, involving many contributions and code commits from software developers. The labels of **FFmpeg** and **QEMU** are based on vulnerability-fix commits or non-vulnerability-fix commits of these projects. The vulnerability-fix commits (VFCs) are the code commits that fix a potential vulnerability of a function/method, while the non-vulnerability-fix commits (non-VFCs) are commits considered less relevant to fixing vulnerabilities. The detailed statistics and descriptions of the datasets can be found in Table 10.

Footnote 3: [https://ffmpeg.org/](https://ffmpeg.org/)

Footnote 4: [https://www.qemu.org/](https://www.qemu.org/)

### 5.2 Performance Analysis on SARD (RQ1)

We first verified the effectiveness of the proposed model and other baselines on a number of CWE vulnerability types, for which the models are trained and tested on **SARD** synthetic code samples. Note that the training and testing data ratio is 80% and 20%, respectively. Table 3 reports the evaluation metrics for each vulnerability type. In general, even without incorporating the semantic meaning of code comments into the model (_DSHGTnoAnno_), our proposed model achieves promising results on almost all vulnerability types. _DSHGTnoAnno_ is the variant of _DSHGT_ trained only on the code representation graph using the vulnerability oracle.
Devign, Lin et al., DeepWukong, and _DSHGTnoAnno_ give low F1 and accuracy scores on the CWE-834 dataset, which describes an "_Excessive Iteration_" vulnerability that leads to the over-consumption of resources and can be exploited by attackers. This type of vulnerability presents no clear sign of vulnerable code patterns and is hardly identified by learning solely on code graph representations or code tokens. Note that the method in DeepWukong of constructing Extracted Flow Graphs (XFG) is based on prior knowledge of vulnerable code locations, but it is infeasible to retrieve that in real-world scenarios. In order to facilitate comparisons on an equal footing, we created a dataset for each method individually, which could lead to slightly different results compared to those presented in the original paper of DeepWukong. _DSHGT_, on the other hand, also leverages the semantic code comment information of the programs to enhance the robustness of the detection ability, thus achieving much better results compared with _DSHGTnoAnno_ and other baselines. It can also be observed that both Devign and Lin et al. perform undesirably on CWE-469, in which the vulnerability is caused by the misuse of pointer variables. _DSHGT_ is the best-performing model on this dataset, indicating that the intra-program dependence in the CPG provides sufficient information when modelling the code graph in this case.

| Name | Setup |
|---|---|
| Readout func (HGT) | 2 linear layers, 1 output layer |
| Layer depth (HGT) | 3 |
| Attention head (HGT) | 4 |
| Loss function | Cross entropy loss |
| Optimizer | Adam [15] |
| Learning rate | 2e-3 |
| Dropout rate | 0.5 |
| Batch size | 64 |
| Epochs | 50 |
| Weight initializer | Xavier [11] |

Table 2. Hyperparameter setup.

FUNDED achieves a marginal performance gain compared with _DSHGT_ and DeepWukong on CWE-676. We discover that this dataset relates to the "Use of Potentially Dangerous Function" vulnerability, which can be identified by models capable of encoding control flow information. Therefore, FUNDED, DeepWukong and _DSHGT_ achieve similar results on this dataset. Lin et al. achieves the best Accuracy score on CWE-78, indicating high True Positive and True Negative numbers. Yet it reports a low F1 score with poor results on Precision and Recall. Overall, _DSHGT_ achieves the best results on most of the tested datasets, with an average of 88.05% on accuracy and 88.35% on F1.

In order to more clearly and accurately demonstrate the advantages of _DSHGT_, we also performed a quantitative analysis of the experimental data (Table 3), using the independent samples t-test. We tested _DSHGT_ against DEVIGN, LIN et al., FUNDED, DeepWukong, and _DSHGTnoAnno_ separately. In each test, the comparisons were made separately for Accuracy and F1. The quantitative analysis is used to show whether the advantage of _DSHGT_ over the comparison schemes is statistically significant. We use _DSHGT_ACC_ to denote a set of experiment results.
Use \(\tilde{X}_{DSHGT\_ACC}\) to denote the mean of _DSHGT_ACC_, \(S_{DSHGT\_ACC}\) \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Metric DatasetModel} & \multirow{2}{*}{DEVIGN} & \multicolumn{2}{c|}{LIN et al.} & \multicolumn{2}{c|}{FUNDED} & \multicolumn{2}{c|}{DeepWukong} & \multicolumn{2}{c|}{DSHGTnoAnno} & \multicolumn{2}{c|}{DSHGT} \\ \cline{2-11} & ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 \\ \hline CWE-119 & 0.79 & 0.81 & **0.85** & 0.78 & 0.83 & 0.85 & 0.82 & 0.84 & 0.80 & 0.86 & 0.84 & **0.87** \\ \hline CWE-400 & **0.84** & 0.81 & 0.79 & 0.76 & **0.84** & 0.80 & **0.84** & 0.79 & 0.82 & 0.83 & 0.83 & **0.85** \\ \hline CWE-404 & 0.84 & 0.82 & 0.83 & 0.74 & 0.81 & 0.85 & 0.83 & 0.84 & 0.83 & 0.88 & **0.88** & **0.90** \\ \hline CWE-369 & 0.83 & 0.78 & 0.82 & 0.80 & 0.86 & 0.84 & **0.91** & 0.87 & 0.90 & **0.91** & 0.89 & 0.88 \\ \hline CWE-191 & 0.82 & 0.76 & 0.80 & 0.73 & **0.87** & 0.90 & 0.75 & 0.81 & 0.81 & 0.87 & 0.85 & **0.91** \\ \hline CWE-476 & 0.91 & 0.87 & 0.89 & 0.83 & 0.83 & 0.87 & **0.90** & 0.86 & 0.85 & 0.84 & 0.86 & **0.89** \\ \hline CWE-467 & 0.79 & 0.84 & 0.87 & 0.81 & 0.85 & 0.86 & 0.88 & 0.86 & 0.86 & 0.83 & **0.90** & **0.87** \\ \hline CWE-78 & 0.82 & 0.84 & **0.89** & 0.79 & 0.83 & **0.86** & 0.84 & 0.83 & 0.84 & 0.86 & 0.85 & 0.84 \\ \hline CWE-772 & 0.83 & 0.77 & 0.85 & 0.81 & 0.86 & 0.87 & 0.86 & 0.83 & 0.86 & 0.88 & **0.90** & **0.88** \\ \hline CWE-190 & 0.86 & 0.83 & 0.83 & 0.79 & 0.86 & 0.84 & 0.87 & 0.83 & 0.85 & 0.82 & **0.92** & **0.87** \\ \hline CWE-770 & 0.87 & 0.84 & 0.89 & 0.80 & 0.85 & 0.87 & 0.86 & 0.87 & 0.85 & 0.86 & **0.90** & **0.89** \\ \hline CWE-666 & 0.85 & 0.84 & 0.88 & 0.86 & 0.89 & 0.90 & 0.87 & 0.92 & 0.86 & 0.91 & **0.90** & **0.93** \\ \hline CWE-665 & 0.83 & 0.87 & 0.90 & 0.79 & 0.93 & 0.88 & 0.92 & 0.89 & 0.92 & 0.92 & **0.94** & 0.92 \\ \hline CWE-758 & 0.84 & 0.87 & 0.86 & 0.83 & 0.84 & 0.88 & 0.87 & 0.92 & 0.85 & 0.89 & **0.91** & 0.93 \\ \hline CWE-469 & 0.75 & 0.79 & 0.78 & 0.76 & **0.86** & 0.83 & 0.76 & 0.79 & **0.83** & 0.84 & 0.83 & **0.86** \\ \hline CWE-676 & 0.84 & 0.80 & 0.84 & 0.75 & **0.92** & **0.91** & 0.89 & 0.83 & 0.86 & 0.89 & 0.90 & **0.91** \\ \hline CWE-834 & 0.70 & 0.62 & 0.76 & 0.74 & 0.84 & 0.79 & 0.74 & 0.72 & 0.83 & 0.76 & **0.87** & **0.82** \\ \hline CWE-79 & 0.82 & 0.84 & 0.84 & 0.85 & 0.85 & 0.83 & 0.86 & 0.88 & 0.86 & 0.87 & **0.88** & **0.90** \\ \hline CWE-89 & 0.85 & 0.82 & 0.76 & 0.80 & 0.83 & 0.85 & 0.81 & 0.84 & **0.87** & 0.85 & **0.87** & **0.87** \\ \hline CWE-416 & 0.83 & 0.85 & 0.80 & 0.81 & 0.84 & 0.87 & 0.79 & 0.84 & 0.85 & 0.88 & **0.88** & **0.90** \\ \hline CWE-20 & 0.84 & 0.88 & 0.80 & 0.83 & 0.87 & 0.89 & 0.86 & 0.87 & 0.89 & 0.90 & **0.90** & **0.92** \\ \hline CWE-125 & 0.81 & 0.85 & 0.78 & 0.83 & 0.82 & 0.87 & 0.84 & 0.84 & 0.85 & 0.86 & **0.86** & **0.89** \\ \hline \end{tabular} \end{table} Table 3. 
Results of the comparison with different baselines on SARD to denote the standard deviation, and \(N_{DSHGT\_ACC}\) to denote the number of samples: \[\begin{split}&\tilde{X}_{DSHGT\_ACC}=\frac{1}{N_{DSHGT\_ACC}}\sum_{i=1}^{N_{DSHGT\_ACC}}X_{DSHGT\_ACC}^{i}\\ & S_{DSHGT\_ACC}=\sqrt{\frac{1}{N_{DSHGT\_ACC}-1}\sum_{i=1}^{N_{DSHGT\_ACC}}(X_{DSHGT\_ACC}^{i}-\tilde{X}_{DSHGT\_ACC})^{2}}\end{split} \tag{14}\] When we compare \(DEVIGN\_ACC\) and \(DSHGT\_ACC\), we can construct, from the definition of the t-distribution: \[\frac{\tilde{X}_{DSHGT\_ACC}-\tilde{X}_{DEVIGN\_ACC}}{\sqrt{\frac{S_{DSHGT\_ACC}^{2}}{N_{DSHGT\_ACC}}+\frac{S_{DEVIGN\_ACC}^{2}}{N_{DEVIGN\_ACC}}}}\sim t(V) \tag{15}\] The degrees of freedom \(V\) of this t-distribution are related to the degrees of freedom \(V_{DSHGT\_ACC}\) and \(V_{DEVIGN\_ACC}\) through the Welch-Satterthwaite approximation: \[V\approx\frac{\left(\frac{S_{DSHGT\_ACC}^{2}}{N_{DSHGT\_ACC}}+\frac{S_{DEVIGN\_ACC}^{2}}{N_{DEVIGN\_ACC}}\right)^{2}}{\frac{S_{DSHGT\_ACC}^{4}}{N_{DSHGT\_ACC}^{2}\cdot V_{DSHGT\_ACC}}+\frac{S_{DEVIGN\_ACC}^{4}}{N_{DEVIGN\_ACC}^{2}\cdot V_{DEVIGN\_ACC}}} \tag{16}\] We performed a quantitative analysis of the experimental results based on the aforementioned \(\tilde{X}\), \(S\) and \(V\) for the t-test, and the results are presented in Table 4. We set a common significance level of 0.05 and the degrees of freedom to 42, and formulated two hypotheses: * Null Hypothesis (H0): There is no significant difference between the means of the two samples, i.e., \(\mu_{1}=\mu_{2}\). * Alternative Hypothesis (H1): There is a significant difference between the means of the two samples, i.e., \(\mu_{1}\neq\mu_{2}\). It can be observed that we can reject the Null Hypothesis and accept the Alternative Hypothesis except for F1 in the comparison of _DSHGT_ and FUNDED, F1 in the comparison of _DSHGT_ and DeepWukong, and F1 in the comparison of _DSHGT_ and _DSHGTnoAnno_. In all three of these comparisons, we cannot quantitatively determine that _DSHGT_ is better at a significance level of 0.05. Nevertheless, upon a direct and close examination of the results, it becomes evident that _DSHGT_ outperforms the comparison schemes in all three of these comparisons. While the enhancement of the proposed method is modest in comparison to the baselines, our primary objective is to enhance the generalizability of our detection model, as evidenced by the subsequent experiments. ### Transferability on Other Programming Languages (RQ2) We experimented with the transferring capability of our proposed model against the other baselines. To achieve that, we adopted the transfer learning technique of keeping the model structure and weights trained on the C/C++ dataset, and fine-tuning them on other programming languages for vulnerability detection tasks. It is proven feasible in [(32)] that a model trained in one programming-language domain can preserve the knowledge of vulnerable code patterns and thus be used to detect similar vulnerabilities in other programming languages. In this case, we first trained our model and the other baselines on CWE-78 of the C/C++ datasets. To transfer the knowledge learned from C/C++, we only fine-tuned the final MLP classifier of each model (2 linear layers and 1 output layer) for 10 epochs. We consider two cases that use transfer learning: C/C++ to Java and C/C++ to PHP. Figure 5 shows the result of the models' transferability on CWE-78, CWE-79 and CWE-89.
It can be observed that our pre-trained model can better capture the prior vulnerable code patterns and achieve promising results when applied to other programming languages, with 84% accuracy from C to Java and 88% accuracy from C to PHP. We further explored the rationale behind the effectiveness of using transfer learning. It can be observed from Figure 4 that both the C/C++ and Java code samples in CWE-78 construct an OS command (highlighted in red) using externally-influenced input from an upstream component, without validating the special element that could harm the system; they are thus under threat of command injection attacks. In the CPG of both code samples, a control flow edge should exist if this command variable is validated before being used by other threads, thus presenting a similar code pattern regardless of language syntax. The result verifies that our model can better capture both the contextual semantics and the underlying structure (syntax, control- and data-flow) of the code with the help of the CPG and the corresponding heterogeneous graph learning, and is thus able to preserve language-agnostic vulnerable code patterns. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Comparison & Metric & Model & Mean & Standard deviation \\ \hline \multirow{4}{*}{_DSHGT_-DEVIGN} & \multirow{2}{*}{ACC} & _DSHGT_ & 0.8786 & 0.0329 \\ & & DEVIGN & 0.8205 & 0.0489 \\ \cline{2-5} & \multirow{2}{*}{F1} & _DSHGT_ & 0.8791 & 0.0333 \\ & & DEVIGN & 0.7909 & 0.0385 \\ \hline \multirow{4}{*}{_DSHGT_-LIN et al.} & \multirow{2}{*}{ACC} & _DSHGT_ & 0.8795 & 0.0337 \\ & & LIN et al. & 0.8518 & 0.0274 \\ \cline{2-5} & \multirow{2}{*}{F1} & _DSHGT_ & 0.8845 & 0.0302 \\ & & LIN et al. & 0.8586 & 0.0322 \\ \hline \multirow{4}{*}{_DSHGT_-FUNDED} & \multirow{2}{*}{ACC} & _DSHGT_ & 0.8827 & 0.0361 \\ & & FUNDED & 0.8331 & 0.0546 \\ \cline{2-5} & \multirow{2}{*}{F1} & _DSHGT_ & 0.8841 & 0.0367 \\ & & FUNDED & 0.8573 & 0.0388 \\ \hline \multirow{4}{*}{_DSHGT_-DeepWukong} & \multirow{2}{*}{ACC} & _DSHGT_ & 0.8645 & 0.0184 \\ & & DeepWukong & 0.8486 & 0.0281 \\ \cline{2-5} & \multirow{2}{*}{F1} & _DSHGT_ & 0.8841 & 0.0367 \\ & & DeepWukong & 0.8361 & 0.0517 \\ \hline \multirow{4}{*}{_DSHGT_-DSHGTnoAnno} & \multirow{2}{*}{ACC} & _DSHGT_ & 0.8682 & 0.0180 \\ & & DSHGTnoAnno & 0.8536 & 0.0197 \\ \cline{2-5} & \multirow{2}{*}{F1} & _DSHGT_ & 0.8800 & 0.0120 \\ & & DSHGTnoAnno & 0.8636 & 0.0194 \\ \hline \end{tabular} \end{table} Table 4. Results of T-test Figure 4. Both C/C++ and Java code samples with “OS Command Injection” vulnerability on CWE-78 Figure 5. Knowledge transferring capability from C/C++ to other programming languages
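The fine-tuning protocol behind Figure 5 (freeze the weights learned on C/C++, then retrain only the final MLP classifier for 10 epochs on the target language) can be sketched in PyTorch as follows; `model`, `model.classifier`, and `target_lang_loader` are illustrative placeholders, not the released implementation:

```python
# Sketch of the RQ2 transfer protocol: keep the backbone weights learned on
# C/C++ frozen, and fine-tune only the final MLP classifier head.
# `model`, `model.classifier`, and `target_lang_loader` are placeholders.
import torch

for p in model.parameters():
    p.requires_grad = False           # freeze the backbone trained on C/C++
for p in model.classifier.parameters():
    p.requires_grad = True            # unfreeze the 2-linear-layer + output MLP

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=2e-3)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(10):               # 10 fine-tuning epochs, as in the paper
    for graphs, labels in target_lang_loader:   # Java or PHP samples
        optimizer.zero_grad()
        logits = model(graphs)
        loss = loss_fn(logits, labels)
        loss.backward()
        optimizer.step()
```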
### Impact of Dual-supervisors (RQ3) To answer RQ3, we studied the necessity of designing multi-task learning with dual supervisors to enhance the performance of _DSHGT_, as well as how to balance the two supervisors responsible for the vulnerability label oracle and the code comment oracle, in which the textual code comments are used as supplementary information. Note that the use of auxiliary annotations of code snippets has proven helpful for tasks like code completion [(7)], code summarization [(23)], and code retrieval [(31)]. We thus hypothesize that the graph code embedding should contain rich semantic information capable of summarizing the code snippets in code comment formats. In the experiment, the sensitivity of \(\lambda\) is explored to control the level of impact caused by contextual code comments when training the model, in which 0 indicates only training the model based on vulnerability labels, while 1 means only optimizing the model towards generating correct code comment summaries of the code (reflected in Eq. 13). We first experiment on the HGT layer depth and compare the change of layer depth against detection performance and model training cost. It is shown in Table 5 that the time cost increases with the HGT layer depth, and F1 reaches its highest when the layer depth is 3. We thus choose 3 as the layer-depth setup for HGT. Regarding the sensitivity of \(\lambda\), we conducted experiments on CWE-119, CWE-190 and CWE-476. Figure 6 reveals the change of Accuracy and F1 scores along with \(\lambda\). Both scores on the three datasets reach their highest when \(\lambda=0.2\) and decrease afterwards, indicating that over-reliance on code comments aggravates the vulnerability detection ability, subject to the quality and number of code comments in the programs, while using the semantic information in code comments is beneficial to some extent. It is not uncommon to have a mixture of good and bad code comments in the program. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \multirow{2}{*}{Metric} & \multicolumn{5}{c|}{HGT Layer Depths} \\ \cline{2-6} & 1 & 2 & 3 & 4 & 5 \\ \hline F1 & 0.79 & 0.87 & 0.89 & 0.88 & 0.85 \\ \hline Training Time Cost/h & 4.83 & 9.45 & 14.88 & 19.81 & 23.64 \\ \hline \end{tabular} \end{table} Table 5. Detection performance (F1) and training time cost under different HGT layer depths \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline & Model & ACCURACY & PRECISION & RECALL & F1 \\ \hline w/o graph-based learning & BiLSTM & 0.72 & 0.74 & 0.65 & 0.69 \\ \hline \multirow{2}{*}{graph-based learning} & \(HGT_{homo}\) & 0.79 & 0.80 & 0.68 & 0.74 \\ \cline{2-6} & \(HGT_{heter}\) & 0.83 & 0.92 & 0.79 & 0.86 \\ \hline \end{tabular} \end{table} Table 6. Ablation study on graph learning and non-graph learning on CPG Figure 6. Change of Acc and F1 along with \(\lambda\) on CWE-119, CWE-190 and CWE-476
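To make the balance between the two supervisors concrete, here is a minimal sketch of a \(\lambda\)-weighted dual-supervisor objective. This is an illustrative reading of Eq. 13 with the best-performing \(\lambda=0.2\); the tensor names are placeholders, not the authors' released code:

```python
# Illustrative sketch of a dual-supervisor objective: a vulnerability-label
# loss plus a code-comment reconstruction loss, mixed by lambda (cf. Eq. 13).
# `vuln_logits`, `labels`, `comment_logits`, `comment_tokens` are placeholders.
import torch.nn.functional as F

lam = 0.2  # best-performing weight in the paper's sensitivity study

def dual_supervisor_loss(vuln_logits, labels, comment_logits, comment_tokens):
    # Supervisor 1: binary vulnerable/not-vulnerable classification.
    vuln_loss = F.cross_entropy(vuln_logits, labels)
    # Supervisor 2: token-level loss of the LSTM comment decoder.
    comment_loss = F.cross_entropy(
        comment_logits.flatten(0, 1),   # (batch * seq_len, vocab)
        comment_tokens.flatten()        # (batch * seq_len,)
    )
    return (1 - lam) * vuln_loss + lam * comment_loss
```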
Taking CWE-119 as an example, we discover that some code comments, like "_fixes the problem of not freeing the data in the source_", have specific meanings or descriptions for the method; their semantics are then comprehended into the graph-level embedding through the training process and become helpful for determining whether the method is vulnerable. There also exist code comments like "_use goodsource and badsink by changing the first GLOBAL_CONST_TRUE to GLOBAL_CONST_FALSE_", which are not helpful. Over-reliance on such code comments (large \(\lambda\)) thus deteriorates the vulnerability detection performance. ### Ablation Study (RQ4) We first investigate the use of AST, PDG, AST+CFG, and AST+PDG as different code graph representations, aiming to check the effectiveness of using the CPG. As shown in Figure 7, the model trained on AST-based graphs generates the worst results, meaning that syntax information alone is insufficient for detecting code vulnerabilities. In general, using heterogeneous code graph representations such as AST+CFG and AST+PDG produces better results than using AST and PDG separately, and the CPG achieves the best results of all, as it combines properties of the AST, CFG and PDG. Additionally, we explore the importance of heterogeneous graph learning when it comes to encoding the CPG. As pointed out in the previous section, the core component enabling heterogeneous graph learning is HGT, which presents good performance when incorporating a large number of neighbors of different types, _i.e._, complex node and edge heterogeneity in the graph structure (Gupta et al., 2018). To answer this, we implemented \(\text{HGT}_{homo}\), a variant of HGT that maintains only a single set of parameters for all relations (e.g., the \(Q\)-_Linear_, \(K\)-_Linear_ and \(V\)-_Linear_ matrices share the same parameters regardless of types). By doing so, \(\text{HGT}_{homo}\) preserves the graph learning ability while ignoring the various node/edge types in the CPG. It can be seen clearly that \(\text{HGT}_{heter}\) demonstrates a significant performance gain across all metrics, proving the great importance of encoding the structural heterogeneity of the program graphs. Specifically, \(\text{HGT}_{heter}\) keeps distinct edge-based matrices \(W^{ATT}_{\phi(e)}\) and \(W^{MSG}_{\phi(e)}\) for each edge type \(\phi(e)\), which enables the model to distinguish the semantic differences of relations even between the same node types. Additionally, the edge-driven attention mechanism allows the target node to measure the importance of each connected source node with different edge types. For instance, in the case of the buffer overflow vulnerability in CWE-119, the falsely converted _unsign_ length variable (source node) produced by _atoi_ contributes more to the _memcpy_ method (target node), with a larger attention value over the data-dependency edge type. Figure 7. Results on different code graph representations The importance of this _unsign_ variable's node embedding will eventually be reflected in the graph-level embedding through the readout operation, which then helps the downstream vulnerability detection task. Apart from that, we also study the role of graph learning in the experiment. Similar to (Krizhevsky et al., 2014; Krizhevsky et al., 2014), we use BiLSTM as the alternative to encode the code snippets, which tokenizes the code representation (i.e., the CPG in this case) as input while ignoring the syntax, semantics, and flow-structure information in the graphs.
The experimental results show that \(\text{HGT}_{heter}\) outperforms BiLSTM with a performance gain greater than 10%, manifesting the significance of incorporating structure information when modelling code snippets. ### Results on Real-world Open Source Projects (RQ5) We verify the effectiveness of our proposed framework and the other baselines on real-world programs extracted from the C/C++ open-source projects _FFmpeg_ and _QEMU_. The detailed statistics of these projects are shown in Table 10. Table 7 records the experimental results on these projects. In general, we observe a drop in performance for both our proposed framework and the other baselines on real-world open projects, as there are far fewer labeled vulnerable samples in these two projects than synthetic code samples in **SARD**. This is attributable to the existence of vulnerability label noise in both FFmpeg and QEMU, since the function-level vulnerabilities are labelled based on determining vulnerability-fix commits or non-vulnerability-fix commits of these projects. Additionally, Lin et al. underperforms on all metrics, especially **Recall** (high volumes of false-negative predictions), suggesting that only using code tokens in the program to train the detection model is neither sufficient nor applicable to real-world projects. The other baselines present results with marginal differences, while _DSHGT_ outperforms all of them. We discover that both projects contain a large number of fixes for vulnerabilities such as _Memory Leak_ and _Buffer Overflow_, which require better encoding of the information of different edge types (control flow and dependency information) into the graph-level embedding for better detection results. _DSHGT_ learns on the CPG without losing the heterogeneity attributes, and is thus more generalizable even on real-world projects. Overall, _DSHGT_ delivers the best performance for **Accuracy** (7.9% improvement on average), **Precision** (7.65% improvement on average), **Recall** (7.5% improvement on average) and **F1** (10.6% improvement on average). A promising way to improve the performance on real-world projects is to adapt state-of-the-art zero-shot or few-shot learning specifically for heterogeneous graph networks. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Project & Metric & DEVIGN & LIN et al. & DeepWukong & FUNDED & DSHGT \\ \hline \multirow{4}{*}{FFmpeg} & ACCURACY & 0.74 & 0.64 & 0.75 & 0.75 & 0.79 \\ \cline{2-7} & PRECISION & 0.81 & 0.72 & 0.79 & 0.83 & 0.86 \\ \cline{2-7} & RECALL & 0.68 & 0.58 & 0.71 & 0.67 & 0.73 \\ \cline{2-7} & F1 & 0.70 & 0.66 & 0.78 & 0.79 & 0.84 \\ \hline \multirow{4}{*}{QEMU} & ACCURACY & 0.73 & 0.60 & 0.72 & 0.72 & 0.78 \\ \cline{2-7} & PRECISION & 0.78 & 0.70 & 0.80 & 0.76 & 0.84 \\ \cline{2-7} & RECALL & 0.69 & 0.54 & 0.69 & 0.68 & 0.73 \\ \cline{2-7} & F1 & 0.74 & 0.64 & 0.75 & 0.73 & 0.82 \\ \hline \end{tabular} \end{table} Table 7. Results of the comparison with different baselines on real-world projects ## 6. Threats to Validity **External validity**: In our experiments, we chose 4 baselines: **LIN**_et al._ (Li et al., 2018), **DEVIGN** (Zhou et al., 2018), **FUNDED** (Zhou et al., 2018) and **DeepWukong** (Zhou et al., 2018). We believe these 4 baselines represent the state-of-the-art research outputs, including non-graph learning methods for code tokens and graph learning methods for code graph representations.
For the datasets used, we used 17 categories of C/C++, Java and PHP from **SARD**, which should cover the most commonly seen vulnerability types. Due to the lack of labeled vulnerability datasets from real-world projects, we could only evaluate _DSHGT_ on the real-world open-source projects _FFmpeg_ and **QEMU**, which have also been used by (Krizhevsky et al., 2015), (Li et al., 2018) and (Zhou et al., 2018). As these two projects also allow code contributions, we believe they are good representatives of state-of-the-art real-world projects. Even though our proposed _DSHGT_ has shown transferability in detecting Java and PHP vulnerabilities after being trained on the C/C++ dataset, vulnerability datasets for programming languages other than C/C++ are still limited. We will continue to work on this issue and build up related datasets for future use in this community. **Internal validity**: The quality of the code comments used for training might not align perfectly with the ground truth, due to the semantic complexity of code and human errors. As discussed in Section 5.4, hand-written code comments rely heavily on programmers' personal habits, and different programmers could provide different code comments for the same source code segment. Thus, we do not rely heavily on the code comment supervisor, and our empirical study shows that _DSHGT_ reaches its highest performance when \(\lambda=0.2\). We plan to use _DSHGT_ in the future on those real-world software projects where we have finer-granularity control of the code comment quality, to remove confounding variables such as human errors or other noise. **Construct validity**: The layer depth of _HGT_ in _DSHGT_ is set to 3 based on the empirical study result (as shown in Section 5.4). The target node embedding is updated through attentive message passing from neighbor nodes. As the layer number increases, the _HGT_ will eventually face the oversmoothing problem (Beng et al., 2017), where the distinct information conveyed by neighbors becomes hard to distinguish owing to over-iterative message passing, leading to performance degradation. We leverage a simple 1-layer LSTM as the decoder for the code comment supervisor part, which already demonstrates promising results in the experiment. This suggests the potential of using auxiliary code comments as supplementary information to assist the vulnerability detection task. As a future task, we can leverage more calibrated semantic extraction models, such as pretrained NLP models (Krizhevsky et al., 2015), as the code comment decoder. ## 7. Conclusion In this paper, we present our pioneering study of using heterogeneous graph learning to detect software vulnerabilities. In this work, we chose to use the CPG as the heterogeneous graph and customized a dual-supervisor graph learning model. The extensive experiments on synthetic and real-world software projects show promising results. We are among the first to explore the importance of conducting research in heterogeneous graph learning to detect software vulnerabilities for the software engineering community.
Besides showing good results on different programming languages and real-world projects, this line of research can be further enhanced by leveraging the largely open-sourced new results from the graph learning and NLP communities: software engineers can explore better heterogeneous graph representations other than the CPG, leverage more robust learning models for the latent features in the underlying heterogeneous graph, and utilize more reliable and controllable supervisors in addition to the binary vulnerability oracle.
2306.06419
Optimal Racing of an Energy-Limited Vehicle
We consider the problem of controlling a vehicle to arrive at a fixed destination while minimizing a combination of energy consumption and travel time. Our model includes vehicle speed and acceleration limits, aerodynamic drag, rolling resistance, nonlinear engine losses, and internal energy limits. The naive problem formulation is not convex; however, we show that a simple convex relaxation is tight. We provide a numerical example, and discuss extensions to vehicles with unconventional drivetrains, such as hybrid vehicles and solar cars.
Nicholas Moehle
2023-06-10T12:09:27Z
http://arxiv.org/abs/2306.06419v1
# Optimal Racing of an Energy-Limited Vehicle ###### Abstract We consider the problem of controlling a vehicle to arrive at a fixed destination while minimizing a combination of energy consumption and travel time. Our model includes vehicle speed and acceleration limits, aerodynamic drag, rolling resistance, nonlinear engine losses, and internal energy limits. The naive problem formulation is not convex; however, we show that a simple convex relaxation is tight. We provide a numerical example, and discuss extensions to vehicles with unconventional drivetrains, such as hybrid vehicles and solar cars. ## 1 Introduction In this paper we consider how to control a vehicle's longitudinal dynamics to arrive at a fixed destination while minimizing the energy consumption and travel time. For traveling long distances, a good strategy is to maintain a constant speed. In many cases, however, this is not practical. Such cases might include a hybrid vehicle encountering obstacles, such as traffic signals or stop-and-go traffic, or a racing vehicle that must decelerate to make sharp turns. For a solar car, changing speed might be _desirable_, as the predicted availability of solar power changes. In all these cases, it is useful for the vehicle to quickly plan a dynamic speed profile that meets both the dynamic and energy requirements of the vehicle. The goal of this paper is to show how to do this. We first focus on the specific problem of minimizing the energy required to reach a destination with a fixed travel time. We then use this problem as a building block to solve related problems, such as the minimum-travel-time problem (without regard to energy consumption), and the minimum-energy problem (without a fixed travel time). We note that although our focus in this paper is on ground vehicles, the same principles could certainly be applied to aircraft or ships. Our approach is a combination of vehicle longitudinal control, which involves planning the vehicle position and velocity, and drivetrain control, which involves planning the internal energy and power usage. In our view, previous attempts to link these domains have been hampered by the nonlinearity of the equation \((1/2)mv^{2}=K\) relating the velocity and the kinetic energy, which makes it difficult to link the vehicle dynamics (related to \(v\)) and the drivetrain energy dynamics (related to \(K\)). One key observation is that the convex relaxation \((1/2)mv^{2}\leq K\) is tight under the assumption that "more velocity is better". Speed and acceleration limits (which seem to contradict the "more velocity is better" assumption) can still be handled indirectly, as constraints on the kinetic energy instead of the velocity. We conclude with a numerical example. We show the Pareto trade-off curve between energy consumption and travel time; our convex reformulation is the main tool for exploring this trade-off curve. We then mention some simple extensions, including hybrid vehicles and solar cars. ### Previous work Vehicle drivetrain control. The past few years have seen an explosion of research related to drivetrain control for hybrid vehicles, and providing an overview is beyond the scope of this paper. For a good review, see [14]. For an early approach based on convex optimization, see [15]. Fewer papers consider controlling the drivetrain and the vehicle dynamics simultaneously; see [16] and references therein.
Control along a fixed path. Minimum-time longitudinal vehicle control is a special case of minimum-time trajectory generation over a fixed path. A well-known convex formulation of this problem is reviewed in [13]. This technique is applicable to very general vehicle models, and can include constraints on the speed and acceleration along the path. Unfortunately, these constraints must hold pointwise; formulations involving integrals of the speed and acceleration (such as those required to limit energy consumption) result in nonconvex constraints in general. Convex optimization. Convex optimization problems can be solved efficiently and reliably using standard techniques [1]. Recently, much work has been devoted to solving moderately-sized convex optimization problems quickly (_i.e._, in milliseconds or microseconds), possibly on embedded platforms, which enables convex-optimization-based control policies to be implemented at kilohertz rates [17, 18]. In addition, recent advances in automatic code generation for convex optimization [17, 19] can significantly reduce the cost and complexity of developing and verifying an embedded solver. ## 2 Model We propose the following model of an energy-limited vehicle. The vehicle operates over the time interval \([0,T]\), along a fixed path. Vehicle dynamics. The position \(x_{t}\) (measured along the path) and velocity \(v_{t}\) of the vehicle at time \(t\) are related by \[\dot{x}_{t}=v_{t}. \tag{1}\] The initial conditions are \(x_{0}=x^{\text{init}}\) and \(v_{0}=v^{\text{init}}\). Speed constraints. The velocity has upper and lower limits, _i.e._, \[v_{t}^{\text{min}}\leq v_{t}\leq v_{t}^{\text{max}}. \tag{2}\] These bounds may depend on time. We assume that \(v_{t}^{\text{min}}\geq 0\), _i.e._, the vehicle cannot move backward. Acceleration limits. The acceleration at time \(t\) cannot exceed \(a_{t}^{\text{max}}\): \[\dot{v}_{t}\leq a_{t}^{\text{max}}. \tag{3}\] This can be used to model tire traction limits. These could change over time, as the vehicle performs lateral maneuvers or encounters varying road conditions. Kinetic energy. The kinetic energy of the vehicle at time \(t\) is \(K_{t}\), which is defined as \[K_{t}=\frac{1}{2}mv_{t}^{2}, \tag{4}\] where \(m\) is the mass of the vehicle, which is positive. The kinetic energy changes according to \[\dot{K}_{t}=P_{t}^{\text{drv}}-P_{t}^{\text{drag}}-P_{t}^{\text{rr}}-P_{t}^{\text{brk}}, \tag{5}\] where \(P_{t}^{\text{drv}}\) is the power delivered by the vehicle drivetrain and \(P_{t}^{\text{brk}}\) is the brake power, which must be nonnegative. The power lost to drag is \[P_{t}^{\text{drag}}=\frac{1}{2}\rho AC_{D}v_{t}^{3}.\] Here \(\rho\) is the density of the air, \(A\) is the frontal area of the vehicle, and \(C_{D}\) is the drag coefficient, all of which are positive. The power lost to rolling resistance is \[P_{t}^{\text{rr}}=C^{\text{rr}}v_{t}^{2},\] where \(C^{\text{rr}}\) is the (positive) rolling resistance constant. (This corresponds to a rolling resistance force linear in the vehicle velocity.) Drivetrain. The drive power comes from an on-board energy source with internal energy \(E_{t}\). This value could represent the state of charge of a battery, or the quantity of combustible fuel remaining. (In the sequel we will refer to it as the battery energy.) This value changes according to \[\dot{E}_{t}=-f^{\rm eng}(P_{t}^{\rm drv}), \tag{6}\] where \(f^{\rm eng}\) is the engine characteristic, which encodes the motor efficiency at different operating points.
The domain of this function is the interval \([P^{\rm drv,min},P^{\rm drv,max}]\), where \(P^{\rm drv,min}\) and \(P^{\rm drv,max}\) are the minimum and maximum drive powers. We assume this function is increasing, which encodes the fact that increasing drivetrain power requires increasing energy consumption. We also assume it is convex, which encodes decreasing incremental efficiency. The battery energy has initial condition \(E_{0}=E^{\rm init}\). It must also respect the energy limits \[E^{\rm min}\leq E_{t}\leq E^{\rm max}, \tag{7}\] where \(E^{\rm min}\) and \(E^{\rm max}\) are constants. ## 3 Optimal control We would like to control the vehicle to reach (or exceed) a desired position \(x^{\rm end}\) by time \(T\), and to do so while minimizing the energy consumed. This is formalized as an optimal control problem: \[\begin{array}{ll}\mbox{maximize}&E_{T}\\ \mbox{subject to}&x_{T}\geq x^{\rm end}\\ &\mbox{displacement dynamics (1)}\\ &\mbox{speed constraints (2)}\\ &\mbox{acceleration limits (3)}\\ &\mbox{kinetic energy definition (4)}\\ &\mbox{kinetic energy dynamics (5)}\\ &\mbox{drivetrain dynamics (6)}\\ &\mbox{energy limits (7)}\\ &P_{t}^{\rm brk}\geq 0,\end{array} \tag{8}\] where the decision variables are \(x\), \(v\), \(K\), \(P^{\rm drv}\), \(P^{\rm brk}\), and \(E\), which are scalar-valued functions defined on \([0,T]\). ### Convex relaxation Although (8) is not convex as stated, a simple relaxation yields a convex problem; in §3.2, we show that this relaxation is tight. Before forming the relaxation, we first reformulate some of the existing constraints. (Note that the reformulations in this paragraph are lossless, _i.e._, they are not relaxations of the original constraints.) The drag power and the rolling resistance power can both be expressed in terms of the kinetic energy as \[P_{t}^{\rm drag}=(1/2)\rho AC_{D}(2K_{t}/m)^{3/2},\] and \[P_{t}^{\rm rr}=2C^{\rm rr}K_{t}/m,\] respectively. We can then eliminate \(P_{t}^{\rm drag}\), \(P_{t}^{\rm rr}\), and \(P_{t}^{\rm brk}\) in the energy dynamics equation (5) to obtain \[\dot{K}_{t}\leq P_{t}^{\rm drv}-(1/2)\rho AC_{D}(2K_{t}/m)^{3/2}-2C^{\rm rr}K_{t}/m.\] Similarly, the maximum speed constraint and the acceleration limit can both be written using the kinetic energy as \[K_{t}\leq(1/2)m(v_{t}^{\rm max})^{2}\] and \[\dot{K}_{t}\leq\sqrt{2mK_{t}}a_{t}^{\rm max},\] respectively. (The latter follows from the fact that \(\dot{v}_{t}=\dot{K}_{t}/\sqrt{2mK_{t}}\).) With these reformulations in mind, we begin relaxing some constraints of problem (8). We start by relaxing the energy definition (4) to inequality: \[K_{t}\geq\frac{1}{2}mv_{t}^{2}. \tag{9}\] We keep the initial condition \(K_{0}=(1/2)m(v^{\rm init})^{2}\) as an equality constraint. We also relax the internal energy dynamics to \[\dot{E}_{t}\leq-f^{\rm eng}(P_{t}^{\rm drv}), \tag{10}\] and enforce the bounds \[f^{\rm eng}(P^{\rm drv,min})\leq-\dot{E}_{t}\leq f^{\rm eng}(P^{\rm drv,max}). \tag{11}\] These bounds simply state that \(-\dot{E}_{t}\) is in the range of \(f^{\rm eng}\), and are implied by (6).
The relaxed problem is \[\begin{array}{ll}\mbox{maximize}&E_{T}\\ \mbox{subject to}&x_{T}\geq x^{\rm end}\\ &\dot{x}_{t}=v_{t}\\ &x_{0}=x^{\rm init}\\ &K_{0}=(1/2)m(v^{\rm init})^{2}\\ &v_{t}\geq v_{t}^{\rm min}\\ &K_{t}\leq(1/2)m(v_{t}^{\rm max})^{2}\\ &\dot{K}_{t}\leq\sqrt{2mK_{t}}a_{t}^{\rm max}\\ &K_{t}\geq(1/2)mv_{t}^{2}\\ &\dot{K}_{t}\leq P_{t}^{\rm drv}-(1/2)\rho AC_{D}(2K_{t}/m)^{3/2}-2C^{\rm rr}K_ {t}/m\\ &\dot{E}_{t}\leq-f^{\rm eng}(P_{t}^{\rm drv})\\ &E_{0}=E^{\rm init}\\ &E^{\rm min}\leq E_{t}\leq E^{\rm max}\\ &f^{\rm eng}(P^{\rm drv,min})\leq-\dot{E}_{t}\leq f^{\rm eng}(P^{\rm drv,max} ).\end{array} \tag{12}\] The variables in some of the constraints are indexed by \(t\); these constraints must hold for all \(t\in[0,T]\). The decision variables are \(x\), \(v\), \(K\), \(P^{\rm drv}\), and \(E\), which are scalar-valued functions defined on \([0,T]\). ### Tightness of relaxation The relaxation given above is in fact tight, in the following sense: Given a solution \(z=(x,v,K,P^{\rm drv},E)\) to (12), a solution for (8) is given by \(\tilde{z}=(\tilde{x},\tilde{v},K,\tilde{P}^{\rm drv},\tilde{P}^{\rm brk},E)\), which is defined as \[\tilde{x}_{t}=\int_{0}^{t}\tilde{v}_{\tau}\;d\tau,\qquad\tilde{v}_{t}=\sqrt{2 K_{t}/m},\qquad\tilde{P}_{t}^{\rm drv}=(f^{\rm eng})^{-1}(-\dot{E}_{t}),\] \[\tilde{P}_{t}^{\rm brk}=\tilde{P}_{t}^{\rm drv}-(1/2)\rho AC_{D}(2K_{t}/m)^{ 3/2}-2C^{\rm rr}K_{t}/m-\dot{K}_{t}.\] Here, \((f^{\rm eng})^{-1}\) is the inverse of \(f^{\rm eng}\), which exists because \(f^{\rm eng}\) is increasing on its domain. Note that \(\tilde{P}_{t}^{\rm drv}\) is well defined, as \(-\dot{E}_{t}\) is in the domain of the inverse function (due to the constraint (11)). Note that \(\tilde{v}_{t}\) and \(\tilde{P}_{t}^{\rm brk}\) are both well defined, as \(K_{t}\geq(1/2)mv_{t}^{2}\) ensures that \(K_{t}\) is nonnegative. Proof of tightness.Here we show that the new point is optimal for (8). Recall that \(z\) is optimal for (12), which is a relaxation of (8). Because \(z\) and \(\tilde{z}\) generate identical objective values for (12) and (8), respectively, then if \(\tilde{z}\) is in fact _feasible_ for (8), it is optimal as well. We show feasibility of \(\tilde{z}\) below. Due to (9), we have \(\tilde{v}_{t}=\sqrt{2K_{t}/m}\geq v_{t}\geq v_{t}^{\rm min}\). Because \(\tilde{v}_{t}\geq v_{t}\), we also have \(\tilde{x}_{T}=\int_{0}^{T}\tilde{v}_{\tau}\;d\tau\geq\int_{0}^{T}v_{\tau}\;d \tau=x_{T}\geq x^{\rm end}\). We also have \(f^{\rm eng}(\tilde{P}_{t}^{\rm drv})=-\dot{E}_{t}\), as required. To verify that \(\dot{K}_{t}\leq\tilde{P}_{t}^{\rm drv}-(1/2)\rho AC_{D}(2K_{t}/m)^{3/2}-2C^{\rm rr }K_{t}/m\) holds, we need only show that \(\tilde{P}_{t}^{\mathrm{drv}}\geq P_{t}^{\mathrm{drv}}\). This follows from \(f^{\mathrm{eng}}(P_{t}^{\mathrm{drv}})\leq-\dot{E}_{t}\). In particular, because \(f^{\mathrm{eng}}\) is increasing, we can invert this relation to obtain \[P_{t}^{\mathrm{drv}} \leq(f^{\mathrm{eng}})^{-1}(-\dot{E}_{t})\] \[=\tilde{P}_{t}^{\mathrm{drv}}.\] Nonnegativity of the brake power follows from \(\dot{K}_{t}\leq\tilde{P}_{t}^{\mathrm{drv}}-(1/2)\rho AC_{D}(2K_{t}/m)^{3/2}-2C ^{\mathrm{rr}}K_{t}/m\). The other constraints, such as the maximum velocity limit, the acceleration limit, the kinetic energy definition, and the energy limits, are true by definition of the constructed point \(\tilde{z}\), or follow trivially from feasibility of the original point \(z\). ## 4 Example We now present a simple numerical example. 
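Before the specific numbers, a minimal CVXPY sketch of a forward-Euler discretization of the relaxed problem (12); the grid size, the constant speed limit, and the SI unit conversions are our illustrative assumptions (the paper's example uses 1001 grid points and time-varying velocity limits):

```python
# Sketch: forward-Euler discretization of the relaxed problem (12) in CVXPY.
# Parameters follow Table 1, converted to SI units; v_max held constant here.
import cvxpy as cp

N, T = 201, 280.0                        # grid points, travel time [s]
h = T / (N - 1)                          # time step [s]
m, rho, A, CD, Crr = 1500.0, 1.22, 2.3, 0.35, 5.0  # Crr: 0.005 kN/(m/s) = 5 N/(m/s)
alpha, beta, gamma = 0.005e-3, 1.0, 5e3  # f_eng(p) = alpha p^2 + beta p + gamma [W]
x_end, E_init = 5000.0, 4000e3           # [m], [J]
v_max, a_max = 30.0, 1.0                 # assumed constant limits [m/s], [m/s^2]

x, v = cp.Variable(N), cp.Variable(N)
K, P, E = cp.Variable(N), cp.Variable(N), cp.Variable(N)

cons = [x[0] == 0, v[0] == 0, K[0] == 0, E[0] == E_init,
        x[N - 1] >= x_end, v >= 0, E >= 0, E <= E_init, P >= 0,
        K <= 0.5 * m * v_max ** 2,
        K >= 0.5 * m * cp.square(v)]                # relaxed energy definition (9)
for t in range(N - 1):
    cons += [
        x[t + 1] == x[t] + h * v[t],                # displacement dynamics (1)
        K[t + 1] - K[t] <= h * (P[t]                # relaxed kinetic energy dynamics
            - 0.5 * rho * A * CD * cp.power(2 * K[t] / m, 1.5)
            - 2 * Crr * K[t] / m),
        K[t + 1] - K[t] <= h * a_max * cp.sqrt(2 * m * K[t]),  # acceleration limit
        E[t + 1] - E[t] <= -h * (alpha * cp.square(P[t]) + beta * P[t] + gamma),
    ]

prob = cp.Problem(cp.Maximize(E[N - 1]), cons)
prob.solve(solver=cp.ECOS)                          # ECOS, as in the paper's example
print("energy used [kJ]:", (E_init - E.value[N - 1]) / 1e3)
```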
The values of all scalar parameters are shown in Table 1. The engine characteristic is \(f^{\mathrm{eng}}(p)=\alpha p^{2}+\beta p+\gamma\) over the interval \([0,\infty)\). The parameters \(\alpha\), \(\beta\), and \(\gamma\) are \(0.005\) kW\({}^{-1}\), \(1\), and \(5\) kW, respectively. The velocity limits, which are functions of time, are shown in figure 1. The maximum acceleration \(a^{\mathrm{max}}\) was constant, with value \(1\) m/s\({}^{2}\). Several values of \(T\) were considered, to explore the trade-off between energy consumption and travel time. To obtain a numerical solution, problem (8) was first discretized in time by dividing the interval \([0,T]\) into \(1001\) discrete points. It was solved using CVXPY [1], with backend solver ECOS [1]. Pareto curve. The trade-off between the total energy consumed and the travel time is depicted in figure 2. The shaded region is the set of possible pairs \((E^{\mathrm{init}}-E_{T},T)\) of energy consumption and travel time corresponding to feasible trajectories of (8). The Pareto curve is shown as a bold line. Trajectories corresponding to any point on this line can be computed by fixing \(T\) and solving (8). The three colored crosses are specific Pareto-optimal trajectories, which are shown in detail in figure 3. \begin{table} \begin{tabular}{c c} \hline parameter & value \\ \hline \(m\) & 1500 kg \\ \(E^{\rm init}\) & 4000 kJ \\ \(E^{\rm min}\) & 0 kJ \\ \(E^{\rm max}\) & 4000 kJ \\ \(x^{\rm init}\) & 0 m \\ \(v^{\rm init}\) & 0 m/s \\ \(\rho\) & 1.22 kg/m\({}^{3}\) \\ \(C_{D}\) & 0.35 \\ \(A\) & 2.3 m\({}^{2}\) \\ \(C^{\rm rr}\) & 0.005 kN/(m/s) \\ \(x^{\rm end}\) & 5000 m \\ \hline \end{tabular} \end{table} Table 1: Parameter values. Figure 2: The Pareto optimal curve trading off energy consumption and travel time. The blue and red crosses correspond to the minimum-time and minimum-energy trajectories, and the green cross corresponds to a compromise between these two. Sample trajectories. We now describe in detail the three trajectories shown in figure 3. We begin by describing the trajectory shown in green, found by solving (8) with a travel time of \(T=280\) seconds. The trajectory begins with acceleration at the maximal rate \(a_{t}^{\max}\); the drive power is then decreased, until it reaches zero, and the vehicle coasts until the upper speed limit is reached around \(t=50\) seconds. At this point the vehicle brakes, then supplies constant power to maintain speed at the speed limit. After the speed limit ends (\(t=100\) s), the vehicle accelerates to a roughly constant "cruising speed" of around 80 km/h. Of particular interest is the existence of a coasting interval during the last 50 seconds, during which the drive power is zero as the vehicle coasts to the desired displacement. This is a solution to the apparent dilemma that a positive vehicle speed is required to reach the desired displacement, yet any leftover positive kinetic energy at this point can be considered wasted energy. (One might argue that this leftover kinetic energy could have been put to better use accelerating the vehicle earlier on; evidently, this argument is false.) The trajectory shown in blue is the minimum-time trajectory, _i.e._, it is the smallest value of \(T\) for which (8) is feasible. Note that this control depletes the internal energy exactly as the desired displacement is reached. The trajectory often accelerates aggressively, using a substantial amount of power to get to a high speed quickly.
Even so, the minimum-time trajectory has a coasting period that begins around \(t=210\) seconds and lasts until the end of the time period. Finally, in red, we see the minimum-energy trajectory, which is found by computing the value of \(T\) that maximizes the optimal value of problem (8). As one might expect, the speed is kept lower than for the previous two trajectories, which decreases both the amount of energy required to accelerate the vehicle and the power lost to drag. However, the minimum-energy trajectory still reaches the desired displacement in a finite amount of time. This is because our model assumes that the engine is turned on during the interval \([0,T]\), and idling incurs a power loss of \(\gamma\). Therefore, the minimum-energy control is motivated not to waste any time in reaching the goal, _i.e._, being fast also helps reduce wasted energy. ## 5 Conclusions We used a simple optimization model to capture the trade-off between vehicle energy consumption and travel time. Several interesting extensions are possible, especially for different drivetrain architectures, and our formulation easily accommodates drivetrain models formulated in terms of energy and power. Modeling a solar car is a particularly simple extension, which involves adding a time-varying prediction of generated solar power into the internal energy dynamics equation (6). Another extension involves adding a time-dependent disturbance to the kinetic energy dynamics (5), which could model the predicted power loss (or gain) from traversing hilly terrain. Figure 3: Three optimal trajectories, obtained by solving (8) for different values of \(T\). ## Acknowledgement The author would like to thank Stephen Boyd for useful input, as well as Ashe Magalhaes and Gawan Fiore for their insightful discussions of solar car racing.
2305.07257
A Central Asian Food Dataset for Personalized Dietary Interventions, Extended Abstract
Nowadays, it is common for people to take photographs of every beverage, snack, or meal they eat and then post these photographs on social media platforms. Leveraging these social trends, real-time food recognition and reliable classification of these captured food images can potentially help replace some of the tedious recording and coding of food diaries to enable personalized dietary interventions. Although Central Asian cuisine is culturally and historically distinct, there has been little published data on the food and dietary habits of people in this region. To fill this gap, we aim to create a reliable dataset of regional foods that is easily accessible to both public consumers and researchers. To the best of our knowledge, this is the first work on creating a Central Asian Food Dataset (CAFD). The final dataset contains 42 food categories and over 16,000 images of national dishes unique to this region. We achieved a classification accuracy of 88.70\% (42 classes) on the CAFD using the ResNet152 neural network model. The food recognition models trained on the CAFD demonstrate computer vision's effectiveness and high accuracy for dietary assessment.
Aknur Karabay, Arman Bolatov, Huseyin Atakan Varol, Mei-Yen Chan
2023-05-12T05:26:55Z
http://arxiv.org/abs/2305.07257v1
# A Central Asian Food Dataset for Personalized Dietary Interventions, Extended Abstract Aknur Karabay1, Arman Bolatov1, Huseyin Atakan Varol1, and Mei-Yen Chan2 1 Institute of Smart Systems and Artificial Intelligence, Nazarbayev University. 2 School of Medicine, Nazarbayev University. ###### Abstract Nowadays, it is common for people to take photographs of every beverage, snack, or meal they eat and then post these photographs on social media platforms. Leveraging these social trends, real-time food recognition and reliable classification of these captured food images can potentially help replace some of the tedious recording and coding of food diaries to enable personalized dietary interventions. Although Central Asian cuisine is culturally and historically distinct, there has been little published data on the food and dietary habits of people in this region. To fill this gap, we aim to create a reliable dataset of regional foods that is easily accessible to both public consumers and researchers. To the best of our knowledge, this is the first work on creating a Central Asian Food Dataset (CAFD). The final dataset contains 42 food categories and over 16,000 images of national dishes unique to this region. We achieved a classification accuracy of 88.70% (42 classes) on the CAFD using the ResNet152 neural network model. The food recognition models trained on the CAFD demonstrate computer vision's effectiveness and high accuracy for dietary assessment. ## I Introduction This manuscript is an extended abstract of our previously published article, A Central Asian Food Dataset for Personalized Dietary Interventions, which appeared in Nutrients, MDPI, in 2023 [1]. Food computation from visual data has gained prominence due to computer vision advancements and increased smartphone and social media usage [2]. These platforms provide access to food-related information, which can be utilized for various tasks, including medical, gastronomic, and agronomic research. Deep learning-based food image recognition systems have been developed for applications in dietary assessment, smart restaurants, food safety inspection, and agriculture. Automatic food image recognition can improve the accuracy of nutritional records and assist visually impaired individuals [2]. Existing food classification datasets mostly include Western, European, Chinese, and other Asian cuisines [6, 10]. Examples of such datasets are presented in Table I. However, FoodAI is not open source, and Food2K is not publicly available. Food1K, a food recognition challenge dataset, has been released with 400,000 images and 1,000 food classes. Most food datasets predominantly contain Western and Asian dishes, lacking specific national dishes like those in Central Asia. To address this, we aim to develop a unique food recognition system for our region, considering local preferences, specialties, and cuisines. In this paper, we describe the development of our dataset and food recognition models, report their performance, and conclude with a summary of our findings. ## II Central Asian Food Dataset This paper introduces the Central Asian Food Dataset (CAFD), consisting of 16,499 images across 42 classes representing popular Central Asian cuisine. We ensured the dataset's high quality through extensive data cleaning, iterative annotation, and multiple inspections. The CAFD can also serve as a food image representation learning benchmark.
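The construction pipeline described next relies on hash-based duplicate removal and bounding-box cropping. A minimal sketch of these two steps follows; the paper names the HashImage library, while here we sketch the same idea with the widely available `imagehash` and `Pillow` packages, and the paths and box format are illustrative:

```python
# Sketch of two dataset-construction steps: perceptual-hash deduplication
# and cropping annotated food items by bounding box. Paths are illustrative.
from pathlib import Path
from PIL import Image
import imagehash

seen, unique_paths = set(), []
for path in Path("raw_images").glob("*.jpg"):
    h = imagehash.phash(Image.open(path))   # perceptual hash of the image
    if h not in seen:                       # drop near-exact duplicates
        seen.add(h)
        unique_paths.append(path)

def crop_and_save(path, box, label, out_dir="cafd"):
    """Crop one annotated food item and store it in its class directory."""
    x_min, y_min, x_max, y_max = box        # bounding-box pixel coordinates
    crop = Image.open(path).crop((x_min, y_min, x_max, y_max))
    target = Path(out_dir) / label
    target.mkdir(parents=True, exist_ok=True)
    crop.save(target / path.name)
```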
We followed a five-step process to create a diverse and high-quality dataset. First, we listed popular Central Asian food items. Second, we scraped images from search engines and social media using a Python script with Selenium. We extracted images from recipe videos using Roboflow [11] to increase underrepresented classes. Third, we removed duplicates using the HashImage Python library (hash-based deduplication, as sketched above). Fourth, two annotators created bounding boxes for each food item using Roboflow software. Fifth, we cropped the food items based on bounding box coordinates and stored them in separate directories by class. The final dataset has an imbalanced number of images per class, ranging from 99 to 922. ## III Food Recognition Models Image classification is a computer vision task that extracts a single descriptor from an image. State-of-the-art models are based on CNNs and have improved due to large datasets. Transfer learning is often used when sufficient training data is unavailable, as it leverages knowledge from pre-trained models to solve similar problems in different domains [12]. In this work, we applied transfer learning to food classification using model weights pre-trained on ImageNet, a large dataset with over 14 million images [13]. \begin{table} \begin{tabular}{c|c|c|c|c|c} **Dataset** & **Year** & **\# class** & **\# images** & **Cuisine** & **Public** \\ \hline Food-101 [3] & 2014 & 101 & 101,000 & European & yes \\ VireoFood-172 [4] & 2016 & 172 & 110,241 & Chinese/Asian & yes \\ TurkishFoods-15 [5] & 2017 & 15 & 7,500 & Turkish & yes \\ FoodAI [6] & 2019 & 756 & 400,000 & International & no \\ VireoFood-251 [7] & 2020 & 251 & 169,673 & Chinese/Asian & yes \\ ISIA Food-500 [8] & 2020 & 500 & 399,726 & Chinese/International & yes \\ Food2K [9] & 2021 & 2,000 & 1,036,564 & Chinese/International & no \\ Food1K [9] & 2021 & 1,000 & 400,000 & Chinese/International & yes \\ **CAFD** [1] & **2022** & **42** & **16,499** & **Central Asian** & **yes** \\ \end{tabular} \end{table} TABLE I: Summary of food classification datasets. Fig. 1: Sample images for Central Asian Food Dataset classes. We selected 10 models of different architectures and complexity to evaluate their performance on the CAFD. These models include VGG-16, SqueezeNet1, and five models with ResNet architecture [14, 15, 16, 17, 18]. DenseNet-121 and EfficientNet-b4 have similar architectures to ResNets but introduce different scaling methods [19, 20]. Then we trained the models on the Food1K dataset and tested the combination of the CAFD and Food1K. We carefully split the datasets into training, validation, and test sets to avoid bias and data leakage. Table II shows the number of images in each set for the three different datasets. We performed transfer learning in PyTorch using models pre-trained on ImageNet. Models were trained for 40 epochs with a learning rate of 0.001, a batch size of 64, and a categorical cross-entropy loss. The input size of the images varied depending on the model. We used Top-5 accuracy and Top-1 accuracy as evaluation metrics, and precision, recall, and \(F_{1}\)-score metrics to identify and analyze the best and worst-classified food classes. ## IV Results and Discussion Table III summarizes the classification models' results. All models performed better on the CAFD than on Food1K and CAFD+Food1K, indicating the accuracy and cleanness of the CAFD. VGG-16 achieved 86.03% Top-1 and 98.33% Top-5 accuracies on the CAFD, while SqueezeNet1 had lower performance.
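As a concrete reading of the training setup above (ImageNet-pretrained weights, 40 epochs, learning rate 0.001, batch size 64, cross-entropy loss), a minimal PyTorch sketch for the ResNet152 case; `train_set` is a placeholder dataset object, and the optimizer choice is our assumption since the paper does not specify it:

```python
# Sketch of the reported training setup: an ImageNet-pretrained ResNet152
# fine-tuned on the 42 CAFD classes. `train_set` is a placeholder dataset.
import torch
import torchvision

model = torchvision.models.resnet152(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 42)       # 42 CAFD classes

loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # optimizer assumed
loss_fn = torch.nn.CrossEntropyLoss()                      # categorical cross-entropy

model.train()
for epoch in range(40):                                    # 40 epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```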
ResNet architectures achieved around 88% Top-1 and 98% Top-5 accuracy on the CAFD, with accuracy increasing as network depth increases. Wide ResNet-50 improved accuracy compared to ResNet50, and EfficientNet-b4 achieved the best results on Food1K and CAFD+Food1K. Tables IV and V list the 10 best and worst detected CAFD classes by ResNet152 and EfficientNet-b4. Large classes with distinct features performed best, while fine-grained or similar-looking classes, presented in Figure 2, caused confusion and deteriorated model performance. ## V Conclusion The Central Asian Food Dataset (CAFD) offers a unique advantage in automating and improving dietary assessment accuracy. It has potential applications in creating or modifying recipes, helping restaurants and food service providers plan menus, optimizing food production, and combating fraudulent food practices. It can be used to improve food quality, develop new recipes and personalized dietary plans, optimize production processes, increase food safety, and integrate with other food recognition systems. Comprising 16,499 images of 42 food classes, the CAFD demonstrates the effectiveness of computer vision models for food recognition. Our models achieved a Top-5 accuracy of 98.59% and 98.01% for the CAFD and CAFD+Food1K, respectively. The dataset, source code, and pre-trained models are available on GitHub 1 repository. Footnote 1: [https://github.com/IS2AI/Central-Asian-Food-Dataset](https://github.com/IS2AI/Central-Asian-Food-Dataset) Future work includes exploring different neural network architectures, data augmentation methods, and utilizing the CAFD for other dietary-related tasks. We also plan to develop food scene recognition datasets with multiple food items per image and extend the current food categories based on additional food classes. ## Author contributions MYC and HAV conceived and designed the study. AK, HAV, and MYC contributed to defining the research scope and objectives. AK and AB collected and prepared the dataset and trained the models. AB created a pipeline for processing images in Roboflow. HAV provided guidelines for the project experiments. AK performed the final check of the dataset and finalized the experimental results. PI of the project: MYC. AK, MYC,and AB wrote the article, and all the authors contributed to the manuscript revision and approved the submitted version. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{5}{c}{**Best detected classes**} & \multicolumn{5}{c}{**Wworst detected classes**} \\ **Class** & **Prec.** & **Rec.** & \(F_{1}\) & **Class** & **Prec.** & **Rec.** & \(F_{1}\) \\ \hline Sushiki & 0.91 & 1 & 0.96 & Lagman without soup & 0.6 & 0.27 & 0.37 \\ Achichich & 1 & 0.95 & 0.97 & Augj & 0.83 & 0.38 & 0.53 \\ Sheed head & 0.94 & 0.94 & 0.94 & Talkman-when & 0.86 & 0.53 & 0.66 \\ Aim-kurky & 0.83 & 0.93 & 0.88 & Domer luxash & 0.75 & 0.6 & 0.67 \\ Piv & 0.97 & 0.90 & 0.93 & Shashly chicken with v/ v. & 0.88 & 0.64 & 0.74 \\ Cheburce & 0.92 & 0.90 & 0.91 & Lagman fitted & 0.96 & 0.68 & 0.8 \\ Himshuk & 0.93 & 0.88 & 0.91 & Domer nau & 1 & 0.68 & 0.81 \\ Samas & 0.93 & 0.88 & 0.90 & Shashly chicken & 0.61 & 0.69 & 0.65 \\ Naryn & 0.97 & 0.87 & 0.92 & Shashlyck beef & 0.67 & 0.69 & 0.68 \\ Chuck-chak & 0.9 & 0.87 & 0.92 & Kary-korta & 0.8 & 0.7 & 0.74 \\ \end{tabular} \end{table} TABLE V: Ten CAFD and Food1K classes best and worst detected by the EfficientNet-b4 model. 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{4}{c|}{**Best detected classes**} & \multicolumn{4}{c}{**Worst detected classes**} \\ **Class** & **Prec.** & **Rec.** & \(F_{1}\) & **Class** & **Prec.** & **Rec.** & \(F_{1}\) \\ \hline Sushki & 0.96 & 1 & 0.98 & Shashlyk chicken w/ v. & 0.71 & 0.67 & 0.69 \\ Achichuk & 0.95 & 1 & 0.98 & Shashlyk beef w/ v. & 0.66 & 0.72 & 0.69 \\ Sheep head & 0.94 & 1 & 0.97 & Shashlyk chicken & 0.67 & 0.74 & 0.7 \\ Naryn & 0.96 & 0.98 & 0.97 & Shashlyk mixed meat & 0.79 & 0.64 & 0.71 \\ Plov & 0.93 & 0.99 & 0.96 & Asip & 0.85 & 0.62 & 0.72 \\ Tushpara w/ s. & 0.93 & 0.97 & 0.95 & Shashlyk beef & 0.74 & 0.69 & 0.72 \\ Soup plain & 0.97 & 0.93 & 0.95 & Lagman without soup & 0.83 & 0.68 & 0.75 \\ Samsa & 0.94 & 0.96 & 0.95 & Kazy-karta & 0.83 & 0.74 & 0.78 \\ Hvorost & 0.98 & 0.91 & 0.95 & Beshbarmak with kazy & 0.78 & 0.8 & 0.79 \\ Manty & 0.92 & 0.95 & 0.94 & Tushpara fried & 0.88 & 0.76 & 0.81 \\ \end{tabular} \end{table} TABLE IV: Ten CAFD classes best and worst detected by the ResNet152 model.

TABLE II: Image distribution across the training (train), validation (valid), and test sets.
2306.11165
Bayesian Variable Selection in Double Generalized Linear Tweedie Spatial Process Models
Double generalized linear models provide a flexible framework for modeling data by allowing the mean and the dispersion to vary across observations. Common members of the exponential dispersion family, including the Gaussian, Poisson, compound Poisson-gamma (CP-g), Gamma and inverse-Gaussian, are known to admit such models. The lack of their use can be attributed to ambiguities that exist in model specification under a large number of covariates and to complications that arise when data display complex spatial dependence. In this work we consider a hierarchical specification for the CP-g model with a spatial random effect. The spatial effect is targeted at performing uncertainty quantification by modeling dependence within the data arising from location-based indexing of the response. We focus on a Gaussian process specification for the spatial effect. Simultaneously, we tackle the problem of model specification for such models using Bayesian variable selection. It is effected through a continuous spike and slab prior on the model parameters, specifically the fixed effects. The novelty of our contribution lies in the Bayesian frameworks developed for such models. We perform various synthetic experiments to showcase the accuracy of our frameworks. They are then applied to analyze automobile insurance premiums in Connecticut for the year 2008.
Aritra Halder, Shariq Mohammed, Dipak K. Dey
2023-06-19T21:11:12Z
http://arxiv.org/abs/2306.11165v1
# Bayesian Variable Selection in Double Generalized Linear Tweedie Spatial Process Models

###### Abstract

Double generalized linear models provide a flexible framework for modeling data by allowing the mean and the dispersion to vary across observations. Common members of the exponential dispersion family, including the Gaussian, compound Poisson-gamma, Gamma and inverse-Gaussian, are known to admit such models. However, the lack of their use can be attributed to ambiguities that exist in model specification under a large number of covariates and complications that arise when data from a chosen application display complex spatial dependence. In this work we consider a hierarchical specification for these models with a spatial random effect. The spatial effect is targeted at performing uncertainty quantification by modeling dependence within the data arising from location based indexing of the response. We focus on a Gaussian process specification for the spatial effect. Simultaneously, we tackle the problem of model specification for such hierarchical spatial process models using Bayesian variable selection. It is effected through a continuous spike and slab prior placed on the model parameters (or fixed effects). The novelty of our contribution lies in the Bayesian frameworks developed for such models, which have not been explored previously. We perform various synthetic experiments to showcase the accuracy of our frameworks. These developed frameworks are then applied to analyse automobile insurance premiums in Connecticut.

_Keywords:_ Bayesian Modeling, Gaussian Process, Hierarchical Spatial Models, Spike and Slab Priors, Tweedie Double Generalized Linear Models.

## 1 Introduction

Spatial processes have occupied center stage in statistical theory and applications for the last few decades. Their widespread use can largely be explained by geographically tagged data becoming increasingly commonplace in modern applications. Such data are often composed of complex variables which are no longer amenable to a Gaussian assumption. For example, spatially indexed counts (see, for e.g., Wolpert and Ickstadt, 1998; Best et al., 2000; Agarwal et al., 2002; Lawson, 2018), proportions (see, for e.g., Gelfand, 2000; Gelfand et al., 2005; Finley et al., 2009; Eidsvik et al., 2012), and time-to-event or survival outcomes (see, for e.g., Banerjee and Carlin, 2004; Martino et al., 2011; Zhou and Hanson, 2015) are some frequently occurring variables where spatial processes have proved invaluable in performing uncertainty quantification, the purpose being to quantify unobserved dependence introduced within the variable of interest due to varying location. The cornerstone for such studies is a spatially indexed process variable of interest, often termed a response process and denoted by \(y(\mathbf{s})\). This is accompanied by covariate information \(\mathbf{X}(\mathbf{s})=[\mathbf{x}_{1}(\mathbf{s}),\mathbf{x}_{2}(\mathbf{s}),\ldots,\mathbf{x}_{p}(\mathbf{s})]\). Here \(\mathbf{s}\in\mathcal{S}=\{\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{L}\}\) is the spatial index and \(\mathcal{S}\) is a finite set of indices, or locations, over which the response variable and covariates are observed. The investigator often encounters observation-level covariates that account for response-specific characteristics when learning such processes. It becomes important to understand which of these covariates are important contributors to variation in the data.
From a model parsimony standpoint, model choice becomes an important issue for the investigator. In statistical theory, this problem is often addressed by performing shrinkage or variable selection on the model coefficients. Moreover, performing spatial uncertainty quantification produces accurate inference for model coefficients, which also raises the question of a more "honest" subset of covariates within \(\mathbf{X}(\mathbf{s})\) that primarily determine the variation in \(y(\mathbf{s})\). The crown jewel of our contribution is Bayesian methodology for performing spatial uncertainty quantification and model choice simultaneously. Spatial process modeling generally requires a hierarchical specification of an unobserved random effect within the model (Clark and Gelfand, 2006). Maintaining a hierarchical approach allows for exploration of the effect of the covariates \(\mathbf{X}(\mathbf{s})\) and the random effect \(\mathbf{w}(\mathbf{s})\) jointly on the response process \(y(\mathbf{s})\). Particularly, considering generalized linear spatial process modeling, it is assumed that \(y(\mathbf{s})\mid\boldsymbol{\beta},\mathbf{w}(\mathbf{s})\) arise in a _conditionally_ independent fashion from a member of the exponential family with mean \(\mu(\mathbf{s})\) such that \(g(\mu(\mathbf{s}))=\mathbf{X}(\mathbf{s})\boldsymbol{\beta}+\mathbf{F}(\mathbf{s})\mathbf{w}(\mathbf{s})\), where \(g\) is a monotonic link function, \(\boldsymbol{\beta}\) are model coefficients (or fixed effects), and \(\mathbf{F}(\mathbf{s})\) is a spatial incidence matrix. In contrast to Gaussian response processes, where a direct hierarchical specification on the response is feasible, modeling a non-Gaussian spatial process leverages the generalized linear model framework to employ a latent process specification (see, for e.g., Zeger and Karim, 1991; Diggle et al., 1998; Dey et al., 2000; Zhang, 2002; Banerjee et al., 2014). This is facilitated by the existence of a valid joint probability distribution, \(\pi\left(y(\mathbf{s}_{1}),y(\mathbf{s}_{2}),\ldots,y(\mathbf{s}_{L})\mid\boldsymbol{\beta},\boldsymbol{\theta}_{pr}\right)\), where \(\boldsymbol{\theta}_{pr}\) denotes the process parameters required for the specification of \(\mathbf{w}(\mathbf{s})\) (see discussion in section 6.2, Banerjee et al., 2014). This hierarchical specification gives us a natural way to perform variable selection by incorporating shrinkage priors into the hierarchical prior formulation. There are several choices of shrinkage priors that differ in their prior specification, such as: the discrete spike-and-slab prior (see, for e.g., Mitchell and Beauchamp, 1988; George and McCulloch, 1993), priors based on the Gaussian-gamma family in linear Gaussian models (see, for e.g., Raftery et al., 1997; Berger et al., 2001b), the continuous spike-and-slab prior (see, for e.g., Ishwaran and Rao, 2005), the Bayesian counterparts of the LASSO and elastic net (see, for e.g., Park and Casella, 2008; Li and Lin, 2010), mixtures of \(g\)-priors (see, for e.g., Liang et al., 2008), and the horseshoe prior and its variants (see, for e.g., Carvalho et al., 2010), among several others. We use the continuous spike-and-slab prior to effect shrinkage on the model coefficients. We focus on a subset of probability distributions within the exponential family, termed the exponential dispersion family (see, for e.g., Jorgensen, 1986, 1987, 1992; Jorgensen, 1997).
It allows the dispersion, along with the mean, to vary across observations, removing the need for a constant dispersion across observations. We focus on a particular member of the family, the Tweedie compound Poisson-gamma (CP-g), more commonly referred to as the Tweedie (probability) distribution (see, Tweedie et al., 1984). The corresponding random variable is constructively defined as a Poisson sum of independently distributed Gamma random variables. Allowing for a varying dispersion across observations enables exploration of the effect of the covariates \(\mathbf{X}(\mathbf{s})\) on the mean and the dispersion separately, by employing two separate generalized linear models (GLMs). This gives rise to the double generalized linear model (DGLM) (see, for e.g., Smyth, 1989; Verbyla, 1993; Smyth and Verbyla, 1999). Hierarchical frameworks for specifications of DGLMs were first developed in Lee and Nelder (2006). Although not mandatory, it is customary to use the same covariates \(\mathbf{X}(\mathbf{s})\) for both the mean and dispersion GLMs to avoid ambiguities in model specification. Previous work (see, for e.g., Halder et al., 2021, 2022) uses this approach and develops inference for DGLMs under a frequentist framework, with inference on spatial effects obtained through penalizing the graph Laplacian. In this paper we adopt a Bayesian discourse by supplementing the DGLM framework with the continuous version of the spike-and-slab prior to effect shrinkage and thereby achieve better model specification. We integrate the spike and slab prior into our hierarchical prior formulation for both the mean and dispersion models. We show that these priors provide a natural way of incorporating sparsity into the model, while offering straightforward posterior sampling in the context of our spatial DGLMs. The scale for spatial indexing is assumed to be point-referenced, for example latitude-longitude or easting-northing; generally, \(\mathbf{s}\in\mathbb{R}^{2}\). Specification of a neighborhood structure, or proximity, is naturally important when attempting to quantify the behavior of the response at locations that are _near_ each other. We select the Euclidean distance between locations and place a Gaussian process (see, for e.g., Williams and Rasmussen, 2006) prior on the spatial process \(\mathbf{w}(\mathbf{s})\). Other choices exist for specifying such a spatial process, for e.g. the log-Gamma process (see Bradley et al., 2018, 2020). We particularly focus on a Gaussian process specification, \(\mathbf{w}(\mathbf{s})\sim GP(\mathbf{0},K(\cdot))\), where \(K\) is a covariance function. For arbitrary locations \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\), dependence between \(y(\mathbf{s})\) and \(y(\mathbf{s}^{\prime})\) is specified through \(K(\mathbf{s},\mathbf{s}^{\prime})\), which governs the covariance between \(w(\mathbf{s})\) and \(w(\mathbf{s}^{\prime})\). For point-referenced data, the Matern family (see, for e.g., Matern, 2013) provides the most generic and widely adopted covariance specification. Next, we address Bayesian model specification. In the absence of such concerns for the hierarchical process models discussed above, prior specification follows the generic framework, \([\text{data }|\text{ process},\widetilde{\mathbf{\theta}}]\times[\text{process }|\;\widetilde{\mathbf{\theta}}]\times[\mathbf{\theta}_{m}]\times[\mathbf{\theta}_{pr}]\). Here, \(\widetilde{\mathbf{\theta}}=\{\mathbf{\theta}_{m},\,\mathbf{\theta}_{pr}\}\) denote model parameters (see, for e.g.,
Berliner, 2000; Gelfand and Banerjee, 2017; Banerjee et al., 2014, Chapter 6, p. 125). In particular, \(\mathbf{\theta}_{pr}\) constitute parameters instrumental in the specification of the process, while \(\mathbf{\theta}_{m}\) are the other model parameters. We adopt a proper prior on \(\mathbf{\theta}_{pr}\) to avoid the risk of generating improper posteriors (see, for e.g., Berger et al., 2001a). Building a Bayesian variable selection framework that facilitates model specification for \(\mathbf{\theta}_{m}\) requires an additional layer of hierarchical prior specification, appending the latter framework with variable selection parameters, \(\mathbf{\theta}_{vs}\), and thereby producing \[[\text{data }|\text{ process},\mathbf{\theta}]\times[\text{process }|\;\mathbf{\theta}]\times[\mathbf{\theta}_{pr}]\times[\mathbf{\theta}_{m}]\times[\mathbf{\theta}_{vs}], \tag{1}\] where \(\mathbf{\theta}=\{\widetilde{\mathbf{\theta}},\mathbf{\theta}_{vs}\}\). We resort to Markov chain Monte Carlo (MCMC) sampling (see, for e.g., Carlin and Louis, 2008; Girolami and Calderhead, 2011) for performing posterior inference on \(\mathbf{\theta}\). The novelty of our approach lies in the simple Bayesian computation devised--employing only Gibbs sampling updates for \(\mathbf{\theta}_{vs}\). To the best of our knowledge, _hierarchical Bayesian frameworks_ for fitting (a) Tweedie DGLMs and (b) spatial Tweedie DGLMs with (or without) variable selection do not exist in the statistical literature. Evidently, the proposed methodology in (1) remedies that. The ensuing developments in the paper are organized as follows: in Section 2 we detail the proposed statistical framework, outlining Tweedie distributions--the likelihood and parameterization--along with the model formulation and the hierarchical prior specification. Section 3 provides comprehensive synthetic experiments that document the efficacy of our proposed statistical framework for Bayesian variable selection in spatial DGLMs. Section 5 considers application of the developed framework to automobile insurance premiums in Connecticut, USA during 2008. Additional synthetic experiments capturing various performance aspects of the models are provided in the Supplementary Material and outlined in Section 4.

## 2 Statistical Framework

The Tweedie distribution produces observations that combine exact zeros with a continuous Gamma tail. This ability to jointly model mixed data types featuring exact zeros and continuous measurements makes it suitable for modeling responses arising from a variety of domains. Some of the current applications include actuarial science (see Smyth and Jorgensen, 2002; Yang et al., 2018; Halder et al., 2021, 2022), ecology (see, for e.g., Swallow et al., 2016), public health (Ye et al., 2021), the environment (see, for e.g., Kokonendji et al., 2021), fisheries (Shono, 2008) and gene expression studies (Mallick et al., 2022). As evidenced by these applications, the presence of unobserved dependence between observations, affecting the quality of inference, is not unlikely. In the following subsections we provide more details on Tweedie distributions, followed by the model formulation and hierarchical prior specification.
### The Exponential Dispersion Family: Tweedie Distributions

The Tweedie family of distributions belongs to the exponential dispersion (ED) family of models, whose probability density/mass function has the generic form, \[\pi(y\mid\theta,\phi)=a(y,\phi)\exp\left\{\phi^{-1}(y\theta-\kappa(\theta))\right\}, \tag{2}\] where \(\theta\) is the natural or canonical parameter, \(\phi\) is the dispersion parameter, and \(\kappa(\theta)\) is the cumulant function. Characterizing the Tweedie family is an index parameter \(\xi\), varying values of which produce different members of the family. For e.g., the CP-g is obtained with \(\xi\in(1,2)\), \(\xi=1\) produces a Poisson and \(\xi=2\) a Gamma distribution, while for \(\xi\in(0,1)\) Tweedie distributions do not exist (for further details see Table 1, Halder et al., 2021). We are particularly interested in the CP-g distributions in this work. In general, for the ED family we have the mean, \(\mu=E(y)=\kappa^{\prime}(\theta)\), and the variance, \(Var(y)=\phi\kappa^{\prime\prime}(\theta)\). For the CP-g we have \(\kappa(\theta)=(2-\xi)^{-1}\{(1-\xi)\theta\}^{(2-\xi)/(1-\xi)}\). Using the relation \(\kappa^{\prime}(\theta)=\mu\), some straightforward algebra yields \(\kappa(\theta)=(2-\xi)^{-1}\mu^{2-\xi}\) and \(\kappa^{\prime\prime}(\theta)=\mu^{\xi}\), implying \(Var(y)=\phi\mu^{\xi}\), and denoting \(\alpha=(1-\xi)^{-1}(2-\xi)\) we have, \[a(y,\phi)=1\cdot I(y=0)+y^{-1}\sum_{j=1}^{\infty}\left[\frac{y^{-\alpha}(\xi-1)^{\alpha}}{\phi^{1-\alpha}(2-\xi)}\right]^{j}\frac{1}{j!\Gamma(-j\alpha)}I(y>0).\] Evidently, \(\pi(0\mid\theta,\phi)=\exp\{-\phi^{-1}\kappa(\theta)\}\). We introduce some notation. Let \(y_{ij}(\mathbf{s}_{i})\) denote the \(j\)-th response at the \(i\)-th location \(\mathbf{s}_{i}\in\mathcal{S}\), where \(j=1,2,\ldots,n_{i}\) and \(i=1,2,\ldots,L\) with \(\sum_{i=1}^{L}n_{i}=N\). Together we denote \(\mathbf{y}=\mathbf{y}(\mathbf{s})=\{\{y_{ij}(\mathbf{s}_{i})\}_{j=1}^{n_{i}}\}_{i=1}^{L}\) as the \(N\times 1\) response. Similarly, \(\boldsymbol{\mu}=\boldsymbol{\mu}(\mathbf{s})=\{\{\mu_{ij}(\mathbf{s}_{i})\}_{j=1}^{n_{i}}\}_{i=1}^{L}\) and \(\boldsymbol{\phi}=\{\{\phi_{ij}\}_{j=1}^{n_{i}}\}_{i=1}^{L}\) denote the mean and dispersion vectors, respectively. If \(\mathbf{y}\mid\boldsymbol{\mu},\boldsymbol{\phi},\xi\) arises independently from a CP-g distribution, then the likelihood is given by \[\pi(\mathbf{y}\mid\boldsymbol{\mu},\boldsymbol{\phi},\xi)=\prod_{i=1}^{L}\prod_{j=1}^{n_{i}}a_{ij}(y_{ij}(\mathbf{s}_{i})\mid\phi_{ij})\times\exp\left[\phi_{ij}^{-1}\left(\frac{y_{ij}(\mathbf{s}_{i})\mu_{ij}(\mathbf{s}_{i})^{1-\xi}}{1-\xi}-\frac{\mu_{ij}(\mathbf{s}_{i})^{2-\xi}}{2-\xi}\right)\right]. \tag{3}\] Working with the likelihood \(\pi(\cdot)\) when devising computation requires evaluating the infinite series representation of \(a(y,\phi)\). The two commonly used methods are the saddle-point approximation (see, for e.g., Nelder and Pregibon, 1987; Smyth and Jorgensen, 2002; Dunn and Smyth, 2005; Zhang, 2013) and Fourier inversion (see, for e.g., Dunn and Smyth, 2008). The saddle-point approximation to (2) uses a deviance function based representation where \(\widetilde{\pi}(y\mid\mu,\phi)=b(y,\phi)\exp\left\{-(2\phi)^{-1}d(y\mid\mu)\right\}\). For CP-g distributions, the deviance function is \(d(y\mid\mu)=d(y\mid\mu,\xi)=2\left\{\left(y^{2-\xi}-y\mu^{1-\xi}\right)(1-\xi)^{-1}-\left(y^{2-\xi}-\mu^{2-\xi}\right)(2-\xi)^{-1}\right\}\), and \(b(y\mid\phi,\xi)=(2\pi\phi y^{\xi})^{-1/2}I(y>0)+1\cdot I(y=0)\approx a(y,\phi)\exp\left\{\phi^{-1}y^{2-\xi}(1-\xi)^{-1}(2-\xi)^{-1}\right\}\). We performed experiments which showed that the saddle-point approximation performs well when fewer zeros are present in the data; under higher proportions of zeros its performance was sub-optimal. However, despite its computationally intensive nature, the Fourier inversion based method had stable performance in all scenarios. Hence, in this paper we use the evaluation of \(a(y,\phi)\) that is based on Fourier inversion. The adopted Bayesian approach requires MCMC computation that relies on accurate likelihood evaluations; hence, we emphasize the importance of choosing the appropriate likelihood function for application purposes. We denote the likelihood in (3) as \(Tw(\boldsymbol{\mu},\boldsymbol{\phi},\xi)\). Tweedie distributions are the only members of the ED family that possess a scale invariance property (see, for e.g., Jorgensen, 1997, Theorem 4.1): for \(c_{ij}>0\), \(c_{ij}y_{ij}(\mathbf{s}_{i})\sim Tw(c_{ij}\mu_{ij}(\mathbf{s}_{i}),c_{ij}^{2-\xi}\phi_{ij},\xi)\), allowing observations with different scales of measurement to be modeled jointly.
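For illustration, the following Python sketch evaluates the CP-g log-density by truncating the infinite series in \(a(y,\phi)\) at a fixed number of terms, and draws a variate through the Poisson sum of Gammas construction mentioned in the Introduction. It is a didactic stand-in for the saddle-point and Fourier-inversion evaluations used in the paper; the fixed truncation point \(J\) is a simplifying assumption (production implementations choose the summation range adaptively).

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def cpg_logpdf(y, mu, phi, xi, J=200):
    """Log-density of the compound Poisson-gamma Tweedie distribution,
    1 < xi < 2, truncating the series a(y, phi) at J terms."""
    # ED-family exponent: phi^{-1} (y mu^{1-xi}/(1-xi) - mu^{2-xi}/(2-xi))
    ed = (y * mu ** (1 - xi) / (1 - xi) - mu ** (2 - xi) / (2 - xi)) / phi
    if y == 0:
        return ed  # pi(0) = exp{-mu^{2-xi} / ((2-xi) phi)}
    alpha = (2 - xi) / (1 - xi)  # note: alpha < 0 for xi in (1, 2)
    j = np.arange(1, J + 1)
    # log of the j-th term of a(y, phi), taken from the series in the text
    log_terms = (j * (-alpha * np.log(y) + alpha * np.log(xi - 1)
                      - (1 - alpha) * np.log(phi) - np.log(2 - xi))
                 - gammaln(j + 1) - gammaln(-j * alpha))
    return -np.log(y) + logsumexp(log_terms) + ed

def cpg_sample(mu, phi, xi, rng=np.random.default_rng()):
    """Draw one CP-g variate via its Poisson sum of Gammas construction."""
    lam = mu ** (2 - xi) / ((2 - xi) * phi)       # Poisson rate
    shape = (2 - xi) / (xi - 1)                   # Gamma shape
    scale = phi * (xi - 1) * mu ** (xi - 1)       # Gamma scale
    n = rng.poisson(lam)
    return rng.gamma(shape, scale, size=n).sum() if n > 0 else 0.0
```

Exact zeros arise whenever the Poisson count is zero, which is why the density has a point mass at zero together with a continuous positive part.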
A Bayesian hierarchical double generalized linear model (DGLM) using a non-canonical logarithmic link function is specified as \[\log\mu_{ij}(\mathbf{s}_{i})=\mathbf{x}_{ij}^{T}(\mathbf{s}_{i})\boldsymbol{\beta}+\mathbf{f}_{ij}(\mathbf{s}_{i})^{T}\mathbf{w}(\mathbf{s}_{i}),\ \log\phi_{ij}=\mathbf{z}_{ij}^{T}\boldsymbol{\gamma}, \tag{4}\] which implies \(\mu_{ij}(\mathbf{s}_{i})=\mu_{ij}(\boldsymbol{\beta},\mathbf{w})=\exp\left(\mathbf{x}_{ij}^{T}(\mathbf{s}_{i})\boldsymbol{\beta}+\mathbf{f}_{ij}^{T}(\mathbf{s}_{i})\mathbf{w}(\mathbf{s}_{i})\right)\) and \(\phi_{ij}=\phi_{ij}(\boldsymbol{\gamma})=\exp(\mathbf{z}_{ij}^{T}\boldsymbol{\gamma})\).

### Hierarchical Prior Specification

In this section, we first present the hierarchical prior formulation for the model and process parameters, \(\widetilde{\boldsymbol{\theta}}\), followed by the prior formulation for the variable selection parameters \(\boldsymbol{\theta}_{vs}\). The prior specification for the model and process parameters is as follows: \[\begin{split}&\text{Model Parameters: }\xi\sim U(a_{\xi},b_{\xi});\ \boldsymbol{\beta}\sim N_{p}\left(\boldsymbol{0}_{p},\mathrm{diag}(\boldsymbol{\lambda}_{\beta})\right);\ \boldsymbol{\gamma}\sim N_{q}\left(\boldsymbol{0}_{q},\mathrm{diag}(\boldsymbol{\lambda}_{\gamma})\right),\\ &\text{Process Parameters: }\phi_{s}\sim U\left(a_{\phi_{s}},b_{\phi_{s}}\right);\ \sigma^{-2}\sim\text{Gamma}\left(a_{\sigma},b_{\sigma}\right);\ \nu\sim U(a_{\nu},b_{\nu}),\\ &\text{Process: }\mathbf{w}\sim N_{L}\left(\boldsymbol{0}_{L},\sigma^{2}\mathbf{R}(\phi_{s})\right),\end{split} \tag{5}\] where \(\boldsymbol{0}_{m}\) is the \(m\times 1\) zero vector and \(\mathbf{I}_{m}\) is the \(m\times m\) identity matrix; \(\mathbf{X}\in\mathbb{R}^{N\times p}\) and \(\mathbf{Z}\in\mathbb{R}^{N\times q}\) are design matrices corresponding to the mean and dispersion models, with coefficients \(\boldsymbol{\beta}\in\mathbb{R}^{p}\) and \(\boldsymbol{\gamma}\in\mathbb{R}^{q}\), respectively; \(\mathbf{F}\in\mathbb{R}^{N\times L}\) is the spatial incidence matrix and \(\mathbf{w}\in\mathbb{R}^{L}\) is the spatial effect. \(\mathbf{R}(\phi_{s})\) is the Matern correlation kernel, with entries proportional to \((\phi_{s}\{||\Delta||\}_{ii^{\prime}})^{\nu}K_{\nu}(\phi_{s}\{||\Delta||\}_{ii^{\prime}})\), where \(K_{\nu}\) is the modified Bessel function of order \(\nu\) (Abramowitz et al., 1988) and \(\{||\Delta||\}_{ii^{\prime}}=||\mathbf{s}_{i}-\mathbf{s}_{i^{\prime}}||_{2}\), the Euclidean distance between locations \(\mathbf{s}_{i}\) and \(\mathbf{s}_{i^{\prime}}\). \(U(\cdot\mid a,b)\) denotes the uniform distribution, \(N_{m}(\cdot\mid\boldsymbol{0},\Sigma)\) is the \(m\)-dimensional Gaussian with zero mean and covariance matrix \(\Sigma\), and \(\text{Gamma}(\cdot\mid a,b)\) is the Gamma distribution with the shape-rate parameterization. Note that the priors on \(\boldsymbol{\lambda}_{\beta}=(\lambda_{\beta,1},\ldots,\lambda_{\beta,p})\) and \(\boldsymbol{\lambda}_{\gamma}=(\lambda_{\gamma,1},\ldots,\lambda_{\gamma,q})\) are part of the variable selection priors and are discussed next. Referring to the framework in (1), the resulting posterior from (5) establishes the \([\text{data}\mid\text{process},\boldsymbol{\theta}]\times[\text{process}\mid\widetilde{\boldsymbol{\theta}}]\) step. Conditional posteriors for \(\widetilde{\boldsymbol{\theta}}\) are outlined in Section 7 of the Supplementary Materials.
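To make the hierarchical specification in (4)-(5) concrete, here is a minimal Python sketch that simulates from the model: a Gaussian process for \(\mathbf{w}\) with an exponential covariance (the Matern kernel with \(\nu=0.5\), the choice used later in the application), log-linear mean and dispersion models, and CP-g responses drawn through the Poisson sum of Gammas construction. All dimensions, coefficient values and process parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_i, p, q = 100, 20, 5, 5          # locations, obs per location, covariates
xi, sigma2, phi_s = 1.5, 1.0, 3.0     # index, partial sill, decay (assumed)

# Spatial locations and an exponential (Matern, nu = 0.5) covariance
S = rng.uniform(size=(L, 2))
D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
w = rng.multivariate_normal(np.zeros(L), sigma2 * np.exp(-phi_s * D))

# Design matrices and sparse coefficient vectors for the two models
N = L * n_i
X = rng.standard_normal((N, p)); beta = np.array([0.5, 0.5, 0.0, 0.5, 0.0])
Z = rng.standard_normal((N, q)); gamma = np.array([0.7, 0.5, 0.0, 0.0, 0.5])
F = np.repeat(np.eye(L), n_i, axis=0)          # spatial incidence matrix

mu = np.exp(X @ beta + F @ w)                  # log-link mean model (4)
phi = np.exp(Z @ gamma)                        # log-link dispersion model (4)

# CP-g responses: Poisson number of Gamma summands (exact zeros when n = 0)
lam = mu ** (2 - xi) / ((2 - xi) * phi)
shape, scale = (2 - xi) / (xi - 1), phi * (xi - 1) * mu ** (xi - 1)
n = rng.poisson(lam)
y = np.array([rng.gamma(shape, sc, size=k).sum() if k else 0.0
              for k, sc in zip(n, scale)])
```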
For the continuous spike-and-slab prior formulation, \(\boldsymbol{\theta}_{vs}=\{\zeta_{\beta},\zeta_{\gamma},\sigma_{\beta}^{2},\sigma_{\gamma}^{2},\alpha_{\beta},\alpha_{\gamma}\}\) (see, for e.g., Ishwaran and Rao, 2005). Note that we have separate prior formulations for the mean and dispersion models. Let \(\boldsymbol{\beta}=(\beta_{1},\beta_{2},\ldots,\beta_{p})^{T}\) and \(\boldsymbol{\gamma}=(\gamma_{1},\gamma_{2},\ldots,\gamma_{q})^{T}\) be the model coefficients corresponding to the mean and the dispersion models. Let us define \(\lambda_{\beta,u}=\zeta_{\beta,u}\sigma_{\beta,u}^{2}\) and \(\lambda_{\gamma,v}=\zeta_{\gamma,v}\sigma_{\gamma,v}^{2}\) for \(u=1,\ldots,p\) and \(v=1,\ldots,q\), respectively. We consider the following prior formulation: \[\pi(\boldsymbol{\theta}_{vs})=\begin{cases}\zeta_{\beta,u}\stackrel{{ iid}}{{\sim}}(1-\alpha_{\beta})\delta_{\nu_{0}}(\cdot)+\alpha_{\beta}\delta_{1}(\cdot),\ \zeta_{\gamma,v}\stackrel{{ iid}}{{\sim}}(1-\alpha_{\gamma})\delta_{\nu_{0}}(\cdot)+\alpha_{\gamma}\delta_{1}(\cdot),\\ \alpha_{\beta}\sim U(0,1),\ \alpha_{\gamma}\sim U(0,1),\\ \sigma_{\beta,u}^{-2}\stackrel{{ iid}}{{\sim}}\text{Gamma}(a_{\sigma_{\beta}},b_{\sigma_{\beta}}),\ \sigma_{\gamma,v}^{-2}\stackrel{{ iid}}{{\sim}}\text{Gamma}(a_{\sigma_{\gamma}},b_{\sigma_{\gamma}}),\end{cases} \tag{6}\] where \(\beta_{u}\) and \(\gamma_{v}\) have normal priors with mean \(0\) and variances \(\zeta_{\beta,u}\sigma_{\beta,u}^{2}\) and \(\zeta_{\gamma,v}\sigma_{\gamma,v}^{2}\), respectively. Here, \(\delta_{c}(\cdot)\) denotes the discrete measure at \(c\); hence, \(\zeta_{\beta,u}\) and \(\zeta_{\gamma,v}\) are indicators taking values \(1\) or \(\nu_{0}\) (a small number close to \(0\)) based on the selection of their corresponding covariates. The probabilities of these indicators taking the value \(1\) are given by \(\alpha_{\beta}\) and \(\alpha_{\gamma}\), respectively. We place a uniform prior on these selection probabilities and an inverse-Gamma prior on the parameters \(\sigma_{\beta,u}^{2}\) and \(\sigma_{\gamma,v}^{2}\). The choice of the shape and rate parameters \((a_{\sigma_{\beta}},b_{\sigma_{\beta}};a_{\sigma_{\gamma}},b_{\sigma_{\gamma}})\) of these inverse-Gamma priors induces continuous bimodal distributions on \(\zeta_{\beta,u}\sigma_{\beta,u}^{2}\) and \(\zeta_{\gamma,v}\sigma_{\gamma,v}^{2}\) with a spike at \(\nu_{0}\) and a right continuous tail. Combining the priors in (5) and (6) completes the hierarchical prior formulation for the parameters \(\boldsymbol{\theta}\) as defined in (1). Evidently, the above prior formulation allows for sufficient flexibility regarding variations in implementation. For instance, a hierarchical Bayesian framework for a simple DGLM can be obtained by omitting the process specification and variable selection. Analogously, DGLMs featuring only variable selection or only spatial effects are obtained by omitting the respective components from the prior specification outlined previously.
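Under (6), the indicators and selection probabilities admit conjugate Gibbs updates: conditional on \(\beta_{u}\) and \(\sigma_{\beta,u}^{2}\), each \(\zeta_{\beta,u}\) is a two-point draw, and \(\alpha_{\beta}\) is Beta-distributed. The sketch below illustrates these two updates for the mean model; it is a schematic of the standard continuous spike-and-slab Gibbs step, not code from the authors' package.

```python
import numpy as np
from scipy.stats import norm

def update_indicators(beta, sigma2, alpha, nu0, rng):
    """One Gibbs sweep for zeta_{beta,u}: zeta = nu0 (spike) or 1 (slab).
    beta, sigma2 are length-p arrays of coefficients and slab variances."""
    # Gaussian evidence for beta_u under the spike and slab variances
    spike = (1 - alpha) * norm.pdf(beta, scale=np.sqrt(nu0 * sigma2))
    slab = alpha * norm.pdf(beta, scale=np.sqrt(sigma2))
    p_slab = slab / (spike + slab)
    return np.where(rng.uniform(size=beta.size) < p_slab, 1.0, nu0)

def update_alpha(zeta, rng):
    """Conjugate Beta update for alpha_beta given its U(0,1) = Beta(1,1)
    prior: alpha | zeta ~ Beta(1 + #{zeta = 1}, 1 + #{zeta = nu0})."""
    k = np.sum(zeta == 1.0)
    return rng.beta(1 + k, 1 + zeta.size - k)
```

The analogous updates apply verbatim to \(\zeta_{\gamma,v}\) and \(\alpha_{\gamma}\) in the dispersion model.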
### Bayesian Estimation and Inference

In its full capacity (a model with spatial effects and variable selection) the model structure with the prior specifications in (5) and (6) contains \(3p+L+3q+4\) parameters. Depending on the dimensions of \(\mathbf{X}(\mathbf{s})\) and \(\mathbf{Z}\), and the number of locations \(L\), posterior inference can be a sufficiently daunting task. Traditional Metropolis-Hastings (M-H) random walk strategies are sub-optimal, involving costly pilot runs to determine viable initial starting points and unreasonably long chains while performing MCMC sampling. To avoid such issues, we use adaptive rejection MCMC sampling, leveraging the log-concavity of the posteriors, to perform effective inference that is not plagued by the issues described above (for more details see, for e.g., Girolami and Calderhead, 2011). In the following, we describe (i) briefly, our adaptive rejection MCMC sampling approach (more details are provided in the Supplementary Materials), (ii) the identifiability issue for the overall intercept that arises due to the inclusion of a spatial effect, together with a strategy to address it, and (iii) a false discovery rate (FDR)-based approach for performing variable selection.

\begin{table} \begin{tabular}{l|l|c|c} \hline \hline Models & Frameworks & Specification (\(\boldsymbol{\theta}\)) & Number of Parameters \\ \hline M1 & DGLM & \(\boldsymbol{\theta}_{m}\) & \(p+q+1\) \\ \hline M2 & DGLM + Variable Selection & \(\boldsymbol{\theta}_{m}\), \(\boldsymbol{\theta}_{vs}\) & \(3p+3q+1\) \\ \hline M3 & DGLM + Spatial Effect & \(\boldsymbol{\theta}_{m}\), \(\boldsymbol{\theta}_{pr}\) & \(p+q+L+4\) \\ \hline M4 & DGLM + Spatial Effect + Variable Selection & \(\boldsymbol{\theta}_{m}\), \(\boldsymbol{\theta}_{vs}\), \(\boldsymbol{\theta}_{pr}\) & \(3p+3q+L+4\) \\ \hline \hline \end{tabular} \end{table} Table 1: Proposed Bayesian hierarchical double generalized linear modeling frameworks.

The joint posterior \(\pi(\boldsymbol{\theta}\mid\mathbf{y})\) generated as a result of the hierarchical priors in (5) is sampled using a hybrid sampling strategy that includes M-H random walk and Metropolis-Adjusted Langevin Algorithm (MALA) updates (Roberts and Stramer, 2002; Girolami and Calderhead, 2011). We consider MALA updates for the parameters \(\{\boldsymbol{\beta},\mathbf{w}\}\) of the mean model. The dispersion model coefficients are sampled depending on the choice of likelihood: \(\boldsymbol{\gamma}\) is sampled using a MALA if the saddle-point approximation of the likelihood is considered; otherwise \(\boldsymbol{\gamma}\) is sampled using a MALA with a numerical approximation to the derivative of the conditional posterior for \(\boldsymbol{\gamma}\), or using a M-H random walk. The parameters \(\{\xi,\phi_{s}\}\) are updated using a M-H random walk. All the other remaining parameters are sampled using Gibbs updates. In particular, we employ block updates for \(\boldsymbol{\beta}_{\mathbf{w}}=\{\boldsymbol{\beta},\mathbf{w}\}\) and \(\boldsymbol{\gamma}\). Proposal variances feature adaptive scaling such that the optimal acceptance rate (\(\approx 58\%\)) for capturing Langevin dynamics is achieved upon convergence (see, Carlin and Louis, 2008; Girolami and Calderhead, 2011); proposal variances in the M-H updates also feature adaptive scaling such that the optimal acceptance rate (\(\approx 33\%\)) for random walks is achieved upon convergence. We outline the full sampling algorithm at the end of Section 7 of the Supplement. For the hierarchical DGLM in (4), the specification of a spatial effect translates to fitting a random intercept mean model. Consequently, having an additional overall intercept \(\beta_{0}\) in the model renders it unidentifiable (see, Gelfand et al., 1995, 1996). Hence, \(\beta_{0}\) is not estimable, although \(\beta_{0}+\mathbf{w}\) is estimable.
\(\beta_{0}\) is estimated through hierarchical centering of the posterior for \(\mathbf{w}\) (see, for e.g., Gelfand et al., 1996). The MCMC samples of \(\boldsymbol{\beta}\) and \(\boldsymbol{\gamma}\) explore their conditional posterior distributions, and point estimates for these model parameters can be obtained using maximum a-posteriori (MAP) estimates or the posterior means. Although we obtain point estimates, these estimates do not yield exact zero values, since we have considered a continuous spike-and-slab prior with a spike at \(\nu_{0}\) (a small positive number). Additionally, these estimates do not make use of all the MCMC samples. We therefore use a Bayesian model averaging-based strategy that leverages all the MCMC samples to build inference (Hoeting et al., 1999). Specifically, we use an FDR-based approach that combines Bayesian model averaging with point estimates (see, for e.g., Morris et al., 2008; Mohammed et al., 2021). Let \(\beta_{u}^{(m)}\) for \(m=1,\ldots,M\) denote the MCMC samples (after burn-in and thinning) for the coefficients of the mean model. We compute \(p_{u}=\frac{1}{M}\sum_{m}I\big{(}|\beta_{u}^{(m)}|\leq c\big{)}\), where \(I(\cdot)\) is the indicator function; these probabilities \(p_{u}\) can be interpreted as local FDRs (Morris et al., 2008). The probability \((1-p_{u})\) can be interpreted as the probability that covariate \(u\) is significantly associated with the response. We use the \(p_{u}\)s to decide which covariates to select while controlling the FDR at level \(\alpha\). That is, we infer that covariate \(u\) has a _non-zero_ coefficient if \(p_{u}<\kappa_{\alpha}\) for some threshold \(\kappa_{\alpha}\in(0,1)\). We compute the threshold \(\kappa_{\alpha}\) as follows: we first sort the probabilities \(p_{u}\) and denote the sorted probabilities as \(p_{(u)}\) for \(u=1,\ldots,p\). We then assign \(\kappa_{\alpha}=p_{(u^{\star})}\), where \(u^{\star}=\max\left\{\widetilde{u}\mid\frac{1}{\widetilde{u}}\sum_{u=1}^{\widetilde{u}}p_{(u)}\leq\alpha\right\}\). This approach caps our false discoveries of selected variables at \(100\alpha\%\). We employ the same approach using the MCMC samples of \(\boldsymbol{\gamma}\) to select significant coefficients for the dispersion model. Posterior inference on \(\boldsymbol{\beta}\), \(\boldsymbol{\gamma}\) is performed using MAP (point) estimates along with the posterior mean, median, standard deviation and highest posterior density (HPD) intervals. For \(\mathbf{w}\), we employ the posterior mean, median, standard deviation and HPD intervals to perform inference. Next, we demonstrate synthetic experiments that document the performance of our proposed models. The computation has been performed in the R statistical environment; the required subroutines can be accessed via an open-source repository at: [https://github.com/arh926/sptwdglm](https://github.com/arh926/sptwdglm).
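Before turning to the experiments, the following sketch illustrates the FDR thresholding rule just described: compute \(p_{u}\) from the MCMC samples, sort, and take the largest \(\widetilde{u}\) whose running mean of sorted \(p_{(u)}\) stays below \(\alpha\). The cutoff \(c\) is an assumption of the illustration (no specific value is fixed in the text here), and the sketch is written in Python even though the authors' implementation is in R.

```python
import numpy as np

def fdr_select(samples, c, alpha=0.05):
    """FDR-based variable selection from MCMC output.

    samples : (M, p) array of post-burn-in, thinned coefficient draws
    c       : practical-significance cutoff on |beta_u|
    alpha   : desired false discovery rate
    Returns a boolean mask of selected covariates.
    """
    # p_u = local FDR: fraction of draws with |beta_u^{(m)}| <= c
    p_u = np.mean(np.abs(samples) <= c, axis=0)
    order = np.argsort(p_u)
    running_mean = np.cumsum(p_u[order]) / np.arange(1, p_u.size + 1)
    passing = np.nonzero(running_mean <= alpha)[0]
    if passing.size == 0:
        return np.zeros(p_u.size, dtype=bool)   # nothing selected
    kappa = p_u[order][passing.max()]           # threshold kappa_alpha
    return p_u <= kappa
```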
## 3 Synthetic Experiments

We begin with an observation--the spatial heterogeneity that our models aim to quantify is not observed in real life. Hence, it is imperative to document the accuracy of estimating such effects through synthetic experiments. The settings used are outlined next--we consider varying proportions of zeros (15%, 30%, 60%, 80% and 95%) under which the quality of posterior inference for \(\boldsymbol{\theta}\) is assessed. The proportion of zeros can be interpreted as an inverse signal-to-noise ratio for the synthetic response. For the sake of brevity, we only show the results for synthetic experiments pertaining to Bayesian variable selection in the presence of spatial effects; additional simulations can be found in the online Supplement. To construct the synthetic data we consider three scenarios pertaining to _model structure_: (a) there is no overlap (i.e., the selected \(\beta\)'s and \(\gamma\)'s do not intersect), (b) 50% overlap (in the union of all selected variables across the mean and dispersion models), and (c) 100% overlap between the mean and dispersion model specifications. We use 10 covariates including an intercept, where the columns of the synthetic design matrices \(\mathbf{X}\) and \(\mathbf{Z}\) are centered and scaled, independently sampled Gaussian variables with mean 0 and variance 1. Naturally, specification of the true \(\boldsymbol{\beta}\), \(\mathbf{w}\) and \(\boldsymbol{\gamma}\) parameters determines the proportion of zeros in the synthetic response. Table 2 contains the parameter settings used; the true value of the index parameter is \(\xi=1.5\).

\begin{table} \begin{tabular}{c|c c c} \hline \hline Proportion of 0s & \(\mu_{\boldsymbol{\beta}_{-0}}\left(\sigma_{\boldsymbol{\beta}_{-0}}\right)\) & \(\gamma_{0}\) & \(\mu_{\boldsymbol{\gamma}_{-0}}\left(\sigma_{\boldsymbol{\gamma}_{-0}}\right)\) \\ \hline 15\% & 0.50 (0.1) & -1.50 & 0.50 (0.1) \\ 30\% & 0.50 (0.1) & 0.70 & 0.50 (0.1) \\ 60\% & 0.50 (0.1) & 2.50 & 0.50 (0.1) \\ 80\% & 1.00 (0.1) & 4.50 & 0.50 (0.1) \\ 95\% & 1.00 (0.1) & 7.00 & 0.50 (0.1) \\ \hline \hline \end{tabular} \end{table} Table 2: Parameter settings used to obtain varying proportions of zeros in the synthetic data.

In an attempt to produce a synthetic setup that resembles reality we simulate \(\mathbf{w}\sim N(5(\sin(3\pi\mathbf{s}_{1})+\cos(3\pi\mathbf{s}_{2})),1)\) (see Figure 1, second row). The alternative route would be to fix values of \(\sigma^{2}\) and \(\phi_{s}\) and generate a realization \(\mathbf{w}\sim N_{L}(\mathbf{0}_{L},\sigma^{2}\mathbf{R}(\phi_{s}))\) (see Figure 1, first row). Under each setting we consider \(M=10\) replications. Within each replication we fit all the proposed modeling frameworks shown in Table 1. The hyper-parameter settings used while specifying priors for the models are \(a_{\xi}=1\), \(b_{\xi}=2\), \(a_{\sigma}=a_{\sigma_{\beta}}=a_{\sigma_{\gamma}}=2\), \(b_{\sigma}=b_{\sigma_{\beta}}=b_{\sigma_{\gamma}}=1\) (producing inverse-Gamma priors with mean 1 and infinite variance), \(a_{\phi_{s}}=0\), \(b_{\phi_{s}}=30\), \(\sigma_{\beta}^{-2}=\sigma_{\gamma}^{-2}=10^{-6}\) and \(\nu_{0}=5\times 10^{-4}\), producing a vague and non-informative hierarchical prior. We maintain an FDR of 5% for all settings while performing model selection. The sample size varies over \(N=\{2\times 10^{3},5\times 10^{3},1\times 10^{4}\}\) and the number of locations is \(L=1\times 10^{2}\). Across replications, the false positive rate (FPR) and true positive rate (TPR) are computed to measure the accuracy of our model selection procedure. To record the quality of estimation, we compute the mean squared error (MSE), for e.g., \(MSE(\boldsymbol{\beta})=\frac{1}{p}\sum_{i_{\beta}=1}^{p}(\beta_{i_{\beta}}-\widehat{\beta}_{i_{\beta}})^{2}\), which can be computed similarly for the other parameters. We also compute average coverage probabilities, for e.g.,
considering these probabilities for \(\boldsymbol{\beta}\) we define \(CP(\boldsymbol{\beta})=\frac{1}{M}\sum_{m=1}^{M}I(\boldsymbol{\beta}_{true}\in(l_{m}(\boldsymbol{\beta}),u_{m}(\boldsymbol{\beta})))\), where \(l_{m}(\boldsymbol{\beta})\) and \(u_{m}(\boldsymbol{\beta})\) are the lower and upper 95% HPD limits, respectively, for \(\boldsymbol{\beta}\) in replication \(m\); we obtain coverage probabilities for \(\mathbf{w}\) and \(\boldsymbol{\gamma}\) similarly. The results obtained under the above settings are shown in Table 3. The first column is named configuration (abbreviated as config.), with entries denoting the proportion of overlap between selected coefficients in the mean and dispersion models, which is indicative of model structure. This is estimated by observing the overlap between selected variables following the model fit (for models M2 and M4); no variable selection is performed for models M1 and M3. From the results shown, we see that models M1 and M2 perform poorly. The estimates \(\widehat{\boldsymbol{\beta}}\) remain fairly unaffected, as compared to \(\widehat{\boldsymbol{\gamma}}\) and \(\widehat{\xi}\), where all of the variation present in the synthetic data yet not quantified spills over to corrupt and compromise the quality of the estimates. This also does not produce reliable results pertaining to model structure recovery for M1 and M2. However, significant improvements show up with M3 and M4. Particularly, under higher proportions of zeros in the synthetic data (low signal-to-noise ratio) the performance of M4 remains stable with respect to model structure recovery and estimation of parameters (refer to Table 3), thereby producing _robust inference_ among the models in comparison. As an example within our simulation setting, under 95% zeros in the data and under low sample sizes, for example 2,000 or 5,000, the estimates of model coefficients and spatial effects in M3 and M4 are adversely affected by locations having fewer non-zero observations. This observation addresses the concern around specifying DGLMs without spatial random effects in a scenario where the data display spatial variation. The results demonstrate the expected gains when our model is used in its full capacity instead of a usual DGLM. We use the MCMC algorithm featuring MALA updates for \(\boldsymbol{\beta},\mathbf{w}\) and \(\boldsymbol{\gamma}\). Chain lengths are set to \(1\times 10^{4}\), with the initial 5,000 samples as burn-in; we thin the rest by selecting every 10-th sample, which reduces any remaining auto-correlation and produces 500 approximately independent posterior samples for each setting. The posterior estimate \(\widehat{\boldsymbol{\theta}}\) is then obtained from the produced samples by computing the median or a MAP estimate, as applicable for the model. Coverage probabilities for model M4 remained sufficiently high (\(\approx 1\)) across all settings, only declining marginally for \(\mathbf{w}\) (remaining above 90%) under high proportions of zeros (low signal-to-noise ratio) in the data. We performed additional synthetic experiments to showcase (a) the performance of M3 with respect to the quality of estimation for spatial effects and (b) the performance of M2. They are detailed in the Supplementary Materials--we briefly outline their contents in the next section.

Figure 1: Plots showing synthetic spatial patterns, pattern 1 (top, left column) and pattern 2 (bottom, left column), and the corresponding logarithm of the aggregated synthetic response (right column).
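For completeness, here is a small Python sketch of the replication-level metrics used in the experiments above--FPR/TPR for selection accuracy, MSE for estimation quality, and HPD-interval coverage. The array shapes are illustrative stand-ins for the output of a model fit.

```python
import numpy as np

def selection_rates(selected, truth):
    """FPR and TPR of a boolean selection mask against the true support."""
    fp = np.sum(selected & ~truth); tp = np.sum(selected & truth)
    fpr = fp / max(np.sum(~truth), 1)
    tpr = tp / max(np.sum(truth), 1)
    return fpr, tpr

def mse(estimate, true):
    """MSE(beta) = p^{-1} sum_u (beta_u - beta_u_hat)^2, as in the text."""
    return np.mean((estimate - true) ** 2)

def coverage(true, lower, upper):
    """Average coverage of 95% HPD intervals across replications;
    lower/upper are (M, p) arrays of interval endpoints."""
    return np.mean((true >= lower) & (true <= upper))
```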
## 4 Supplementary Analysis

The online Supplement to this paper contains details on the derivations of the posteriors essential for constructing the MCMC subroutines; they are outlined in Section 7. Section 8 features additional simulation experiments that supplement those outlined previously in Section 3: it documents the performance of M2, shown in Table 7, contains the results of experiments for scenarios featuring spatial covariates, shown in Table 8, and for the varying spatial patterns seen in Figure 1, shown in Tables 9 and 10. Convergence diagnostics are shown for selected model parameters (the index parameter \(\xi\)) in Section 8.2. The contents of the R-package are described in Section 8.3. Finally, results for models M1 and M3 pertaining to the real data analysis described in the next section appear in Section 9, Tables 11, 12, 13 and 14.

## 5 Automobile Insurance Premiums in Connecticut

We apply the developed frameworks to automobile insurance premium data for the state of Connecticut during 2008. The available variables are as follows:

1. _Individual level:_ (i) accident and model year of the vehicle, (ii) age, gender, marital status.
2. _Policy level:_ (i) policy payments, measured in United States dollars, (ii) exposure, measured in policy years (for e.g., 0.5 indicates a coverage period of 6 months, or half a year), (iii) policy risk, having two levels, which is assigned by the insurance company based on information provided by the individual, (iv) deductible limit, with 8 categories.
3. _Spatial:_ 5-digit zip code.

Derived variables like age categories and vehicle age in years, and interactions like gender \(\times\) marital status, are computed and used as covariates in the model.

\begin{table} \begin{tabular}{c|c|c|c c c|c c c} \hline \hline \multirow{2}{*}{\(N\)} & \multirow{2}{*}{True Overlap} & \multirow{2}{*}{Prop. of 0s} & \multicolumn{3}{c|}{M2} & \multicolumn{3}{c}{M4} \\ & & & Overlap & FPR & TPR & Overlap & FPR & TPR \\ \hline & & 0.15 & 0.03 & 0.04 & 0.69 & 0.00 & 0.00 & 0.90 \\ & & & (0.06) & (0.06) & (0.07) & (0.00) & (0.00) & (0.00) \\ & 0.00 & 0.30 & 0.05 & 0.05 & 0.89 & 0.00 & 0.00 & 1.00 \\ & & & (0.09) & (0.09) & (0.10) & (0.00) & (0.00) & (0.00) \\ & & 0.60 & 0.01 & 0.01 & 0.98 & 0.00 & 0.00 & 1.00 \\ & & & (0.04) & (0.04) & (0.04) & (0.00) & (0.00) & (0.00) \\ & & 0.80 & 0.04 & 0.04 & 0.99 & 0.00 & 0.00 & 1.00 \\ & & & (0.06) & (0.06) & (0.03) & (0.00) & (0.00) & (0.00) \\ & & 0.95 & 0.10 & 0.06 & 0.89 & 0.08 & 0.04 & 0.95 \\ & & & (0.20) & (0.10) & (0.20) & (0.10) & (0.11) & (0.08) \\ \hline 5000 & & 0.15 & 0.24 & 0.01 & 0.67 & 0.50 & 0.00 & 0.90 \\ & & & (0.16) & (0.05) & (0.09) & (0.00) & (0.00) & (0.03) \\ & 0.50 & 0.30 & 0.49 & 0.01 & 0.94 & 0.50 & 0.00 & 1.00 \\ & & & (0.08) & (0.05) & (0.06) & (0.00) & (0.00) & (0.00) \\ & & 0.60 & 0.51 & 0.00 & 0.93 & 0.50 & 0.00 & 1.00 \\ & & & (0.03) & (0.00) & (0.06) & (0.00) & (0.00) & (0.00) \\ & & 0.80 & 0.56 & 0.07 & 0.97 & 0.49 & 0.01 & 1.00 \\ & & & (0.09) & (0.08) & (0.04) & (0.02) & (0.05) & (0.00) \\ & & 0.95 & 0.65 & 0.20 & 0.85 & 0.48 & 0.04 & 0.92 \\ & & & (0.31) & (0.17) & (0.22) & (0.10) & (0.15) & (0.18) \\ \hline & & 0.15 & 0.40 & 0.03 & 0.66 & 1.00 & 0.00 & 0.90 \\ & & & (0.25) & (0.05) & (0.07) & (0.00) & (0.00) & (0.00) \\ & 1.00 & 0.30 & 0.90 & 0.04 & 0.86 & 1.00 & 0.00 & 1.00
\\ & & & (0.14) & (0.08) & (0.08) & (0.00) & (0.00) & (0.00) \\ & & 0.60 & 1.00 & 0.00 & 0.96 & 1.00 & 0.00 & 1.00 \\ & & & (0.00) & (0.00) & (0.05) & (0.00) & (0.00) & (0.00) \\ & & 0.80 & 1.00 & 0.00 & 1.00 & 1.00 & 0.00 & 1.00 \\ & & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ & & 0.95 & 0.35 & 0.15 & 0.85 & 0.89 & 0.10 & 0.89 \\ & & & (0.21) & (0.16) & (0.22) & (0.21) & (0.11) & (0.10) \\ \hline \hline \end{tabular} \end{table} Table 3: Results of synthetic experiments on model selection for models M2 and M4; corresponding standard deviations are shown in brackets below each entry.

For the state of Connecticut, 1,513,655 (\(\approx 1.5\) million) data records were obtained for the year 2008, at 281 zip-codes. Zip-codes are areal units; we consider the latitude-longitude pair corresponding to the centroid of each zip-code as its point-referenced counterpart for our application purposes. The distance between two zip-codes is then specified as the Euclidean distance between their centroids. The proportion of zeros in the payments is 95.73%. From an insurer's perspective, policy rate-making is the problem of assigning a policy premium to a new customer's policy based on their covariate information (for instance, individual-level and policy variables and the residence zip-code). We achieve this via out-sample prediction. To that end, we consider a 60-40 split of the data; the split is performed using stratified sampling without replacement over zip-codes, such that the same 281 zip-codes are also available in the out-sample. The training data then contains \(N_{tr}=908,741\) observations, with \(N_{pr}=604,914\) observations kept in reserve for prediction, constituting the out-sample data. We denote payments towards a policy made by individual \(j\), residing in zip-code \(i\), as \(y_{ij}\), with an exposure of \(t_{ij}\). We assume that the policy premium, defined as \(y_{ij}^{*}=\frac{y_{ij}}{t_{ij}}\sim Tw\left(\mu_{ij},\phi_{ij},\xi\right)\), which implies \(y_{ij}\sim Tw(t_{ij}\mu_{ij},t_{ij}^{2-\xi}\phi_{ij},\xi)\) using the scale invariance property. The following hierarchical DGLM, shown here with the spatial specification, is then fitted, \[\begin{split}\log\mu_{ij}(\mathbf{s}_{i})&=-\log t_{ij}+\mathbf{x}_{ij}^{T}(\mathbf{s}_{i})\boldsymbol{\beta}+\mathbf{f}_{ij}(\mathbf{s}_{i})^{T}\mathbf{w}(\mathbf{s}_{i}),\\ \log\phi_{ij}&=-(2-\xi)\log t_{ij}+\mathbf{z}_{ij}^{T}\boldsymbol{\gamma},\end{split} \tag{7}\] where the terms \(-\log t_{ij}\) and \(-(2-\xi)\log t_{ij}\) act as offsets for the respective mean and dispersion models. Given the covariates described at the beginning, \(p=q=29\), producing a \(N_{tr}\times(p-1)\) design matrix for the mean model and a \(N_{tr}\times q\) design matrix for the dispersion model. The model in (7) specifies model M3 from Table 1; M4 is obtained by specifying \(\pi(\boldsymbol{\theta}_{vs})\) from (6) on \(\boldsymbol{\theta}_{m}\); M1 is obtained by setting \(\mathbf{f}_{ij}(\mathbf{s}_{i})=\mathbf{0}\), and M2 is obtained by specifying \(\pi(\boldsymbol{\theta}_{vs})\) on \(\boldsymbol{\theta}_{m}\) for the resulting model. For M1 and M2 we include an intercept in the mean model. We fit models M1-4 on the training data.

Figure 3: (left) Spatial plot of zip-code level aggregated pure-premium \(\times 10^{-6}\) for the state of Connecticut, 2008; (right) histogram of the pure-premium overlaid with a probability density estimate.
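The exposure offsets in (7) follow directly from the scale invariance property; the sketch below shows how the two linear predictors would incorporate them for a given exposure vector \(t\) and a current value of \(\xi\). It is a schematic of the model structure in (7), not the authors' R implementation.

```python
import numpy as np

def linear_predictors(X, Z, beta, gamma, w_at_obs, t, xi):
    """Mean/dispersion predictors of model (7), with the exposure offsets
    -log(t) and -(2 - xi) * log(t) entering the two models; w_at_obs = F w."""
    log_mu = -np.log(t) + X @ beta + w_at_obs
    log_phi = -(2.0 - xi) * np.log(t) + Z @ gamma
    return np.exp(log_mu), np.exp(log_phi)
```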
Model selection is performed using FDR-based variable selection on the posterior MCMC samples from fitting models M2 and M4, controlling the FDR at 1%. The performance of M1-4 is assessed using the Akaike Information Criterion (AIC) (Akaike, 1998). While specifying (5) and (6), the hyper-parameter settings used are \(\sigma_{\beta}^{2}=\sigma_{\gamma}^{2}=10^{6}\), \(a_{\sigma_{\beta}}=a_{\sigma}=a_{\sigma_{\gamma}}=2\) and \(b_{\sigma_{\beta}}=b_{\sigma}=b_{\sigma_{\gamma}}=1\) (generating inverse-Gamma priors with mean 1 and infinite variance), \(a_{\phi_{s}}=0\), \(b_{\phi_{s}}=60\). We maintain \(a_{\xi}=1\) and \(b_{\xi}=2\) for all models. For ease of implementation, the fractal parameter \(\nu\) is fixed at 0.5, producing the exponential covariance kernel. We consider \(1\times 10^{5}\) MCMC iterations for generating samples from the respective posteriors, with burn-in diagnosed at \(5\times 10^{4}\), and include every 20-th sample in computing posterior estimates as our thinning strategy. Convergence is assessed by inspecting trace plots. Proposal variances were scaled adaptively to provide optimal acceptance rates of 58% (MALA) and 33% (M-H). Predictive performance for models M1-4 is judged by the square root deviance on the out-sample data. The results are shown in Table 6, with optimal values marked in bold. We show the results for estimated model coefficients featuring Bayesian variable selection (models M2 and M4) in Tables 4 and 5; results for M1 and M3 are postponed to Tables 11, 12, 13 and 14 in the Supplementary Materials. Posterior estimates for the spatial effects in models M3 and M4 are shown in Figure 4; zip-codes with significant effects are color coded appropriately.

Figure 4: Spatial plots showing the posterior estimates of spatial effects for 281 zip-codes from (left) M3 and (right) M4 in Connecticut. The locations are color coded based on significance, with white indicating a location with 0 in its HPD interval, and blue (red) indicating an HPD interval with both endpoints negative (positive).

Comparing the models, we observe that M4 produces the best model fit criteria among the models considered; this extends to out-sample performance when predicting policy premiums. Plots produced for spatial effects in models M3 and M4 are mean adjusted. Since the specifications of M3 and M4 differ only in the presence/absence of the hierarchical Bayesian variable selection component, the produced spatial effects mimic each other after adjusting for the mean. Comparing the results in Tables 4 and 5, we observe that including the spatial effect results in more categories of vehicle age, driver age and deductible being selected. Overall, the findings remain consistent with our earlier research (see, Halder et al., 2021), but within a more robust model choice framework; this is evident when comparing model estimates between Table 5 and Tables 13 and 14 in the Supplement. Notably, marital status and the interaction between gender and marital status are not selected. We conclude by observing that the spatial effects are significantly positive in major cities in Connecticut, indicating higher spatial risk, as opposed to sparsely populated regions showing significantly lower risk.
\begin{table} \begin{tabular}{c|c|c|c c c c c c} \hline \hline
 & Parameters & Levels & MAP & Median & Mean & SD & Lower HPD & Upper HPD \\ \hline
\(\beta\) & (Intercept) & – & 5.815 & 5.965 & 5.955 & 0.151 & 5.706 & 6.227 \\
 & age.car & 1 & 0.113 & 0.112 & 0.110 & 0.033 & 0.044 & 0.173 \\
 & age.car & 2 & 0.234 & 0.229 & 0.240 & 0.030 & 0.186 & 0.304 \\
 & age.car & 5 & 0.124 & 0.122 & 0.121 & 0.033 & 0.062 & 0.189 \\
 & risk & S & -0.213 & -0.217 & -0.217 & 0.023 & -0.258 & -0.168 \\
 & age & 3 & -0.399 & -0.399 & -0.400 & 0.030 & -0.454 & -0.311 \\
 & age & 4 & -0.518 & -0.515 & -0.515 & 0.035 & -0.580 & -0.444 \\
 & age & 5 & -0.712 & -0.710 & -0.708 & 0.065 & -0.832 & -0.575 \\
 & gender & F & 0.458 & 0.502 & 0.513 & 0.067 & 0.400 & 0.648 \\
 & gender & M & 0.608 & 0.614 & 0.619 & 0.053 & 0.521 & 0.731 \\
 & marital & M & -0.376 & -0.392 & -0.408 & 0.067 & -0.577 & -0.302 \\
 & deductible & B & 0.858 & 1.051 & 1.104 & 0.244 & 0.740 & 1.525 \\
 & deductible & E & 0.616 & 0.482 & 0.478 & 0.158 & 0.192 & 0.741 \\
 & deductible & F & 0.580 & 0.428 & 0.435 & 0.154 & 0.135 & 0.656 \\
 & deductible & G & 0.287 & 0.348 & 0.358 & 0.159 & 0.073 & 0.633 \\ \hline
\(\gamma\) & (Intercept) & – & 7.354 & 7.346 & 7.345 & 0.048 & 7.249 & 7.435 \\
 & age.car & 0 & -0.820 & -0.811 & -0.811 & 0.053 & -0.911 & -0.714 \\
 & age.car & 1 & -1.024 & -1.017 & -1.016 & 0.051 & -1.115 & -0.922 \\
 & age.car & 2 & -0.888 & -0.874 & -0.873 & 0.052 & -0.960 & -0.776 \\
 & age.car & 3 & -0.864 & -0.854 & -0.851 & 0.052 & -0.953 & -0.700 \\
 & age.car & 4 & -0.843 & -0.838 & -0.838 & 0.052 & -0.936 & -0.743 \\
 & age.car & 5 & -0.788 & -0.781 & -0.780 & 0.052 & -0.877 & -0.682 \\
 & age.car & 6 & -0.770 & -0.765 & -0.765 & 0.053 & -0.862 & -0.665 \\
 & age.car & 7 & -0.730 & -0.723 & -0.723 & 0.053 & -0.829 & -0.627 \\
 & risk & S & 0.114 & 0.114 & 0.114 & 0.011 & 0.093 & 0.135 \\
 & age & 5 & -0.289 & -0.293 & -0.293 & 0.029 & -0.345 & -0.234 \\
 & deductible & B & -0.971 & -0.963 & -0.951 & 0.099 & -1.136 & -0.759 \\
 & deductible & C & -0.519 & -0.522 & -0.517 & 0.049 & -0.614 & -0.414 \\
 & deductible & D & -0.965 & -0.568 & -0.564 & 0.044 & -0.657 & -0.470 \\
 & deductible & E & -0.506 & -0.501 & -0.495 & 0.042 & -0.568 & -0.394 \\
 & deductible & F & -0.348 & -0.346 & -0.340 & 0.039 & -0.410 & -0.242 \\
 & deductible & H & 0.411 & 0.419 & 0.426 & 0.097 & 0.251 & 0.618 \\
 & genderMarital & A & -0.113 & -0.106 & -0.097 & 0.043 & -0.168 & -0.001 \\ \hline
\(\xi\) & – & – & 1.673 & 1.673 & 1.673 & 0.001 & 1.671 & 1.676 \\ \hline \hline
\end{tabular} \end{table} Table 4: Estimated coefficients for the fixed effects corresponding to model M2. We show the maximum a-posteriori (MAP) estimates along with the median, mean, standard deviation and highest posterior density (HPD) intervals. The variables listed are a result of FDR-based variable selection at 1%.
\begin{table}
\begin{tabular}{c|c|c|c c c c c c}
\hline \hline
 & Parameters & Levels & MAP & Median & Mean & SD & Lower HPD & Upper HPD \\
\hline \hline
\multirow{24}{*}{\(\beta\)} & (Intercept) & – & 5.215 & 5.213 & 5.216 & 0.135 & 5.165 & 5.327 \\
 & & 0 & 0.252 & 0.251 & 0.251 & 0.063 & 0.123 & 0.377 \\
 & & 1 & 0.329 & 0.317 & 0.318 & 0.058 & 0.217 & 0.431 \\
 & & 2 & 0.424 & 0.423 & 0.425 & 0.061 & 0.318 & 0.549 \\
 & age.car & 3 & 0.209 & 0.193 & 0.195 & 0.060 & 0.084 & 0.318 \\
 & & 4 & 0.236 & 0.252 & 0.255 & 0.062 & 0.149 & 0.387 \\
 & & 5 & 0.297 & 0.302 & 0.307 & 0.059 & 0.191 & 0.428 \\
 & & 6 & 0.182 & 0.209 & 0.214 & 0.061 & 0.113 & 0.343 \\
 & & 7 & 0.155 & 0.154 & 0.157 & 0.063 & 0.049 & 0.286 \\
 & risk & S & -0.190 & -0.184 & -0.183 & 0.023 & -0.226 & -0.137 \\
 & & 2 & -0.121 & -0.126 & -0.128 & 0.030 & -0.186 & -0.068 \\
 & age & 3 & -0.461 & -0.467 & -0.470 & 0.031 & -0.531 & -0.412 \\
 & & 4 & -0.608 & -0.617 & -0.619 & 0.034 & -0.685 & -0.554 \\
 & & 5 & -0.721 & -0.699 & -0.696 & 0.065 & -0.808 & -0.561 \\
 & gender & F & 0.535 & 0.540 & 0.541 & 0.066 & 0.407 & 0.679 \\
 & & M & 0.670 & 0.705 & 0.720 & 0.100 & 0.535 & 0.919 \\
 & & B & 1.337 & 1.383 & 1.408 & 0.216 & 1.036 & 1.866 \\
 & & C & 0.209 & 0.256 & 0.303 & 0.152 & 0.073 & 0.624 \\
 & & D & 0.603 & 0.627 & 0.672 & 0.161 & 0.397 & 0.982 \\
 & deductible & E & 0.800 & 0.825 & 0.864 & 0.157 & 0.625 & 1.189 \\
 & & F & 0.753 & 0.774 & 0.819 & 0.155 & 0.592 & 1.154 \\
 & & G & 0.756 & 0.786 & 0.825 & 0.158 & 0.580 & 1.169 \\
 & & H & 0.820 & 0.829 & 0.824 & 0.190 & 0.463 & 1.149 \\
 & gender.marital & B & -0.368 & -0.236 & -0.169 & 0.233 & -0.457 & 0.311 \\
\hline
\multirow{17}{*}{\(\gamma\)} & (Intercept) & – & 6.423 & 6.429 & 6.415 & 0.073 & 6.310 & 6.504 \\
 & & 0 & -0.790 & -0.809 & -0.811 & 0.040 & -0.891 & -0.731 \\
 & & 1 & -1.006 & -1.025 & -1.028 & 0.039 & -1.099 & -0.954 \\
 & & 2 & -0.875 & -0.885 & -0.889 & 0.039 & -0.972 & -0.819 \\
 & age.car & 3 & -0.842 & -0.861 & -0.863 & 0.039 & -0.950 & -0.799 \\
 & & 4 & -0.831 & -0.844 & -0.848 & 0.039 & -0.923 & -0.777 \\
 & & 5 & -0.781 & -0.792 & -0.795 & 0.039 & -0.879 & -0.728 \\
 & & 6 & -0.762 & -0.766 & -0.769 & 0.039 & -0.849 & -0.703 \\
 & & 7 & -0.725 & -0.732 & -0.735 & 0.039 & -0.814 & -0.662 \\
 & risk & S & 0.110 & 0.110 & 0.110 & 0.012 & 0.087 & 0.132 \\
 & age & 5 & -0.262 & -0.262 & -0.261 & 0.030 & -0.316 & -0.201 \\
 & & B & -1.023 & -1.026 & -1.029 & 0.158 & -1.328 & -0.721 \\
 & & C & -0.534 & -0.553 & -0.571 & 0.069 & -0.700 & -0.461 \\
 & deductible & D & -0.591 & -0.611 & -0.634 & 0.067 & -0.760 & -0.534 \\
 & & E & -0.533 & -0.552 & -0.575 & 0.066 & -0.702 & -0.478 \\
 & & F & -0.377 & -0.390 & -0.416 & 0.065 & -0.543 & -0.335 \\
 & & H & 0.352 & 0.361 & 0.369 & 0.107 & 0.170 & 0.591 \\
\hline
\(\xi\) & – & – & 1.667 & 1.667 & 1.667 & 0.001 & 1.665 & 1.670 \\
\hline \hline
\end{tabular}
\end{table} Table 5: Estimated coefficients for fixed effects corresponding to model M4. We show the MAP, median, mean, standard deviation and HPDs. The variables listed are a result of FDR-based variable selection at 1%.

## 6 Discussion

Double generalized linear models have not seen much use since their inception by Lee and Nelder (2006). In this paper, we address hindrances presented by ambiguities around model specification and choice. We propose Bayesian modeling frameworks that perform model selection using continuous spike and slab priors for hierarchical double generalized linear Tweedie spatial process models.
Leveraging Langevin dynamics, we are able to produce practical implementations for the proposed frameworks that would otherwise remain unachievable with standard MCMC techniques. The proposed algorithms are available as a publicly accessible package for the R statistical environment. Although the formulation considers the CP-g densities, such modeling could evidently be effected under any probabilistic framework that allows for varying dispersion. The application offers some key insights into the actuarial domain: it is generally believed that marital status and gender play a key role, but the model inference suggests otherwise, with marital status not being selected as a significant feature. Future work is aimed at extending this framework in multiple directions. Firstly, with the advent of modern Bayesian variable selection priors (for example, the Bayesian Lasso and the Horseshoe prior), a comparative model selection performance remains to be seen when such priors are considered within hierarchical DGLM formulations. Secondly, with the emerging techniques for handling large spatial and spatiotemporal data (see, for e.g., Heaton et al., 2019), the DGLM framework could be extended to model spatially or spatio-temporally indexed observations over massive geographical domains. With respect to our application, this would allow us to investigate properties of the premium surface over much larger domains, for instance a country-wide study. Finally, extending these models to a spatiotemporal setting could be achieved using commonly used spatiotemporal covariance kernels. Depending on the nature of spatial and temporal interaction, we can have separable and non-separable kernels at our disposal (see, Cressie, 2015, and references therein). Bayesian variable selection could then be effected to examine resulting changes in model specification upon inclusion of random effects that address spatiotemporal variation in the data.

\begin{table}
\begin{tabular}{c|c c c c}
\hline \hline
 & M1 & M2 & M3 & M4 \\
\hline
AIC & 1340060 & 1117733 & 1117114 & **1115363** \\
\hline
\(\sqrt{\text{Deviance}}\) & 5565.209 & 5509.549 & 5507.926 & **5441.23** \\
\hline \hline
\end{tabular}
\end{table} Table 6: AIC and out-sample square root deviance for models M1–M4.

**Supplementary Materials for "Bayesian Variable Selection in Double Generalized Linear Tweedie Spatial Process Models"**

Aritra Halder\({}^{a,*\dagger}\), Shariq Mohammed\({}^{b,c\dagger}\) and Dipak K. Dey\({}^{d}\)

\({}^{a}\)Department of Biostatistics, Dornsife School of Public Health, Drexel University, Philadelphia, PA, USA; \({}^{b}\)Department of Biostatistics, Boston University School of Public Health, Boston University, Boston, MA, USA; \({}^{c}\)Rafik B. Hariri Institute for Computing and Computational Science & Engineering, Boston University, Boston, MA, USA; \({}^{d}\)Department of Statistics, University of Connecticut, Storrs, CT, USA

\({}^{*}\)corresponding author.
E-mail: [email protected]; \({}^{\dagger}\)equal contribution

## 7 Posteriors

Under chosen priors on \(\{\mathbf{\theta}_{m},\mathbf{\theta}_{pr}\}=\{\mathbf{\beta},\mathbf{\gamma},\xi,\sigma^{2},\phi_{s}\}\), the resulting joint posterior is specified by

\[\begin{split}\pi\Big{(}\mathbf{\theta}_{m},\mathbf{\theta}_{pr}\mid\mathbf{y}\Big{)}&\propto U\Big{(}\xi\mid a_{\xi},b_{\xi}\Big{)}\times N_{p}\Big{(}\mathbf{\beta}\mid\mathbf{0}_{p},\sigma_{\beta}^{2}\mathbf{I}_{p}\Big{)}\times N_{q}\Big{(}\mathbf{\gamma}\mid\mathbf{0}_{q},\sigma_{\gamma}^{2}\mathbf{I}_{q}\Big{)}\times\\ &U\Big{(}\phi_{s}\mid a_{\phi_{s}},b_{\phi_{s}}\Big{)}\times\text{Gamma}\Big{(}\sigma^{-2}\mid a_{\sigma},b_{\sigma}\Big{)}\times N_{L}\Big{(}\mathbf{w}\mid\mathbf{0}_{L},\sigma^{2}\mathbf{R}(\phi_{s})\Big{)}\times\\ &Tw\Big{(}\mathbf{y}\mid\mathbf{\mu}=\exp(\mathbf{X}\mathbf{\beta}+\mathbf{F}\mathbf{w}),\mathbf{\phi}=\exp(\mathbf{Z}\mathbf{\gamma}),\xi\Big{)}.\end{split} \tag{8}\]

We list the posteriors for individual parameters in the following equations,

\[\begin{split}\pi&\left(\mathbf{\beta}\mid\mathbf{\gamma},\xi\right)\propto\exp\left\{-\left(\sum_{i=1}^{L}\sum_{j=1}^{n_{i}}\frac{1}{\phi_{ij}}d(y_{ij}\mid\mu_{ij}(\mathbf{\beta},\mathbf{w}),\xi)+\frac{\sigma_{\beta}^{-2}}{2}||\mathbf{\beta}||_{2}^{2}+\frac{\sigma^{-2}}{2}\mathbf{w}^{T}\mathbf{R}^{-1}(\phi_{s})\mathbf{w}\right)\right\},\\ \pi&\left(\mathbf{\gamma}\mid\mathbf{\beta},\xi\right)\propto\exp\left\{-\left(\sum_{i=1}^{L}\sum_{j=1}^{n_{i}}\frac{1}{\phi_{ij}(\mathbf{\gamma})}d(y_{ij}\mid\mu_{ij},\xi)+\frac{\log\phi_{ij}(\mathbf{\gamma})}{2}I(y_{ij}>0)+\frac{\sigma_{\gamma}^{-2}}{2}||\mathbf{\gamma}||_{2}^{2}\right)\right\},\\ \pi&(\xi\mid\mathbf{\beta},\mathbf{\gamma})\propto\prod_{i=1}^{L}\prod_{j=1}^{n_{i}}c_{ij}(y_{ij}\mid\phi_{ij},\xi)\exp\left\{-\frac{1}{\phi_{ij}}d(y_{ij}\mid\mu_{ij},\xi)\right\}I\left(\xi\in(a_{\xi},b_{\xi})\right),\\ \pi&(\phi_{s})\propto|\mathbf{R}(\phi_{s})|^{-1/2}\exp\left(-\frac{\sigma^{-2}}{2}\mathbf{w}^{T}\mathbf{R}^{-1}(\phi_{s})\mathbf{w}\right)I\left(\phi_{s}\in(a_{\phi_{s}},b_{\phi_{s}})\right),\\ \pi&(\sigma^{-2})=\text{Gamma}\left(a_{\sigma}+\frac{L}{2},b_{\sigma}+\frac{1}{2}\mathbf{w}^{T}\mathbf{R}^{-1}(\phi_{s})\mathbf{w}\right),\end{split} \tag{9}\]

where \(|\cdot|\) denotes the determinant and \(d(y_{ij}\mid\mu_{ij},\xi)\) is the deviance function.
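As an example of the conjugate updates above, the last line of (9) yields a direct Gibbs draw for \(\sigma^{2}\); a minimal sketch, assuming the inverse correlation matrix \(\mathbf{R}^{-1}(\phi_{s})\) is available:

```r
# Gibbs draw implied by (9): sigma^{-2} | - ~ Gamma(a + L/2, b + w' R^{-1} w / 2).
# `w` is the latent spatial vector; `R_inv` the inverse correlation matrix.
draw_sigma2 <- function(w, R_inv, a_sigma = 2, b_sigma = 1) {
  shape <- a_sigma + length(w) / 2
  rate  <- b_sigma + 0.5 * drop(t(w) %*% R_inv %*% w)
  1 / rgamma(1, shape = shape, rate = rate)   # invert the precision draw
}
```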
In the scenario with no spatial effect in the DGLM, the reduced set of parameters is \(\mathbf{\theta}_{m}=\{\mathbf{\beta},\mathbf{\gamma},\xi\}\), with a similar joint posterior obtained after omitting the second row involving the spatial process prior specification and setting \(\mathbf{\mu}=\exp(\mathbf{X}\mathbf{\beta})\) in the likelihood. Keeping the same prior specifications on the other parameters, the resulting posteriors are as follows,

\[\pi\left(\mathbf{\beta}\mid\mathbf{\gamma},\xi\right)\propto\exp\left\{-\left(\sum_{i=1}^{N}\frac{1}{\phi_{i}}d(y_{i}\mid\mu_{i}(\mathbf{\beta}),\xi)+\frac{\sigma_{\beta}^{-2}}{2}||\mathbf{\beta}||_{2}^{2}\right)\right\},\]
\[\pi\left(\mathbf{\gamma}\mid\mathbf{\beta},\xi\right)\propto\exp\left\{-\left(\sum_{i=1}^{N}\frac{1}{\phi_{i}(\mathbf{\gamma})}d(y_{i}\mid\mu_{i},\xi)+\frac{1}{2}\log\phi_{i}(\mathbf{\gamma})I(y_{i}>0)+\frac{\sigma_{\gamma}^{-2}}{2}||\mathbf{\gamma}||_{2}^{2}\right)\right\},\]
\[\pi(\xi)\propto\prod_{i=1}^{N}c_{i}(y_{i}\mid\phi_{i},\xi)\exp\left\{-\frac{1}{\phi_{i}}d(y_{i}\mid\mu_{i},\xi)\right\}I\left(\xi\in(a_{\xi},b_{\xi})\right).\]

Updates leveraging MALA for \(\mathbf{\beta},\mathbf{w}\) (or \(\mathbf{\beta}\)) and \(\mathbf{\gamma}\) require the proposals to be specified appropriately using the gradients of the log-posterior densities, \(\nabla\log\pi\left(\mathbf{\beta}\mid\mathbf{\gamma},\xi\right)\) and \(\nabla\log\pi\left(\mathbf{\gamma}\mid\mathbf{\beta},\xi\right)\) respectively. Candidate samples are obtained using,

\[\begin{split}&\left(\mathbf{\beta},\mathbf{w}\right)^{T*}=\left(\mathbf{\beta},\mathbf{w}\right)^{T}+\frac{\tau_{\beta,w}^{2}}{2}\mathbf{A}_{\beta,w}\nabla\log\pi\left(\mathbf{\beta}\mid\mathbf{\gamma},\xi\right)+\tau_{\beta,w}\mathbf{A}_{\beta,w}^{1/2}\cdot N_{p+L}(\mathbf{0},\mathbf{I}_{p+L}),\\ &\mathbf{\gamma}^{*}=\mathbf{\gamma}+\frac{\tau_{\gamma}^{2}}{2}\mathbf{A}_{\gamma}\nabla\log\pi\left(\mathbf{\gamma}\mid\mathbf{\beta},\xi\right)+\tau_{\gamma}\mathbf{A}_{\gamma}^{1/2}\cdot N_{q}(\mathbf{0},\mathbf{I}_{q}),\end{split} \tag{10}\]

where \(\mathbf{A}_{\beta,w}^{-1}=\mathrm{E}\left[-\nabla^{2}\log\pi\left(\mathbf{\beta},\mathbf{w}\mid-\right)\right]\) and \(\mathbf{A}_{\gamma}^{-1}=\mathrm{E}\left[-\nabla^{2}\log\pi\left(\mathbf{\gamma}\mid-\right)\right]\).
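For concreteness, one preconditioned MALA transition implied by (10), including the Metropolis correction for the asymmetric proposal, can be sketched as follows; `log_post`, `grad` and the preconditioner `A` are assumed to be supplied by the surrounding sampler.

```r
# Minimal sketch of one preconditioned MALA update with MH correction.
mala_step <- function(theta, log_post, grad, A, tau) {
  L <- t(chol(A))                                   # L %*% t(L) = A
  mu_fwd <- theta + 0.5 * tau^2 * drop(A %*% grad(theta))
  prop   <- mu_fwd + tau * drop(L %*% rnorm(length(theta)))
  mu_bwd <- prop + 0.5 * tau^2 * drop(A %*% grad(prop))
  # log N(x; m, tau^2 A) up to constants that cancel in the MH ratio
  ldq <- function(x, m) { z <- forwardsolve(L, x - m); -sum(z^2) / (2 * tau^2) }
  log_acc <- log_post(prop) - log_post(theta) +
             ldq(theta, mu_bwd) - ldq(prop, mu_fwd)
  if (log(runif(1)) < log_acc) prop else theta      # accept or stay
}
```

The step size `tau` is what the adaptive scaling mentioned earlier tunes toward the 58% optimal acceptance rate.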
Under a continuous spike and slab prior on \(\mathbf{\beta}\) and \(\mathbf{\gamma}\), the posteriors for the hierarchical latent DGLM are as follows,

\[\begin{split}&\pi\left(\mathbf{\beta},\mathbf{w}\mid-\right)\propto\exp\left\{-\left(\sum_{i=1}^{L}\sum_{j=1}^{n_{i}}\frac{1}{\phi_{ij}}d(y_{ij}\mid\mu_{ij}(\mathbf{\beta},\mathbf{w}),\xi)+\frac{1}{2}\mathbf{\beta}^{T}\mathbf{\Gamma}_{\beta}^{-1}\mathbf{\beta}+\frac{\sigma^{-2}}{2}\mathbf{w}^{T}\mathbf{R}^{-1}(\phi_{s})\mathbf{w}\right)\right\},\\ &\mathbf{\Gamma}_{\beta}=\mathrm{diag}\{\zeta_{\beta,i_{b}}\sigma_{\beta,i_{b}}^{2}\},\\ &\pi(\zeta_{\beta,i_{b}}\mid-)=\frac{\alpha_{1,\beta}}{\alpha_{1,\beta}+\alpha_{2,\beta}}\delta_{\nu_{0}}(\cdot)+\frac{\alpha_{2,\beta}}{\alpha_{1,\beta}+\alpha_{2,\beta}}\delta_{\nu_{1}}(\cdot),\\ &\pi(\sigma_{\beta,i_{b}}^{-2}\mid-)=\mathrm{Gamma}\left(a_{\sigma}+\frac{1}{2},b_{\sigma}+\frac{\beta_{i_{b}}^{2}}{2\zeta_{\beta,i_{b}}}\right),\\ &\pi(\alpha_{\beta}\mid-)=\mathrm{Beta}\left(1+\#\{i_{b}:\zeta_{\beta,i_{b}}=1\},1+\#\{i_{b}:\zeta_{\beta,i_{b}}=\nu_{0}\}\right),\\ &\pi\left(\mathbf{\gamma}\mid-\right)\propto\exp\left\{-\left(\sum_{i=1}^{L}\sum_{j=1}^{n_{i}}\frac{1}{\phi_{ij}(\mathbf{\gamma})}d(y_{ij}\mid\mu_{ij},\xi)+\frac{1}{2}\log\phi_{ij}(\mathbf{\gamma})I(y_{ij}>0)+\frac{1}{2}\mathbf{\gamma}^{T}\mathbf{\Gamma}_{\gamma}^{-1}\mathbf{\gamma}\right)\right\},\\ &\mathbf{\Gamma}_{\gamma}=\mathrm{diag}\{\zeta_{\gamma,i_{g}}\sigma_{\gamma,i_{g}}^{2}\},\\ &\pi(\zeta_{\gamma,i_{g}}\mid-)=\frac{\alpha_{1,\gamma}}{\alpha_{1,\gamma}+\alpha_{2,\gamma}}\delta_{\nu_{0}}(\cdot)+\frac{\alpha_{2,\gamma}}{\alpha_{1,\gamma}+\alpha_{2,\gamma}}\delta_{\nu_{1}}(\cdot),\\ &\pi(\sigma_{\gamma,i_{g}}^{-2}\mid-)=\mathrm{Gamma}\left(a_{\sigma}+\frac{1}{2},b_{\sigma}+\frac{\gamma_{i_{g}}^{2}}{2\zeta_{\gamma,i_{g}}}\right),\\ &\pi(\alpha_{\gamma}\mid-)=\mathrm{Beta}\left(1+\#\{i_{g}:\zeta_{\gamma,i_{g}}=1\},1+\#\{i_{g}:\zeta_{\gamma,i_{g}}=\nu_{0}\}\right),\\ &\pi(\xi\mid-)\propto\prod_{i=1}^{L}\prod_{j=1}^{n_{i}}c_{ij}(y_{ij}\mid\phi_{ij},\xi)\exp\left\{-\frac{1}{\phi_{ij}}d(y_{ij}\mid\mu_{ij},\xi)\right\}I\left(\xi\in(a_{\xi},b_{\xi})\right),\\ &\pi(\phi_{s}\mid-)\propto|\mathbf{R}(\phi_{s})|^{-1/2}\exp\left(-\frac{\sigma^{-2}}{2}\mathbf{w}^{T}\mathbf{R}^{-1}(\phi_{s})\mathbf{w}\right)I\left(\phi_{s}\in(a_{\phi_{s}},b_{\phi_{s}})\right),\\ &\pi(\sigma^{-2}\mid-)=\mathrm{Gamma}\left(a_{\sigma}+\frac{L}{2},b_{\sigma}+\frac{1}{2}\mathbf{w}^{T}\mathbf{R}^{-1}(\phi_{s})\mathbf{w}\right),\end{split} \tag{11}\]

where,

\[\begin{split}&\alpha_{1,\beta}=(1-\alpha_{\beta})\nu_{0}^{-1/2}\exp\left\{-\frac{\beta_{i_{b}}^{2}}{2\nu_{0}\sigma_{\beta,i_{b}}^{2}}\right\},\ \alpha_{2,\beta}=\alpha_{\beta}\exp\left\{-\frac{\beta_{i_{b}}^{2}}{2\sigma_{\beta,i_{b}}^{2}}\right\},\ i_{b}=1,2,\ldots,p;\\ &\alpha_{1,\gamma}=(1-\alpha_{\gamma})\nu_{0}^{-1/2}\exp\left\{-\frac{\gamma_{i_{g}}^{2}}{2\nu_{0}\sigma_{\gamma,i_{g}}^{2}}\right\},\ \alpha_{2,\gamma}=\alpha_{\gamma}\exp\left\{-\frac{\gamma_{i_{g}}^{2}}{2\sigma_{\gamma,i_{g}}^{2}}\right\},\ i_{g}=1,2,\ldots,q.\end{split}\]

Under no latent specification the resulting posteriors can be obtained similarly, by omitting updates for the latent process and the associated process parameters/hyper-parameters. The joint posterior is sampled by leveraging the posteriors in eq. (11) and employing a similar hybrid sampling strategy as earlier, with additional Gibbs updates for the spike and slab parameters, \(\{\mathbf{\zeta}_{\beta},\mathbf{\sigma}_{\beta}^{2},\alpha_{\beta},\mathbf{\zeta}_{\gamma},\mathbf{\sigma}_{\gamma}^{2},\alpha_{\gamma}\}\).
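As an illustration of these additional Gibbs steps, a minimal sketch of the update for a single indicator \(\zeta_{\beta,i_{b}}\), written directly from the mixture weights \(\alpha_{1,\beta},\alpha_{2,\beta}\) above, is given below; variable names are illustrative.

```r
# Gibbs update for one spike-and-slab indicator zeta_{beta,i_b}.
# beta_j: current coefficient; s2_j: its variance sigma^2_{beta,i_b};
# a: inclusion probability alpha_beta; nu0: spike scale.
update_zeta <- function(beta_j, s2_j, a, nu0) {
  a1 <- (1 - a) * nu0^(-0.5) * exp(-beta_j^2 / (2 * nu0 * s2_j))  # spike weight
  a2 <- a * exp(-beta_j^2 / (2 * s2_j))                           # slab weight
  if (runif(1) < a2 / (a1 + a2)) 1 else nu0    # zeta takes values in {nu0, 1}
}
```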
## 8 Experiments, Diagnostics and Software

We outline supporting simulation experiments, convergence diagnostics and details on available software to supplement Section 3 of the manuscript. We begin with results from additional experiments.

### Supporting Experiments

We performed simulation experiments to assess the performance of model M2; Table 7 shows the results featuring the same settings as outlined in Section 3, only without the spatial effects. In what follows, we outline simulation experiments for the scenario where we have spatial covariates in our data. Finally, we show simulation results for different synthetic spatial patterns.

\begin{table}
\begin{tabular}{c|c|c c c}
\hline \hline
Configuration & Prop. of zeros & \(CP(\mathbf{\theta})\) & False Positive Rate & True Positive Rate \\
\hline
\multirow{4}{*}{Configuration 1} & 0.15 & 1.00 (0.00) & 0.00 (0.00) & 1.00 (0.00) \\
 & 0.30 & 1.00 (0.00) & 0.00 (0.00) & 1.00 (0.00) \\
 & 0.60 & 1.00 (0.00) & 0.00 (0.00) & 1.00 (0.00) \\
 & 0.80 & 1.00 (0.10) & 0.01 (0.05) & 0.99 (0.02) \\
\hline
\multirow{3}{*}{Configuration 2} & 0.15 & 1.00 (0.00) & 0.01 (0.00) & 0.99 (0.03) \\
 & 0.30 & 1.00 (0.00) & 0.00 (0.00) & 1.00 (0.00) \\
 & 0.60 & 1.00 (0.10) & 0.00 (0.00) & 0.98 (0.04) \\
\hline
\multirow{4}{*}{Configuration 3} & 0.15 & 0.90 (0.10) & 0.01 (0.05) & 0.99 (0.03) \\
 & 0.30 & 1.00 (0.00) & 0.01 (0.04) & 1.00 (0.00) \\
 & 0.60 & 1.00 (0.00) & 0.00 (0.00) & 0.99 (0.04) \\
 & 0.80 & 1.00 (0.00) & 0.05 (0.00) & 0.96 (0.03) \\
\hline \hline
\end{tabular}
\end{table} Table 7: Results for synthetic experiments corresponding to Table 3 for model M2 showing average coverage probabilities for the estimated model coefficients.

#### 8.1.1 Spatial Covariates

Setting up experiments with spatial covariates, we are mindful of possible endogeneity issues arising from included spatial covariates being correlated with the true spatial pattern (see, for e.g., Fan and Liao, 2014). We use the following settings and true values: \(N=1\times 10^{4}\), \(L=1\times 10^{2}\), \(\mathbf{\beta}=(1.0,1.5,1\times 10^{-5},1.4,1.1,1\times 10^{-5},2.5)^{T}\), \(\mathbf{\gamma}=(1.0,1\times 10^{-5},1.5,1.1,1\times 10^{-5},2.5,1\times 10^{-5},-2.5,1\times 10^{-5})^{T}\). In \(\mathbf{X}\) and \(\mathbf{Z}\) we include two spatial covariates in the last two columns: \(\mathbf{x}_{6}=\mathbf{z}_{6}\sim N(5(\cos(3\pi s_{x})+\sin(3\pi s_{y})),1)\) and \(\mathbf{x}_{7}=\mathbf{z}_{7}\sim N(2(\cos(\pi s_{x})+\sin(\pi s_{y})),1)\), while the rest were sampled from standard Gaussian distributions, resulting in an average of 50% zeros in the data. Under this setup, we observe that for the mean model \(\mathbf{x}_{6}\) is significant and \(\mathbf{x}_{7}\) is not, while for the dispersion model it is the other way around. The true spatial effect is simulated using \(\sigma^{2}=1.5\) and \(\phi_{s}=3\) from an exponential kernel. The true index parameter is \(\xi=1.5\). The design matrices were centered and scaled. The resulting absolute value of the correlations within covariates, and between them and the true spatial effect, was \(<0.01\).
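To make the setup concrete, a sketch of the data-generating step is given below. It assumes the `tweedie` package, holds the dispersion constant rather than following \(\exp(\mathbf{Z}\mathbf{\gamma})\), treats \(\phi_{s}\) as the decay rate of the exponential kernel, and places \(\mathbf{x}_{6},\mathbf{x}_{7}\) at column positions that match the stated significance pattern; all of these are illustrative simplifications.

```r
# Sketch of simulating data under the spatial-covariate settings above.
library(tweedie)
set.seed(1)
L <- 100; N <- 1e4
s   <- cbind(runif(L), runif(L))           # L locations on the unit square
loc <- sample(L, N, replace = TRUE)        # map observations to locations

# the two spatial covariates x6, x7 as specified above
x6 <- rnorm(N, 5 * (cos(3 * pi * s[loc, 1]) + sin(3 * pi * s[loc, 2])), 1)
x7 <- rnorm(N, 2 * (cos(pi * s[loc, 1]) + sin(pi * s[loc, 2])), 1)

# latent spatial effect: sigma2 = 1.5, phi_s = 3, exponential kernel
D <- as.matrix(dist(s))
w <- drop(t(chol(1.5 * exp(-3 * D))) %*% rnorm(L))

# design: x6 gets coefficient 1.5 (significant), x7 gets 1e-5 (negligible)
X    <- cbind(1, scale(x6), matrix(rnorm(N * 3), N, 3), scale(x7), rnorm(N))
beta <- c(1.0, 1.5, 1e-5, 1.4, 1.1, 1e-5, 2.5)
mu   <- exp(drop(X %*% beta) + w[loc])
y    <- rtweedie(N, xi = 1.5, mu = mu, phi = 1)  # true index parameter 1.5
mean(y == 0)                                     # proportion of exact zeros
```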
To assess the performance of our models, we focus on M3 and M4 and compute the MSE, coverage probability, FPR and TPR for each of these models. We performed 100 replications under these settings. The results are shown below in Table 8. Models M3 and M4 were able to estimate coefficients \(\beta_{6}\), \(\beta_{7}\) and \(\gamma_{6}\), \(\gamma_{7}\) with sufficient accuracy.

#### 8.1.2 Spatial Patterns

We show the results for simulation experiments performed for the patterns shown in Figure 1 of the manuscript in Tables 9 and 10 below.

[Tables 8, 9 and 10, reporting MSE, coverage probabilities and false/true positive rates for the spatial-covariate and spatial-pattern experiments, are not recoverable from the source scan.]

### Convergence

To assess convergence, we resort to monitoring trace plots, posterior density and auto-correlation (ACF) plots for model parameters and coefficients. We show them particularly for the index parameter \(\xi\) in Figure 5 below. They were produced using parameter settings outlined in the examples within the R-package. Additional plots for \(\mathbf{\beta}\), \(\mathbf{\gamma}\), \(\sigma^{2}\) and \(\phi_{s}\) are included in examples within the R-package described in the next subsection.

Figure 5: Convergence diagnostics for posterior samples of the index parameter, \(\xi\). Each row corresponds to a model: (first row) M1, (second row) M2, (third row) M3, (fourth row) M4. In each row we show (left) trace, (center) posterior probability density and (right) ACF plots.
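For readers reproducing these checks, the same diagnostics can be generated with the `coda` package; a minimal sketch, assuming `xi_samples` holds the thinned, post burn-in draws of \(\xi\):

```r
# Convergence diagnostics for the index parameter xi using coda.
library(coda)
xi_mc <- mcmc(xi_samples)
traceplot(xi_mc)        # trace plot
densplot(xi_mc)         # posterior density
autocorr.plot(xi_mc)    # ACF plot
effectiveSize(xi_mc)    # effective sample size as a numerical check
```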
### R-package: sptwdglm

Algorithms featuring the posteriors in Section 7 constitute our implementation, which is available in the publicly accessible open-source repository: [https://github.com/arh926/sptwdglm](https://github.com/arh926/sptwdglm). It is composed of four Markov chain Monte Carlo samplers and an FDR utility, as described below. Each function is accompanied by examples in the package.

1. dglm.autograd.R: Fits a DGLM to the data. At a minimum, it requires the response as a vector \(y\), covariates as matrices \(\mathbf{X}\) and \(\mathbf{Z}\), and upper and lower bounds for the index parameter \(\xi\) to produce an output. Other optional parameters, with optimized defaults for ease of use, are provided for custom fine-tuning; the same holds for the three samplers below. This is model M1 in the manuscript.
2. ssdglm.autograd.R: Fits a DGLM featuring variable selection via the spike and slab prior to the data, with the same minimal inputs. This is model M2 in the manuscript.
3. spdglm.autograd.R: Fits a spatial DGLM to the data; in addition to the inputs above, it requires the coordinates as a matrix. This is model M3 in the manuscript.
4. spssdglm.autograd.R: Fits a spatial DGLM featuring variable selection via the spike and slab prior to the data; it likewise requires the coordinates as a matrix. This is model M4 in the manuscript.
5. FDR.R: Computes local false discovery rates (see page 10 of the manuscript) for posterior samples of \(\boldsymbol{\beta}\) and \(\boldsymbol{\gamma}\) arising from M2 or M4. Optional parameters are a threshold, defaulting to 0.05, and the percentage at which the FDR is computed, defaulting to 5%.
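A hypothetical end-to-end call is sketched below; the function names follow the file names above, but the argument names and return structure are assumptions inferred from the described inputs, not the package's documented interface.

```r
# Hypothetical usage sketch for the sptwdglm samplers (assumed signatures).
# devtools::install_github("arh926/sptwdglm")
library(sptwdglm)

# y: response; X, Z: mean/dispersion design matrices; coords: L x 2 locations
fit <- spssdglm.autograd(coords = coords, y = y, X = X, Z = Z,
                         lower.xi = 1, upper.xi = 2)  # bounds (a_xi, b_xi)

# FDR-based selection at 1% on the posterior samples of beta
sel <- FDR(fit$beta, percent = 0.01)
```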
## 9 Automobile Insurance Premiums, Connecticut, 2008: Additional results

2310.02199
Racial and Ethnic Disparities in Transit Supply During the COVID-19 Pandemic: Insights from 232 U.S. Transit Agencies
COVID-19 introduced tremendous disruptions to the normal operation of transit agencies, led to a massive suspension of regular service, and disturbed the access to essential mobility for many transit-dependent populations. In this study, we reexamine the national-level service disruption of the public transportation system as a result of the COVID-19 pandemic and investigate the disparities in the reduction of transit supply, causing certain racial-ethnic groups to be disproportionately affected. To support our analysis, we collect GTFS data from 232 transit agencies covering over 89 million people across 45 states in the U.S. Our findings suggest more disadvantaged communities and certain racial-ethnic groups have experienced disproportional reductions in transit supply, regardless of their pre-pandemic level of service. Further, we employ causal mediation analyses to address the mediation effect of the pre-pandemic level of transit service on the relationship between race/ethnicity and transit supply reduction. This analysis validates the disproportionate reduction in service quality for specific racial/ethnic groups, a trend evident across the majority of transit agencies. In particular, Black Americans are found to be associated with the greatest absolute service loss and also the largest total effects on transit supply reductions, while the White and Asian groups are less impacted. We further report that the most impacted communities by transit loss mainly consist of minority populations, where the reductions in transit are correlated with pandemic-related health and economic burdens and result in greater changes in mobility intensity. Our study contributes to a comprehensive understanding of transit inequities due to the pandemic and provides valuable insights to better prepare public transportation systems against future nationwide disasters.
Hossein Gazmeh, Lijun Sun, Yuntao Guo, Steven Jones, Xinwu Qian
2023-10-03T16:50:01Z
http://arxiv.org/abs/2310.02199v1
# Racial and Ethnic Disparities in Transit Supply During the COVID-19 Pandemic: Insights from 232 U.S. Transit Agencies ###### Abstract COVID-19 introduced tremendous disruptions to the normal operation of transit agencies, led to a massive suspension of regular service, and disturbed the access to essential mobility for many transit-dependent populations. In this study, we reexamine the national-level service disruption of the public transportation system as a result of the COVID-19 pandemic and investigate the disparities in the reduction of transit supply, causing certain racial-ethnic groups to be disproportionately affected. To support our analysis, we collect General Transit Feed Specification (GTFS) data from 232 transit agencies covering over 89 million people across 45 states in the U.S. Our findings suggest more disadvantaged communities and certain racial-ethnic groups have experienced disproportional reductions in transit supply, regardless of their pre-pandemic level of service. Further, we employ causal mediation analyses to address the mediation effect of the pre-pandemic level of transit service on the relationship between race/ethnicity and transit supply reduction. This analysis validates the disproportionate reduction in service quality for specific racial/ethnic groups, a trend evident across the majority of transit agencies. In particular, Black Americans are found to be associated with the greatest absolute service loss and also the largest total effects on the transit supply reductions (Black Total Effects = 0.014), while the White and Asian groups are less impacted (White Total Effects = \(-\)0.018). We further report that the communities most impacted by transit loss mainly consist of minority populations, where the reductions in transit are correlated with pandemic-related health and economic burdens and result in greater changes in mobility intensity. Our study contributes to a comprehensive understanding of transit inequities due to the pandemic, and provides valuable insights to better prepare public transportation systems against future nationwide disasters. ## 1 Introduction The COVID-19 pandemic led to substantial disruptions in transit services and imposed unprecedented challenges for transit users and agencies to adjust their demand and supply to the new circumstances. On the one hand, there was a consistent decline in ridership due to passengers' concerns about the role of public transportation as a significant disseminator of the virus during its initial takeoff [1, 2, 3]. At the same time, due to pandemic-induced operation costs and as a result of the drastically reduced ridership, extensive service suspensions were observed among transit agencies across the U.S. [4, 5]. For instance, by the end of March 2020, bus and rail ridership in Washington D.C. experienced sharp declines of 75% and 90%, respectively [6]. On the other hand, transit developments in the U.S. are known for their negative contributions to the racial-ethnic disparities in wealth, health, and access to fair opportunities, especially between White and Black Americans [7, 8, 9]. In particular, public transportation remains an essential mobility mode (if not the only one) for disadvantaged populations, for whom accessibility to daily needs is unevenly distributed even under regular service [10]. In the wake of the COVID-19 pandemic, the above issues raise the question of whether the disruptions in transit services have exacerbated disparity gaps in the U.S.
and their relation to the challenges faced by the affected communities. Prior to the COVID-19 pandemic, a persistent gap existed across racial-ethnic groups in terms of wealth, health and environmental justice. For instance, a gap of 20 to 30% in homeownership between Black and White Americans has persisted for more than a century [11], and the 7-year racial gap in life expectancy between Black and White Americans in 1960 was only reduced to 4 years by 2018 [12]. At the same time, racial and ethnic equity issues are intertwined with public transportation policies concerning the operation and planning of transit services [13], transit access [8] and transit-oriented development [14]. Specifically, studies have shown that transit policies have mainly reinforced gentrification and segregation of communities into more and less transit-accessible areas, impacting housing prices and the quality of access to employment and healthcare [15, 16, 17]. Moreover, a plethora of evidence indicates that vulnerable populations are often disproportionately hit in times of crisis [18, 19, 20], and we hypothesize the loss of transit services during the pandemic is no different. For COVID-19 alone, studies have revealed gender, racial, and ethnic gaps in job losses [21, 22], health risks [23], and COVID-19 diagnoses and death cases [24, 25, 26, 27]. Furthermore, investigations of passengers' mobility patterns suggested that more economically disadvantaged populations and people of color were inclined to maintain their transit demand despite the pandemic [4, 28, 29], and vulnerable populations relied on transit to access essential services and healthcare facilities (e.g., for dialysis) [30]. Nevertheless, the evidence from transit agencies indicates that service adjustments vary in both the magnitude of reduction and their impact on communities with different sociodemographic statuses [31, 32, 33]. In this study, we aim to investigate the possible racial-ethnic disparities in the reduction of transit services during the COVID-19 pandemic. We specifically focus on disparities among racial and ethnic groups, rather than other socioeconomic factors, motivated by the fact that the racial and ethnic profile of the transit-dependent population is significantly different from that of less transit-dependent populations [34, 35]. We quantify disparity as the percentage of transit supply change compared to the pre-pandemic level of service, and we adopt a multi-dimensional assessment that accounts for the heterogeneous operational capacity of transit agencies captured by ridership and coverage areas. In addition to the impacts within the transit systems, we further explore the possible health and economic aftereffects on the disproportionately impacted communities that stem from the loss of transit service. In light of all these issues, we propose a series of hypotheses to unveil the potential disparities across different racial-ethnic groups, as summarized below: * **H-1**: _For certain racial-ethnic groups, the more race/ethnicity-specific population the local community had, the greater the reduction in transit supply would be._ The first hypothesis gives an understanding of the possible disparities among communities in terms of the absolute quantity of the transit service reduction. However, a high loss during the pandemic may be attributed to a high level of transit supply pre-pandemic.
Therefore, we seek to explore the impact of racial-ethnic groups on the reduction of transit supply with regard to the pre-pandemic level. This gives rise to a better understanding of the disparities in the quality of the transit supply, as described by our second hypothesis: * **H-2**: _For certain racial-ethnic groups, the more race/ethnicity-specific population the local community had, the greater the reduction in transit supply would be regardless of the pre-pandemic service level._ Additionally, our third hypothesis aims to reveal whether the racial-ethnic disparities in the quality of service reduction can be observed across the majority of transit agencies: * **H-3**: _There exists a racial-ethnic disparity across the majority of the transit agencies so that transit-dependent commuters from certain racial-ethnic groups experienced a disproportionate loss in transit access._ Finally, our fourth hypothesis concerns the potential associations of pandemic-related health, economic, and mobility intensity burdens with the racial-ethnic disparities in transit service loss: * **H-4**: _For the disproportionately impacted racial-ethnic communities, reduction in the transit supply is more correlated with pandemic-related hardships including COVID-19 death rates, job losses, and reductions in mobility intensities._ To test our hypotheses, we collect data on 232 agencies' operational shifts and the sociodemographic statuses of their 21,714 covered communities at the census tract level. We address **H-1** by presenting our analysis of the quantity of transit supply change among different racial-ethnic groups. For **H-2** and **H-3**, we develop a multi-level causal mediation framework and explore the results at both the national and agency levels. These analyses confirm a racial and ethnic disparity in both the quantity and the quality of the shifts in transit service during the pandemic. Our validation of **H-4** is based on correlating the shifts in transit supply with COVID-19 death rates, job losses and changes in mobility behaviors for the most impacted transit-dependent communities at the county level. Figure 1 presents the study framework. The rest of this study is organized as follows. First, in Section 2, we present the data collection and processing steps used to analyze transit supply reduction resulting from the pandemic, as well as shifts in mobility activities and COVID-19 health and economic implications. Section 3 outlines the sociodemographic and agency-related variables employed to assess disparities in the reduction of transit services. We then introduce a causal mediation framework designed to account for pre-pandemic service levels across communities. The results of our primary hypotheses are provided in Section 4. Finally, in Section 5, we provide a discussion by summarizing our key findings, conclusions and future directions. ## 2 Data Collection and Processing ### Shifts in Transit Supply First, transit agencies' detailed service data, including scheduled transit routes, stops, trips and stop times, were fetched from an open-source service, OpenMobilityData [36], in the form of the General Transit Feed Specification (GTFS) [37]. Moreover, we narrowed down the selection of agencies to those with valid reports during the pre-pandemic period (from the beginning of 2018 to the conclusion of the first quarter of 2020) and at the initial takeoff of the pandemic (during the second quarter of 2020).
Our definition of each period is based on the overall service elimination/restoration trends across the agencies as well as the lockdown orders across the covered states. For each of these periods, we selected the first available GTFS data. As a result, a total of 232 valid transit agencies were identified across 45 states. Figure 2 displays the spatial distribution of the 232 selected agencies. Among the 232 agencies, 122 agencies serve populations of less than 200,000, 87 agencies cover populations ranging from 200,000 to 1,000,000, and the remaining 23 agencies provide transit services to populations exceeding 1,000,000. These classifications align with the categorization of agencies found in the National Transit Database (NTD) [38], and we designate them as 'small', 'medium', and 'large', respectively. Second, to understand causal factors that are likely contributors to the supply change, we obtain the demographic and economic statuses of communities across the nation by collecting population, race, income and transportation usage data from the American Community Survey (ACS) 2016-2020 5-year estimates at the census tract (CT) level [39]. Next, using TIGER/Line geometric datasets, a shapefile containing the above data for all the CTs is created, where all the collected data are processed and consolidated by mapping to the underlying CTs [40]. The region covered by the selected agencies consists of 21,714 CTs with a population exceeding 89 million people. We further report that 9.7% of the population is served by small agencies, more than 46% is covered by medium-sized agencies, and the 23 large agencies serve the remaining population of about 39.7 million.

Figure 1: Study framework

### COVID-19 Deaths, Job Losses, and Mobility Shifts

We use The New York Times data repository on cumulative COVID-19 cases and deaths in the U.S., available at the county level [41]. In line with our previous definition of the pandemic period as the second quarter of 2020, we base our analysis on the total number of death cases up to the end of June 2020. The mobility data presented in this study can be accessed via the Google COVID-19 Community Mobility Reports [42]. The data show changes in mobility activities (visits and length of stay at different categories of places) relative to a 5-week baseline period (Jan. 3 to Feb. 6). The data include six categories of places, namely grocery & pharmacy, parks, transit stations, retail, residential and workplaces. We note that communities vary significantly in their capacity to relocate or to work remotely [43]. Also, shifts at transit stations captured by big mobility data (such as Google and Apple) can differ greatly from the agencies' data, leading to under- or over-estimation of the actual trends [44]. Thus, we focus on two categories of mobility activities, "grocery & pharmacy" and "parks", and use the mean value of mobility changes during the pandemic period. Information on the estimated job loss due to the pandemic comes from the Urban Data Catalog [45], capturing the estimated percentage of job loss through August 2021. Finally, we map all the above information to the 381 counties covered by the 232 selected transit agencies. ## 3 Methods ### Variables **Change Rate**: In line with our hypotheses, the dependent variable in our study is the rate of transit service reduction for the covered CTs.
For further analysis, the total weekly transit trips (TWTT) are calculated as the total number of trips made at transit stops within a CT throughout a week. This accounts for the scenario where a specific transit route makes multiple stops in a large CT to cater to the travel needs of a greater population. Next, we measure the reduction in CT \(i\)'s transit supply level by calculating the change in TWTT per population (transit supply) between the pre-pandemic and pandemic periods, which we refer to as the "Change Rate", as expressed in Equation (1). A zero value indicates no change in transit supply. We normalize the maximum transit supply in the pre-pandemic period to 1, and we cap the maximum change rate at 0.3 to remove excessive outliers, such as a transit hub located inside a commercial area with few residents. This leads to a total of 21,714 observations, where the mean change rate is 0.031 with a standard deviation of 0.054.

\[\text{Change Rate}_{i}=\frac{\text{TWTT}_{i,\text{pre-pandemic}}-\text{TWTT}_{i,\text{pandemic}}}{\text{Population}_{i}} \tag{1}\]

Figure 2: Spatial distribution of 232 transit agencies

**Race/Ethnicity**: Moreover, we are primarily interested in racial and ethnic status for the independent variables. We focus on the three major races in the U.S.: White, Black, and Asian. We also account for people of Hispanic or Latino ethnicity. More specifically, consistent with Census Bureau terms, we refer to "non-Hispanic White" as "White", "non-Hispanic Black or African American" as "Black", "non-Hispanic Asian" as "Asian" and "people with Hispanic or Latino origin" as "Hispanic". Since the calculation of the change rate already eliminates the effect of the population size of the CTs, we also quantify race/ethnicity impacts by measuring their composition over the entire population. In our study, the investigated transit service areas consist of 48.6% White, 15.4% Black, 7.6% Asian and 24.4% Hispanic residents. This reveals a lower representation of White individuals (as opposed to 58.9%) and a higher representation of Black (compared to 13.6%), Asian (as opposed to 6.3%), and Hispanic individuals (as opposed to 19.1%) relative to the national averages. **Transit Dependency and Agency Scale**: In addition to race/ethnicity, it is also important to acknowledge the impacts of potential confounder variables that could be linked to both race/ethnicity and changes in transit supply. For example, economic status and car ownership are two likely confounders that impact both variables [46, 47]. Since a plethora of factors may link race/ethnicity and transit supply, we use the baseline transit dependency as a single indicator of all such factors. This transit dependency is captured in the ACS as the percentage of the population in a CT who use public transportation for work commuting. In our study areas, transit dependency has an average of 0.071 and a standard deviation of 0.118. We also consider an agency-level confounder denoted by the "Agency Scale" (mean=13.54, std=1.02). This variable is calculated as the logarithmic value of the total covered population of each transit agency, which serves as a proxy for the agency's size and coverage area. Table 1 presents the summary statistics of the variables.
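A minimal sketch of how Equation (1) can be computed from processed GTFS records is given below; the input frames `stop_visits` (one row per scheduled trip-stop event, tagged with tract GEOID and a `period` flag) and `tract_pop` are assumptions about upstream preprocessing, and the pre-pandemic normalization is omitted for brevity.

```r
# Sketch of Equation (1): weekly trip counts per tract, differenced across
# periods, scaled by population and capped at 0.3 (assumed column names).
library(dplyr)
library(tidyr)

change_rate <- stop_visits %>%
  count(GEOID, period, name = "twtt") %>%                  # TWTT per tract/period
  pivot_wider(names_from = period, values_from = twtt,
              values_fill = 0) %>%                         # periods: pre, pandemic
  left_join(tract_pop, by = "GEOID") %>%                   # ACS tract population
  mutate(change_rate = pmax((pre - pandemic) / population, 0),  # 0 = no change
         change_rate = pmin(change_rate, 0.3))                  # cap outliers
```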
### Causal Mediation Analysis Most of the efforts in recent decades to examine the linear causal relationships between the observed and latent variables have been based on path analytical models, particularly Structural Equation Modeling (SEM) [48, 49]. However, a number of shortcomings in the traditional SEM framework have been raised recently. For instance, the conventional SEM framework is not generalizable to nonlinear problems and also falls short of offering a comprehensive definition of causal mediation effects as its identification assumptions are model-specific. At the same time, advancements in causal mediation analysis have emerged as a response to the shortcomings of traditional SEM, where causal mediation analysis can be regarded as a broader framework that encompasses SEM as a special case [50, 51, 52]. As a result, it has lately seen extensive applications in different domains of social science, psychology, and epidemiology [49, 53, 54]. In our study, causal mediation analysis enables us to establish a common framework for the definition, identification and estimation of causal relationships between the treatment (X) and the outcome (Y) through the mediator (M). Here, X corresponds to the composition of racial-ethnic groups, Y is the change rate, and M concerns the transit supply level before the pandemic in a CT. In line with our discussions, Figure 3 shows the proposed framework. \begin{table} \begin{tabular}{l l l l l l l l} **Statistic** & **Mean** & **St. Dev.** & **Min** & **25\%** & **50\%** & **75\%** & **Max** \\ \hline **Dependent** & & & & & & & \\ Change Rate & 0.031 & 0.054 & 0.000 & 0.000 & 0.004 & 0.040 & 0.300 \\ **Race/Ethnicity** & & & & & & & \\ White & 0.486 & 0.292 & 0.000 & 0.216 & 0.515 & 0.746 & 1.000 \\ Black & 0.154 & 0.230 & 0.000 & 0.015 & 0.054 & 0.172 & 1.000 \\ Asian & 0.076 & 0.113 & 0.000 & 0.009 & 0.034 & 0.093 & 0.904 \\ Hispanic & 0.244 & 0.248 & 0.000 & 0.057 & 0.146 & 0.357 & 1.000 \\ **Transit related** & & & & & & & \\ Pre-pandemic Supply & 0.152 & 0.179 & 0.000 & 0.025 & 0.087 & 0.211 & 0.996 \\ Agency Scale & 13.544 & 1.023 & 7.629 & 13.003 & 13.622 & 14.350 & 15.049 \\ Transit Dependency & 0.071 & 0.118 & 0.000 & 0.006 & 0.026 & 0.079 & 1.000 \\ \hline \end{tabular} \end{table} Table 1: Summary Statistics (N=21,714) Although conventional path analytical models, such as SEM, can separate the direct effect (\(X\to Y\)) from the indirect effect (\(X\to M\to Y\)) and lead to an interpretation of possible racial and ethnic disparities in the reduction of transit service, we suspect that their modeling assumption on the path independence between \(X\to M\) and \(M\to Y\) is likely violated. Figure 4 displays the relationship between the race/ethnicity percentages (X) and pre-pandemic level of transit supply (M) at different values of change rate. Each axis is divided into ten quantiles, and the mean change rate for each range is indicated by the color density. We observe that the impact of the pre-pandemic supply (M) on change rate (Y) is not independent of the racial-ethnic percentage (X). In particular, for the Black group, a high level of transit service before the pandemic will more likely lead to a high change rate in CTs with a relatively low or high composition of Black Americans. Thereby, to achieve an unbiased understanding of both direct and indirect effects, the causal mediation analysis is employed to address the interdependency between the causal pathways and the nonlinear interactions among the variables. 
The essence of the causal mediation analysis resides in the counterfactual framework to identify the direct and indirect effects, ultimately yielding the total effect. In the context of our study, consider \(Y_{i}(x,M(x))\) as a function of the change rate in CT \(i\) given the racial-ethnic composition \(x\) and pre-pandemic supply \(M(x)\). The impacts of \(X\) on \(Y\) can be decomposed as the average causal mediation effect (ACME) and the average direct effect (ADE), where the latter captures the direct effects of the racial-ethnic composition on the change rate regardless of the pre-pandemic service. In view of the counterfactual framework, the ACME concerns the indirect effect as the expectation of the function \(\delta_{i}(x)\) on how Y will change by altering M (e.g., change from high pre-pandemic supply to low pre-pandemic supply) while holding X constant, as presented in Equation (2). Note that for the particular example, \(Y_{i}(x,\text{High Pre-pandemic Supply})\) is observed while \(Y_{i}(x,\text{Low Pre-pandemic Supply})\) is the counterfactual outcome. Although the latter cannot be observed directly, the value of which can always be predicted by estimating the \(Y\sim M+X+X\times M\) from the data (e.g., a regression model). \[\delta_{i}(x)\equiv Y_{i}(x,\text{High Pre-pandemic Supply})-Y_{i}(x,\text{Low Pre-pandemic Supply}) \tag{2}\] Figure 4: The relationship between the CT’s racial-ethnic composition and pre-pandemic transit supply for different values of change rate. Figure 3: Conceptual causal mediation framework Similarly, taking a CT with a high percentage of the race/ethnicity-specific population as an example, the ADE concerns the expectation of the function \(\zeta_{i}(x)\) on how \(Y\) will change by altering the race/ethnicity percentage while holding the pre-pandemic supply constant. This is displayed in Equation (3). Once more, the former expression on the right-hand side of the equation is observed, while the latter term is the counterfactual outcome. The counterfactual outcome can also be predicted by the same estimation equation as before. \[\zeta_{i}(x)\equiv Y_{i}(\text{High Race Percentage, }M_{i}(x))-Y_{i}(\text{Low Race Percentage, }M_{i}(x)) \tag{3}\] Finally, based on ACME and ADE, one can compute the total effect (TE) as \(\mathbb{E}[\delta_{i}(x)+\mathbb{E}[\zeta_{i}(x)]\). Furthermore, considering the fact that the change rate is a result of the shifts in services provided by transit agencies with different capabilities, a multilevel structure is needed to address the heterogeneity across agencies. To this end, we use hierarchical linear modeling (HLM) to fit the mediator and outcome models. For each racial-ethnic group, a variety of linear and polynomial models are examined, and the choice of model forms is based on their goodness of fit. 
The models for the White and Black groups are expressed as follows:

\[\begin{split}\text{Pre-pandemic Supply}_{ij}&=\gamma_{0j}+\gamma_{1j}*\text{Race}_{ij}+\gamma_{2}*\text{Race}_{ij}^{2}+\gamma_{3}*\text{Agency Scale}_{j}+\gamma_{4}*\text{Transit Dependency}_{ij}+\epsilon_{ij}\\ \gamma_{0j}&=\alpha_{0}+\mu_{0j},\quad\gamma_{1j}=\alpha_{1}+\mu_{1j}\end{split} \tag{4}\]

\[\begin{split}\text{Change Rate}_{ij}&=\kappa_{0j}+\kappa_{1j}*\text{Pre-pandemic Supply}_{ij}+\kappa_{2}*\text{Race}_{ij}+\kappa_{3}*\text{Race}_{ij}^{2}+\kappa_{4}*\text{Pre-pandemic Supply}_{ij}*\text{Race}_{ij}\\ &\quad+\kappa_{5}*\text{Transit Dependency}_{ij}+\kappa_{6}*\text{Agency Scale}_{j}+\epsilon_{ij}\\ \kappa_{0j}&=\alpha_{0}+\mu_{0j},\quad\kappa_{1j}=\alpha_{1}+\mu_{1j}\end{split} \tag{5}\]

For the Asian group, the HLM model for the mediator is the same as Equation (4), while the model for the Hispanic group excludes the term \(\gamma_{2}*\text{Race}_{ij}^{2}\). On the other hand, the outcome HLM model for both the Asian and Hispanic groups is identical to Equation (5) except that it excludes the quadratic term \(\kappa_{3}*\text{Race}_{ij}^{2}\). In all models, predictors are scaled and centered around the mean. We also allow random intercepts for each transit agency \(j\) to account for unobserved heterogeneity; in other words, adding the random intercept accounts for the agencies' differences in providing/eliminating service. Random slopes are introduced for race/ethnicity in the mediator model and for the pre-pandemic level of supply in the outcome model. The former addresses the demographic differences between the areas covered by the agencies, while the latter allows the rate at which the change rate is impacted by the pre-pandemic level of service to differ across agencies. Finally, the variables agency scale and transit dependency are used as the two confounders at the agency level and the observation level, and the nonlinear interaction between the racial-ethnic composition and the pre-pandemic supply is introduced in the outcome model. Given the finalized HLM models, the model parameters are estimated using the lmer package in R [55] for the HLM models and the mediation package in R [56] for the causal mediation effects.
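As an illustration, the mediator and outcome models of Equations (4) and (5) for a single group translate into lme4 syntax as sketched below, followed by the quasi-Bayesian mediation estimates; the data frame and column names (`cts`, `race`, `pre_supply`, etc.) are illustrative assumptions, and the treatment contrast defaults to comparing compositions of 0 and 1 on the scaled variable.

```r
# Sketch of the HLMs of Equations (4)-(5) for one racial group and the
# corresponding causal mediation analysis (assumes lme4 and mediation).
library(lme4)
library(mediation)

# Mediator model: random intercept and random race slope per agency
med.fit <- lmer(pre_supply ~ race + I(race^2) + agency_scale + transit_dep +
                  (1 + race | agency), data = cts)

# Outcome model: race x pre-pandemic supply interaction, random supply slope
out.fit <- lmer(change_rate ~ pre_supply * race + I(race^2) + transit_dep +
                  agency_scale + (1 + pre_supply | agency), data = cts)

# Quasi-Bayesian estimates of ACME, ADE and the total effect
med <- mediate(med.fit, out.fit, treat = "race", mediator = "pre_supply",
               control.value = 0, treat.value = 1, sims = 1000)
summary(med)
```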
## 4 Results

### Racial-ethnic Disparity in the Quantity of Transit Supply Reduction

Our initial findings delve into the socioeconomic and racial-ethnic attributes of the covered CTs, categorized by the absolute magnitude of transit supply reduction. This gives us a better understanding of the disparities regarding the quantity of the reduction in transit supply, where the pre-pandemic level of service is not explicitly included. We categorize CTs into four groups based on their loss of transit supply, i.e., change rate: the top 25% with the highest change rates are labeled as "High," the next 25% as "Med-High," the subsequent 25% as "Med-Low," and the lowest 25%, which have a change rate value of 0, are labeled as "No Change." Figure 5 shows an apparent disparity between CTs with different socioeconomic statuses in terms of their change rate. More precisely, on average, 12.2% of the households in CTs with a high change rate do not own a vehicle, which is 15% higher than the average across all the covered CTs. Moreover, there is a notable gap between the CTs in terms of health insurance coverage: on average, CTs with a high change rate are 45% more likely to lack coverage compared to those with no change (the values being 11.5% and 7.9%, respectively). In addition, a higher median income is consistently associated with a lower change rate (the average median income for CTs with no change is almost 18% higher than for CTs with a high change rate). Also, the relative values for transit-dependent populations are similar to those for people without a vehicle, validating our usage of transit dependency as a representation of various socioeconomic characteristics. More interestingly, the racial and ethnic status of the impacted CTs displays a distinctly opposite trend between the Black and White communities regarding the level of change rate (Figure 6). The average proportion of White people in CTs with no reduction is above 52.2%, which is 11% more than their average in CTs with a high change rate. At the same time, CTs with a high change rate consist of 22.2% Black Americans on average, which is 44.2% higher than their actual representation in the covered CTs. On the other hand, it is evident that there are no noticeable variations in the CTs' Asian and Hispanic compositions across different change rate levels. Figure 7 shows the transit supply levels among racial-ethnic groups prior to the pandemic. From Figure 7, we report that densely populated Black CTs are heavily reliant on the transit system, contrary to the White group. More importantly, the findings from Figure 6 and Figure 7 highlight the potential influence of the pre-pandemic supply on the disparities in change rates, given the significant variation across different racial-ethnic groups' pre-pandemic transit supply levels. Nevertheless, the qualitative results of the impacted CTs' sociodemographic status confirm our first hypothesis, **H-1**, regarding the disproportionate transit loss in quantity for certain transit-dependent racial-ethnic groups. However, these findings also prompt the question of whether the observed disparities remain when accounting for the pre-pandemic level of transit supply.

### Racial-ethnic Disparity in the Quality of Transit Supply Reduction

To address this question, we apply the causal mediation framework, in which a first hierarchical linear model relates the racial-ethnic composition and confounders to the mediator (pre-pandemic supply levels). Specifically, our focus lies in determining whether the racial-ethnic composition of CTs (\(X\)) has a significant impact on the pre-pandemic transit supply level (\(M\)). Subsequently, another hierarchical linear model connects the race/ethnicity (\(X\)), pre-pandemic supply (\(M\)) and transit-related confounders to the change rate (\(Y\)). The main question here revolves around whether the interaction of racial-ethnic compositions and pre-pandemic supply significantly influences the change rate, confirming the mediation effect of the pre-pandemic service level. Finally, we break down the impact of race/ethnicity on the change rate into average **direct** effects and average causal mediation (**indirect**) effects and provide a discussion of the results for each racial-ethnic group. Table 2 presents the results of the first hierarchical linear model, where we explore the influence of the independent variable (racial-ethnic composition) and confounders (transit dependency and agency scale) on the pre-pandemic level of transit supply. For all racial and ethnic groups, the transit dependency of the covered CTs positively and significantly affects the pre-pandemic service level (\(p<0.001\)). This indicates that a higher proportion of transit-dependent population in the CTs corresponds to a greater level of transit service prior to the pandemic. The agency scale is found to be negative and statistically significant (\(p<0.01\)), suggesting that the size of the agencies has a negative effect on the trip density of their covered CTs.
Additionally, all the linear and quadratic effects of racial-ethnic compositions in our models are found to be statistically significant. Overall, interpreting the linear terms indicates that the proportion of White Americans in a given CT significantly and adversely influences the supply level (\(p<0.001\)). The same holds for Asian Americans (\(p<0.001\)) and, linearly, for Hispanics (\(p<0.001\)). On the other hand, the ratio of the Black population is found to be positive and statistically significant (\(p<0.001\)), such that a higher percentage of Black Americans is associated with higher transit supply levels. Additionally, all the quadratic terms are significant and positive, suggesting a convex relationship between the White, Black and Asian proportions in CTs and the pre-pandemic level of supply. In short, the results for the first model emphasize the significant and nonlinear relationships of the percentage of racial-ethnic groups and transit-related variables with the pre-pandemic transit supply, underlining the necessity of capturing the disparities from a causal mediation standpoint.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & **White** & **Black** & **Asian** & **Hispanic** \\
\hline
(Intercept) & \(0.078^{*}\) & \(0.084^{*}\) & \(0.089^{*}\) & \(0.091^{*}\) \\
 & (0.035) & (0.034) & (0.036) & (0.035) \\
Transit Dependency & \(0.567^{***}\) & \(0.537^{***}\) & \(0.575^{***}\) & \(0.570^{***}\) \\
 & (0.016) & (0.015) & (0.016) & (0.015) \\
Agency Scale & \(-0.009^{**}\) & \(-0.010^{**}\) & \(-0.010^{**}\) & \(-0.010^{**}\) \\
 & (0.003) & (0.003) & (0.003) & (0.003) \\
\% White & \(-0.083^{***}\) & & & \\
 & (0.013) & & & \\
\% White\({}^{2}\) & & & & \\
 & (0.019) & & & \\
\% Black & & \(0.094^{***}\) & & \\
 & & (0.010) & & \\
\% Black\({}^{2}\) & & \(0.050^{*}\) & & \\
 & & (0.021) & & \\
\% Asian & & & \(-0.098^{***}\) & \\
 & & & (0.016) & \\
\% Asian\({}^{2}\) & & & \(0.136^{*}\) & \\
 & & & (0.057) & \\
\% Hispanic & & & & \(-0.034^{***}\) \\
 & & & & (0.007) \\
\hline
AIC & \(-21490.507\) & \(-21615.753\) & \(-21482.016\) & \(-21466.241\) \\
BIC & \(-21418.641\) & \(-21543.887\) & \(-21410.150\) & \(-21402.360\) \\
ICC & 0.100 & 0.092 & 0.105 & 0.102 \\
Pseudo-\(R^{2}\) (total) & 0.280 & 0.286 & 0.316 & 0.311 \\
No. Observations & 21714 & 21714 & 21714 & 21714 \\
No. Groups & 232 & 232 & 232 & 232 \\
\hline
\multicolumn{5}{l}{\({}^{***}p<0.001\); \({}^{**}p<0.01\); \({}^{*}p<0.05\); \({}^{\cdot}p<0.1\)} \\
\end{tabular}
\end{table} Table 2: Hierarchical Linear Model Results for Pre-pandemic Supply.

We present the results of the hierarchical linear model for the change rate in Table 3. We report the effect of transit dependency on the change rate to be positive and statistically significant (\(p<0.001\)), indicating a higher reduction of service for CTs with a higher level of transit dependency. Across all racial-ethnic groups, the impact of the pre-pandemic level of service is significantly positive (\(p<0.001\)), suggesting that a higher pre-pandemic supply led to a greater reduction in transit service. In addition, larger transit agencies seem to have a more challenging time maintaining their level of service, as their operational scale is positively associated with more reduction (\(p<0.001\)), which is likely due to the higher level of service before the pandemic. Moreover, the interaction term between racial-ethnic percentages and pre-pandemic supply is significant across all the models (for White, Black and Asian \(p<0.001\); for Hispanic \(p<0.01\)), confirming that the impact of race/ethnicity on change rate is mediated via the pre-pandemic supply.
More interestingly, we observe a notable disparity in the influence of race/ethnicity on the change rate. Specifically, the influence of the proportion of Black or Hispanic Americans is negative and significant (for Black \(p<0.01\), for Hispanic \(p<0.05\)). At the same time, the composition of White Americans is significantly associated with more reduction in the transit service (\(p<0.01\)), and the impact of Asian Americans is found to be positive but insignificant. Nevertheless, capturing an accurate estimation of the impact of race-ethnicity on the change rate requires a causal mediation analysis that accounts for both the direct effects and the indirect effects mediated by the pre-pandemic supply.

\begin{table}
\begin{tabular}{l c c c c}
 & **White** & **Black** & **Asian** & **Hispanic** \\ \hline
(Intercept) & \(-0.047^{**}\) & \(-0.046^{**}\) & \(-0.047^{**}\) & \(-0.047^{**}\) \\
 & (0.015) & (0.015) & (0.015) & (0.015) \\
Transit Dependency & \(0.020^{***}\) & \(0.019^{***}\) & \(0.018^{***}\) & \(0.018^{***}\) \\
 & (0.003) & (0.003) & (0.003) & (0.003) \\
Agency Scale & \(0.006^{***}\) & \(0.006^{***}\) & \(0.006^{***}\) & \(0.006^{***}\) \\
 & (0.001) & (0.001) & (0.001) & (0.001) \\
Pre-pandemic Supply & \(0.225^{***}\) & \(0.225^{***}\) & \(0.224^{***}\) & \(0.224^{***}\) \\
 & (0.015) & (0.015) & (0.015) & (0.015) \\
\% White & \(0.002^{**}\) & & & \\
 & (0.001) & & & \\
\% White\({}^{2}\) & \(0.001\) & & & \\
 & (0.003) & & & \\
\% White \(\times\) Pre-pandemic Supply & \(0.018^{***}\) & & & \\
 & (0.005) & & & \\
\% Black & & \(-0.004^{**}\) & & \\
 & & (0.001) & & \\
\% Black\({}^{2}\) & & \(0.015^{***}\) & & \\
 & & (0.003) & & \\
\% Black \(\times\) Pre-pandemic Supply & & \(-0.023^{***}\) & & \\
 & & (0.005) & & \\
\% Asian & & & \(0.002\) & \\
 & & & (0.002) & \\
\% Asian \(\times\) Pre-pandemic Supply & & & \(0.071^{***}\) & \\
 & & & (0.013) & \\
\% Hispanic & & & & \(-0.002^{*}\) \\
 & & & & (0.001) \\
\% Hispanic \(\times\) Pre-pandemic Supply & & & & \(-0.019^{**}\) \\
 & & & & (0.007) \\ \hline
AIC & \(-97079.017\) & \(-97088.785\) & \(-97098.620\) & \(-97081.451\) \\
BIC & \(-96991.181\) & \(-97000.949\) & \(-97018.769\) & \(-97001.600\) \\
ICC & 0.555 & 0.555 & 0.554 & 0.555 \\
Pseudo-\(R^{2}\) (total) & 0.830 & 0.830 & 0.830 & 0.830 \\
No. Observations & 21714 & 21714 & 21714 & 21714 \\
No. Groups & 232 & 232 & 232 & 232 \\ \hline
\multicolumn{5}{l}{\({}^{***}p<0.001\); \({}^{**}p<0.01\); \({}^{*}p<0.05\); \(p<0.1\)} \\
\end{tabular}
\end{table} Table 3: Hierarchical Linear Model Results for Change Rate.

Based on the results from the mediator (pre-pandemic supply) and outcome (change rate) models, we proceed with the causal mediation analysis to unveil the indirect and direct effects of racial-ethnic groups on transit supply reduction. Figure 8 displays the graphical summary of the causal mediation analysis results for racial-ethnic groups, including the average causal mediation effects (ACME) and average direct effects (ADE). Similar to our findings on the absolute quantity of service change in Section 4.1, we observe distinct effects for various racial-ethnic groups. Specifically, the indirect effect of transit loss for Black Americans, mediated by the pre-pandemic level of service, is significant and positive (ACME \(=0.019,p<0.001\)), highlighting a connection between transit supply and change rate.
On the contrary, the indirect effect of White Americans on the change rate is negative and statistically significant (ACME \(=-0.020,p<0.001\)). On the other hand, the direct effect of Black Americans is statistically significant and negatively associated with the change rate (ADE \(=-0.005,p<0.001\)), whereas a positive and significant effect is revealed for White Americans (ADE \(=0.002,p<0.05\)). However, the notably larger indirect effect results in a significant positive total effect for the Black group (Total Effects \(=0.014,p<0.001\)) and a negative total effect for White Americans (Total Effects \(=-0.018,p<0.001\)). The Asian and Hispanic groups are found to have both negative direct and negative indirect effects. The Asian group's direct effect is insignificant (ADE \(=-0.001\)), while their average causal mediation effect is the largest in magnitude among all the groups, with a negative and significant value (ACME \(=-0.025,p<0.001\)). Consequently, their total effect is found to be negative and statistically significant (Total Effects \(=-0.026,p<0.001\)). For Hispanics, the direct effect is negative and significant (ADE \(=-0.002,p<0.05\)), and the indirect effect is also negative and significant (ACME \(=-0.007,p<0.001\)), resulting in a negative and significant total effect (Total Effects \(=-0.009,p<0.001\)). The above discussions reveal a significant disparity between racial-ethnic groups in the quality of transit service reduction during the COVID-19 pandemic, where the differences in the pre-pandemic levels of transit supply are taken into account. In brief, the indirect effects of race/ethnicity on the change rate highlight the pronounced reliance of Black Americans on the pre-pandemic transit service, as their effect on the change rate is positively and significantly mediated through the pre-pandemic supply. Conversely, the opposite pattern is observed for White, Asian, and Hispanic Americans. Furthermore, the direct effect of the racial-ethnic composition of CTs on the change rate, regardless of their pre-pandemic service levels, is negative for Black and Hispanic Americans but positive for White Americans (with the results for the Asian group being insignificant). This can be interpreted as follows: service preservation is generally observed in areas with higher proportions of Black and Hispanic minorities, suggesting a possible prioritization of communities with vulnerable populations when adjusting the service. Lastly, the total effects of these racial-ethnic groups on the change rate demonstrate that Black Americans experience the most substantial impact, while White and Asian Americans are comparatively less affected. Hence, based on the results of the causal mediation analysis, our second hypothesis (**H-2**) is confirmed, as considering the pre-pandemic service level highlights a significant disparity in the reduction of transit supply across racial-ethnic groups.

Figure 8: Results for causal mediation effects.

### Racial-ethnic Disparities in Transit Supply Reduction at Agency Levels

We are further interested in checking whether the observed disparities represent a common trend among the majority of transit agencies in their service adjustments during the pandemic. We present the results for agency-level causal mediation effects in Figure 9 and Figure 10. A threshold (\(p=0.1\)) for the significance of the effects is specified.
A slightly higher significance level allows a more inclusive consideration of potential effects and associations, balancing the aims of minimizing false negatives and guarding against unwarranted conclusions. Each transit agency is represented by a scatter point (darker and larger points for agencies with a higher value of agency scale). A review of agency-level direct and causal mediation effects demonstrates two notable trends for shifts in Black Americans' transit supply. First, the pre-pandemic level of service by the agencies is significantly (\(p<0.001\)) and positively influenced by the proportion of Black Americans (Table 2), also noticeable from Figure 9 (b) with their significant (\(p<0.1\)) impact on over 40% of agencies' ACME. Second, the direct effect of the Black group on supply reduction across all agencies is negative, with more than 95% being significant (Figure 10 (b)). In contrast, while the significant (\(p<0.001\)) negative influence of the White group on the pre-pandemic service level already highlighted their lesser reliance on the transit system (Table 2), Figure 10 (a) displays the positive direct effect of the White composition on the service reduction for all the agencies, with more than 50% being statistically significant. For the Asian group, the indirect effect is negative across all agencies, while their direct effect is non-significant (Figure 10 (c)). Furthermore, the earlier discussion of Table 3 also highlighted an insignificant contribution of this group to the change rate. These observations collectively indicate that the Asian group appears to be the least reliant and least affected among minority groups. In addition, the results at the agency level indicate a resemblance between the direct effects of the Hispanic group on the change rate and those of the Black group (Figure 10 (d)), with negative values across all the agencies (70% being significant). However, when considering their effect on the change rate mediated through the pre-pandemic supply level, the Hispanic group's pattern aligns more closely with that of the White and Asian groups, with predominantly negative values (Figure 9 (d)). The discussed findings confirm a similar pattern among transit agencies regarding the effects of race-ethnicity on observed service reductions, with a largely consistent positive or negative impact on the transit service reductions across the transit agencies. We conduct further investigation to explore whether transit agencies of varying sizes differ significantly in terms of the effects of racial-ethnic groups on their change rate. This will highlight whether having access to sufficient resources may mitigate the disparity of transit losses among different racial-ethnic groups.

Figure 9: Estimation results for agency-level average causal mediation effects.

To this end, Figure 11 presents the causal effects of race-ethnicity compositions on the change rate categorized by agency sizes. The classification of agency sizes follows the convention in the NTD, where agencies are categorized as small if they cover a population of less than 200,000, medium if their coverage ranges from 200,000 to 1,000,000, and large if they serve more than 1,000,000 people [38]. We use the Kruskal-Wallis (KW) [57] test to examine the null hypothesis that the mean rank of the race-ethnicity causal effects is the same for small, medium, and large transit agencies.
This is to account for the non-normality of both the direct and indirect causal effect distributions (Shapiro-Wilk [58] test of normality; null hypothesis rejected with \(p<0.001\) for all categories), which prevents us from using an ANOVA test. Following the KW test, we fail to reject the null hypothesis for all twelve scenarios covering the different combinations of three causal effects and four racial-ethnic groups (with \(p>0.1\)). This indicates that transit agency sizes, which represent their operational capabilities and service coverage, may not impact their immediate response to national emergencies such as the pandemic, and that transit riders in small, medium, and large urban areas suffered similar levels of transit supply reduction disparities. Regarding our above discussions on the agency-level causal mediation effects of different racial-ethnic groups on change rates, we observe the disparities to be broadly similar across the majority of the transit agencies. Note that in all the cases related to the direct effects of race/ethnicity, we report a comparable positive or negative influence of race-ethnicity on transit loss across at least 50% of the transit agencies. In short, our findings verify our third hypothesis, **H-3**, as there exist similar patterns among transit agencies toward the reduction of their service such that certain racial-ethnic groups are disproportionately impacted.

Figure 10: Estimation results for agency-level average direct effects.

Figure 11: Distribution of causal effects by agency size for each racial-ethnic group.

### Transit Supply Reduction and the Related Consequences

We further check whether the observed racial and ethnic disparities of transit supply reduction also correlate with pandemic-related health, economic and mobility consequences for the impacted transit-dependent communities. This helps us explore the potential concurrent implications of such disparities in relation to the well-being and essential mobility intensity of the impacted communities. To this end, we focus on the relationships between the change rate and the health, economic and mobility factors in the most impacted areas with a high density of certain racial-ethnic groups. For health-related indicators, we use the data on COVID-19 deaths, available at the county level for all 381 counties covered by the 232 transit agencies [41]. Economic impacts are based on the expected job loss rate as a result of the pandemic [45], and mobility changes are captured by the shifts in activities related to grocery & pharmacy, and park places [42]. Here, we focus on counties rather than CTs because the aforementioned data are only available at the county level, and we also aggregate the change rate of transit services to the associated counties. As for the most impacted racial-ethnic communities, we target communities that are both in the top 25% of minority composition among the transit-covered communities and in the top 25% of change rate in transit supply. The resulting numbers of such counties for the Black, Asian and Hispanic groups are 34, 29 and 23, respectively. The analyses for the White group are excluded in this section because only one county (Pitkin, CO) meets the above criteria. Figure 12 to Figure 14 show the relationships between the change rate and cumulative COVID-19 death rates, job losses and mobility intensities, where the most-impacted counties are highlighted in blue and the dot size represents each county's population.
The dashed horizontal line shows the threshold for the 25th percentile of the change rate in all counties. The Pearson correlation coefficients (r) between the change rate and the selected factors for the most impacted counties are also displayed. As can be observed in Figure 12, there is a strong correlation, ranging from 0.40 to 0.52, between the loss in transit service and the COVID-19 death rate for all three racial-ethnic groups, and the correlations are all statistically significant (\(p<0.10\)). In other words, impacted counties with a high density of minorities that experience a higher rate of transit loss also happen to suffer more from COVID-19 deaths. While such correlations are not significant for other, less impacted communities, the results highlight the ripple effects of transit shortage for the most vulnerable groups. Moreover, the relationship between the job loss rate and the change rate reveals a difference among the three racial-ethnic groups (see Figure 13). The only group with a positive and significant correlation is the Black group (\(r(32)=0.30,p<0.10\)). On the other hand, the association between the change rate and job loss due to the pandemic is positive but insignificant for Asian Americans (\(r(27)=0.31,p>0.10\)), and negative but insignificant for the Hispanic group (\(r(21)=-0.20,p>0.10\)). Finally, the results for the shifts in essential mobility activities related to the grocery & pharmacy, and park categories are shown in Figure 14. Both trends suggest a more prominent negative correlation between the change rate and visits to essential activities for Hispanics as compared to other groups. The correlation between grocery and pharmacy mobility and the change rate for this group is strongly negative and statistically significant (\(r(21)=-0.55,p<0.10\)), while the correlation between park mobility activities and the change rate is negative but statistically insignificant (\(r(21)=-0.30,p>0.10\)). All the other correlations for the Black and Asian groups are found to be statistically insignificant (\(p>0.10\)), suggesting less substantial shifts in the essential mobility activities for the highly impacted counties with a high composition of Black and Asian Americans by the end of June 2020. To summarize, analyzing the correlation between pandemic-related burdens and the service change rate in transit-dependent counties suggests that the loss of transit supply may have ripple effects on socioeconomic and mobility activities that vary among racial/ethnic groups, especially in the most impacted communities. These observations verify our final hypothesis, **H-4**. We note that death rates and mobility intensities are related to the more immediate aftereffects of the COVID-19 outbreak, as they capture the cumulative number of death cases and mean changes in mobility activities by the end of June 2020. On the other hand, the job loss rate pertains to the subsequent impacts of the pandemic, signifying the percentage of jobs lost by August 2021. As such, we believe that immediate mobility changes for the Hispanic group exhibit a stronger correlation with the change rate, whereas the enduring economic challenges of the pandemic correlate most with the Black group. Additionally, all three economic and mobility indicators related to the Asian group were found to be insignificant, which may indicate better access to other mobility solutions for Asians as compared to the two other groups.
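As a companion sketch (ours, with synthetic arrays standing in for the real agency-level effect estimates and county-level indicators), the tests reported in this and the previous subsection map directly onto SciPy:

```python
# Sketch (ours, with hypothetical inputs) of the tests used above: Shapiro-Wilk
# normality checks, the Kruskal-Wallis comparison across agency sizes, and
# Pearson correlations for the most-impacted counties.
import numpy as np
from scipy.stats import kruskal, pearsonr, shapiro

rng = np.random.default_rng(0)
# Hypothetical agency-level causal effects, grouped by agency size.
small, medium, large = (rng.normal(size=k) for k in (120, 80, 32))

# Non-normality (rejected Shapiro-Wilk) motivates Kruskal-Wallis over ANOVA.
print(shapiro(np.concatenate([small, medium, large])))
print(kruskal(small, medium, large))   # H0: equal mean ranks across sizes

# Hypothetical county-level change rates vs. COVID-19 death rates for the
# 34 most-impacted majority-Black counties; reported as r(n - 2) in the text.
change_rate = rng.normal(size=34)
death_rate = 0.4 * change_rate + rng.normal(size=34)
r, p = pearsonr(change_rate, death_rate)
print(f"r({len(change_rate) - 2}) = {r:.2f}, p = {p:.3f}")
```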
Finally, note that the above conclusions are in line with our results in the previous sections: a disparity in transit loss was revealed, with Black Americans being the most impacted group and the Asian and White groups being the least affected.

Figure 12: County-level results for COVID-19 death rates by the end of June 2020.

Figure 13: County-level results for job loss due to the pandemic by August 2021.

Figure 14: County-level results for mean shifts in mobility activities by the end of June 2020.

## 5 Summary and Conclusion

In this study, we investigate the racial-ethnic disparities in the reduction of transit supply as a result of the COVID-19 pandemic. Through the integration of causal mediation analysis applied to data collected from 232 agencies spanning 45 states, we establish connections between the racial-ethnic compositions of communities, pre-pandemic supply levels, and shifts in transit service, all while accommodating the heterogeneous capabilities of the selected agencies. Our findings show that transit-dependent and lower-income communities and certain racial-ethnic groups have experienced disproportionate absolute reductions of transit supply. Furthermore, accounting for the pre-pandemic level of supply reveals that the disparities in the reduction of transit service affect Black and Hispanic communities more than the White and Asian groups. Table 4 presents a summary of our main findings across the four target racial-ethnic groups.

\begin{table}
\begin{tabular}{l l l l l} \hline
 & **White** & **Black** & **Asian** & **Hispanic** \\ \hline
Impact on pre-pandemic supply & \(-0.083^{***}\) & \(0.094^{***}\) & \(-0.098^{***}\) & \(-0.034^{***}\) \\
Direct effects on change rate & \(0.002^{*}\) & \(-0.005^{***}\) & \(-0.001\) & \(-0.002^{*}\) \\
Indirect effects on change rate & \(-0.020^{***}\) & \(0.019^{***}\) & \(-0.025^{***}\) & \(-0.007^{***}\) \\
Agency-level indirect effects on change rate & negative for 90\% of agencies (\(>20\%\) significant) & positive for 88\% of agencies (\(>40\%\) significant) & negative for all agencies (\(>50\%\) significant) & negative for most agencies (\(>50\%\) significant) \\
Agency-level direct effects on change rate & positive for all agencies (\(>50\%\) significant) & negative for all agencies (\(>95\%\) significant) & negative for all agencies (\(>70\%\) significant) & negative for all agencies (\(>70\%\) significant) \\
Correlation with pandemic death rates (most impacted) & \(-\) & \(0.43^{*}\) & \(0.52^{*}\) & \(0.40^{*}\) \\
Correlation with pandemic-induced job losses (most impacted) & \(-\) & \(0.30\) & \(0.31\) & \(-0.20\) \\
Correlation with shifts in grocery \& pharmacy activities (most impacted) & \(-\) & \(-0.14\) & \(-0.28\) & \(-0.55^{*}\) \\
Correlation with shifts in parks activities (most impacted) & \(-\) & \(-0.13\) & \(0.06\) & \(-0.30\) \\ \hline
\multicolumn{5}{l}{\({}^{***}p<0.001\); \({}^{**}p<0.01\); \({}^{*}p<0.05\); \(p<0.1\)} \\
\end{tabular}
\end{table} Table 4: Summary Results.

In short, our comprehensive analyses led to four major conclusions for the four target racial-ethnic groups, as summarized below:

* A higher composition of White Americans in CTs was associated with less reduction in the quantity of transit service during the pandemic. Nevertheless, the percentage of the White population was also found to have a negative relationship with the pre-pandemic transit supply (\(p<0.001\)). When controlling for the initial supply level, significant and negative total effects on the transit service reduction were observed for the White group at the 0.001 level (Total Effects = \(-0.018\) vs. Black Total Effects = \(0.014\)). These findings support the conclusion that the White group, along with the Asian group, is less impacted by the disruption in transit service than other racial-ethnic groups.
* Our analysis indicated that among the target groups, Black Americans were the most reliant on transit. They were the only racial-ethnic group with a positive relationship with pre-pandemic transit supply levels (\(p<0.001\)), but they also suffered the highest absolute loss of service during the pandemic.
Our analysis of their transit losses suggests that their high pre-pandemic transit dependency (as a mediator) produced the largest indirect effect toward the reduction of transit services (ACME = 0.019, \(p<0.001\)). This loss was also shown to be significantly (\(p<0.1\)) associated with pandemic-induced job loss and the COVID-19 death rate among the Black population. Nevertheless, we also observed a negative direct effect (ADE = \(-0.005\), \(p<0.001\)) for the Black population, which implies efforts by the majority of the transit agencies to maintain their level of transit service that may have prevented an additional 27% loss in the transit supply.
* For the Asian group, similar to White Americans, we report a negative relationship with transit supply prior to the pandemic (\(p<0.001\)). This group also displayed the most negative indirect effect (ACME = \(-0.025\), \(p<0.001\)) on transit supply reduction among the target groups, while their direct effect on the change rate was insignificant. Additional analysis indicated potential indifference toward their transit needs during the pandemic, as no significant direct effect was observed across any agencies at the 0.1 level. Furthermore, correlations between the transit service loss and indicators related to job losses and essential mobility shifts were insignificant for this group, suggesting that Asian Americans were less affected by transit loss due to the pandemic compared to other minority groups.
* Our findings suggest that the Hispanic group is the second most affected racial-ethnic group after the Black group. We observed Hispanic Americans to be less transit-dependent and less disproportionately affected by the quantity of service loss than the Black group. As a result, despite having a negative and significant direct effect (ADE = \(-0.002\), \(p<0.05\)) on the transit supply reduction (similar to the Black group), their total effect was notably smaller in magnitude yet significant (Total Effects = \(-0.009\) vs. Black Total Effects = \(0.014\)).
Regarding the difference between the Hispanic and Black groups, further analysis revealed that the Hispanic group exhibited a stronger correlation with shifts in immediate pandemic consequences, such as visits to grocery and pharmacy locations (Pearson-r = \(-0.55\)). On the other hand, the Black group demonstrated a stronger association with more lasting pandemic implications like higher rates of job loss (Pearson-r = 0.30). As public transportation remains an essential mobility service for vulnerable populations and disadvantaged communities in the post-pandemic era, transit agencies face the challenge of adapting to the "new normal", where substantial modifications in their practices are required [59, 60]. Given these complexities, the knowledge and insights generated in this study have significant implications for the restoration of transit services and highlight pathways for local and federal transit agencies to maintain equitable transit services for the general public. Nevertheless, there are several additional aspects that warrant consideration. First, even prior to the pandemic, transit ridership declines were widespread in U.S. cities [61]. For instance, the year 2019 saw U.S. bus ridership at its third-lowest since World War II, following a trend of seven consecutive years of reduction [59, 62]. At the same time, a review of transit agencies' operations suggested a prevailing focus on serving more affluent riders and voters as a whole rather than prioritizing vulnerable populations [63]. Therefore, we point out that the disparities in transit service, where specific racial-ethnic groups experience disproportionate impacts from supply changes, likely extend beyond the scope of the pandemic era. Second, our approach not only underlines the complexities involved in identifying the disparities within communities as a result of transportation policies but also provides a broad overview of the racial-ethnic disparities during the pandemic. However, given the multifaceted nature of this challenge, additional targeted inquiries are imperative. For instance, linking the shared patterns among agencies to their operational and strategic capacities, or establishing causal relationships between pandemic-related service adjustments and economic and health consequences, demands further exploration. In this context, case studies focusing on specific regions and agencies can provide a microscopic perspective that complements the broader analysis. Lastly, our study makes a substantial contribution to the growing body of evidence highlighting the disproportionate impact on vulnerable populations during times of crisis [18, 19, 20]. This observation holds significant implications, as our perspective can be extended to both past and potential future nationwide emergencies, shedding light on the consequences of our policy responses. Through this lens, we gain valuable insights into strategies for restoring the resilience of public infrastructure in the long term to better protect vulnerable communities.

## CRediT authorship contribution statement

**Hossein Gazmeh:** Conceptualization, Methodology, Formal analysis, Writing - Original, Review & Editing, Visualization. **Ljun Sun:** Methodology, Software, Data Curation, Supervision. **Yuntao Guo:** Writing - Review & Editing, Data Curation, Supervision. **Steven Jones:** Resources, Writing - Review & Editing. **Xinwu Qian:** Conceptualization, Formal analysis, Methodology, Writing - Review & Editing, Supervision.
2306.07707
Incentive-Compatible Selection for One or Two Influentials
Selecting influentials in networks against strategic manipulations has attracted many researchers' attention and it also has many practical applications. Here, we aim to select one or two influentials in terms of progeny (the influential power) and prevent agents from manipulating their edges (incentive compatibility). The existing studies mostly focused on selecting a single influential for this setting. Zhang et al. [2021] studied the problem of selecting one agent and proved an upper bound of 1/(1+ln2) to approximate the optimal selection. In this paper, we first design a mechanism to actually reach the bound. Then, we move this forward to choosing two agents and propose a mechanism to achieve an approximation ratio of (3+ln2)/(4(1+ln2)) (approx. 0.54).
Yuxin Zhao, Yao Zhang, Dengji Zhao
2023-06-13T11:50:07Z
http://arxiv.org/abs/2306.07707v1
# Incentive-Compatible Selection for One or Two Influentials

###### Abstract

Selecting influentials in networks against strategic manipulations has attracted many researchers' attention and it also has many practical applications. Here, we aim to select one or two influentials in terms of progeny (the influential power) and prevent agents from manipulating their edges (incentive compatibility). The existing studies mostly focused on selecting a single influential for this setting. Zhang _et al._ [2021] studied the problem of selecting one agent and proved an upper bound of \(1/(1+\ln 2)\) to approximate the optimal selection. In this paper, we first design a mechanism to actually reach the bound. Then, we move this forward to choosing two agents and propose a mechanism to achieve an approximation ratio of \((3+\ln 2)/(4(1+\ln 2))\) (\(\approx 0.54\)).

## 1 Introduction

Consider the scenario where we want to select influential agents in a network constructed by referral relationships (e.g., the following relationships in Twitter, the citations between academic papers, etc.). The selected agents may be rewarded with prizes or benefits (e.g., job opportunities [14]). Hence, agents have the incentive to manipulate their relationships to make themselves selected. Therefore, selection mechanisms that can prevent agents from strategic manipulations (which is referred to as the property of incentive compatibility) are highly demanded [1]. Many studies have investigated incentive-compatible selection mechanisms on different influence measurements for different purposes (see [13] for a complete survey). In this paper, we focus on the setting where an agent's influential power is measured by her progeny (the number of all agents who directly or indirectly follow her). For this setting, two studies have been conducted before. Babichenko _et al._ [2020] proposed the first single-agent selection mechanism for progeny maximization that can prevent agents from adding or hiding their out-edges. Their mechanism reaches an approximation ratio of about \(1/3\) (i.e., the expected progeny of the chosen agent is about \(1/3\) of the largest). However, their mechanism only works in forests. Therefore, Zhang _et al._ [2021] further studied the same problem in directed acyclic graphs (DAGs), restricting manipulations to hiding edges (agents cannot add new edges). Their proposed mechanism achieves an approximation ratio of \(1/2\). Moreover, they proved an upper bound of \(1/(1+\ln 2)\) on the approximation ratio for any incentive-compatible and fair selection mechanism in the DAG setting. In this paper, we follow the DAG setting of [10] and make the following contributions:

* For selecting one agent, we close the gap between the known approximation ratio of \(1/2\) and the upper bound of \(1/(1+\ln 2)\). We propose a mechanism to achieve the exact upper bound.
* For selecting two agents, we show that, for the class of mechanisms that only select agents from the \(1\)-influential set\({}^{1}\) (most of the existing mechanisms belong to this class), the approximation ratio cannot exceed \(1/2\) if the target is to select at most two agents. Moreover, we provide a deterministic mechanism in this class that exactly reaches the approximation ratio of \(1/2\). Footnote 1: The 1-influential set contains all agents each of whom can attain the largest progeny by hiding her out-edges.
* We then propose a new incentive-compatible mechanism based on a \(2\)-influential set for selecting two agents.
The new mechanism achieves a higher approximation ratio of \((3+\ln 2)/(4(1+\ln 2))\) (\(\approx 0.54\)). We also provide a general upper bound (\(23/27\)) of any incentive-compatible mechanism for selecting two agents. ### Other Related Work Many studies on incentive-compatible selection mechanisms use in-degrees to measure agents' influential power, which is also referred to as peer selection. For the in-degree measurement, Alon _et al._ [2011] firstly proposed an incentive-compatible peer selection mechanism by a randomized partition method, which divides agents into two groups and chooses the agents according to their in-degrees from the other group. Following this work, there are two major directions. One is to characterize incentive-compatible peer selection mechanisms with axioms, which is initiated by Holzman and Moulin [2013]. Mackenzie [2015] continued this study by adding symmetric axiomatizations. The other direction is to improve the approximation ratios of the existing incentive-compatible peer selection mechanisms. Fischer and Klimm (2014) extended the idea of the partition mechanism to a permutation mechanism, which achieves the optimal approximation ratio \((1/2)\) for selecting one agent with in-degree. Then, Bousquet _et al._ (2014) characterized a class of networks where the permutation mechanism selects an agent close to the optimal. Bjelde _et al._ (2017) generalized the permutation mechanism for selecting multiple agents, and gave both lower and upper bounds of the approximation ratio. Recent studies also started to consider an alternative evaluation called additive approximation, which focuses on the expected difference from the optimal rather than the worst-case ratio (Caragiannis _et al._, 2021; Caragiannis _et al._, 2022; Cembrano _et al._, 2022). There are also extensions on the networks, including a weighted network, where the influential power is a weighted in-degree (Kurokawa _et al._, 2015; Wang _et al._, 2018; Babichenko _et al._, 2020a), and rank aggregation, where each agent assigns a rank to others (Kahng _et al._, 2018; Mattei _et al._, 2020). There is also a rich body of work focusing on different measurements of influential power. Ghalme _et al._ (2018) designed a naturally strategy-proof score function to measure the popularity of agents, thereby simplifying the selection. In contrast, Babichenko _et al._ (2018) targeted a PageRank-like centrality (Page _et al._, 1999) and offered a two-path mechanism which achieves a good approximation ratio of \(2/3\). Moreover, Babichenko _et al._ (2020b) focused on the progeny of the selected agent and proposed a mechanism with an approximation ratio of \(1/(4\ln 2)\) in forests. Zhang _et al._ (2021) then proposed a geometric mechanism with an approximation ratio of \(1/2\) in DAGs, which is what we follow here. ## 2 Preliminaries Let \(\mathcal{G}_{n}\) be the set of all directed acyclic graphs (DAGs) with \(n\) nodes and \(\mathcal{G}=\bigcup_{n\in\mathbb{N}^{+}}\mathcal{G}_{n}\) be the set of all DAGs. Consider a network represented by a graph \(G=(N,E)\in\mathcal{G}\), where \(N=\{1,2,\ldots,n\}\) is the node set and \(E\) is the edge set. Each node \(i\in N\) represents an agent, and each edge \((i,j)\in E\) represents that agent \(i\) follows (votes for, or quotes) agent \(j\). Let \(E_{i}=\{(i,j)\mid(i,j)\in E,j\in N\}\) be the set of all edges from \(i\). We say an agent \(j\) is influenced by the agent \(i\) if there exists a path from \(j\) to \(i\) in \(G\). 
Let \(P(i,G)\) be the set of agents who are influenced by agent \(i\) (including \(i\) herself), which is referred to as the progeny of agent \(i\). Our goal is to select a group of agents in the network as delegates with larger progeny. Let \(\mathcal{S}_{k}=\{S\mid S\subseteq N,|S|=k\}\) be the set of all subsets with \(k\) agents, and \(\mathcal{S}_{\leq k}=\bigcup_{t=0}^{k}\mathcal{S}_{t}\). A \(k\)-_selection mechanism_ decides how to choose up to \(k\) agents as delegates. **Definition 1**.: _A \(k\)-selection mechanism for \(\mathcal{G}\) is a family of functions \(f:\mathcal{G}_{n}\rightarrow[0,1]^{\mathcal{S}_{\leq k}}\) for all \(n\in\mathbb{N}^{+}\), that maps each graph to a probability distribution on all subsets with no more than \(k\) agents._ For a given graph \(G\in\mathcal{G}\) and a \(k\)-selection mechanism \(f\), denote by \(x_{S}(G)=(f(G))_{S}\) the probability of the subset \(S\in\mathcal{S}_{\leq k}\) being selected, and by \(x_{i}(G)=\sum_{S\in\mathcal{S}_{\leq k}:\,i\in S}x_{S}(G)\) the probability of agent \(i\) being selected. Each agent \(i\in N\) wants her probability of being selected (\(x_{i}\)) to be as large as possible, while the owner of the mechanism wants the influential power of the selected group (the sum of the progeny) to be as large as possible. Unfortunately, if we simply choose an optimal subset with \(k\) agents, i.e., \(S_{k}^{*}\in\operatorname*{arg\,max}_{S\in\mathcal{S}_{k}}\sum_{i\in S}|P(i,G)|\), then agents will have incentives to hide their edges to increase their ranks and get selected. We want to avoid such manipulations, which requires the mechanism to be _incentive-compatible_. **Definition 2**.: _A \(k\)-selection mechanism for \(\mathcal{G}\) is incentive-compatible (IC) if for every \(n\in\mathbb{N}^{+}\) and every two graphs \(G=(N,E)\), \(G^{\prime}=(N,E^{\prime})\in\mathcal{G}_{n}\) such that \(E\setminus E_{i}=E^{\prime}\setminus E_{i}^{\prime}\) and \(E_{i}\supseteq E_{i}^{\prime}\) for some \(i\in N\), we have \(x_{i}(G)\geq x_{i}(G^{\prime})\)._ Intuitively, incentive compatibility implies that no matter how other agents follow each other, it is an undominated strategy for any agent not to hide her out-edges. Since an incentive-compatible \(k\)-selection mechanism cannot always choose a group with the highest influential power, we seek an approximation of the optimum, which guarantees a worst-case ratio between the expected progeny of the selected group and that of an optimal group over all DAGs. **Definition 3**.: _An incentive-compatible \(k\)-selection mechanism is \(\alpha\)-optimal if_ \[\inf_{G\in\mathcal{G}}\frac{\mathbb{E}_{S\sim x_{S}(G)}[\sum_{i\in S}P(i,G)]}{\sum_{i\in S_{k}^{*}}P(i,G)}\geq\alpha.\] For convenience, we characterize an optimal group by defining a strict order on agents as follows. **Definition 4**.: _For a graph \(G=(N,E)\in\mathcal{G}\) and agents \(i\), \(j\in N\), \(i\neq j\), we say \(i\succ j\) if either \(P(i,G)>P(j,G)\) or \(P(i,G)=P(j,G)\) with \(i>j\)._ Let \(i_{t}^{*}\) be the agent with rank \(t\), i.e., \(|\{j\mid j\succ i_{t}^{*}\}|=t-1\), which must be unique since the order is strict. Then, we can order all the agents as the _ranking sequence_ \(i_{1}^{*}\succ i_{2}^{*}\succ\cdots\succ i_{n}^{*}\). Clearly, \(\{i_{1}^{*},\cdots,i_{k}^{*}\}\) is an optimal set for selecting \(k\) agents. Hence, for our strategic setting, we will pay attention to agents who can pretend to be among the first \(k\) agents in the ranking sequence by hiding their out-edges.
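To make these definitions concrete, here is a small self-contained sketch (our illustration, not part of the paper) that computes the progeny sizes \(|P(i,G)|\) and the ranking sequence of Definition 4; an edge \((i,j)\) is read as "\(i\) follows \(j\)", so \(P(i)\) collects every agent with a path to \(i\). The helpers are reused in a later sketch.

```python
# Minimal sketch (ours, not from the paper): progeny sizes |P(i, G)| and the
# ranking sequence of Definition 4 for a DAG given as edges (i, j) = "i follows j".

def progeny(n, edges):
    """Return {i: |P(i, G)|} for agents 1..n; P(i) includes i itself."""
    followers = {i: [] for i in range(1, n + 1)}   # in-neighbours: who follows i
    for i, j in edges:
        followers[j].append(i)
    sizes = {}
    for i in followers:
        seen, stack = {i}, [i]
        while stack:                                # DFS over incoming edges
            u = stack.pop()
            for v in followers[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        sizes[i] = len(seen)
    return sizes

def ranking_sequence(n, edges):
    """Definition 4: higher progeny first; ties broken towards the larger index."""
    P = progeny(n, edges)
    return sorted(P, key=lambda i: (P[i], i), reverse=True)

if __name__ == "__main__":
    edges = [(5, 4), (4, 3), (3, 2), (2, 1)]        # the path 5 -> 4 -> 3 -> 2 -> 1
    print(progeny(5, edges))                        # {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}
    print(ranking_sequence(5, edges))               # [1, 2, 3, 4, 5]
```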
**Definition 5**.: _For a graph \(G=(N,E)\), an agent \(i\) belongs to the \(k\)-influential set \(S_{k}^{\text{inf.}}(G)\) if \(|\{j\mid j\succ i\}|<k\) holds in the graph \(G^{\prime}=(N,E\setminus E_{i})\)._ In this paper, we mainly focus on the cases \(k\in\{1,2\}\). Hence, we rely on some observations about \(S_{1}^{\text{inf.}}(G)\) and \(S_{2}^{\text{inf.}}(G)\). To make the paper easy to follow, we present the observations about \(S_{1}^{\text{inf.}}(G)\) below as preparation and present the observations about \(S_{2}^{\text{inf.}}(G)\) in Section 4.2. **Observation 1** ([3]).: _For any graph \(G\), the set \(S_{1}^{\text{inf.}}(G)\) can be written as \(\{i_{1},i_{2},\cdots,i_{m}\}\), where \(m\geq 1\), \(i_{1}=i_{1}^{*}\), and \(i_{t+1}\in P(i_{t})\setminus\{i_{t}\}\) for all \(t<m\)._ Intuitively, the agent \(i_{1}^{*}\) who ranks first is naturally in the 1-influential set. Furthermore, if there is more than one agent in the set, an agent with a lower rank must be in the progeny of those with higher ranks; otherwise, she would still have a lower rank after deleting her out-edges. In other words, we can find a path in \(G\) that passes through all agents in the set in the order of their ranks. **Observation 2** ([22]).: _For any graph \(G\), if the set \(S_{1}^{\text{inf.}}(G)=\{i_{1},i_{2},\cdots,i_{m}\}\) has more than one agent, i.e., \(m>1\), then for any \(1<t\leq m\), \(2P(i_{t})\geq P(i_{1})\)._ As inferred from Observation 1, if an agent other than \(i_{1}^{*}\) is in the 1-influential set, she must hold at least half of \(i_{1}^{*}\)'s progeny to make herself rank first after removing her out-edges. ## 3 Select One Agent In this section, we present our result for selecting only \(k=1\) agent. Not only does it help us understand the proposed methods in the following section for \(k=2\), but it also fills the gap between the existing mechanisms and the upper bound of approximation ratios for IC 1-selection mechanisms, which is \(1/(1+\ln 2)\) as confirmed by Zhang _et al._ [2021]. Formally, our method can be viewed as a generalized variant of the modified mechanism of Babichenko _et al._ [2020b]. \(\beta\)**-logarithmic Mechanism (\(\beta\)-LM)** Given a network \(G=(N,E)\), find the 1-influential set \(S_{1}^{\text{inf.}}(G)=\{i_{1},\ldots,i_{m}\}\), where \(i_{t}\succ i_{t+1}\) for all \(1\leq t<m\). Assign the probability of each agent to be selected as follows: \[x_{j}=\begin{cases}\beta,&j=i_{m}\\ (1-\beta)\log_{2}\frac{P(i_{t})}{P(i_{t+1})},&j=i_{t},t\neq m\\ 0,&j\notin S_{1}^{\text{inf.}}(G).\end{cases}\] The total probability of selecting one agent in \(\beta\)-LM is at most \(\beta+(1-\beta)\log_{2}(P(i_{1})/P(i_{m}))\leq 1\), since the logarithmic terms telescope and \(P(i_{1})\leq 2P(i_{m})\) by Observation 2. Hence, the probabilities assigned by the mechanism are valid as long as \(0\leq\beta\leq 1\). Next, we show the mechanism is IC when \(\beta\geq 1/2\). **Theorem 1**.: _A \(\beta\)-logarithmic mechanism is IC if \(\beta\geq 1/2\)._ Proof.: For any graph \(G\in\mathcal{G}\), let \(S_{1}^{\text{inf.}}(G)=\{i_{1},\ldots,i_{m}\}\). Then, we consider three different types of agents. 1. For an agent \(i\notin S_{1}^{\text{inf.}}(G)\), by definition, she can never pretend to be the agent with rank 1 by hiding her out-edges. Hence, she will never belong to the 1-influential set and will always have 0 probability of being chosen. 2.
For an agent \(i_{t}\in S_{1}^{\text{inf.}}(G)\) such that \(t<m\), no matter how she hides her out-edges, \(i_{t+1}\) will always belong to the 1-influential set because \(i_{t+1}\in P(i_{t})\) (Observation 1) and her progeny cannot be decreased. Hence, the probability of \(i_{t}\) being chosen will not change. 3. For the agent \(i_{m}\in S_{1}^{\text{inf.}}(G)\), if she hides some of her out-edges, there are two cases. (i) If no agent in \(P(i_{m})\) occurs in the new 1-influential set, the probability of \(i_{m}\) being chosen will remain \(\beta\). (ii) If there exists at least one agent in \(P(i_{m})\) that occurs in the new 1-influential set, let \(i_{1}^{\prime}\) be the first agent in the new set, and let \(i_{m+1}\) be the one with the highest rank after \(i_{m}\) in the new set. Then, the probability of \(i_{m}\) being chosen will become \((1-\beta)\log_{2}(P(i_{m})/P(i_{m+1}))\leq(1-\beta)\log_{2}(P(i_{1}^{\prime})/P(i_{m+1}))\leq(1-\beta)\leq\beta\) by \(\beta\geq 1/2\) and Observation 2. Taking all the above together, no agent can increase her probability of being chosen by hiding her out-edges. Now we can compute the approximation ratios of IC \(\beta\)-LMs, from which we can find that the optimal \(\beta\)-LM is also an optimal IC selection mechanism for \(k=1\). **Theorem 2**.: _An IC \(\beta\)-logarithmic mechanism (\(1/2\leq\beta\leq 1\)) is \(\left(\min\left\{\frac{1}{2}\left(\beta+\frac{1-\beta}{\ln 2}\right),\beta\right\}\right)\)-optimal._ Proof.: For any graph \(G\in\mathcal{G}\), let \(S_{1}^{\text{inf.}}(G)=\{i_{1},\ldots,i_{m}\}\) and note that \(i_{1}=i_{1}^{*}\) by Observation 1. If \(m=1\), then we have \[\mathbb{E}_{i\sim x_{i}}[P(i)]/P(i_{1}^{*})=x_{i_{1}}P(i_{1})/P(i_{1})=\beta.\] If \(m>1\), then we have \[\mathbb{E}_{i\sim x_{i}}[P(i)]/P(i_{1}^{*})=\frac{1-\beta}{P(i_{1})}\sum_{t=1}^{m-1}P(i_{t})\log_{2}\frac{P(i_{t})}{P(i_{t+1})}+\beta\frac{P(i_{m})}{P(i_{1})}\] \[=\frac{1-\beta}{\ln 2\cdot P(i_{1})}\sum_{t=1}^{m-1}P(i_{t})\int_{P(i_{t+1})}^{P(i_{t})}\frac{\mathrm{d}z}{z}+\beta\frac{P(i_{m})}{P(i_{1})}\] \[\geq\frac{1-\beta}{\ln 2\cdot P(i_{1})}\sum_{t=1}^{m-1}\int_{P(i_{t+1})}^{P(i_{t})}\mathrm{d}z+\beta\frac{P(i_{m})}{P(i_{1})}\] \[=\frac{1-\beta}{\ln 2\cdot P(i_{1})}\sum_{t=1}^{m-1}(P(i_{t})-P(i_{t+1}))+\beta\frac{P(i_{m})}{P(i_{1})}\] \[=\frac{1-\beta}{\ln 2}+\left(\beta-\frac{1-\beta}{\ln 2}\right)\frac{P(i_{m})}{P(i_{1})}\] \[\geq\min\left\{\frac{1}{2}\left(\beta+\frac{1-\beta}{\ln 2}\right),\beta\right\},\] where the last inequality uses \(P(i_{m})/P(i_{1})\in[1/2,1]\) (Observation 2): if \(\beta-\frac{1-\beta}{\ln 2}\geq 0\), the minimum is attained at \(P(i_{m})/P(i_{1})=1/2\), giving \(\frac{1}{2}\left(\beta+\frac{1-\beta}{\ln 2}\right)\); otherwise it is attained at \(P(i_{m})/P(i_{1})=1\), giving \(\beta\). Therefore, the mechanism is \(\left(\min\left\{\frac{1}{2}\left(\beta+\frac{1-\beta}{\ln 2}\right),\beta\right\}\right)\)-optimal. It is not hard to verify that when \(\beta=1/(1+\ln 2)\), the value \(\min\left\{\frac{1}{2}\left(\beta+\frac{1-\beta}{\ln 2}\right),\beta\right\}\) attains its maximum \(1/(1+\ln 2)\), i.e., the optimal \(\beta\)-LM is \((1/(1+\ln 2))\)-LM, which is \((1/(1+\ln 2))\)-optimal. Recall that Zhang _et al._ [2021] proved that no IC and fair selection mechanism can be \(\alpha\)-optimal with \(\alpha>1/(1+\ln 2)\). Hence, we can infer the optimality of \((1/(1+\ln 2))\)-LM. **Corollary 1**.: _There is no other IC and fair selection mechanism for \(k=1\) that can have a higher approximation ratio than \((1/(1+\ln 2))\)-LM._ At the end of this section, we give a running example of \((1/(1+\ln 2))\)-LM. **Example 1**.: _Consider the network depicted in Figure 1, where \(S_{1}^{\mathsf{inf.}}(G)=\{i_{1},i_{2},i_{3},i_{4}\}\). For the last agent \(i_{4}\) in the set, her selection probability \(x_{i_{4}}\) is \(1/(1+\ln 2)\approx 0.59\). For the agents \(i_{3}\), \(i_{2}\) and \(i_{1}\), their selection probabilities are_
For the agents \(i_{3}\), \(i_{2}\) and \(i_{1}\), their selection probabilities are_ \[x_{i_{3}} =\frac{\ln 2}{1+\ln 2}\log_{2}\frac{P(i_{3})}{P(i_{4})}=\frac{ \ln 2}{1+\ln 2}\log_{2}\frac{5}{4}\approx 0.13;\] \[x_{i_{2}} =\frac{\ln 2}{1+\ln 2}\log_{2}\frac{P(i_{2})}{P(i_{3})}=\frac{ \ln 2}{1+\ln 2}\log_{2}\frac{6}{5}\approx 0.11;\] \[x_{i_{1}} =\frac{\ln 2}{1+\ln 2}\log_{2}\frac{P(i_{1})}{P(i_{2})}=\frac{ \ln 2}{1+\ln 2}\log_{2}\frac{7}{6}\approx 0.09.\] ## 4 Select Two Agents We start to consider selecting up to \(k=2\) agents as delegates. To select agents with larger progeny, one possible approach is to find the second delegate from the 1-influential set as well. However, this limits the performance of IC mechanisms. ### Limitation of the 1-influential Set The limitation of selecting agents from the 1-influential set for \(k=2\) mainly comes from the fact that there may be only a single agent in the set. **Theorem 3**.: _If an IC 2-selection mechanism only selects agents in the 1-influential set, then it cannot be \(\alpha\)-optimal with \(\alpha>1/2\)._ Proof.: Consider a two-star graph shown in Figure 2. Suppose \(P(i_{1})=P(j)=y\) and \(i_{1}>j\). Then the 1-influential set of this graph \(S_{1}^{\mathsf{inf.}}(G)\) only contains \(i_{1}\). Therefore, even if the mechanism can always select agent \(i_{1}\) with probability \(1\), the approximation ratio in this graph is only \((1\cdot y)/(y+y)=1/2\). Hence, if only selecting agents in the 1-influential set, an IC 2-selection mechanism cannot achieve an approximation ratio of more than \(1/2\). We can also show that the limitation described in Theorem 3 is tight by providing the following mechanism. **Least Deterministic Mechanism (LDM)** 1. Given a network \(G=(N,E)\), find the 1-influential set \(S_{1}^{\mathsf{inf.}}(G)=\{i_{1},\ldots,i_{m}\}\), where \(i_{t}\succ i_{t+1}\) for all \(1\leq t<m\). 2. Assign the probability of each agent to be selected as follows: \[x_{j}=\begin{cases}1,&j=i_{m},\text{or}\ j=i_{m-1}\\ 0,&j=i_{t},t<m-1,\text{or}\ j\notin S_{1}^{\mathsf{inf.}}(G).\end{cases}\] Intuitively, LDM deterministically selects the last two agents in the 1-influential set or it selects the only agent in the set if \(|S_{1}^{\mathsf{inf.}}(G)|=1\). **Example 2**.: _We take the networks shown in Figure 1 and Figure 2 as running examples. In Figure 1, there are four agents, \(i_{1}\), \(i_{2}\), \(i_{3}\), and \(i_{4}\), in the 1-influential set. Hence, LDM deterministically selects the last two agents \(i_{4}\) and \(i_{3}\), i.e., \(x_{i_{4}}=x_{i_{3}}=1\). In Figure 2, there is a single agent \(i_{1}\) in the 1-influential set. Hence, LDM deterministically selects the agent \(i_{1}\) only._ Now we prove that LDM is incentive-compatible and \(1/2\)-optimal as follows. **Theorem 4**.: _LDM is an IC 2-selection mechanism._ Proof.: For any graph \(G\in\mathcal{G}\), suppose that \(S_{1}^{\mathsf{inf.}}(G)=\{i_{1},\ldots,i_{m}\}\). Then we consider the following cases. 1. For an agent \(i\notin S_{1}^{\mathsf{inf.}}(G)\), same as the first point in the proof of Theorem 1, she always has no chance to be chosen by hiding her out-edges. 2. If \(m\leq 2\), the agents in \(S_{1}^{\mathsf{inf.}}(G)\) will be deterministically selected. Hence, they have no incentive to hide their out-edges. 3. If \(m>2\), first, for agents \(i_{m}\) and \(i_{m-1}\) who will be deterministically selected, they have no incentive to hide their out-edges. 
Then, for any agent \(i_{t}\in S_{1}^{\mathsf{inf.}}(G)\) with \(t<m-1\), no matter how she hides her out-edges, \(i_{m}\) and \(i_{m-1}\) will always belong to the 1-influential set because \(\{i_{m},i_{m-1}\}\subseteq P(i_{t})\) (Observation 1) and their progeny cannot be decreased. Hence, \(i_{t}\)'s probability of being chosen will remain 0. Taking all the above together, no agent can increase her probability of being chosen by hiding her out-edges. Therefore, the mechanism is IC.

Figure 1: An example of the network, where the marked agents have the relationship \(i_{1}\succ i_{2}\succ i_{3}\succ i_{4}\succ j\). The 1-influential set is represented by a dashed border.

Figure 2: A two-star network where the hubs satisfy \(i_{1}\succ j\). Thus, the 1-influential set only contains \(i_{1}\).

**Theorem 5**.: _LDM is \(1/2\)-optimal._ Proof.: For any graph \(G\in\mathcal{G}\), if \(|S_{1}^{\text{inf.}}(G)|=1\), then LDM deterministically selects the agent \(i_{1}^{*}\). Therefore, in this case, we have \[\frac{\mathbb{E}_{S}[\sum_{i\in S}P(i)]}{\sum_{i\in S_{2}^{*}}P(i)}=\frac{P(i_{1}^{*})}{P(i_{1}^{*})+P(i_{2}^{*})}\geq\frac{P(i_{1}^{*})}{P(i_{1}^{*})+P(i_{1}^{*})}=\frac{1}{2}.\] If \(|S_{1}^{\text{inf.}}(G)|\geq 2\), then LDM deterministically selects the agents \(i_{m}\) and \(i_{m-1}\). By Observation 2, in this case, we have \[\frac{\mathbb{E}_{S}[\sum_{i\in S}P(i)]}{\sum_{i\in S_{2}^{*}}P(i)}=\frac{P(i_{m})+P(i_{m-1})}{P(i_{1}^{*})+P(i_{2}^{*})}\geq\frac{P(i_{m})+P(i_{m-1})}{2P(i_{1})}=\frac{1}{2}\left(\frac{P(i_{m})}{P(i_{1})}+\frac{P(i_{m-1})}{P(i_{1})}\right)\geq\frac{1}{2}\left(\frac{1}{2}+\frac{1}{2}\right)=\frac{1}{2}.\] Therefore, LDM is \(1/2\)-optimal. If we consider general IC 2-selection mechanisms, we may have a higher upper bound on the approximation ratio. This suggests the limitation of only selecting agents from the 1-influential set when we target up to two delegates. **Theorem 6**.: _There is no IC 2-selection mechanism that can be \(\alpha\)-optimal with \(\alpha>23/27\)._ Proof.: Consider the three networks with four agents shown in Figure 3. Applying a generic IC 2-selection mechanism to these graphs, suppose the probabilities of each agent \(i\) being chosen in the three graphs are \(x_{i}^{(a)}\), \(x_{i}^{(b)}\) and \(x_{i}^{(c)}\). Notice that network (b) can be obtained by agent 2 or 4 in network (a) hiding their out-edges (corresponding to agent 2 or 1 in network (b)), while network (c) can be obtained by agent 3 hiding her out-edge (corresponding to agent 1 or 3 in network (c)). Since the selection mechanism is IC, we have the following constraints: \[x_{2}^{(b)}\leq x_{2}^{(a)},\quad x_{1}^{(b)}\leq x_{4}^{(a)}; \tag{1}\] \[x_{3}^{(c)}\leq x_{3}^{(a)},\quad x_{1}^{(c)}\leq x_{3}^{(a)}. \tag{2}\] Moreover, the mechanism selects at most 2 agents. Hence, \[\sum_{i=1}^{4}x_{i}^{(a)}\leq 2,\quad\sum_{i=1}^{4}x_{i}^{(b)}\leq 2,\quad\sum_{i=1}^{4}x_{i}^{(c)}\leq 2. \tag{3}\] The approximation ratio of the mechanism must be no more than the least of the ratios in these three graphs, i.e., \[\alpha\leq\min\left\{\frac{4x_{1}^{(a)}+3x_{2}^{(a)}+2x_{3}^{(a)}+x_{4}^{(a)}}{7},\frac{x_{1}^{(b)}+3x_{2}^{(b)}+2x_{3}^{(b)}+x_{4}^{(b)}}{5},\frac{2x_{1}^{(c)}+x_{2}^{(c)}+2x_{3}^{(c)}+x_{4}^{(c)}}{4}\right\}.\] With constraints (1)–(3), we can calculate the highest value of this minimum.
Therefore, we have \(\alpha\leq 23/27\), and equality holds when \[x_{1}^{(a)}=2/3,\ x_{2}^{(a)}=17/27,\ x_{3}^{(a)}=19/27,\ x_{4}^{(a)}=0;\] \[x_{1}^{(b)}=0,\ x_{2}^{(b)}=17/27,\ x_{3}^{(b)}=1,\ x_{4}^{(b)}=10/27;\] \[x_{1}^{(c)}=19/27,\ x_{2}^{(c)}=16/27,\ x_{3}^{(c)}=19/27,\ x_{4}^{(c)}=0.\]

Figure 3: Three networks with four agents, where (b) and (c) can be obtained by one of the agents in (a) hiding her out-edge. The probabilities of each agent being chosen by a generic IC 2-selection mechanism are attached beside the nodes.

### Utilizing the 2-influential Set

To break through the limitation of the 1-influential set, one natural idea is to consider the 2-influential set. We first characterize the set by the following observations. **Observation 3**.: _For any graph \(G\), \(S_{1}^{\text{inf.}}(G)\subseteq S_{2}^{\text{inf.}}(G)\)._ **Observation 4**.: _For any graph \(G\), \(\{i_{1}^{*},i_{2}^{*}\}\subseteq S_{2}^{\text{inf.}}(G)\)._ According to Definition 5, agents who can pretend to be the first in the ranking sequence are definitely in the 2-influential set. The agents \(i_{1}^{*}\) and \(i_{2}^{*}\), who rank first and second, are also naturally in the 2-influential set. Based on the relationship between \(i_{1}^{*}\) and \(i_{2}^{*}\), the 2-influential set takes different forms. **Observation 5**.: _For any graph \(G\), if \(i_{2}^{*}\in P(i_{1}^{*})\), the set \(S_{2}^{\text{inf.}}(G)\) can be written as \(\{i_{1},i_{2},\cdots,i_{m}\}\), where \(i_{1}=i_{1}^{*}\), \(i_{2}=i_{2}^{*}\), and \(i_{t+1}\in P(i_{t})\) for all \(t<m\)._ Proof.: When an agent \(i_{3}\) other than \(i_{1}^{*}\) and \(i_{2}^{*}\) is in the 2-influential set, she must have the ability to decrease \(i_{1}^{*}\)'s or \(i_{2}^{*}\)'s progeny by hiding her out-edges, i.e., at least one of \(i_{3}\in P(i_{1}^{*})\) and \(i_{3}\in P(i_{2}^{*})\) is satisfied. If \(i_{2}^{*}\in P(i_{1}^{*})\), we show that \(i_{3}\) must belong to \(P(i_{2}^{*})\) by contradiction as follows. If \(i_{3}\notin P(i_{2}^{*})\), then she cannot decrease \(i_{2}^{*}\)'s progeny by hiding her out-edges. Since \(i_{2}^{*}\notin P(i_{3})\), \(i_{2}^{*}\in P(i_{1}^{*})\) is always satisfied no matter how \(i_{3}\) hides her out-edges. Hence, in order to rank first or second after hiding out-edges, \(i_{3}\succ i_{2}^{*}\) must be satisfied, which contradicts that \(i_{2}^{*}\) is the agent with rank 2. Therefore, \(i_{3}\in P(i_{2}^{*})\). Similarly, when there is another agent \(i_{4}\prec i_{3}\) in the 2-influential set, she must belong to \(P(i_{3})\), and this pattern extends to subsequent agents in the 2-influential set.
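Before continuing the characterization, note that Definition 5 admits a direct brute-force computation, which also makes Observations 3 and 4 easy to verify on small instances. The sketch below is our illustration (not the authors' code) and reuses the hypothetical `progeny` and `ranking_sequence` helpers from the earlier sketch:

```python
# Brute-force sketch of Definition 5 (ours; reuses progeny/ranking_sequence):
# agent i is k-influential iff, after hiding all of i's out-edges, fewer than
# k agents rank strictly above i.

def influential_set(n, edges, k):
    members = []
    for i in range(1, n + 1):
        pruned = [(u, v) for (u, v) in edges if u != i]  # i hides E_i
        if ranking_sequence(n, pruned).index(i) < k:     # 0-based rank of i
            members.append(i)
    P = progeny(n, edges)
    members.sort(key=lambda i: (P[i], i), reverse=True)  # order by Definition 4
    return members

edges = [(5, 4), (4, 3), (3, 2), (2, 1)]  # the path 5 -> 4 -> 3 -> 2 -> 1
# Agents 2 and 3 can each become top-ranked by hiding their out-edge, while
# agent 4 can only climb to rank 2; so agent 4 joins the 2-influential set but
# not the 1-influential set, and S_1^inf is contained in S_2^inf (Observation 3).
print(influential_set(5, edges, 1))  # [1, 2, 3]
print(influential_set(5, edges, 2))  # [1, 2, 3, 4]
```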
Since \(i_{3}\notin P(i_{4})\), \(i_{3}\in P(i_{1}^{*})\) is always satisfied no matter how \(i_{4}\) hides her out-edges. Hence, \(i_{4}\succ i_{3}\) must be satisfied for \(i_{4}\) to be in the 2-influential set, which is a contradiction. Therefore, \(i_{4}\in P(i_{3})\). Similarly, when there is another agent \(i_{5}\prec i_{4}\) in the 2-influential set, she must belong to \(P(i_{4})\), and this pattern extends to subsequent agents in the 2-influential set. The same results can be obtained similarly if \(i_{3}\in P(i_{2}^{*})\). From the above observations, we can see that the structure of the 2-influential set is much more complex than that of the 1-influential set. Furthermore, the progeny of the last agent in the 2-influential set might be much smaller than \(P(i_{1}^{*})\), since \(P(i_{2}^{*})\) might be small. These are the main difficulties in utilizing the 2-influential set. Extending the ideas of LDM and \(\beta\)-LM, we propose the following mechanism. **Logarithm After Least Deterministic (LALD)** 1. Given a network \(G=(N,E)\), find the 1-influential set \(S_{1}^{\text{inf.}}(G)\) and the 2-influential set \(S_{2}^{\text{inf.}}(G)\). 2. If \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)=\emptyset\), then \(S_{1}^{\text{inf.}}(G)=S_{2}^{\text{inf.}}(G)=\{i_{1},i_{2}\ldots,i_{m}\}\), where \(i_{t}\succ i_{t+1}\) for all \(1\leq t<m\). Then, assign the probability of each agent to be selected as follows: \[x_{j}=\begin{cases}1,&j=i_{m}\\ \frac{1}{1+\ln 2},&j=i_{m-1}\\ \frac{\ln 2}{1+\ln 2}\log_{2}\frac{P(i_{t})}{P(i_{t+1})},&j=i_{t},t<m-1\\ 0,&j\notin S_{2}^{\text{inf.}}(G).\end{cases}\] 3. If \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)\neq\emptyset\), suppose \(S_{2}^{\text{inf.}}(G)=\{i_{1},\ldots,i_{m}\}\) where \(i_{t}\succ i_{t+1}\) for all \(1\leq t<m\). First, deterministically select the agent \(i_{m}\), i.e., \(x_{i_{m}}=1\). Then, select the second agent by applying \((1/(1+\ln 2))\)-LM on \(G\). Intuitively, LALD first deterministically selects the last agent in the 2-influential set. Then, it uses the same probability distribution as \((1/(1+\ln 2))\)-LM to select another agent among the remaining agents in the 1-influential set. **Example 3**.: _We take the networks shown in Figure 1 and Figure 4 as running examples._ _In Figure 1, suppose \(j<i_{2}\). Then, \(S_{2}^{\text{inf.}}(G)=S_{1}^{\text{inf.}}(G)=\{i_{1},i_{2},i_{3},i_{4}\}\). Hence, LALD first deterministically selects agent \(i_{4}\), i.e., \(x_{i_{4}}=1\). For the remaining agents in \(S_{1}^{\text{inf.}}(G)\), LALD assigns the probabilities as \(x_{i_{3}}=1/(1+\ln 2)\approx 0.59\), \(x_{i_{2}}=\ln 2/(1+\ln 2)\log_{2}(P(i_{2})/P(i_{3}))\approx 0.11\), and \(x_{i_{1}}=\ln 2/(1+\ln 2)\log_{2}(P(i_{1})/P(i_{2}))\approx 0.09\)._ _In Figure 4, suppose \(i_{2}>i_{3}\) and \(i_{4}>j\). Then, \(S_{1}^{\text{inf.}}(G)=\{i_{1}\}\) and \(S_{2}^{\text{inf.}}(G)=\{i_{1},i_{2},i_{3},i_{4}\}\). Hence, LALD first deterministically selects agent \(i_{4}\), i.e., \(x_{i_{4}}=1\). LALD then runs \((1/(1+\ln 2))\)-LM, which assigns the probabilities among \(S_{1}^{\text{inf.}}(G)\). Here, it assigns \(x_{i_{1}}=1/(1+\ln 2)\approx 0.59\)._ **Theorem 7**.: _LALD is an IC 2-selection mechanism._ Proof.: For any graph \(G\in\mathcal{G}\), we consider three different types of agents. 1. For an agent \(i\notin S_{2}^{\text{inf.}}(G)\), by definition, she can never be in the set by hiding her out-edges. Hence, she will always have 0 probability of being chosen. 2.
**Theorem 7**.: _LALD is an IC 2-selection mechanism._

Proof.: For any graph \(G\in\mathcal{G}\), we consider three different types of agents.
1. For an agent \(i\notin S_{2}^{\text{inf.}}(G)\), by definition, she can never be in the set by hiding her out-edges. Hence, she will always have 0 probability to be chosen.
2. For an agent \(i\in S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)\) when \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)\neq\emptyset\), there are two cases. (i) If \(i\) is the last agent in the 2-influential set, her probability to be chosen is 1. Hence, she has no incentive to manipulate. (ii) If \(i\) is not the last agent, no matter how she hides her out-edges, both the last agent and herself are still in the set \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)\). Hence, her probability to be chosen remains 0.
3. For an agent \(i\in S_{1}^{\text{inf.}}(G)\), suppose \(S_{1}^{\text{inf.}}(G)=\{i_{1},\ldots,i_{q}\}\) with \(i_{t}\succ i_{t+1}\) for all \(1\leq t<q\). There are three cases. (i) If \(i\) is the last agent \(i_{q}\) in the 1-influential set, she has no incentive to manipulate when \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)=\emptyset\) since \(x_{i}=1\). When \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)\neq\emptyset\), no matter how \(i\) hides her out-edges, agents in the set \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)\) will still be in the 2-influential set (and may even be in the 1-influential set). After \(i\) hides some out-edges, if \(S_{2}^{\text{inf.}}(G^{\prime})\setminus S_{1}^{\text{inf.}}(G^{\prime})\neq\emptyset\), then \(i\) cannot have a higher probability by Theorem 1; if \(S_{2}^{\text{inf.}}(G^{\prime})\setminus S_{1}^{\text{inf.}}(G^{\prime})=\emptyset\), then \(i\) can have at most \(1/(1+\ln 2)\) probability since she will no longer be the last agent in \(S_{1}^{\text{inf.}}(G^{\prime})\), which equals her original probability. (ii) If \(i=i_{q-1}\), she has no incentive to manipulate when \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)=\emptyset\). It is because \(i_{q}\) will always be in the 1-influential set no matter how \(i\) hides her out-edges, which makes increasing her probability to be chosen impossible. When \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)\neq\emptyset\), it is almost the same condition as that for \(i=i_{q}\). Hence, she cannot increase her probability by manipulation, either. (iii) If \(i=i_{t}\) with \(t<q-1\), no matter how she hides her out-edges, \(i_{q}\) and \(i_{q-1}\) are always in the 1-influential set. Hence, \(i\)'s probability to be chosen will not change.

Taking all the above together, we can conclude that the mechanism is IC.

Figure 4: An example of the network, where the marked agents have the relationship \(i_{1}\succ i_{2}\succ i_{3}\succ i_{4}\succ j\). The 1-influential set and the 2-influential set are represented by dashed borders.

**Theorem 8**.: _LALD is \(\frac{3+\ln 2}{4(1+\ln 2)}\)-optimal._

Proof.: Suppose \(S_{2}^{\text{inf.}}(G)=\{i_{1},i_{2},\cdots,i_{m}\}\) for any graph \(G\in\mathcal{G}\). There are two different cases that need consideration.
1. If \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)=\emptyset\), then according to Observation 1 and Theorem 2, we have \[\frac{\mathbb{E}_{S}[\sum_{i\in S}P(i)]}{\sum_{i\in S_{2}^{*}}P(i)} \geq\frac{P(i_{m})+\frac{1}{1+\ln 2}P(i_{1}^{*})}{P(i_{1}^{*})+P(i_{2}^{*})}\] \[\geq\frac{\left(\frac{1}{2}+\frac{1}{1+\ln 2}\right)P(i_{1})}{P(i_{1})+P(i_{2})}\] \[\geq\frac{\frac{1}{2}+\frac{1}{1+\ln 2}}{2}=\frac{3+\ln 2}{4(1+\ln 2)}.\]
2. If \(S_{2}^{\text{inf.}}(G)\setminus S_{1}^{\text{inf.}}(G)\neq\emptyset\), we first consider the progeny \(P(i_{m})\). When \(i_{2}^{*}\in P(i_{1}^{*})\), the structure of the 2-influential set is characterized by Observation 5.
Then after deleting \(i_{m}\)'s out-edges, \(i_{1}\succ i_{2}\) holds. Hence, \(P(i_{m})\geq P(i_{2})-P(i_{m})\) which implies \(2P(i_{m})\geq P(i_{2})\). When \(i_{2}^{*}\notin P(i_{1}^{*})\), the structure of the 2-influential set is characterized by Observation 6. Hence, at least one of \(2P(i_{m})\geq P(i_{1})\) and \(2P(i_{m})\geq P(i_{2})\) is satisfied, which all imply that \(2P(i_{m})\geq P(i_{2})\). Therefore, let \(P(i_{2})/P(i_{1})=\rho\) and we have \[\frac{\mathbb{E}_{S}[\sum_{i\in S}P(i)]}{\sum_{i\in S_{2}^{*}}P(i)} \geq\frac{P(i_{m})+\frac{1}{1+\ln 2}P(i_{1}^{*})}{P(i_{1}^{*})+P(i_{2}^{* })}\] \[\geq\frac{\frac{1}{2}P(i_{2})+\frac{1}{1+\ln 2}P(i_{1})}{P(i_{1})+P (i_{2})}\] \[=\frac{\rho/2+1/(1+\ln 2)}{1+\rho}\] \[=\frac{\frac{1}{2}(1+\rho)+\frac{1-\ln 2}{2(1+\ln 2)}}{1+\rho}\] \[=\frac{1}{2}+\frac{1-\ln 2}{2(1+\ln 2)}\cdot\frac{1}{1+\rho}\] \[\geq\frac{1}{2}+\frac{1-\ln 2}{4(1+\ln 2)}=\frac{3+\ln 2}{4(1+\ln 2 )}.\] Therefore, we can conclude that the mechanism is \(\frac{3+\ln 2}{4(1+\ln 2)}\)-optimal. ## 5 Discussion In this paper, we investigate the incentive-compatible selection mechanisms for one or two influentials, where an agent's influential power is defined by her progeny. The goal is to select agents with progeny as large as possible and to prevent them from hiding their out-edges at the same time. Based on the idea of assigning possibilities of being selected to those agents who can pretend to be the one with the largest or second largest progeny, we first propose the \(1/(1+\ln 2)\)-LM mechanism for selecting one agent, which is optimal among all IC and fair single-agent selections. We then propose the LALD mechanism for selecting up to two influentials, which has an approximation ratio of \((3+\ln 2)/(4(1+\ln 2))\) (\(\approx 0.54\)). To the best of our knowledge, this is the first work to select more than one agent for progeny maximization. There are several interesting future directions worth investigating, and we provide some brief discussions here. One direction is to narrow the gap between the current lower bound (given by our proposed mechanism) and the current upper bound (\(23/27\) as we proved) of the approximation ratio for an optimal IC 2-selection mechanism. For the side of upper bounds, notice that our provided upper bound does not require additional properties like fairness defined in (Zhang _et al._, 2021). This is because the fairness for selecting a single agent does not apply in selecting multiple agents (e.g., in LALD, the probability of choosing \(i_{1}^{*}\) may be also related to the structure of the 2-influential set). If we extend the definition of fairness to \(k\)-fairness like **Definition 6** (sketch).: \(i_{1}^{*}\) _(or also with \(i_{2}^{*}\) to \(i_{k}^{*}\)) has the same probability to be chosen when the \(k\)-influential set and the structure formed by \(P(i_{1}^{*})\) (or also with \(P(i_{2}^{*})\) to \(P(i_{k}^{*})\)) remain the same._ Then, we can observe that \(k\)-fairness will become weaker when \(k\) becomes larger. Zhang _et al._ (2021) conjectured that dropping (1-)fairness will not affect the upper bound they characterized. If it can be proven to be true, then we can also draw a corollary that introducing \(k\)-fairness will not affect the upper bounds of approximation ratios for IC \(k\)-selection mechanisms. For the side of improving lower bounds, one may consider to utilize more agents in the 2-influential set but not in the 1-influential set. 
The main difficulty here is that these agents may have very small progeny when \(P(i_{2}^{*})\ll P(i_{1}^{*})\). The other direction is to extend the mechanisms for selecting more agents (\(k\geq 3\)). Similar to the case of selecting two agents, only selecting agents in the \(k^{\prime}\)-influential set with \(k^{\prime}<k\) may limit the performance. A natural idea is to select \(k\) agents in the \(k\)-influential set. The main difficulty here is that the structure of the \(k\)-influential set becomes more and more complex as \(k\) grows. Intuitively, the structure of the \(k\)-influential set depends on the relationships among agents \(i_{1}^{*}\), ..., \(i_{k}^{*}\). The number of different cases of the structure will grow exponentially with \(k\), and is roughly \(2^{O(n^{2})}\). A possible way to handle this challenge may be to recursively consider the influential set with lower \(k\). Finally, in terms of other applications, such as recruiting agents to promote advertisements, designing selection mechanisms to maximize the expected cardinality of the union of progeny is also a promising future direction.

## Acknowledgements

This work is supported by Science and Technology Commission of Shanghai Municipality (No. 22ZR1442200 and No. 23010503000), and Shanghai Frontiers Science Center of Human-centered Artificial Intelligence (ShanghaiAI).
2305.11435
Syllable Discovery and Cross-Lingual Generalization in a Visually Grounded, Self-Supervised Speech Model
In this paper, we show that representations capturing syllabic units emerge when training a self-supervised speech model with a visually-grounded training objective. We demonstrate that a nearly identical model architecture (HuBERT) trained with a masked language modeling loss does not exhibit this same ability, suggesting that the visual grounding objective is responsible for the emergence of this phenomenon. We propose the use of a minimum cut algorithm to automatically predict syllable boundaries in speech, followed by a 2-stage clustering method to group identical syllables together. We show that our model not only outperforms a state-of-the-art syllabic segmentation method on the language it was trained on (English), but also generalizes in a zero-shot fashion to Estonian. Finally, we show that the same model is capable of zero-shot generalization for a word segmentation task on 4 other languages from the Zerospeech Challenge, in some cases beating the previous state-of-the-art.
Puyuan Peng, Shang-Wen Li, Okko Räsänen, Abdelrahman Mohamed, David Harwath
2023-05-19T05:19:04Z
http://arxiv.org/abs/2305.11435v2
# Syllable Discovery and Cross-Lingual Generalization in a Visually Grounded, Self-Supervised Speech Model

###### Abstract

In this paper, we show that representations capturing syllabic units emerge when training a self-supervised speech model with a visually-grounded training objective. We demonstrate that a nearly identical model architecture (HuBERT) trained with a masked language modeling loss does not exhibit this same ability, suggesting that the visual grounding objective is responsible for the emergence of this phenomenon. We propose the use of a minimum cut algorithm to automatically predict syllable boundaries in speech, followed by a 2-stage clustering method to group identical syllables together. We show that our model not only outperforms a state-of-the-art syllabic segmentation method on the language it was trained on (English), but also generalizes in a zero-shot fashion to Estonian. Finally, we show that the same model is capable of zero-shot generalization for a word segmentation task on 4 other languages from the Zerospeech Challenge, in some cases beating the previous state-of-the-art.1

Footnote 1: Code & Model: [https://github.com/jassonppy/syllable-discovery](https://github.com/jassonppy/syllable-discovery).

**Index Terms**: visually-grounded speech, speech segmentation, self-supervised speech processing

## 1 Introduction

Traditionally, automatic speech recognition, speech synthesis, and spoken language understanding tasks have relied on supervised learning and the assumption that ground-truth text transcriptions of the training speech are available. Such transcriptions are costly to collect and represent a major hurdle in developing speech recognition and related technologies that can serve the thousands of languages around the world. Recently the speech community has made tremendous progress developing self-supervised models that can learn powerful representations of the speech signal by being pre-trained on untranscribed speech data. After pre-training the models can be fine-tuned on a small amount of transcribed data to achieve impressive performance on a variety of tasks [1, 2, 3, 4, 5]. Furthermore, the representations learned by these models can be clustered into discrete speech units that have been shown to be strongly correlated with words and phones [6, 7]. These units can be used to tokenize speech into a pseudo-text sequence, which can be used as a drop-in replacement for a text transcription in a wide variety of downstream tasks, giving rise to a new genre of "textless" speech processing research [8, 9, 10, 11]. Because of the emergent nature of these units, it is not yet understood how to control what type of linguistic structure (e.g. phones, syllables, words) they will capture. It has been shown that the representations of self-supervised speech models tend to correlate with lower-level structure such as phones at lower model layers, and higher-level structure such as words at higher model layers [6, 12]. However, it has also been demonstrated that the model's training objective strongly influences the nature of these representations. Training the model to perform cross-modal grounding of speech to contextually-relevant visual images has been shown to dramatically increase the model's word learning capability over a masked language modeling objective, even when the model architecture is held nearly constant [7]. In this paper, we build on [7] and demonstrate that multimodal self-supervision simultaneously results in the emergence of word-like and syllable-like representations within the same model.
While [7] showed that word-like units are encoded by the Transformer's attention heads, we show that syllable structure emerges within the embeddings of the token sequence itself. We propose the use of a minimum cut segmentation algorithm to derive syllable boundaries from these features, outperforming a state-of-the-art method for unsupervised syllabic segmentation. We then show that these segments can be clustered across a speech corpus to perform syllable discovery, enabling tokenization of the speech signal at the level of syllable-like units. Finally, we also show surprising results where our model trained only on English speech is able to perform zero-shot segmentation of syllables on another language (Estonian) and words in multiple non-English languages, in several cases outperforming the state-of-the-art models on the Zerospeech challenge [13].

## 2 Related Work

Besides the aforementioned work on self-supervised and textless speech processing, our work is also related to spoken term discovery and visually grounded speech processing. Spoken term discovery - inferring the temporal boundary and identity of words and short phrases from untranscribed speech audio data - has been an important research direction in zero-resource speech processing [13]. The earliest work that tackles spoken term discovery dates back to at least the segmental dynamic programming algorithm proposed by Park and Glass [14]. Since then, numerous other approaches have been proposed. [15, 16] developed Bayesian models for hierarchical phoneme and word discovery. Based on the fact that syllables are organized around particularly sonorous speech sounds, [17] developed a sonority-fluctuation-based method for syllabic segmentation. Other works model words directly, either via an iterative segmentation-and-clustering approach [18], or reinforcement learning [19]. Self-supervised learning has also been considered for end-to-end phoneme and word segmentation [20, 21]. Most recently, Algayres et al. [22] identified the key issues in applying text-based models for speech segmentation, and proposed the DP-Parse algorithm which uses an instance lexicon to mitigate clustering error. Kamper [23] applied vector quantization for phoneme-like unit discovery, and then ran a dynamic programming algorithm on the discovered units for word segmentation. Visually grounded speech (VGS) processing [24] generalizes the idea of self-supervised learning to multimodal (visual) data and learns speech representations by associating speech audio with contextually-relevant visual input. VGS usually leverages image-speech [25, 26] or video-speech [27, 28] paired data. In practice, besides speech-image retrieval and alignment [29, 30, 31, 32, 33, 34], VGS models have also been shown to achieve competitive performance on keyword spotting [35], query-by-example search [36], and various tasks in the SUPERB benchmark [37, 38]. The study of linguistic information learned in VGS models has been attracting increasing attention. In particular, researchers have measured the phonetic, syllabic, and lexical information in VGS models [39, 40, 6, 41, 42, 7, 43]. In addition to [7] which we build our work on, [43] is the most relevant to ours, where they studied the emergence of phonetic, syllabic, and lexical information in different layers of CNN-based VGS models. Our work is different from theirs in that none of the modules of our model receives textual supervision, while their image encoder is pre-trained on Imagenet classification [44].
In addition, we show the emergence of hierarchical linguistic information in the non-hierarchical Transformer model, while they use hierarchical CNN models.

## 3 Technical Approach

VG-HuBERT [7] is a self-supervised dual-encoder model trained using a contrastive loss to match speech waveforms with the images they describe. Although VG-HuBERT is not trained with any textual supervision, the model has been shown to exhibit strong word discovery capabilities [7]. Specifically, its CLS token places concentrated chunks of attention weight on word segments in input utterances (see lower left subfigure of figure 1 for an example). Our motivating hypothesis is that VG-HuBERT's word discovery ability is predicated on its ability to also discover sub-word units at earlier layers. To probe this, we first extract a sequence of frame embeddings from some layer of the model given an input waveform, \(\mathbf{C}\in\mathbb{R}^{T\times D}\) (\(T\) is the number of speech frames, \(D\) is the feature dimension). Next, we calculate the feature self-similarity matrix as feat\(\text{SSM}:=\mathbf{C}\mathbf{C}^{\intercal}\). We normalize feat\(\text{SSM}\) by subtracting the smallest element of the matrix from all elements to ensure that all frame-pair similarity scores are non-negative. Figure 1 shows an example of feat\(\text{SSM}\), where green color denotes high similarity and blue denotes low similarity. We see a clear block diagonal structure in VG-HuBERT's feat\(\text{SSM}\), where each block corresponds to a syllable. In HuBERT's feat\(\text{SSM}\), however, the block structure hardly exists. Based on the different patterns we see between the feature self-similarity matrix and the CLS attention, we hypothesize that visually grounded training leads to the emergence of syllable identity being encoded in VG-HuBERT's features, and the CLS token attending to these features to infer the presence of words. To quantitatively study the syllable discovery phenomenon, we adopt the normalized minimum cut algorithm [45, 46, 47] to automatically segment the blocks in feat\(\text{SSM}\), and use the block boundaries to predict syllable boundaries.

**A min-cut segmentation algorithm for featSSM.** We define a fully-connected, undirected graph \(G(V,E)\) for every speech utterance. Set \(V\) consists of all speech frames as nodes; set \(E\) consists of edges, where the edge weight \(w(u,v)\) is defined as the similarity score corresponding to nodes \(u\) and \(v\). Segmenting the blocks in feat\(\text{SSM}\) means partitioning the corresponding graph \(G(V,E)\) into disjoint sets \(A_{1},A_{2},\cdots,A_{k}\) such that the similarity among nodes (i.e., frames) within each set is maximized, while the similarity of nodes between sets is minimized. To achieve this, [45] proposed the following objective: \[\text{Ncut}_{k}(V)=\frac{cut(A_{1},V-A_{1})}{vol(A_{1})}+\cdots+\frac{cut(A_{k},V-A_{k})}{vol(A_{k})}\] where \(cut(A,B):=\sum_{u\in A,v\in B}w(u,v)\), and \(vol(A):=\sum_{u\in A,v\in V}w(u,v)\). For sequential data, the above minimization problem can be solved using a dynamic programming algorithm [46] in \(O(KN^{2})\) time. Here \(K\) is the number of partitions (estimated number of syllables in the utterance in our case), and \(N\) is the number of nodes (speech frames). \(K\) needs to be set up-front for every utterance, and we use a hyperparameter second-per-syllable (secPerSyllable) to decide \(K\) based on the duration of the utterance.
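As a concrete reference, here is a compact implementation (ours) of the \(O(KN^{2})\) dynamic program described above, using 2D prefix sums over the similarity matrix so that each candidate segment's \(cut/vol\) cost is evaluated in constant time. Function and variable names are ours; the merging variant of [47], described next, is not included.

```python
import numpy as np

def feat_ssm(C):
    """featSSM = C C^T, shifted so all similarities are non-negative."""
    S = C @ C.T
    return S - S.min()

def ncut_segment(S, K):
    """Split frames 0..N-1 into K contiguous segments minimizing
    Ncut_K = sum_k cut(A_k, V - A_k) / vol(A_k). 2D prefix sums give
    each segment's cost in O(1), so the DP runs in O(K N^2) overall."""
    N = S.shape[0]
    P = np.zeros((N + 1, N + 1))
    P[1:, 1:] = S.cumsum(0).cumsum(1)                 # 2D prefix sums
    volcum = np.concatenate([[0.0], S.sum(axis=1).cumsum()])

    def cost(s, e):                                   # segment A = frames [s, e)
        assoc = P[e, e] - P[s, e] - P[e, s] + P[s, s] # links inside A
        vol = volcum[e] - volcum[s]                   # links from A to all of V
        return (vol - assoc) / vol if vol > 0 else 0.0

    dp = np.full((K + 1, N + 1), np.inf)
    back = np.zeros((K + 1, N + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, K + 1):
        for e in range(k, N + 1):
            for s in range(k - 1, e):
                c = dp[k - 1, s] + cost(s, e)
                if c < dp[k, e]:
                    dp[k, e], back[k, e] = c, s
    ends, e = [], N                                   # recover boundaries
    for k in range(K, 0, -1):
        ends.append(e)
        e = back[k, e]
    return sorted(ends)                               # end index of each segment
```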
In practice, we use the variant introduced in [47], where we first oversegment feat\(\text{SSM}\), and then iteratively merge temporally adjacent partitions if the cosine similarity of the averaged features belonging to the two partitions exceeds some threshold (denoted as mergeThres). We found that this variant always outperformed the original algorithm proposed in [46].

**Clustering.** With hypothesized syllabic segment boundaries produced by the min-cut algorithm, we further use a 2-step clustering approach to categorize the segments. The averaged features within each segment are used as the embedding of the segment. We initially cluster the segment embeddings using KMeans to produce a large number of clusters, and then run agglomerative clustering to merge similar clusters. We found our 2-step clustering approach to work better than just using KMeans, given the same number of final clusters (a code sketch of this procedure is given at the end of this section). Since our work and [7] are both based on VG-HuBERT, we denote [7]'s segmentation approach as \(\mathbf{VG}\)-\(\mathbf{HuBERT_{cls}}\), where the CLS attention is used to segment speech, and denote our approach as \(\mathbf{VG}\)-\(\mathbf{HuBERT_{featSSM}}\).

## 4 Experiments

### Datasets

For English, we use SpokenCOCO for training and evaluation. We use the Montreal Forced Aligner2 to generate phonetic and word alignment, and then derive the corresponding syllable alignment utilizing a rule-based syllabification script3. For cross-lingual generalization experiments, we follow [17] and evaluate our approaches on Estonian syllable segmentation using the Phonetic Corpus of Estonian Spontaneous Speech [50], which contains conversational speech between two test subjects recorded with near-field microphones. The corpus comes with manually verified syllable transcription and alignment. We also evaluate our approach on the Zerospeech word segmentation task, which contains five languages: Mandarin, English, French, German, and Wolof.

Footnote 2: [https://montreal-forced-aligner.readthedocs.io/en/latest/](https://montreal-forced-aligner.readthedocs.io/en/latest/)

### Implementation details

**Model training.** We use the official open-sourced codebase and training recipe released by Peng and Harwath [7] and train a VG-HuBERT on SpokenCOCO. Model snapshots are saved during training for syllable and word discovery analysis.

**Evaluation.** To evaluate segmentation performance, we use precision, recall, F1 and R-value [51, 23]. For the calculation of the above metrics, we use a tolerance window of \(50\)ms for SpokenCOCO and Estonian following [17], and \(30\)ms for the Zerospeech Challenge [13]. To evaluate the quality of our syllable clustering, we first match hypothesized syllable segments with the ground truth segments for each utterance. To do so, we use a Hungarian matching algorithm where each segment is a node and edge weights are defined by temporal intersection-over-union between each hypothesized segment and ground truth segment (unmatched segments are assigned to a dummy segment). Then, we follow [7] and use cluster purity and number of detected syllables (DS). A syllable is defined as being detected if it achieves an F1 score greater than \(0.5\) for some cluster [7]. To avoid conflating word detection and syllable detection, we only evaluate on multisyllabic words.

**Hyperparameter tuning.** For SpokenCOCO, we tune the mergeThres to maximize the segmentation R-value on the SpokenCOCO validation set. The number of clusters in KMeans and agglomerative clustering are fixed at \(16384\) and \(4096\), respectively.
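Below is a minimal sketch (ours) of the 2-step clustering with the stated cluster counts, using scikit-learn; the distance metric and linkage of the agglomerative step are our assumptions, as the paper does not pin them down here.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def segment_embeddings(C, boundaries):
    """Mean-pool frame features C within each predicted span [s, e)."""
    starts = [0] + boundaries[:-1]
    return np.stack([C[s:e].mean(axis=0) for s, e in zip(starts, boundaries)])

def two_step_cluster(seg_embs, n_kmeans=16384, n_final=4096):
    """KMeans first, then agglomerative merging of the KMeans centroids;
    each segment inherits the merged label of its KMeans cluster.
    (Requires more segments than n_kmeans; the default linkage/metric of
    AgglomerativeClustering is our assumption, not from the paper.)"""
    km = KMeans(n_clusters=n_kmeans, n_init=1, random_state=0).fit(seg_embs)
    merged = AgglomerativeClustering(n_clusters=n_final).fit_predict(
        km.cluster_centers_)
    return merged[km.labels_]
```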
For syllabic segmentation on Estonian, we tune the hyperparameters on a validation set created following the procedure introduced in [17], using a subset of the original Estonian corpus [50]. For cross-lingual word segmentation on the Zerospeech challenge, we use the hyperparameters selected from the SpokenCOCO validation set.

### When do syllables and words emerge during training?

We first investigate when syllable and word information emerges during the training of VG-HuBERT. In Figure 2, we show the syllable and word segmentation performance of VG-HuBERT as a function of training iteration, along with speech-image retrieval accuracy on the SpokenCOCO validation set. Since the contrastive training loss is a direct approximation of the retrieval metric, speech-image retrieval accuracy keeps improving throughout the course of training as expected. For syllabic segmentation, VG-HuBERT reaches the first peak at \(20^{+}2\)k steps, and the performance keeps improving shortly afterwards, with a trend similar to retrieval performance. Interestingly, VG-HuBERT peaks at \(20^{+}2\)k steps for word segmentation, and the performance slightly decreases before levelling off. Anecdotally, by manually examining some examples we found that VG-HuBERT's CLS token tends to ignore more words in the later stages of training. This might be because the model is starting to ignore non-salient words in order to produce semantic representations that are more discriminative in terms of retrieval performance. Notably, as we can see in Figure 1, syllabic information for the entire utterance tends to persist in the model's representations even when some segments are ignored by the CLS token's attention.

### Where in the model do syllables and words emerge?

We next perform a layer-wise study to show how visual grounding helps the emergence of syllables and words, and the interplay between the discovery of different linguistic units. Figure 3 compares VG-HuBERT to HuBERT for syllabic segmentation, and also shows VG-HuBERT's word segmentation on the SpokenCOCO validation set. HuBERT performs quite evenly across all layers, while syllabic segmentation is best in VG-HuBERT's mid to late layers, and VG-HuBERT's word segmentation ability is concentrated in the final few layers. We also fine-tuned HuBERT on the SpokenCOCO utterances using its original self-supervised loss to mitigate the potential domain gap, but did not see any improvement in syllabic segmentation (see first two rows in Table 1). We see a 'division of labor' between different layers in VG-HuBERT, with middle layers performing best in syllabic segmentation, while the last three layers specialize in word segmentation. In addition, we note that the best syllabic segmentation layer (layer \(9\)) is right before the best word segmentation layer (layer \(10\)), indicating that the attention heads may be learning to string syllables together into words. We leave a more in-depth investigation of this phenomenon for future work.

Figure 3: Layer-wise performance of VG-HuBERT on syllable and word segmentation, and HuBERT on syllabic segmentation on SpokenCOCO val set. HuBERT word segmentation gives very poor results [7] and therefore is not shown.

Figure 2: The performance of speech-image retrieval, and syllable and word segmentation of VG-HuBERT as training progresses.

### Syllable discovery on English

Table 1 compares VG-HuBERT with other models for syllable discovery on the SpokenCOCO test set.
We see that HuBERT performs the worst on this dataset, no matter whether it is fine-tuned on SpokenCOCO or not. VG-HuBERT\({}_{\text{cls}}\) denotes the CLS token's attention-based segmentation, a method that has been shown to achieve SotA on word segmentation [7]; as expected, it gives high precision and low recall on this syllabic segmentation task. In terms of syllable detection, we see that VG-HuBERT\({}_{\text{cls}}\) can detect more than 700 syllables with a high cluster purity. Considering the high cluster purity and low boundary recall of VG-HuBERT\({}_{\text{cls}}\), we conclude that this approach is able to discover a smaller number of syllables, but is highly confident of the ones that it does discover. Oscillator [17] is a signal processing-based syllabic segmentation algorithm that achieves SotA for unsupervised syllabic segmentation on multiple languages, including English. Oscillator performs reasonably well on this dataset, only lagging behind our approach on segmentation. Our VG-HuBERT\({}_{\text{featSSM}}\) model achieves the best performance in both syllable segmentation (best F1 and R-val) and clustering (best DS).

### Zero-shot syllabic segmentation on Estonian

Syllables are strongly correlated with speech intensity and voicing, and are organized around sonorant speech sounds [17]. This suggests that a syllable detection model trained on one language may be able to generalize to other languages. We thus evaluate our English-trained models on a non-English language, namely Estonian. We use the same five-hour subset and evaluation pipeline as [17]. Table 2 lists the results. We see that compared to other methods including the Oscillator, our VG-HuBERT performs the best in both F1 and R-val metrics, indicating that its syllabic segmentation ability is at least somewhat language-agnostic.

### Zero-shot word segmentation on unseen languages

Lastly, we ask the question: if VG-HuBERT's CLS token detects words in English, what does it do for a language it has not seen during training? To investigate the CLS token's behavior on languages unseen during training, we first visualize the CLS attention for Estonian and Mandarin utterances in figure 4. We see that anecdotally, the CLS attention appears to be performing syllabic segmentation, but it sometimes also connects adjacent syllables together. In some cases, the connections give invalid words - in figure 4, for Estonian (the upper figure), 'h_ve' and 'i' are connected, but the result is not a valid word; for Mandarin, '_'_' is connected (in the middle figure), and the result is also not a valid word. However, in some other cases, the connections happen to give valid words - in the two Mandarin examples in figure 4, '_'_' and '_'_' got connected, and they are valid words. Based on the observation that the CLS token produces a mixture of monosyllabic and multisyllabic segmentation, we test VG-HuBERT\({}_{\text{cls}}\) for word segmentation on the Zerospeech challenge. In table 3, we see that VG-HuBERT achieves SotA performance on three out of five languages, despite only being trained on English. Interestingly, VG-HuBERT performs very differently on Mandarin and Wolof. While this could be due to hyperparameter settings (we use the same hyperparameters for all languages), we are not able to verify this because the Wolof transcripts are not publicly available.

## 5 Concluding Discussion

In this paper, we demonstrated that the VG-HuBERT visually-grounded speech model exhibits emergent syllable recognition behavior.
We proposed the use of a minimum cut algorithm to automatically extract syllable boundaries from the model's learned representations, and showed that this segmentation ability could transfer to Estonian speech even though the model was only trained on English. Furthermore, we demonstrated that the emergent word discovery ability that is also present in the model could be applied in a zero-shot transfer fashion to segment words in non-English languages, achieving state-of-the-art segmentation performance for several languages in the Zerospeech Challenge benchmark. In our future work, we plan to apply our syllable discovery method to tokenize speech waveforms and use these tokenizations in various textless speech processing tasks such as spoken language modeling and speech-to-speech translation, as well as unsupervised speech recognition.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model & Prec. & Rec. & F1 & R-val. & Purity & DS \\ \hline HuBERT ft.[2] & 43.8 & 49.4 & 46.4 & 51.5 & 29.0 & 519 \\ HuBERT [2] & 43.8 & 46.5 & 45.1 & 52.0 & 30.1 & 522 \\ VG-HuBERT\({}_{\text{cls}}\)[7] & 58.7 & 37.1 & 45.5 & 54.3 & 66.1 & 751 \\ Oscillator [17] & 52.0 & 64.6 & 57.6 & 57.4 & - & - \\ VG-HuBERT\({}_{\text{featSSM}}\) & 57.4 & 63.6 & **60.3** & **64.3** & 45.8 & **902** \\ \hline \hline \end{tabular} \end{table} Table 1: Syllable segmentation performance of different models on the SpokenCOCO test set. DS denotes detected syllables.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Approach & Mand. & French & Engl. & German & Wolof \\ \hline PUTW [54] & 4.4 & 5.1 & 4.1 & 2.9 & 4.2 \\ ES-KMeans [18] & 8.1 & 6.3 & 19.2 & 14.5 & 10.9 \\ SEA [55] & 12.1 & 6.3 & 6.6 & 6.3 & 12.6 \\ DP-Parse [22] & 16.0 & 15.3 & 21.9 & 13.4 & **17.5** \\ DPDP [23] & **26.3** & 12.2 & 19.2 & 9.0 & 15.0 \\ VG-HuBERT\({}_{\text{cls}}\) & 19.5 & **15.5** & **26.6** & **15.8** & 7.1 \\ \hline \hline \end{tabular} \end{table} Table 3: Word segmentation performance on the Zerospeech Challenge. Token F1 is a stricter metric than boundary F1, where a word is considered a hit only when both its start and end boundaries are successfully predicted.

Figure 4: Visualizations of VG-HuBERT's CLS attention on unseen languages - Estonian and Mandarin. Thin dashed lines denote syllable boundaries, thick vertical lines denote word boundaries. Word boundaries are also syllable boundaries.
2306.08434
Construction of Antisymmetric Variational Quantum States with Real-Space Representation
Electronic state calculations using quantum computers are mostly based on second quantization, which is suitable for qubit representation. Another way to describe electronic states on a quantum computer is first quantization, which is expected to achieve smaller scaling with respect to the number of basis functions than second quantization. Among basis functions, a real-space basis is an attractive option for quantum dynamics simulations in the fault-tolerant quantum computation (FTQC) era. A major difficulty in first quantization with a real-space basis is state preparation for many-body electronic systems. This difficulty stems from the antisymmetry of electrons, and it is not straightforward to construct antisymmetric quantum states on a quantum circuit. In the present paper, we provide a design principle for constructing a variational quantum circuit to prepare an antisymmetric quantum state. The proposed circuit generates the superposition of exponentially many Slater determinants, that is, a multi-configuration state, which provides a systematic approach to approximating the exact ground state. We implemented the variational quantum eigensolver (VQE) to obtain the ground state of a one-dimensional hydrogen molecular system. As a result, the proposed circuit well reproduced the exact antisymmetric ground state and its energy, whereas the conventional variational circuit yielded neither an antisymmetric nor a symmetric state. Furthermore, we analyzed the many-body wave functions based on quantum information theory, which illustrated the relation between the electron correlation and the quantum entanglement.
Takahiro Horiba, Soichi Shirai, Hirotoshi Hirai
2023-06-14T11:11:31Z
http://arxiv.org/abs/2306.08434v1
# Construction of Antisymmetric Variational Quantum States with Real-Space Representation

###### Abstract

Electronic state calculations using quantum computers are mostly based on second quantization, which is suitable for qubit representation. Another way to describe electronic states on a quantum computer is first quantization, which is expected to achieve smaller scaling with respect to the number of basis functions than second quantization. Among basis functions, a real-space basis is an attractive option for quantum dynamics simulations in the fault-tolerant quantum computation (FTQC) era. A major difficulty in first quantization with a real-space basis is state preparation for many-body electronic systems. This difficulty stems from the antisymmetry of electrons, and it is not straightforward to construct antisymmetric quantum states on a quantum circuit. In the present paper, we provide a design principle for constructing a variational quantum circuit to prepare an antisymmetric quantum state. The proposed circuit generates the superposition of exponentially many Slater determinants, that is, a multi-configuration state, which provides a systematic approach to approximating the exact ground state. We implemented the variational quantum eigensolver (VQE) to obtain the ground state of a one-dimensional hydrogen molecular system. As a result, the proposed circuit well reproduced the exact antisymmetric ground state and its energy, whereas the conventional variational circuit yielded neither an antisymmetric nor a symmetric state. Furthermore, we analyzed the many-body wave functions based on quantum information theory, which illustrated the relation between the electron correlation and the quantum entanglement.

## I Introduction

Quantum computers are currently attracting increasing attention as promising hardware for materials computations [1; 2; 3], and a number of studies on a variety of material systems have been conducted [4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. Such materials computations are mostly based on second quantization [14; 15; 16], which is suitable for describing electronic states on quantum computers. An alternative way to describe electronic states on quantum computers is first quantization, in which a wave function is specified by the expansion coefficients of the basis functions. With quantum computers, it is possible to obtain a wave function represented by an exponential number of basis functions with a polynomial number of qubits [17]. Therefore, first quantization offers the possibility of achieving smaller scaling with respect to the number of basis functions than second quantization [18; 19; 20; 21; 22]. Among basis functions, a real-space basis is an attractive option because a systematic improvement of computational accuracy and a rapid convergence to the continuum limit can be expected by increasing the number of qubits [23]. In addition, a real-space basis can be applied to systems with a variety of boundary conditions [24; 23], and thus is suitable for quantum dynamics calculations [25]. Recently, Chan et al. proposed quantum circuits for computing real-space quantum dynamics based on first quantization [21], which represented a promising blueprint of quantum simulations in the fault-tolerant quantum computing (FTQC) era. However, first quantization has a significant challenge, namely state preparation. Let us consider preparing the ground state of the system, which is a typical choice of initial state for dynamics calculations.
As a state preparation method, we consider quantum phase estimation (QPE) [26], which has been employed in several studies on first quantization [19; 21]. By using QPE, it is possible to distill the ground state from an input state which has sufficient overlap with the ground state [19]. Thus, the problem is how to prepare such an input state. In first quantization with a real-space basis, preparing an input state takes a tremendous number of gate operations because the probability amplitudes of the many-body wave function need to be encoded into the state vector of a quantum circuit. Although several amplitude-encoding methods have been proposed [27; 28; 29; 30], it is not at all straightforward to prepare an approximate ground state with any of them. This state preparation problem is often avoided by using oracle circuits. Compared to constructing such oracle circuits, variational methods such as the variational quantum eigensolver (VQE) [14; 15; 16] are considered to be relatively feasible approaches. In fact, a number of studies have proposed preparing an input state using the VQE in second quantization [31; 32]. Unfortunately, it is also not straightforward to implement the VQE in first quantization. This is due to the antisymmetry of electrons, which are Fermi particles. In second quantization, antisymmetry does not need to be considered for variational quantum states, because antisymmetry is naturally introduced as the anticommutation relations of creation and annihilation operators. By contrast, in first quantization, antisymmetry is imposed on the many-body wave function itself, and thus variational quantum states must satisfy antisymmetry. Nevertheless, there is no guarantee that quantum states generated by conventional variational quantum circuits will satisfy antisymmetry, which raises the possibility that the VQE will yield non-antisymmetric states. Therefore, a variational quantum circuit that generates only antisymmetric quantum states is required in order to obtain the electronic ground state by the VQE. In the present paper, we provide a design principle for constructing such an antisymmetrized variational quantum circuit. Our proposed circuit consists of two types of variational circuits that achieve antisymmetry-preserving transformations of a state vector on a Hilbert space. It is noteworthy that the proposed circuit generates a multi-configuration (MC) state. That is, it is possible to generate superpositions of an exponentially large number of Slater determinants by alternately layering these two types of variational circuits. This scheme provides a systematic approach to approximating the exact ground state. To verify the validity of our method, we performed the VQE calculation for a one-dimensional hydrogen molecular (1D-H\({}_{2}\)) system, and demonstrated that the proposed circuit well reproduced the exact antisymmetric, or fermionic, ground state and its energy. In addition to implementing the VQE, we analyzed the many-body wave functions based on quantum information theory. Such an analysis reveals the microscopic electronic structure of a many-body wave function represented in real space and illustrates the relation between the electron correlation and the quantum entanglement.

## II Method

In this section, we introduce the first-quantized VQE framework based on the real-space representation and describe our proposed variational quantum circuit. We also describe the setting of the numerical experiments for a 1D-H\({}_{2}\) system.
### First-quantized VQE with a real-space basis To begin with, we briefly describe the first-quantized formulation of the many-body electron problem. The many-body Schrodinger equation for an \(\eta\)-electron molecular system is expressed by the following equation: \[H\ket{\psi(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})}=E\ket{\psi(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})}, \tag{1}\] where \(\ket{\psi(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})}\) is the antisymmetric many-body wave function to be constructed on a quantum circuit. The molecular Hamiltonian \(H\) with the Born-Oppenheimer approximation is expressed in atomic units as follows: \[H=\sum_{i=1}^{\eta}\left[-\frac{{\mathbf{\nabla_{i}}}^{2}}{2}-\sum_{p}\frac{Z_{p} }{|\mathbf{r_{i}}-\mathbf{R_{p}}|}\right]+\sum_{i<j}\frac{1}{|\mathbf{r_{i}}-\mathbf{r_{j}}|}, \tag{2}\] where \(r_{i}\) is the \(i\)th electron coordinate, \(R_{p}\) is the \(p\)th nuclear coordinate, and \(Z_{p}\) is the atomic number of the \(p\)th nucleus. The VQE in this study is based on this first-quantized Hamiltonian and the many-body wave function represented in real space. Next, we introduce the real-space basis. Let us consider the expansion of the many-body wave function by the real-space basis \(\ket{\delta(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})}\) (delta functions located at grid points \(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}}\)). The expansion coefficient of a many-body wave function is given as \[\psi(\mathbf{r_{1}^{\prime}},\mathbf{r_{2}^{\prime}},\cdots,\mathbf{r_{\eta}^{\prime}})= \bra{\delta(\mathbf{r_{1}^{\prime}},\mathbf{r_{2}^{\prime}},\cdots,\mathbf{r_{\eta}^{ \prime}})}\ket{\psi(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})}. \tag{3}\] The real-space basis of an \(\eta\)-electron system consists of that of one-electron system \(\ket{\delta(\mathbf{r_{i}})}\), as follows: \[\ket{\delta(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})}=\ket{\delta(\mathbf{r_{1 }})}\ket{\delta(\mathbf{r_{2}})}\cdots\ket{\delta(\mathbf{r_{\eta}})}. \tag{4}\] \(\ket{\delta(\mathbf{r_{i}})}\) consists of three-dimensional spatial coordinates \(x,y,z\) and one spin coordinate \(s\). Assuming that \(L\) qubits are assigned to each spatial dimension and one qubit to spin, \(\ket{\delta(\mathbf{r_{i}})}\) is expressed as \[\ket{\delta(\mathbf{r_{i}})}=\ket{x_{1}{}^{(i)}\cdots x_{L}{}^{(i)}}\ket{y_{1}{}^{ (i)}\cdots y_{L}{}^{(i)}}\ket{z_{1}{}^{(i)}\cdots z_{L}{}^{(i)}}\ket{s^{(i)}},\] where \(x_{k}{}^{(i)},y_{k}{}^{(i)},z_{k}{}^{(i)},s^{(i)}\in\{0,1\},\forall k\in[1,L]\). The number of qubits that constitute \(\ket{\delta(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})}\) is \(\eta(3L+1)\), and thus the many-body wave function is expanded in \(2^{\eta(3L+1)}\) basis functions. Quantum computers are expected to realize such an exponentially large number of basis functions with a polynomial number of qubits, which is a significant advantage over classical computers. In order to implement the VQE, it is necessary to measure the energy expectation value of a quantum state \(\ket{\psi(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})}\). The energy expectation value of molecular Hamiltonian \(H\) in Eq. 
2 is expressed as follows: \[E =\bra{\psi(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})}H\ket{\psi(\mathbf{r_{1}},\mathbf{r_{2}},\cdots,\mathbf{r_{\eta}})} \tag{6}\] \[=E_{K}+E_{V}^{e-n}+E_{V}^{e-e},\] where \(E_{K}\) is the electron kinetic energy, \(E_{V}^{e-n}\) is the electron-nuclei Coulomb attraction energy, and \(E_{V}^{e-e}\) is the electron-electron Coulomb repulsion energy. The kinetic and Coulomb energy operators of the Hamiltonian are diagonal in momentum space and real space, respectively. The momentum space basis is obtained by the quantum Fourier transformation (QFT) of the real-space basis. Letting \(\ket{\psi(\mathbf{k_{i}})}=U_{\rm QFT}\ket{\psi(\mathbf{r_{i}})}\) be the one-body momentum space basis, the kinetic energy \(E_{K}\) is expressed as \[E_{K}=\sum_{i=1}^{\eta}\frac{{\mathbf{k_{i}}}^{2}}{2}|\psi(\mathbf{k_{i}})|^{2}. \tag{7}\] The Coulomb energies \(E_{V}^{e-n},E_{V}^{e-e}\) are expressed in terms of one-body and two-body real-space bases as follows: \[\begin{split} E_{V}^{e-n}&=-\sum_{i=1}^{\eta}\sum_{p}\frac{Z_{p}}{|\mathbf{r_{i}}-\mathbf{R_{p}}|}|\psi(\mathbf{r_{i}})|^{2},\\ E_{V}^{e-e}&=\sum_{i<j}\frac{1}{|\mathbf{r_{i}}-\mathbf{r_{j}}|}|\psi(\mathbf{r_{i}},\mathbf{r_{j}})|^{2}.\end{split} \tag{8}\] Probability distributions \(|\psi(\mathbf{k_{i}})|^{2}\), \(|\psi(\mathbf{r_{i}})|^{2}\), and \(|\psi(\mathbf{r_{i}},\mathbf{r_{j}})|^{2}\) can be obtained by measuring the output of the (QFT applied) quantum circuit in the computational basis (Pauli \(Z\) basis). Note that, as pointed out by Chan et al., this method is not efficient in terms of sampling cost and is sensitive to discretization errors of grids [21]. According to their work, QPE should be used for measuring energy expectation values in future quantum calculations, but this naive method was adopted in the present study because of a lack of sufficient computational resources to simulate QPE on a classical computer. The remaining issue in the first-quantized VQE is how to construct a variational quantum circuit that generates antisymmetric ansatz states. In the following, we describe the design principle for constructing a variational quantum circuit to settle the above issue.
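To make the measurement scheme of Eqs. 6-8 concrete, the following classical sketch (ours) evaluates the three energy terms for a two-electron wave function on a 1D grid from exactly the probability distributions a circuit would be sampled for. The grid, proton positions, and the soft-Coulomb \(\epsilon\) are illustrative stand-ins for the 1D-H\({}_{2}\) settings given later, and a periodic grid is assumed for the momenta.

```python
import numpy as np

n = 32
r = np.linspace(-0.5, 0.5, n, endpoint=False)
dr = r[1] - r[0]
eps = dr / 2
R = [-0.25, 0.25]                                     # illustrative proton positions

def energy(psi2):
    """psi2[i, j] is the normalized spatial amplitude psi(r1_i, r2_j),
    assumed (anti)symmetrized so both electrons share one marginal."""
    p_joint = np.abs(psi2) ** 2                       # |psi(r1, r2)|^2
    p_r = p_joint.sum(axis=1)                         # one-body |psi(r)|^2
    phi = np.fft.fft(psi2, axis=0) / np.sqrt(n)       # unitary DFT ~ QFT
    p_k = (np.abs(phi) ** 2).sum(axis=1)              # one-body |psi(k)|^2
    k = 2 * np.pi * np.fft.fftfreq(n, d=dr)           # periodic-grid momenta
    E_K = 2.0 * np.sum(0.5 * k ** 2 * p_k)            # factor 2: two electrons
    v_en = -sum(1.0 / np.abs(r - Rp + eps) for Rp in R)
    E_en = 2.0 * np.sum(v_en * p_r)
    E_ee = np.sum(p_joint / np.abs(r[:, None] - r[None, :] + eps))
    return E_K + E_en + E_ee

# Demo with a symmetric Gaussian trial state (made-up, for illustration).
psi = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / 0.02)
psi /= np.linalg.norm(psi)
print(energy(psi))
```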
### Antisymmetrized variational quantum circuit

To explain the design principles of circuits, we consider the one-dimensional two-electron system, which is the minimum model required to describe the antisymmetry of electrons. In the following part of this section, we discuss two-electron systems, but we would like to note that the proposed method is applicable not only to two-electron systems but more generally to systems with larger numbers of electrons. The overall circuit architecture is shown in Fig. 1. The proposed circuit consists of three parts: (1) seed state preparation, (2) one-body and two-body space variational circuits, and (3) measurement of energy expectations. Since the measurement procedure of energy expectations has already been described, we here describe only the parts of the circuit that generate antisymmetric ansatz states. In the first part of the circuit, some antisymmetric state, i.e., a seed state, is prepared, after which the state vector of the circuit belongs to the antisymmetric subspace. The subsequent one-body and two-body space variational circuits transform the state vector into the ground state while keeping it in the antisymmetric subspace. Such a preparation of an antisymmetric state and antisymmetry-preserving transformations are the main design principle of the proposed circuit. In the following, we describe each part of the circuit.

#### ii.2.1 Seed state preparation

The simplest antisymmetric state of a two-electron system is the Greenberger-Horne-Zeilinger (GHZ) state, as follows: \[|\psi_{\text{GHZ}}\rangle=\frac{1}{\sqrt{2}}(\ket{0}_{1}\ket{1}_{2}-\ket{1}_{1}\ket{0}_{2}). \tag{9}\] This state can be generated by a series of an X (NOT) gate, an H (Hadamard) gate, and a CNOT (controlled NOT) gate as shown in Fig. 1. In this circuit, the spin coordinates \(|s^{(i)}\rangle\) are antisymmetrized, meaning that this state is a singlet state in which the two spins are antiparallel \(\downarrow_{1}\uparrow_{2}-\uparrow_{1}\downarrow_{2}\) (\(0\) is assumed to be \(\downarrow\) and \(1\) to be \(\uparrow\)).

Figure 1: Quantum circuit to generate an antisymmetrized variational quantum state for one-dimensional two-electron systems.

This state is expressed as follows, including the spatial coordinates (the normalization constant \(1/\sqrt{2}\) is omitted): \[\left|\psi_{\text{seed}}(\mathbf{r_{1}},\mathbf{r_{2}})\right\rangle=\left|\mathbf{0}\right\rangle_{1}\left|0\right\rangle_{1}\otimes\left|\mathbf{0}\right\rangle_{2}\left|1\right\rangle_{2}-\left|\mathbf{0}\right\rangle_{1}\left|1\right\rangle_{1}\otimes\left|\mathbf{0}\right\rangle_{2}\left|0\right\rangle_{2}, \tag{10}\] where the spatial coordinate of each electron is \(\left|\mathbf{0}\right\rangle=\left|00\cdots 0\right\rangle\). Obviously, this state is antisymmetric under the exchange of two electrons and can be used as a seed state. For more-than-two-electron systems, seed states cannot be constructed in such a simple way. As an example, for three-electron systems, the following Slater determinant consisting of the three states \(\left|00\right\rangle\), \(\left|01\right\rangle\), and \(\left|10\right\rangle\) is one of the antisymmetric states (the normalization constant \(1/\sqrt{3!}\) is omitted). \[\begin{split}\left|\psi_{00,01,10}\right\rangle&=\begin{vmatrix}\left|00\right\rangle_{1}&\left|01\right\rangle_{1}&\left|10\right\rangle_{1}\\ \left|00\right\rangle_{2}&\left|01\right\rangle_{2}&\left|10\right\rangle_{2}\\ \left|00\right\rangle_{3}&\left|01\right\rangle_{3}&\left|10\right\rangle_{3}\end{vmatrix}\\ &=\left|00\right\rangle_{1}\left|01\right\rangle_{2}\left|10\right\rangle_{3}+\left|01\right\rangle_{1}\left|10\right\rangle_{2}\left|00\right\rangle_{3}\\ &\quad+\left|10\right\rangle_{1}\left|00\right\rangle_{2}\left|01\right\rangle_{3}-\left|10\right\rangle_{1}\left|01\right\rangle_{2}\left|00\right\rangle_{3}\\ &\quad-\left|01\right\rangle_{1}\left|00\right\rangle_{2}\left|10\right\rangle_{3}-\left|00\right\rangle_{1}\left|10\right\rangle_{2}\left|01\right\rangle_{3}.\end{split} \tag{11}\] Previous research has proposed methods for constructing quantum circuits to prepare such a Slater determinant, though it is not as simple as that for the GHZ state. Thanks to the work of Berry et al., an implementation of the Fisher-Yates shuffle [33] on a quantum circuit has been provided and can generate the superpositions of an exponential number of permutation elements with polynomial computational complexity [19]. Following their method, it is possible to systematically generate seed states for more-than-two-electron systems. Another way to prepare a seed state is a variational approach, which achieves a seed state using conventional variational circuits.
In this approach, the Hadamard test (swap test) can be employed as an objective function. Letting SWAP\({}_{ij}\) be the swap operator acting on the subspace of the \(i\)th and \(j\)th electrons, the output of the Hadamard test for the quantum state \(\left|\psi\right\rangle\) becomes \[p_{0}=\frac{1+\left\langle\psi\middle|\text{SWAP}_{ij}\middle|\psi\right\rangle }{2},p_{1}=\frac{1-\left\langle\psi\middle|\text{SWAP}_{ij}\middle|\psi\right\rangle }{2}, \tag{12}\] where \(p_{0}\) and \(p_{1}\) are the measurement probabilities of the \(\left|0\right\rangle\) and \(\left|1\right\rangle\) of an ancilla qubit. Suppose \(\left|\psi\right\rangle\) is an antisymmetric wave function, SWAP\({}_{ij}\left|\psi\right\rangle=-\left|\psi\right\rangle\); then the output of the Hadamard test becomes \(p_{0}=0,p_{1}=1\) (for symmetric wave functions, \(p_{0}=1,p_{1}=0\)). Therefore, an approximated antisymmetric state can be obtained by updating variational parameters to maximize the output of the Hadamard tests for all two-electron swap operations. By repeatedly performing the Hadamard test on an approximated antisymmetric state, the pure antisymmetric states can eventually be distilled, which can then be used as a seed state. #### ii.2.2 One-body space variational circuit The proposed variational quantum circuit consists of two components: a one-body space variational circuit and a two-body (\(\eta\)-body) space variational circuit. We first explain the one-body space variational circuit. A one-body space variational circuit consists of unitary operators that act on the subspace of each electron (one-body space). Consider \(\mathcal{U}^{(1)}(\theta)=\left[U^{(1)}(\theta)\right]^{\otimes\eta}\) as the one-body space variational circuit, where \(U^{(1)}(\theta)\) is a unitary operator (variational circuit) acting on a one-body space. Since \(U^{(1)}(\theta)\) transforms a state vector equally in each one-body space, \(\mathcal{U}^{(1)}(\theta)\) does not destroy the antisymmetry of the ansatz state. For two-electron systems, \(\mathcal{U}^{(1)}(\theta)=U^{(1)}(\theta)\otimes U^{(1)}(\theta)\) acting on the seed state (GHZ state) \(\left|\psi_{\text{seed}}(\mathbf{r_{1}},\mathbf{r_{2}})\right\rangle\) yields \[\left|\psi(\mathbf{r_{1}},\mathbf{r_{2}})\right\rangle =\mathcal{U}^{(1)}(\theta)\left|\psi_{\text{seed}}(\mathbf{r_{1}}, \mathbf{r_{2}})\right\rangle \tag{13}\] \[=U^{(1)}(\theta)\left|\mathbf{0}\right\rangle_{1}\left|0\right\rangle _{1}\otimes U^{(1)}(\theta)\left|\mathbf{0}\right\rangle_{2}\left|1\right\rangle _{2}\] \[\quad-U^{(1)}(\theta)\left|\mathbf{0}\right\rangle_{1}\left|1\right\rangle _{1}\otimes U^{(1)}(\theta)\left|\mathbf{0}\right\rangle_{2}\left|0\right\rangle _{2}\] \[=\left|\alpha(\theta)\right\rangle_{1}\otimes\left|\beta(\theta) \right\rangle_{2}-\left|\beta(\theta)\right\rangle_{1}\otimes\left|\alpha( \theta)\right\rangle_{2}.\] As can be seen, \(\mathcal{U}^{(1)}(\theta)\) transforms \(\left|\mathbf{0}\right\rangle\left|0\right\rangle,\left|\mathbf{0}\right\rangle \left|1\right\rangle\) into \(\left|\alpha(\theta)\right\rangle=U^{(1)}(\theta)\left|\mathbf{0}\right\rangle \left|0\right\rangle\), \(\left|\beta(\theta)\right\rangle=U^{(1)}(\theta)\left|\mathbf{0}\right\rangle \left|1\right\rangle\) in each one-body space, and antisymmetry is preserved under this transformation. 
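The following numerical check (ours) verifies the two claims above on the smallest example: the spin-singlet seed gives Hadamard-test outputs \(p_{0}=0,p_{1}=1\), and applying the same one-body unitary to both electrons (Eq. 13) keeps \(\langle\text{SWAP}\rangle=-1\).

```python
import numpy as np

# The spin-singlet seed has <SWAP> = -1, so the Hadamard test of Eq. 12
# outputs p0 = 0, p1 = 1.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
singlet = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

# SWAP acting on the two one-body (here: spin-only) subspaces.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

exp_swap = singlet @ SWAP @ singlet        # <psi|SWAP|psi>
p0 = (1 + exp_swap) / 2
p1 = (1 - exp_swap) / 2
print(exp_swap, p0, p1)                    # -1.0, 0.0, 1.0

# Applying the same one-body unitary to both electrons preserves
# antisymmetry: <SWAP> stays -1 for any single-qubit U.
theta = 0.7                                # arbitrary test angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi = np.kron(U, U) @ singlet
print(psi @ SWAP @ psi)                    # still -1.0
```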
The resulting state can be expressed by a single Slater determinant consisting of \(\left|\alpha(\theta)\right\rangle\) and \(\left|\beta(\theta)\right\rangle\) as follows: \[\begin{split}\left|\psi_{\text{SD}}(\mathbf{r_{1}},\mathbf{r_{2}})\right\rangle&=\left|\alpha(\theta)\right\rangle_{1}\otimes\left|\beta(\theta)\right\rangle_{2}-\left|\beta(\theta)\right\rangle_{1}\otimes\left|\alpha(\theta)\right\rangle_{2}\\ &=\begin{vmatrix}\left|\alpha(\theta)\right\rangle_{1}&\left|\beta(\theta)\right\rangle_{1}\\ \left|\alpha(\theta)\right\rangle_{2}&\left|\beta(\theta)\right\rangle_{2}\end{vmatrix}.\end{split} \tag{14}\] This indicates that a one-body space variational circuit can only explore quantum states within the Hartree-Fock (HF) approximation. By using the two-body (\(\eta\)-body) space variational circuit described in the following, we can explore quantum states beyond the HF approximation.

#### ii.2.3 Two-body space variational circuit

The usual strategy to go beyond the HF approximation is based on configuration interaction (CI) theory, in which a many-body wave function is approximated by a linear combination, or superposition, of multiple Slater determinants. As previously mentioned, a one-body space circuit generates an electronic state expressed by a single Slater determinant. Therefore, a superposition of multiple one-body space circuits is expected to generate a superposition of multiple Slater determinants, that is, an MC state. For a two-electron system, consider two different operators \(U_{a}\otimes U_{a},U_{b}\otimes U_{b}\), where \(U_{a}\) and \(U_{b}\) are unitary operators acting on a one-body space. Their superposition is expressed as \[U^{(2)}(\theta)=c_{a}(\theta)\cdot U_{a}\otimes U_{a}+c_{b}(\theta)\cdot U_{b}\otimes U_{b}, \tag{15}\] where \(c_{a}(\theta)\), \(c_{b}(\theta)\) are superposition coefficients parametrized by \(\theta\). Operator \(U^{(2)}(\theta)\) acting on a single Slater determinant \(\ket{\psi_{\mathrm{SD}}}\) yields \[\begin{split}U^{(2)}(\theta)\ket{\psi_{\mathrm{SD}}}&=c_{a}(\theta)\begin{vmatrix}U_{a}\ket{\alpha}_{1}&U_{a}\ket{\beta}_{1}\\ U_{a}\ket{\alpha}_{2}&U_{a}\ket{\beta}_{2}\end{vmatrix}+c_{b}(\theta)\begin{vmatrix}U_{b}\ket{\alpha}_{1}&U_{b}\ket{\beta}_{1}\\ U_{b}\ket{\alpha}_{2}&U_{b}\ket{\beta}_{2}\end{vmatrix}\\ &=c_{a}(\theta)\ket{\psi_{\mathrm{SD}}^{a}}+c_{b}(\theta)\ket{\psi_{\mathrm{SD}}^{b}}.\end{split} \tag{16}\] As expected, a superposition of two different Slater determinants \(\ket{\psi_{\mathrm{SD}}^{a}}\), \(\ket{\psi_{\mathrm{SD}}^{b}}\), is generated by \(U^{(2)}(\theta)\). One of the simplest implementations of such an operator is the Ising gate [34]. Ising gates such as \(R_{xx}(\theta)\), \(R_{yy}(\theta)\), and \(R_{zz}(\theta)\) are represented by a superposition of \(I\otimes I\) and \(P\otimes P\) (\(I\) is the identity operator, \(P\) is the Pauli operator \(X\), \(Y\), \(Z\)); thus, they can be employed as \(U^{(2)}(\theta)\). For example, \(R_{zz}(\theta)\) shown in Fig. 2 is represented by \[\begin{split}R_{zz}(\theta)&=\begin{pmatrix}e^{-i\theta/2}&0&0&0\\ 0&e^{i\theta/2}&0&0\\ 0&0&e^{i\theta/2}&0\\ 0&0&0&e^{-i\theta/2}\end{pmatrix}\\ &=\cos\frac{\theta}{2}\cdot I\otimes I-i\sin\frac{\theta}{2}\cdot Z\otimes Z.\end{split} \tag{17}\] As can be seen from Fig. 2, the Ising gates act on a two-body space; thus, we refer to \(U^{(2)}(\theta)\) as a two-body space variational circuit.
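A quick numerical confirmation (ours) of the decomposition in Eq. 17 and of the antisymmetry-preserving property of \(R_{zz}(\theta)\):

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
theta = 0.9                                # arbitrary test angle

Rzz = np.diag(np.exp(-1j * theta / 2 * np.array([1, -1, -1, 1])))
decomp = (np.cos(theta / 2) * np.kron(I2, I2)
          - 1j * np.sin(theta / 2) * np.kron(Z, Z))
print(np.allclose(Rzz, decomp))            # True: Eq. 17 holds

# Rzz applied to an antisymmetric two-qubit state keeps <SWAP> = -1,
# since both I (x) I and Z (x) Z act identically on the two one-body spaces.
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
psi = Rzz @ singlet
print(np.vdot(psi, SWAP @ psi).real)       # -1.0
```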
Such operators can be easily implemented using Ising gates with cascaded CNOT gates across the \(\eta\)-body space [35; 36]. The Ising gate is not the only option as a two-body space variational circuit. For example, the real-valued symmetry-preserving (RSP) ansatz [37] shown in Fig. 2 can also be used as a two-body space variational circuit. The RSP ansatz is represented by the superposition of the following operators: \[\begin{split}U_{\mathrm{RSP}}(\theta)&=\begin{pmatrix}\cos\theta&0&0&-\sin\theta\\ 0&0&1&0\\ 0&1&0&0\\ \sin\theta&0&0&\cos\theta\end{pmatrix}\\ &=\frac{1}{2}\left[X\otimes X+Y\otimes Y+\cos\theta(I\otimes I+Z\otimes Z)\right.\\ &\left.\quad-i\sin\theta(X\otimes Y+Y\otimes X)\right].\end{split} \tag{18}\] Here, terms \(X\otimes Y,Y\otimes X\) appear as tensor products of different operators \(X\) and \(Y\). Either of these terms alone destroys the antisymmetry of ansatz states, but pairs of them preserve antisymmetry as follows: \[\begin{split}&(X\otimes Y+Y\otimes X)\ket{\psi_{\mathrm{SD}}}\\ &=\begin{vmatrix}X\ket{\alpha}_{1}&Y\ket{\beta}_{1}\\ X\ket{\alpha}_{2}&Y\ket{\beta}_{2}\end{vmatrix}+\begin{vmatrix}Y\ket{\alpha}_{1}&X\ket{\beta}_{1}\\ Y\ket{\alpha}_{2}&X\ket{\beta}_{2}\end{vmatrix}\\ &=\ket{\psi_{\mathrm{SD}}^{\prime}}+\ket{\psi_{\mathrm{SD}}^{\prime\prime}}.\end{split} \tag{19}\] In Eq. 18, the RSP ansatz is represented by a real-valued operator, which is convenient for describing real-valued eigenstates of one-dimensional systems. Unfortunately, however, it is not obvious how to extend this ansatz to a more-than-two-body space. Although the Ising gates are considered a more suitable option for generalizing to a system with a larger number of electrons, in the present study the RSP ansatz was employed for convenience. As illustrated in Fig. 2, a single Slater determinant is split each time \(U^{(2)}(\theta)\) acts on it. Therefore, it is expected that an exponentially large number of Slater determinants will be generated by repeatedly applying the two-body space variational circuit followed by the one-body space variational circuit. However, such consecutive application of two-body variational circuits does not increase the number of superposed Slater determinants as one might expect. This is because a product of multiple Pauli operators reduces to a single Pauli operator due to the commutation relation among Pauli operators (\(P_{i}^{2}=I,P_{1}P_{2}=iP_{3}\)). This reduction of Pauli operators can be prevented by using the alternating layered structure of one-body and two-body space variational circuits as shown in Fig. 1. If the one-body space variational circuit is noncommutative with the Pauli operators, then the reduction of Pauli operators between two-body space variational circuits is prevented. Therefore, by repeatedly applying one-body and two-body space variational circuits alternately to a seed state, the number of superposed Slater determinants can be increased exponentially, providing a systematic approach to approximating the exact ground state. During the optimization process of the VQE framework, variational parameters in one-body and two-body circuits are simultaneously optimized so as to minimize the energy expectation value of an ansatz state.

Figure 2: Implementation of \(U^{(2)}(\theta)\). Top: implementations of Ising gate and real-valued symmetry-preserving (RSP) ansatz. Bottom: conceptual picture of generation of superposed HF states by an \(R_{zz}(\theta)\) gate.
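Both decompositions, Eq. (17) for the Ising gate and Eq. (18) for the RSP ansatz, can be checked numerically. A minimal numpy sketch (my own illustration, not the paper's code) is:

```python
import numpy as np

# Sketch (my illustration): the Ising gate Rzz(theta) is a superposition of
# I(x)I and Z(x)Z, Eq. (17), and the RSP ansatz decomposes as in Eq. (18).
theta = 0.7
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

Rzz = np.diag(np.exp(-1j * theta / 2 * np.array([1, -1, -1, 1])))
assert np.allclose(Rzz, np.cos(theta / 2) * np.kron(I2, I2)
                        - 1j * np.sin(theta / 2) * np.kron(Z, Z))

U_rsp = np.array([[np.cos(theta), 0, 0, -np.sin(theta)],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [np.sin(theta), 0, 0, np.cos(theta)]], dtype=complex)
decomp = 0.5 * (np.kron(X, X) + np.kron(Y, Y)
                + np.cos(theta) * (np.kron(I2, I2) + np.kron(Z, Z))
                - 1j * np.sin(theta) * (np.kron(X, Y) + np.kron(Y, X)))
assert np.allclose(U_rsp, decomp)
print("Ising and RSP decompositions verified")
```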
As can be seen from Eqs. 14-16, optimizing one-body and two-body circuits corresponds to optimizing the electronic states composing each Slater determinant (electronic orbitals) and the superposition coefficients of the Slater determinants (CI coefficients), respectively. Therefore, this framework can be regarded as the multi-configuration self-consistent field (MCSCF) method, which is one of the post-Hartree-Fock _ab initio_ quantum chemistry methods. A notable advantage of the proposed method is its ability to perform MCSCF calculations based on an exponential number of electron configurations with polynomial computational complexity. ### Settings for Numerical Simulations To verify the validity of our method, we performed the VQE calculation with the proposed circuit to obtain the ground state of a 1D-H\({}_{2}\) system. The Hamiltonian of this system is expressed as follows: \[\begin{split} H=\sum_{i=1}^{2}&\left[-\frac{1}{2}\frac{d^{2}}{dr_{i}^{2}}-\sum_{p=1}^{2}\frac{1}{|r_{i}-R_{p}+\epsilon|}\right]\\ &+\frac{1}{|r_{1}-r_{2}+\epsilon|}+\frac{1}{|R_{1}-R_{2}|},\end{split} \tag{20}\] where \(r_{1},r_{2}\) and \(R_{1},R_{2}\) are the positions of electrons and protons, respectively. To avoid zero division in the Coulomb interaction terms, a small real value \(\epsilon\) is added to the denominators of those terms (soft Coulomb potential). Note here that the ground states of fermions and bosons are completely degenerate. Figure 3 shows four degenerate ground states obtained by the exact diagonalization of Eq. 20 at the interatomic distance \(|R_{1}-R_{2}|=0.5\) bohrs. As shown in Fig. 3, the ground states of the fermions are the singlet state (\(S^{2}=0,S_{z}=0\)) whose spin symmetry is antisymmetric and spatial symmetry is symmetric. On the other hand, the ground states of the bosons are the triplet state (\(S^{2}=1,S_{z}=0,\pm 1\)) whose spin and spatial symmetry are symmetric. Since the Hamiltonian of the system depends only on the spatial coordinate, there is no energy difference between fermionic and bosonic ground states whose spatial symmetries are the same. Therefore, without taking into account the symmetry of the variational quantum circuit, the quantum states obtained by the VQE can be a mixture of fermionic and bosonic states, even though an accurate energy value is obtained. We demonstrate such convergence to a symmetry-neglected state in the numerical simulations. The following describes the calculation conditions of the system and the quantum circuits used for numerical simulations. Six qubits (5 qubits for the spatial coordinate and 1 qubit for the spin coordinate) represent the one-body space, and thus 12 qubits were used for the two-electron system. Spatial coordinates of electrons \(r_{1},r_{2}\) ranged from -0.5 bohrs to 0.5 bohrs and the grid width \(\Delta r\) was \(1/2^{5}\) bohrs = 0.03125 bohrs. \(\epsilon\) was set to \(\Delta r/2\).

Figure 3: Probability amplitudes \(|\psi(r_{1},r_{2})\rangle\) of degenerate ground states (one fermionic state (\(S^{2}=0,S_{z}=0\)) and three bosonic states (\(S^{2}=1,S_{z}=0,\pm 1\))) at interatomic distance \(|R_{1}-R_{2}|=0.5\) bohrs (\(R_{1},R_{2}\) are shown by pink dots). Each many-body wave function is shown in the four subspaces corresponding to the spin configurations \(\uparrow_{1}\uparrow_{2},\uparrow_{1}\downarrow_{2},\downarrow_{1}\uparrow_{2},\downarrow_{1}\downarrow_{2}\). Note that the correspondence between colors and values is different for each plot.
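For orientation, a toy version of this setup can be solved on a classical computer by exact diagonalization. The following numpy sketch (my own construction; the grid size and proton positions are assumptions for illustration, and spin is omitted for brevity) discretizes Eq. (20) with the soft Coulomb potential:

```python
import numpy as np

# Sketch (assumed toy setup, not the paper's code): exact diagonalization of
# the 1D-H2 Hamiltonian Eq. (20) with a soft Coulomb potential on a grid.
n = 32                                   # grid points per electron
r = np.linspace(-0.5, 0.5, n, endpoint=False)
dr = r[1] - r[0]
eps = dr / 2
R1, R2 = -0.25, 0.25                     # assumed proton positions

# One-body part: finite-difference kinetic term plus electron-proton terms.
T = (-np.eye(n, k=1) - np.eye(n, k=-1) + 2 * np.eye(n)) / (2 * dr**2)
V1 = np.diag(-1 / np.abs(r - R1 + eps) - 1 / np.abs(r - R2 + eps))
h = T + V1

# Two-body Hamiltonian on the n*n grid of (r1, r2).
I = np.eye(n)
H = np.kron(h, I) + np.kron(I, h)
H += np.diag((1 / np.abs(r[:, None] - r[None, :] + eps)).ravel())
H += (1 / abs(R1 - R2)) * np.eye(n * n)

E0 = np.linalg.eigvalsh(H)[0]
print("ground-state energy:", E0)
```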
The spatial grid employed in this study was limited to a very coarse one, due to the computational resources of our classical computer, but the realization of the FTQC is expected to allow calculations with a grid fine enough to reproduce the electronic structure of real materials. For variational circuits, the hardware efficient (HE) ansatz [38] and the real-valued symmetry preserving ansatz (RSP) [37] were employed. The HE ansatz consists of Ry gates and CNOT gates [39], which in the present study had 6 layers. In order to demonstrate the effects of the antisymmetrization and the multi-configuration, we implemented the VQE with three different circuits. The architectures of these circuits are shown in Fig. 4. The first architecture is the symmetry-neglected (SN) architecture shown in Fig. 4, consisting of consecutive HE ansatz blocks acting on the two-body space. In this architecture, the antisymmetry of the seed state is not considered to be preserved, and thus it is expected that the VQE will yield a neither antisymmetric (fermionic) nor symmetric (bosonic) state. The number of blocks of the HE ansatz was 6. The second architecture is the Hartree-Fock (HF) architecture shown in Fig. 4, consisting of consecutive one-body space variational circuits. In this architecture, the antisymmetry of the seed state is preserved, but the VQE yields a ground state within the HF approximation. The third architecture is the multi-configuration (MC) architecture shown in Fig. 4, consisting of alternately layered one-body and two-body space variational circuits. In this architecture, the VQE can achieve a ground state beyond the HF approximation, and its energy is expected to be lower than that obtained with the HF architecture. To confirm this energy stabilization, potential energy curves were calculated for the HF and MC architectures, at 16 points of interatomic distance \(|R_{1}-R_{2}|\) ranging from \(\Delta r\) to \(16\Delta r=0.5\) bohrs. The number of one-body space variational circuits was 15 for both architectures and that of two-body space variational circuits was 14 for the MC architecture. For the optimization of variational parameters, the steepest descent method with the Adam optimizer [40] was employed and the number of optimization steps was set to 10000. All calculations for quantum circuits were performed by a noiseless state vector simulator. ## III Result and Discussion ### Results of the VQE Calculations Figure 5 shows the many-body wave functions obtained by the VQE with the different architectures at an interatomic distance \(|R_{1}-R_{2}|=0.5\) bohrs. In the case of the wave function obtained with the SN architecture shown in Fig. 5, the spatial coordinate is symmetric under the exchange of two electrons, but its spin coordinate is neither antisymmetric nor symmetric (\(\uparrow_{1}\uparrow_{2}+\uparrow_{1}\downarrow_{2}\neq\uparrow_{1}\uparrow_{ 2}+\downarrow_{1}\uparrow_{2}\)). This indicates that unless the symmetry for the variational quantum circuit is not considered, the resulting state of the VQE can converge to such a symmetry-neglected state. This is the notable difference from the second-quantized VQE. In contrast, as shown in Figs. 5, the wave functions obtained with the HF and MC architectures are the antisymmetric singlet states, as expected. The shape of the wave function obtained with the MC architecture well reproduces that of the exact fermionic ground state shown in Fig. 3, while that with the HF architecture is apparently different from it. 
As will be described later, this difference in shape of the wave functions indicates the difference in representability of the quantum states between the HF approximation and CI theory. The difference between them is also reflected in their ground state energies. Figure 6 shows the potential energy curves obtained with the HF and MC architectures. It can be seen that the ground state energies obtained with the MC architecture reproduce the results of the exact diagonalization well, whereas those obtained with the HF architecture converge to higher values. This demonstrates the energy stabilization due to the multi-configuration character, in other words, the effect of electron correlation, which is a pillar of quantum chemistry theories [41; 42].

Figure 4: Circuit architectures employed by VQE. (a) Symmetry-neglected (SN) architecture; (b) Hartree-Fock (HF) architecture; (c) Multi-configuration (MC) architecture. HE in the figure denotes the hardware efficient ansatz with 6 layers and RSP denotes the real-valued symmetry preserving ansatz.

### Analysis of Many-Body States based on Quantum Information Theory Although the numerical experiments in this section confirmed that the proposed method can reproduce the exact fermionic ground state well, the obtained many-body wave function provides little insight into the microscopic electronic structure. The electronic structure is mostly understood from the electron orbital picture; however, since our method is based on a real-space basis, the orbital picture of the obtained state is not obvious. Furthermore, a lack of an orbital picture makes it impossible to quantitatively evaluate the multi-configuration character of the obtained state. In order to get a clear orbital picture and evaluate the multi-configuration character of obtained states, we analyzed many-body quantum states from the perspective of quantum information theory. Quantum information has been attracting increasing attention in recent years as a key to understanding electron dynamics in chemical reactions and strongly correlated materials [44; 45; 46; 47; 48; 49; 50; 51; 52; 53]. We will first quantify the multi-configuration character of many-body states. To evaluate the multi-configuration character, we employed the entanglement entropy, which is typically employed to measure the degree of entanglement between subsystems. We shall now briefly explain the definition of entanglement entropy. A many-body wave function of the two-electron system can be decomposed into multiple product states by the Schmidt decomposition, as follows: \[\ket{\psi(r_{1},r_{2})}=\sum_{i}\lambda_{i}\ket{\mu_{i}(r_{1})}\otimes\ket{\chi_{i}(r_{2})}, \tag{21}\] where \(\lambda_{i}\) are the Schmidt coefficients (superposition coefficients of product states) and \(\ket{\mu_{i}(r_{1})},\ket{\chi_{i}(r_{2})}\) are the Schmidt basis on subsystems. The entanglement entropy \(S\) is defined by the Shannon entropy of the probability distribution \(|\lambda_{i}|^{2}\) as \[S=-\sum_{i}|\lambda_{i}|^{2}\log_{2}|\lambda_{i}|^{2}. \tag{22}\] Considering the fact that the Shannon entropy of a normal distribution is proportional to the logarithm of its variance, the larger the variance of the probability distribution \(|\lambda_{i}|^{2}\), that is, the more product states are superposed, the larger the entanglement entropy. Since the entanglement entropy of a single Slater determinant is 1, that of an MC state is expected to be larger than 1. 
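The Schmidt machinery of Eqs. (21) and (22) reduces to a singular value decomposition once the pair amplitude is viewed as a matrix. A minimal numpy sketch (my own illustration, using an arbitrary toy state rather than the paper's wave functions) is:

```python
import numpy as np

# Sketch (my illustration): Schmidt decomposition Eq. (21) and entanglement
# entropy Eq. (22) of a two-particle wave function via the SVD.
d = 8
rng = np.random.default_rng(1)
a, b = rng.normal(size=d), rng.normal(size=d)

# Antisymmetric combination |a>|b> - |b>|a>, i.e. a single Slater determinant.
psi = np.kron(a, b) - np.kron(b, a)
psi /= np.linalg.norm(psi)

# Reshape the pair amplitude psi(r1, r2) into a d x d matrix; its singular
# values are the Schmidt coefficients lambda_i, and the singular vectors are
# the one-body Schmidt orbitals mu_i(r1), chi_i(r2).
M = psi.reshape(d, d)
mu, lam, chi_T = np.linalg.svd(M)

S = -np.sum(lam**2 * np.log2(lam**2 + 1e-300))
print(lam[:3], S)  # two equal Schmidt coefficients -> S = 1
```

The printed entropy equals 1, consistent with the statement that a single Slater determinant has entanglement entropy 1.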
Figure 6(b) shows the entanglement entropies calculated for the ground states obtained by the exact diagonalization and the MC architecture. As expected, the entanglement entropy of the MC state is larger than 1 and increases as interatomic distance increases, which is in good agreement with the exact result. This behavior of entanglement entropy indicates that the ground state becomes more multi-configurational as the interatomic distance approaches the dissociation limit, which is a well-known fact in the field of conventional quantum chemistry [41; 42]. We can now extract the electron orbital picture from the many-body state represented in real space. From the expression of the Schmidt decomposition in Eq. 21, the Schmidt basis \(\ket{\mu_{i}(r_{1})},\ket{\chi_{i}(r_{2})}\) can be regarded as one-body wave functions, that is, the electron orbitals of the electrons, and the Schmidt coefficients as their contributions to the many-body state. Therefore, the Schmidt basis and the Schmidt coefficients provide insight into the electron orbital picture of many-body wave functions. Figure 7 shows the results of the Schmidt decomposition performed for one anti-parallel spin subspace \(\ket{\psi(r_{1\downarrow},r_{2\uparrow})}\) of the exact ground state (upper left part of Fig. 3) and the MC state (Fig. 5(c)).

Figure 5: Many-body wave functions obtained by the VQE with (a) the SN architecture, (b) the HF architecture and (c) the MC architecture at the interatomic distance \(|R_{1}-R_{2}|=0.5\) bohrs (\(R_{1},R_{2}\) are shown by pink dots).

The left side of the figure shows the first three Schmidt coefficients \(\lambda_{0},\lambda_{1},\lambda_{2}\) at each interatomic distance. As shown, the contribution of the zeroth product state \(\lambda_{0}\) is nearly constant at all interatomic distances, whereas that of the first product state \(\lambda_{1}\) increases as interatomic distance increases, which is consistent with the behavior of the entanglement entropy shown in Fig. 6(b). This suggests that the configuration interaction between the zeroth and first product states, that is, their superposition, explains the energy stabilization due to the multi-configuration character. The right side of the figure shows the product states constituting the MC state at the interatomic distance \(|R_{1}-R_{2}|=0.5\) bohrs. The electron orbitals constituting each product state are indicated by black lines in each figure. Using these electron orbitals, we will illustrate the energy stabilization due to the multi-configuration character in the following discussion. The electron orbitals constituting the zeroth and first product states have peaks at the proton positions \(R_{1},R_{2}\) (pink dots in figure), and these orbitals correspond to bonding and antibonding orbitals. Following the typical explanation of the linear combination of atomic orbitals (LCAO) approximation, let \(|p_{i}(r)\rangle\) be the electron orbitals distributed around the \(i\)th proton. Then the bonding and antibonding orbitals are represented as \(\ket{\psi_{\sigma}(r)}=\ket{p_{1}(r)}+\ket{p_{2}(r)}\) and \(\ket{\psi_{\sigma^{*}}(r)}=\ket{p_{1}(r)}-\ket{p_{2}(r)}\), respectively. 
The bonding state of two electrons \(\ket{\psi_{\sigma\sigma}(r_{1},r_{2})}\) is expressed as the tensor product of the bonding orbitals \(\ket{\psi_{\sigma}(r)}\) as follows: \[\begin{split}&\ket{\psi_{\sigma\sigma}(r_{1},r_{2})}\\ &=\ket{\psi_{\sigma}(r_{1})}\otimes\ket{\psi_{\sigma}(r_{2})}\\ &=\ket{p_{1}(r_{1})}\otimes\ket{p_{2}(r_{2})}+\ket{p_{2}(r_{1})}\otimes\ket{p_{1}(r_{2})}\\ &\qquad+\ket{p_{1}(r_{1})}\otimes\ket{p_{1}(r_{2})}+\ket{p_{2}(r_{1})}\otimes\ket{p_{2}(r_{2})}.\end{split} \tag{23}\] The four terms in the above expression correspond to the four peaks in the zeroth product state and in the HF ground state shown in Fig. 5(b); thus, \[\ket{\psi_{\sigma\sigma}(r_{1},r_{2})}\sim\ket{\mu_{0}(r_{1})}\otimes\ket{\chi_{0}(r_{2})}\sim\ket{\psi_{\text{HF}}(r_{1},r_{2})}. \tag{24}\]

Figure 6: (a) Potential energy curves and (b) entanglement entropy obtained by exact diagonalization (black dashed lines) and VQE with the MC architecture (blue solid lines) and the HF architecture (red solid lines). The calculated potential energy curves are significantly different from those of the three-dimensional hydrogen molecular system, which is attributed to the difference in dimensions between the systems [43].

Figure 7: Results of the Schmidt decomposition of \(\ket{\psi(r_{1\downarrow},r_{2\uparrow})}\). Left: First three Schmidt coefficients \(\lambda_{0},\lambda_{1},\lambda_{2}\) at each interatomic distance. Right: Resulting product-state wave function at interatomic distance \(|R_{1}-R_{2}|=0.5\) bohrs (\(R_{1},R_{2}\) are shown by pink dots), and corresponding one-electron orbitals \(\ket{\mu_{i}(r_{1\downarrow})},\ket{\chi_{i}(r_{2\uparrow})}\) (black lines). The one-electron orbitals are scaled for visibility.

The antibonding state, or two-electron excited state, \(\ket{\psi_{\sigma^{*}\sigma^{*}}(r_{1},r_{2})}\) is expressed as the tensor product of the antibonding orbitals \(\ket{\psi_{\sigma^{*}}(r)}\) as follows: \[\begin{split}&\ket{\psi_{\sigma^{*}\sigma^{*}}(r_{1},r_{2})}\\ &=-\ket{\psi_{\sigma^{*}}(r_{1})}\otimes\ket{\psi_{\sigma^{*}}(r_{2})}\\ &=\ket{p_{1}(r_{1})}\otimes\ket{p_{2}(r_{2})}+\ket{p_{2}(r_{1})}\otimes\ket{p_{1}(r_{2})}\\ &\qquad-\ket{p_{1}(r_{1})}\otimes\ket{p_{1}(r_{2})}-\ket{p_{2}(r_{1})}\otimes\ket{p_{2}(r_{2})},\end{split} \tag{25}\] where \(-1\) is taken as a global phase. The four terms in the above expression correspond to the four peaks in the first product state; thus, \[\ket{\psi_{\sigma^{*}\sigma^{*}}(r_{1},r_{2})}\sim\ket{\mu_{1}(r_{1})}\otimes\ket{\chi_{1}(r_{2})}. \tag{26}\] Superposing bonding and antibonding states equally yields \[\begin{split}&\ket{\psi_{\sigma\sigma+\sigma^{*}\sigma^{*}}(r_{1},r_{2})}\\ &=\frac{1}{2}\ket{\psi_{\sigma\sigma}(r_{1},r_{2})}+\frac{1}{2}\ket{\psi_{\sigma^{*}\sigma^{*}}(r_{1},r_{2})}\\ &=\ket{p_{1}(r_{1})}\otimes\ket{p_{2}(r_{2})}+\ket{p_{2}(r_{1})}\otimes\ket{p_{1}(r_{2})}.\end{split} \tag{27}\] It can be seen that the energetically unfavorable electron configurations \(\ket{p_{1}(r_{1})}\otimes\ket{p_{1}(r_{2})}\), \(\ket{p_{2}(r_{1})}\otimes\ket{p_{2}(r_{2})}\), where both electrons are distributed around the same proton, are cancelled out and a lower energy state is achieved. The two terms in the above expression correspond to the two peaks in the MC state \(\ket{\psi_{\text{MC}}(r_{1},r_{2})}\) shown in Fig. 5(c); thus, \[\ket{\psi_{\sigma\sigma+\sigma^{*}\sigma^{*}}(r_{1},r_{2})}\sim\ket{\psi_{\text{MC}}(r_{1},r_{2})}. \tag{28}\]
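The cancellation in Eq. (27) is easy to check numerically. The following small sketch (my own illustration, with abstract orthonormal vectors standing in for the atomic orbitals \(|p_{1}\rangle,|p_{2}\rangle\)) reproduces the covalent-only superposition:

```python
import numpy as np

# Sketch (my illustration): the equal superposition of bonding and antibonding
# pair states, Eq. (27), cancels the ionic terms |p1>|p1> and |p2>|p2>.
d = 6
p1, p2 = np.eye(d)[0], np.eye(d)[1]    # orthonormal "atomic orbitals"

bond = p1 + p2                         # bonding orbital
anti = p1 - p2                         # antibonding orbital

pair_bond = np.kron(bond, bond)        # Eq. (23)
pair_anti = -np.kron(anti, anti)       # Eq. (25), with the global phase -1
mixed = 0.5 * pair_bond + 0.5 * pair_anti   # Eq. (27)

target = np.kron(p1, p2) + np.kron(p2, p1)  # covalent terms only
assert np.allclose(mixed, target)
print("ionic configurations cancelled")
```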
This state represents the electron configuration where electrons are distributed around different protons to avoid each other, reminiscent of correlated, or entangled, electrons. To summarize, a multi-configuration state is inherently an entangled state consisting of multiple product states, and electron correlation can be interpreted as quantum entanglement in the electron system. Although this discussion on electron orbitals may be just a textbook subject [41; 42], it is worth noting that such an orbital picture can be extracted from many-body states represented in real space by using the Schmidt decomposition. In addition, the minor orbital contributions, other than bonding and antibonding orbitals, can be considered in our method. Indeed, as shown in the left part of Fig. 7, the exact ground state and the MC state include the contribution of the second product state \(\lambda_{2}\ket{\mu_{2}(r_{1\downarrow})}\otimes\ket{\chi_{2}(r_{2\uparrow})}\) whose electron orbitals have nodes at proton positions and are distributed in the interstitial region. Since a real-space basis is a complete basis set of a discrete space, it can represent both localized atomic orbitals and such delocalized orbitals, freeing us from the need for careful selection of basis functions in typical quantum chemical calculations. From this point of view, a real-space basis is an attractive option for the quantum simulation in the FTQC era. The results shown in this section demonstrate one possible framework of quantum chemistry in the FTQC era, where we understand electronic structures based on electron orbitals derived from many-body wave functions represented in real space. In this study, many-body wave functions were analyzed by a classical computer; however, since it is almost impossible to obtain a state vector of even a quantum circuit of a few dozen qubits, this analysis must be performed on a quantum computer. Fortunately, several methods have been proposed to perform singular value decomposition on quantum computers [54; 55; 56], which may allow such analysis to be performed on a quantum computer with reasonable computational resources. A remaining concern in our method is the barren plateaus [57; 58; 59] in the optimization process of the variational state. In fact, the disappearance of gradients of variational parameters was observed during the optimization process, which required a large number of optimization steps. Such barren plateaus are the most serious problem in most variational quantum algorithms. Nevertheless, as mentioned in the Introduction, since by using the QPE, the true ground state can be distilled from the approximated ground state prepared by the VQE, the incomplete convergence of the VQE due to the barren plateaus will be tolerated to some extent. The complementary combination of the VQE and the QPE will allow the state preparation for first-quantized methods. ## IV Summary and Conclusion In this paper, we propose a state preparation method for the first-quantized quantum simulations based on the real-space representation. We employ a variational method for preparing ground states, and provide a design principle for constructing antisymmetrized variational quantum circuits. Our proposed circuits are capable of generating a superposition of an exponentially large number of Slater determinants, that is, MC states, with polynomial numbers of quantum gates. 
We performed VQE calculations for a 1D-H\({}_{2}\) system and confirmed that the proposed circuit reproduces well the exact fermionic ground state. In addition to performing VQE, we quantitatively evaluated the multi-configuration character of many-body states by using the entanglement entropy between two electrons, and extracted the electronic orbital picture from many-body states represented in real space by the Schmidt decomposition. Quantum computers, as demonstrated in this study, have great potential to simulate many-body electron systems and shed light on their quantum information. We believe that our proposed method will contribute to realizing the first-quantized simulation for electron dynamics and will bring a deeper understanding of electron systems to materials science in the FTQC era.
2304.06260
Systematic construction of topological-nontopological hybrid universal quantum gates based on many-body Majorana fermion interactions
Topological quantum computation by way of braiding of Majorana fermions is not universal quantum computation. There are several attempts to make universal quantum computation by introducing some additional quantum gates or quantum states. However, there is an embedding problem that $M$-qubit gates cannot be embedded straightforwardly in $N$ qubits for $N>M$. This problem is inherent to the Majorana system, where logical qubits are different from physical qubits because braiding operations preserve the fermion parity. By introducing $2N$-body interactions of Majorana fermions, topological-nontopological hybrid universal quantum computation is shown to be possible. Especially, we make a systematic construction of the C$^{n}$Z gate, C$^{n}$NOT gate and the C$^{n}$SWAP gate.
Motohiko Ezawa
2023-04-13T04:41:29Z
http://arxiv.org/abs/2304.06260v2
# Topological quantum gates and topological entangled states ###### Abstract We investigate various quantum gates and entangled states generated solely by braidings of Majorana fermions in a one-dimensional chain. The coefficients of these quantum gates and entangled states are exactly fixed owing to the nature of topological quantum computation and hence they are topologically protected. The cat states and the Bell states can be constructed from the initial states \(|0\rangle\) and \(|00\rangle\), respectively. The Deutsch algorithm is executable. The Hadamard transformation gate as well as the Pauli gates are generated for an arbitrary number of qubits. The equal-coefficient states are constructible for an arbitrary number of qubits. Furthermore, it is possible to execute a simplified Deutsch-Jozsa algorithm for an arbitrary number of qubits. Then, we present a no-go theorem on the construction of quantum gates based on the determinant of the braiding operators. It is impossible to construct C\({}^{k}\)Z gates and C\({}^{k}\)NOT gates for \(k\geq 2\) and C\({}^{k}\)SWAP gates for \(k\geq 1\), including the CCZ gate, the Toffoli gate and the Fredkin gate. In addition, it is impossible to construct quantum Fourier transformations except for the Hadamard gate. ## I Introduction A quantum computer is a promising next generation computer[1; 2; 3]. Quantum gates and entangled states are key ingredients. In order to execute any quantum algorithms, universal quantum computation is necessary[4; 5; 6]. There are various approaches to realize universal computation including superconductors[7], photonic systems[8], quantum dots[9], trapped ions[10] and nuclear magnetic resonance[11; 12]. However, there is a decoherence problem for nontopological quantum computers, degrading entangled states. In order to make a precise quantum gate, it is necessary to finely tune gate operations. On the other hand, topological quantum computation is free from this fidelity degradation due to the topological protection. Braidings of Majorana fermions are the most promising method for topological quantum computation[13; 14; 15; 16]. They are a key to solving the decoherence problem existing in conventional quantum computation. There are various approaches to materialize Majorana fermions such as fractional quantum Hall effects[17; 18; 19; 15], topological superconductors[20; 21; 22; 23; 24; 25; 26] and Kitaev spin liquids[27; 28]. However, braiding alone is not universal quantum computation; it can generate only a part of the Clifford gates[29; 30]. The entire Clifford gates are generated for two qubits but this is not the case for more than three qubits[30]. Furthermore, the Clifford gates alone are not enough to exceed classical computers, which is known as the Gottesman-Knill theorem[31; 32; 33]. Although universal computation is possible based on Majorana fermions with the aid of magic state distillation[22; 34], the topological protection is broken in this approach. So far, the only known application of the braidings of Majorana fermions is a simplified two-qubit Deutsch-Jozsa algorithm[35]. It is hard to find quantum gates acting on three or more qubits based only on braidings of Majorana fermions[36; 37]. It is an interesting problem to search what quantum gates and entangled states can be made solely by braiding Majorana fermions. In this paper, we show that braidings of Majorana fermions are powerful for quantum computation although they are not universal. 
This is because many basic entangled states and quantum gates have the coefficients \(i^{n}\) with \(n=0,1,2,3\), which are likely to be generated by braidings. Especially, we show that entangled states such as the cat states, the Bell states and the equal-coefficient states are constructed solely by braidings of Majorana fermions on a one-dimensional chain. We also show that the Pauli gates and the Hadamard transformation gate are generated for an arbitrary number of qubits only by braidings. It makes possible the generation of the equal-coefficient state for an arbitrary number of qubits. We argue that these entangled states constructed by braidings are digital, where the coefficients of each bit state are restricted to be \(i^{n}\) with \(n=0,1,2,3\). They are robust against decoherence because a small perturbation is unable to change their coefficients. This is also the case for quantum gates. Hence, we call them topological entangled states and topological quantum gates. We also discuss the embedding problem of quantum gates. We find that all of the Pauli gates and the \(i\)SWAP gate can be embedded in an arbitrary number of qubits. We give a criterion, based on the determinant of the unitary transformation, for which quantum gates cannot be implemented. It is shown that the C\({}^{k}\)Z gates and the C\({}^{k}\)NOT gates with \(k\geq 2\), and the C\({}^{k}\)SWAP gates with \(k\geq 1\), cannot be implemented solely by braidings. Furthermore, it is impossible to construct the quantum Fourier transformation except for the Hadamard gate. Finally, we show that the Deutsch algorithm is executable, and that a simplified Deutsch-Jozsa algorithm is executable for an arbitrary number of qubits solely by braiding Majorana fermions. This paper is composed as follows. In sec. II, we make a concise review of the basic properties of braidings of Majorana fermions. In sec. III, we study how to construct one logical qubit from two physical qubits made of four Majorana fermions. We propose to construct the cat states by braidings. In sec. IV, we proceed to discuss two logical qubits. After reviewing some known two-qubit quantum gates, we present newly found braiding representations of various quantum gates. In particular, the Bell states are constructed by braidings. In sec. V, we present a no-go theorem that the CCZ gate, the CCNOT (Toffoli) gate and the CSWAP gate cannot be made by braidings due to the constraint of the determinant on braidings. Some embeddings of quantum gates are discussed. In sec. VI, we generalize these results to the system containing an arbitrary number of qubits. In sec. VII, we show that the quantum Fourier transformation cannot be made by braidings except for the Hadamard gate. In sec. VIII, we study algorithms executable by braidings. In sec. IX, we discuss universal digital quantum computation based on Majorana fermions. Sec. X is devoted to discussions. ## II Majorana fermions and their braiding The Majorana fermions are described by operators \(\gamma_{\alpha}\) satisfying the anticommutation relations \[\{\gamma_{\alpha},\gamma_{\beta}\}=2\delta_{\alpha\beta}. \tag{1}\] The braid operators are defined by[13] \[\mathcal{B}_{\alpha\beta}=\exp\left[\frac{\pi}{4}\gamma_{\beta}\gamma_{\alpha}\right]=\frac{1}{\sqrt{2}}\left(1+\gamma_{\beta}\gamma_{\alpha}\right). \tag{2}\] It satisfies \(\mathcal{B}_{\alpha\beta}^{4}=1\) and there is a corresponding antibraid operator \(\mathcal{B}_{\alpha\beta}^{-1}=\mathcal{B}_{\alpha\beta}^{3}\). 
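As a concrete check of these defining relations, the following numpy sketch (my own illustration; the Jordan-Wigner-type matrices are an assumption, since the paper works abstractly with the operators) verifies Eq. (1), the unitarity of Eq. (2), and the parity conservation of Eq. (5) below:

```python
import numpy as np

# Sketch (my illustration): represent four Majorana operators on two qubits
# via a Jordan-Wigner-type construction and check the braiding relations.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

gammas = [kron(X, I2), kron(Y, I2), kron(Z, X), kron(Z, Y)]

# Anticommutation relations {gamma_a, gamma_b} = 2 delta_ab, Eq. (1).
for a, ga in enumerate(gammas):
    for b, gb in enumerate(gammas):
        assert np.allclose(ga @ gb + gb @ ga, 2 * (a == b) * np.eye(4))

# Braid operator B = (1 + gamma_b gamma_a)/sqrt(2), Eq. (2), is unitary and
# commutes with the fermion parity P = i gamma_b gamma_a.
ga, gb = gammas[0], gammas[1]
B = (np.eye(4) + gb @ ga) / np.sqrt(2)
P = 1j * gb @ ga
assert np.allclose(B @ B.conj().T, np.eye(4))
assert np.allclose(B @ P, P @ B)
print("braid relations verified")
```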
The qubit basis is defined by[13] \[\left|n_{N+1}n_{N}\cdots n_{2}n_{1}\right\rangle_{\text{physical}}\equiv\left(c_{1}^{\dagger}\right)^{n_{1}}\left(c_{2}^{\dagger}\right)^{n_{2}}\cdots\left(c_{N}^{\dagger}\right)^{n_{N}}\left(c_{N+1}^{\dagger}\right)^{n_{N+1}}\left|0\right\rangle, \tag{3}\] where ordinary fermion operators are constructed from two Majorana fermions as \[c_{j}=\frac{1}{2}\left(\gamma_{2j-1}+i\gamma_{2j}\right). \tag{4}\] The braiding operation preserves the fermion parity \(P_{\alpha\beta}\equiv i\gamma_{\beta}\gamma_{\alpha}\), which commutes with the braid operator \(\mathcal{B}_{\alpha\beta}\), \[\left[\mathcal{B}_{\alpha\beta},P_{\alpha\beta}\right]=0. \tag{5}\] \(2N+2\) Majorana fermions constitute \(N+1\) physical qubits. The braiding operation preserves the fermion parity. It means that if we start with the even parity state \(\left|00\cdots 0\right\rangle_{\text{physical}}\), the states after any braiding process should have even fermion parity. Therefore, \(N+1\) physical qubits \(\left|n^{\prime}_{N+1}\cdots n^{\prime}_{2}n^{\prime}_{1}\right\rangle_{\text{physical}}\) are necessary[38; 39; 40] for \(N\) logical qubits \(\left|n_{N}\cdots n_{2}n_{1}\right\rangle_{\text{logical}}\). There are \(N!\) correspondences between the logical and physical qubits. However, we adopt the following unique correspondence. When a logical qubit \(\left|n_{N}\cdots n_{2}n_{1}\right\rangle_{\text{logical}}\) is given, we associate to it a physical qubit \(\left|n_{N}\cdots n_{2}n_{1}n_{0}\right\rangle_{\text{physical}}\) by adding one qubit \(n_{0}\) uniquely so that \(\sum_{j=0}^{N}n_{j}=0\text{ mod }2\). Alternatively, when a physical qubit \(\left|n_{N}\cdots n_{2}n_{1}n_{0}\right\rangle_{\text{physical}}\) is given, we associate to it a logical qubit \(\left|n_{N}\cdots n_{2}n_{1}\right\rangle_{\text{logical}}\) just by eliminating the qubit \(n_{0}\). An example reads as follows, \[\begin{pmatrix}\overbrace{\left|0,\cdots,0,0,0\right\rangle}^{N}\\ \left|0,\cdots,0,0,1\right\rangle\\ \left|0,\cdots,0,1,0\right\rangle\\ \left|0,\cdots,0,1,1\right\rangle\\ \left|0,\cdots,1,0,0\right\rangle\\ \left|0,\cdots,1,0,1\right\rangle\\ \left|0,\cdots,1,1,0\right\rangle\\ \left|0,\cdots,1,1,1\right\rangle\\ \cdots\end{pmatrix}_{\text{logical}}=\begin{pmatrix}\overbrace{\left|0,\cdots,0,0,0,0\right\rangle}^{N+1}\\ \left|0,\cdots,0,0,1,1\right\rangle\\ \left|0,\cdots,0,1,0,1\right\rangle\\ \left|0,\cdots,0,1,1,0\right\rangle\\ \left|0,\cdots,1,0,0,1\right\rangle\\ \left|0,\cdots,1,0,1,0\right\rangle\\ \left|0,\cdots,1,1,0,0\right\rangle\\ \left|0,\cdots,1,1,1,1\right\rangle\\ \cdots\end{pmatrix}_{\text{physical}}. \tag{6}\] Actually, the correspondence in the present work is different from the previous works[35; 36; 37; 39; 40]. Accordingly, the detailed braiding processes for quantum gates are slightly different from the previous works[35; 36; 37; 39; 40]. We consider a one-dimensional chain of Majorana fermions and only consider the braidings between adjacent Majorana fermions. We denote \(\mathcal{B}_{\alpha}\equiv\mathcal{B}_{\alpha,\alpha+1}\). The braid operators \(\mathcal{B}_{\alpha}\) satisfy the Artin braid group relations[41], \[\mathcal{B}_{\alpha}\mathcal{B}_{\beta}=\mathcal{B}_{\beta}\mathcal{B}_{\alpha}\qquad\text{for}\quad\left|\alpha-\beta\right|\geq 2, \tag{7}\] \[\mathcal{B}_{\alpha}\mathcal{B}_{\alpha+1}\mathcal{B}_{\alpha}=\mathcal{B}_{\alpha+1}\mathcal{B}_{\alpha}\mathcal{B}_{\alpha+1}. \tag{8}\] The second equation is known as the Yang-Baxter equation. ## III One logical qubit We discuss how to construct one logical qubit[13]. 
Two ordinary fermions \(c_{1}\) and \(c_{2}\) are introduced from four Majorana fermions as \[c_{1}=\frac{1}{2}\left(\gamma_{1}+i\gamma_{2}\right),\qquad c_{2}=\frac{1}{2}\left(\gamma_{3}+i\gamma_{4}\right). \tag{9}\] The basis of physical qubits is given by \[\Psi_{\text{physical}}=\left(\left|0\right\rangle,c_{1}^{\dagger}\left|0\right\rangle,c_{2}^{\dagger}\left|0\right\rangle,c_{1}^{\dagger}c_{2}^{\dagger}\left|0\right\rangle\right)^{t}\equiv\left(\left|0,0\right\rangle_{\text{physical}},\left|0,1\right\rangle_{\text{physical}},\left|1,0\right\rangle_{\text{physical}},\left|1,1\right\rangle_{\text{physical}}\right)^{t}. \tag{10}\] By taking the even parity basis as \[\left(\begin{array}{c}\left|0\right\rangle\\ \left|1\right\rangle\end{array}\right)_{\text{logical}}=\left(\begin{array}{c}\left|0,0\right\rangle\\ \left|1,1\right\rangle\end{array}\right)_{\text{physical}}, \tag{11}\] one logical qubit is constructed from two physical qubits. ### Quantum gates for one logical qubit The braid operator \(\mathcal{B}_{1}\) is written in terms of fermion operators, \[\mathcal{B}_{1}=\frac{1}{\sqrt{2}}\left(1+\gamma_{2}\gamma_{1}\right)=\frac{1}{\sqrt{2}}\left(1+ic_{1}^{\dagger}c_{1}-ic_{1}c_{1}^{\dagger}\right), \tag{12}\] which operates on two physical qubits (10) as[13] \[\mathcal{B}_{1}\Psi_{\text{physical}}=e^{-i\pi/4}\left(\begin{array}{cccc}1&0&0&0\\ 0&i&0&0\\ 0&0&1&0\\ 0&0&0&i\end{array}\right)\left(\begin{array}{c}|0,0\rangle\\ |0,1\rangle\\ |1,0\rangle\\ |1,1\rangle\end{array}\right)_{\text{physical}}. \tag{13}\] Taking the even parity basis, the action is \[\mathcal{B}_{1}\Psi_{\text{logical}}=e^{-i\pi/4}\left(\begin{array}{cc}1&0\\ 0&i\end{array}\right)\left(\begin{array}{c}|0\rangle\\ |1\rangle\end{array}\right)_{\text{logical}}, \tag{14}\] where the basis for the logical qubit is defined by \[\Psi_{\text{logical}}\equiv\left(|0\rangle\,,c_{1}^{\dagger}c_{2}^{\dagger}\,|0\rangle\right)^{t}. \tag{15}\] The braid operation is written as \[\mathcal{B}_{1}=e^{-i\pi/4}U_{\text{S}}, \tag{16}\] in terms of the S gate defined by \[U_{\text{S}}\equiv\text{diag.}\left(1,i\right). \tag{17}\] The braid operator \(\mathcal{B}_{2}\) is written in terms of fermion operators, \[\mathcal{B}_{2}=\frac{1}{\sqrt{2}}\left(1+\gamma_{3}\gamma_{2}\right)=\frac{1}{\sqrt{2}}\left(1+ic_{2}c_{1}^{\dagger}+ic_{2}^{\dagger}c_{1}^{\dagger}-ic_{2}c_{1}-ic_{2}^{\dagger}c_{1}\right). \tag{18}\] It operates on two physical qubits (10) as[13], \[\mathcal{B}_{2}\Psi_{\text{physical}}=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}1&0&0&-i\\ 0&1&-i&0\\ 0&-i&1&0\\ -i&0&0&1\end{array}\right)\left(\begin{array}{c}|0,0\rangle\\ |0,1\rangle\\ |1,0\rangle\\ |1,1\rangle\end{array}\right)_{\text{physical}}, \tag{19}\] where we define the matrix representation of \(\mathcal{B}_{2}\), \[U_{\text{Mix}}^{(2)}\equiv\frac{1}{\sqrt{2}}\left(I_{2}\otimes I_{2}-iU_{\text{X}}\otimes U_{\text{X}}\right). \tag{20}\] In the even parity basis, the action is \[\mathcal{B}_{2}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&-i\\ -i&1\end{array}\right)=\frac{1}{\sqrt{2}}\left(I_{2}-iU_{\text{X}}\right)\equiv U_{\text{Mix}}^{(1)}. \tag{21}\] It has the relation \[\mathcal{B}_{2}=e^{-i\pi/4}U_{\sqrt{\text{X}}}, \tag{22}\] where \(U_{\sqrt{\text{X}}}\) is the square-root of X gate defined by \[U_{\sqrt{\text{X}}}\equiv\frac{1}{2}\left(\begin{array}{cc}1+i&1-i\\ 1-i&1+i\end{array}\right). \tag{23}\] The corresponding braidings are shown in Fig.1(a). 
The braid operator \(\mathcal{B}_{3}\) is written in terms of fermion operators \[\mathcal{B}_{3}=\frac{1}{\sqrt{2}}\left(1+\gamma_{4}\gamma_{3}\right)=\frac{1}{\sqrt{2}}\left(1+ic_{2}^{\dagger}c_{2}-ic_{2}c_{2}^{\dagger}\right), \tag{24}\] which operates on two physical qubits (10) as[13] \[\mathcal{B}_{3}\Psi_{\text{physical}}=e^{-i\pi/4}\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&i&0\\ 0&0&0&i\end{array}\right)\left(\begin{array}{c}|0,0\rangle\\ |0,1\rangle\\ |1,0\rangle\\ |1,1\rangle\end{array}\right)_{\text{physical}}. \tag{25}\] In the even parity basis, the action is the same as (16), \[\mathcal{B}_{3}=e^{-i\pi/4}U_{\text{S}}, \tag{26}\] where the S gate is defined by (17). The corresponding braidings are shown in Fig.1(b). The determinants of braidings are \[\det\left(\mathcal{B}_{1}\right)=\det\left(\mathcal{B}_{3}\right)=i,\qquad\det\left(\mathcal{B}_{2}\right)=1. \tag{27}\] Accordingly, the unitary transformation \(U^{(1)}\) in one logical qubit generated by braidings must satisfy \[\det\left(U^{(1)}\right)=i^{n}, \tag{28}\] with \(n=0,1,2,3\). It is impossible to construct the T gate \[U_{\text{T}}\equiv\text{diag.}\left(1,e^{i\pi/4}\right), \tag{29}\] which is one of the basic ingredients for universal quantum computation based on the Solovay-Kitaev theorem[4; 5; 6] because \[\det U_{\text{T}}=e^{i\pi/4}\neq i^{n}. \tag{30}\] It is because the braid group for \(2N+2\) Majorana fermions is equivalent to \(\pi/2\) rotations in SO\((2N+2)\)[38; 39; 40] and cannot generate the phase \(e^{i\pi/4}\) by braidings. The Pauli Z gate is given by double braidings of \(\mathcal{B}_{3}\), \[U_{\text{Z}}\equiv\text{diag.}\left(1,-1\right)=U_{\text{S}}^{2}=i\mathcal{B}_{3}^{2}. \tag{31}\] The corresponding braidings are shown in Fig.1(c). The Pauli X gate (NOT gate) is given[15] by double braidings of \(\mathcal{B}_{2}\), \[U_{\text{X}}\equiv\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)=i\mathcal{B}_{2}^{2}. \tag{32}\] The corresponding braidings are shown in Fig.1(d).

Figure 1: (a) Square-root of NOT gate, (b) S gate, (c) Pauli Z gate, (d) Pauli X gate, (e) Pauli Y gate and (f) Hadamard gate.

Then, the Pauli Y gate is given by sequential applications of \(\mathcal{B}_{2}\) and \(\mathcal{B}_{3}\), \[U_{\text{Y}}\equiv\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right)=iU_{\text{X}}U_{\text{Z}}=-\mathcal{B}_{2}^{2}\mathcal{B}_{3}^{2}. \tag{33}\] The corresponding braidings are shown in Fig.1(e). The Hadamard gate is defined by \[U_{\text{H}}\equiv\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right). \tag{34}\] It is known to be generated by triple braids as[35; 36] \[U_{\text{H}}=i\mathcal{B}_{2}\mathcal{B}_{3}\mathcal{B}_{2}. \tag{35}\] The corresponding braidings are shown in Fig.1(f). ### One logical qubit entangled states The even cat state is made by applying the Hadamard gate (34) as \[U_{\text{H}}\left|0\right\rangle_{\text{logical}}=i\mathcal{B}_{2}\mathcal{B}_{3}\mathcal{B}_{2}\left|0\right\rangle_{\text{logical}}=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle_{\text{logical}}+\left|1\right\rangle_{\text{logical}}\right). \tag{36}\] However, a double braiding is enough for the construction of the even cat state \(\left|\psi\right\rangle_{\text{even-cat}}\), \[e^{i\pi/4}\mathcal{B}_{1}\mathcal{B}_{2}\left|0\right\rangle_{\text{logical}}=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle_{\text{logical}}+\left|1\right\rangle_{\text{logical}}\right)\equiv\left|\psi\right\rangle_{\text{even-cat}}. \tag{37}\]
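These one-qubit identities can be verified directly from the logical-basis matrices. A short numpy sketch (my own illustration) checks Eqs. (31), (32), (35) and the double-braiding cat state of Eq. (37):

```python
import numpy as np

# Sketch (my illustration): verify one-qubit braiding identities in the
# logical basis, using B1 = B3 = e^{-i pi/4} S (Eq. (16), (26)) and B2 from
# Eq. (21).
S = np.diag([1, 1j])
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

B1 = B3 = np.exp(-1j * np.pi / 4) * S
B2 = (np.eye(2) - 1j * X) / np.sqrt(2)

assert np.allclose(1j * B3 @ B3, Z)          # Pauli Z, Eq. (31)
assert np.allclose(1j * B2 @ B2, X)          # Pauli X, Eq. (32)
assert np.allclose(1j * B2 @ B3 @ B2, H)     # Hadamard, Eq. (35)

ket0 = np.array([1, 0], dtype=complex)
cat = np.exp(1j * np.pi / 4) * B1 @ B2 @ ket0   # even cat state, Eq. (37)
print(cat)                                      # [0.7071..., 0.7071...]
```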
On the other hand, the odd cat state \(\left|\psi\right\rangle_{\text{odd-cat}}\) is made as \[e^{-i\pi/4}\mathcal{B}_{1}^{-1}\mathcal{B}_{2}\left|0\right\rangle_{\text{logical}}=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle_{\text{logical}}-\left|1\right\rangle_{\text{logical}}\right)\equiv\left|\psi\right\rangle_{\text{odd-cat}}. \tag{38}\] Only 6 states can be constructed by braidings in one qubit. The state \(\left(\left|0\right\rangle_{\text{logical}}\pm i\left|1\right\rangle_{\text{logical}}\right)/\sqrt{2}\) is constructed by a single braiding, while the states \(\left|1\right\rangle_{\text{logical}}\) and \(\left(\left|0\right\rangle_{\text{logical}}\pm\left|1\right\rangle_{\text{logical}}\right)/\sqrt{2}\) are constructed by double braidings. No further states can be constructed by further braidings. ## IV Two logical qubits In order to construct two logical qubits, we use six Majorana fermions \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\), \(\gamma_{4}\), \(\gamma_{5}\) and \(\gamma_{6}\). Three ordinary fermion operators are given by \[c_{1}=\frac{1}{2}\left(\gamma_{1}+i\gamma_{2}\right),\quad c_{2}=\frac{1}{2}\left(\gamma_{3}+i\gamma_{4}\right),\quad c_{3}=\frac{1}{2}\left(\gamma_{5}+i\gamma_{6}\right). \tag{39}\] The basis of physical qubits is given by \[\Psi_{\text{physical}}=\left(\left|0\right\rangle,c_{1}^{\dagger}\left|0\right\rangle,c_{2}^{\dagger}\left|0\right\rangle,c_{1}^{\dagger}c_{2}^{\dagger}\left|0\right\rangle,c_{3}^{\dagger}\left|0\right\rangle,c_{1}^{\dagger}c_{3}^{\dagger}\left|0\right\rangle,c_{2}^{\dagger}c_{3}^{\dagger}\left|0\right\rangle,c_{1}^{\dagger}c_{2}^{\dagger}c_{3}^{\dagger}\left|0\right\rangle\right)^{t}\equiv\left(\left|0,0,0\right\rangle_{\text{physical}},\left|0,0,1\right\rangle_{\text{physical}},\left|0,1,0\right\rangle_{\text{physical}},\left|0,1,1\right\rangle_{\text{physical}},\left|1,0,0\right\rangle_{\text{physical}},\left|1,0,1\right\rangle_{\text{physical}},\left|1,1,0\right\rangle_{\text{physical}},\left|1,1,1\right\rangle_{\text{physical}}\right)^{t}. \tag{40}\] The explicit braid actions on the physical qubits are \[\mathcal{B}_{1}=U_{\text{S}}\otimes I_{2}\otimes I_{2}, \tag{41}\] \[\mathcal{B}_{2}=U_{\text{Mix}}^{(2)}\otimes I_{2}, \tag{42}\] \[\mathcal{B}_{3}=I_{2}\otimes U_{\text{S}}\otimes I_{2}, \tag{43}\] \[\mathcal{B}_{4}=I_{2}\otimes U_{\text{Mix}}^{(2)}, \tag{44}\] \[\mathcal{B}_{5}=I_{2}\otimes I_{2}\otimes U_{\text{S}}. \tag{45}\] The two logical qubits are constructed from three physical qubits as \[\left(\begin{array}{c}\left|0,0\right\rangle\\ \left|0,1\right\rangle\\ \left|1,0\right\rangle\\ \left|1,1\right\rangle\end{array}\right)_{\text{logical}}=\left(\begin{array}{c}\left|0,0,0\right\rangle\\ \left|0,1,1\right\rangle\\ \left|1,0,1\right\rangle\\ \left|1,1,0\right\rangle\end{array}\right)_{\text{physical}}. \tag{46}\]

Figure 2: The braiding process for Pauli gates. (a) Pauli Z gate embedded into the first qubit, (b) Pauli Z gate embedded into the second qubit, (c) Two Pauli Z gates are embedded into the first and the second qubits, (d) Pauli X gate embedded into the first qubit, (e) Two Pauli X gates are embedded into the first and the second qubits, (f) Pauli X gate embedded into the second qubit, (g) Hadamard gate embedded into the first qubit and (h) Hadamard gate embedded into the second qubit. 
In the logical qubit basis, the actions of the braidings are \[\mathcal{B}_{1}=e^{-i\pi/4}\text{diag.}\left(1,i,i,1\right), \tag{47}\] \[\mathcal{B}_{2}=\left(\begin{array}{cc}U_{\text{Mix}}^{(1)}&0\\ 0&U_{\text{Mix}}^{(1)}\end{array}\right), \tag{48}\] \[\mathcal{B}_{3}=e^{-i\pi/4}\text{diag.}\left(1,i,1,i\right), \tag{49}\] \[\mathcal{B}_{4}=U_{\text{Mix}}^{(2)}, \tag{50}\] \[\mathcal{B}_{5}=e^{-i\pi/4}\text{diag.}\left(1,1,i,i\right), \tag{51}\] where \(U_{\text{Mix}}^{(1)}\) is defined by (21) and \(U_{\text{Mix}}^{(2)}\) is defined by (20). The determinants of braidings are \[\det\left(\mathcal{B}_{1}\right)=\det\left(\mathcal{B}_{3}\right)=\det\left(\mathcal{B}_{5}\right)=-1, \tag{52}\] \[\det\left(\mathcal{B}_{2}\right)=\det\left(\mathcal{B}_{4}\right)=1. \tag{53}\] The unitary transformation \(U^{(2)}\) in two logical qubits constructed by braidings must satisfy \[\det\left(U^{(2)}\right)=\pm 1. \tag{54}\] For example, it is impossible to construct the controlled S (CS) gate \[U_{\text{CS}}\equiv\text{diag.}\left(1,1,1,i\right), \tag{55}\] because \(\det\left(U_{\text{CS}}\right)=i\). It is natural because the CS gate is not a Clifford gate. ### One logical qubit embedded in two logical qubits The direct product of two one-qubit quantum gates \(U_{1}\) and \(U_{2}\), denoted as \[U_{2}\otimes U_{1}, \tag{56}\] acts on the two-qubit state \(\left|n_{2},n_{1}\right\rangle\), where \(U_{1}\) acts on the first qubit \(n_{1}\) and \(U_{2}\) acts on the second qubit \(n_{2}\). #### iv.1.1 Pauli gates The two-qubit Pauli gates are defined by \[\sigma_{k_{2}}\otimes\sigma_{k_{1}}, \tag{57}\] where \(k_{1}\) and \(k_{2}\) take \(0,x,y\) and \(z\). The Pauli Z gates are generated by braidings \(\mathcal{B}_{2k+1}\) with odd indices, \[I_{2}\otimes\sigma_{\text{Z}}=i\mathcal{B}_{3}^{2},\quad\sigma_{\text{Z}}\otimes I_{2}=i\mathcal{B}_{5}^{2},\quad\sigma_{\text{Z}}\otimes\sigma_{\text{Z}}=-\mathcal{B}_{5}^{2}\mathcal{B}_{3}^{2}. \tag{58}\] They are summarized as \[\left(\sigma_{\text{Z}}\right)^{n_{2}}\otimes\left(\sigma_{\text{Z}}\right)^{n_{1}}=\left(i\mathcal{B}_{5}^{2}\right)^{n_{2}}\left(i\mathcal{B}_{3}^{2}\right)^{n_{1}}, \tag{59}\] where \(n_{1}\) and \(n_{2}\) take \(0\) or \(1\). The Pauli X gates are generated by braidings with even indices \(\mathcal{B}_{2k}\), \[I_{2}\otimes\sigma_{\text{X}}=i\mathcal{B}_{2}^{2},\quad\sigma_{\text{X}}\otimes\sigma_{\text{X}}=i\mathcal{B}_{4}^{2},\quad\sigma_{\text{X}}\otimes I_{2}=-\mathcal{B}_{4}^{2}\mathcal{B}_{2}^{2}. \tag{60}\] It should be noted that \(\mathcal{B}_{4}^{2}\) does not generate \(I_{2}\otimes\sigma_{\text{X}}\) but generates \(\sigma_{\text{X}}\otimes\sigma_{\text{X}}\). We show the braidings for Pauli gates in Fig.2. Pauli Y gates are generated by sequential applications of Pauli X gates and Pauli Z gates based on the relation \(U_{\text{Y}}=iU_{\text{X}}U_{\text{Z}}\). Thus, all of the Pauli gates for two qubits can be generated by braidings. #### iv.1.2 Hadamard gates The Hadamard gate acting on the first qubit can be embedded as \[I_{2}\otimes U_{\text{H}}=i\mathcal{B}_{2}\mathcal{B}_{3}\mathcal{B}_{2}. \tag{61}\] The Hadamard gate acting on the second qubit can be embedded as \[U_{\text{H}}\otimes I_{2}=-\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{3}\mathcal{B}_{4}\mathcal{B}_{3}\mathcal{B}_{2}\mathcal{B}_{1}. \tag{62}\] It requires more braidings than the previous results[35; 37], where three braidings are enough. This is due to the choice of the correspondence between the physical and logical qubits. 
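All of the embedding identities above can be confirmed numerically from the logical-basis matrices of Eqs. (47)-(51). A short numpy sketch (my own illustration) is:

```python
import numpy as np

# Sketch (my illustration): verify the embedded Pauli and Hadamard
# identities Eq. (58)-(62) from the logical-basis braid matrices.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ph = np.exp(-1j * np.pi / 4)

B1 = ph * np.diag([1, 1j, 1j, 1])                      # Eq. (47)
B2 = np.kron(I2, (I2 - 1j * X) / np.sqrt(2))           # Eq. (48)
B3 = ph * np.diag([1, 1j, 1, 1j])                      # Eq. (49)
B4 = (np.eye(4) - 1j * np.kron(X, X)) / np.sqrt(2)     # Eq. (50)
B5 = ph * np.diag([1, 1, 1j, 1j])                      # Eq. (51)

assert np.allclose(1j * B3 @ B3, np.kron(I2, Z))       # Eq. (58)
assert np.allclose(1j * B5 @ B5, np.kron(Z, I2))
assert np.allclose(1j * B4 @ B4, np.kron(X, X))        # Eq. (60)
assert np.allclose(-B4 @ B4 @ B2 @ B2, np.kron(X, I2))
assert np.allclose(1j * B2 @ B3 @ B2, np.kron(I2, H))  # Eq. (61)
assert np.allclose(-B1 @ B2 @ B3 @ B4 @ B3 @ B2 @ B1,
                   np.kron(H, I2))                     # Eq. (62)
print("embedded one-qubit gates verified")
```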
### Quantum gates for two logical qubits It is known that the controlled-Z (CZ) gate \[U_{\text{CZ}}=\text{diag.}\left(1,1,1,-1\right) \tag{63}\] is generated as[37] \[U_{\text{CZ}}=e^{-i\pi/4}\mathcal{B}_{5}^{-1}\left(\mathcal{B}_{3}\right)^{-1 }\mathcal{B}_{1}. \tag{64}\] It is also known that the controlled-NOT (CNOT) gate \[U_{\text{CNOT}}=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{array}\right) \tag{65}\] is generated by 7 braidings[30; 36; 37], where braidings are given by \[U_{\text{CNOT}}=-e^{-i\pi/4}\mathcal{B}_{5}^{-1}\mathcal{B}_{1}\mathcal{B}_{2} \mathcal{B}_{3}\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{1}. \tag{66}\] On the other hand, there is a quantum circuit decomposition formula \[U_{\text{CNOT}}=\left(I_{2}\otimes U_{\text{H}}\right)U_{\text{CZ}}\left(I_{2} \otimes U_{\text{H}}\right), \tag{67}\] which involves 9 braidings. The SWAP gate is defined by \[U_{\text{SWAP}}\equiv\left(\begin{array}{cccc}1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1\end{array}\right), \tag{68}\] which is realized by 7 braidings as \[U_{\text{SWAP}}=e^{i\pi/4}\left(\mathcal{B}_{3}\right)^{-1}\left(\mathcal{B}_{4} \right)^{-1}\left(\mathcal{B}_{5}\right)^{-1}\mathcal{B}_{3}\mathcal{B}_{4} \mathcal{B}_{3}\mathcal{B}_{1}. \tag{69}\] This is smaller than the previous result using 15 braidings[37] based on the quantum circuit decomposition \[U_{\text{SWAP}} =\left(I_{2}\otimes U_{\text{H}}\right)U_{\text{CZ}}\left(I_{2} \otimes U_{\text{H}}\right)\left(U_{\text{H}}\otimes I_{2}\right)U_{\text{CZ} }\left(U_{\text{H}}\otimes I_{2}\right)\] \[\left(I_{2}\otimes U_{\text{H}}\right)U_{\text{CZ}}\left(I_{2} \otimes U_{\text{H}}\right). \tag{70}\] We list up various quantum gates generated by braidings, which are newly found. The \(i\)SWAP gate is defined by \[U_{i\text{SWAP}}\equiv\left(\begin{array}{cccc}1&0&0&0\\ 0&0&i&0\\ 0&i&0&0\\ 0&0&0&1\end{array}\right), \tag{71}\] which is realized by the six braidings \[U_{i\text{SWAP}}=-\mathcal{B}_{3}\mathcal{B}_{4}\mathcal{B}_{5}\mathcal{B}_{3 }\mathcal{B}_{4}\mathcal{B}_{3}. \tag{72}\] The anti-CNOT gate is defined by[42] \[U_{\text{CX}}\equiv\left(\begin{array}{cccc}0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right), \tag{73}\] which is generated by 7 braidings \[U_{\text{CX}}=e^{i\pi/4}\mathcal{B}_{5}^{-1}\mathcal{B}_{1}^{-1}\mathcal{B}_ {2}^{-1}\mathcal{B}_{3}\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{1}. \tag{74}\] It can be decomposed into \(U_{\text{CX}}=\left(I_{2}\otimes U_{\text{X}}\right)U_{\text{CNOT}}\). If we use this relation, 9 braidings are necessary. Figure 3: Braiding process for various two-qubit quantum gates. (a) CZ gate, (b) CNOT gate, (c) SWAP gate, (d) anti-CX gate, (e) iSWAP gate, (f) DCNOT gate, (g) Molmer-Sorensen gate, (h) cross-resonance gate (i) Barenco gate and (j) Hadamard gate. The double CNOT gate is defined by[43] \[U_{\text{D}\text{CNOT}}\equiv\left(\begin{array}{cccc}1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 0&1&0&0\end{array}\right), \tag{75}\] which is realized by \[U_{\text{D}\text{CNOT}}=\mathcal{B}_{2}^{-1}\mathcal{B}_{3}^{-1}\mathcal{B}_{4 }^{-1}\mathcal{B}_{5}^{-1}\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{3} \mathcal{B}_{4}. \tag{76}\] The Molmer-Sorensen gate is defined by[44] \[U_{\text{MS}}\equiv\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}1&0&0&i\\ 0&1&-i&0\\ 0&-i&1&0\\ i&0&0&1\end{array}\right), \tag{77}\] which is realized by \[U_{\text{MS}}=-i\mathcal{B}_{3}\mathcal{B}_{4}\mathcal{B}_{5}\mathcal{B}_{4} \mathcal{B}_{3}\mathcal{B}_{1}\mathcal{B}_{1}. 
\tag{78}\] The cross-resonance gate is defined by[45] \[U_{\text{CR}}\equiv\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}0&0&1&i\\ 0&0&i&1\\ 1&-i&0&0\\ -i&1&0&0\end{array}\right), \tag{79}\] which is realized by \[U_{\text{CR}}=-\mathcal{B}_{4}\mathcal{B}_{4}\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{3}\mathcal{B}_{2}\mathcal{B}_{1}. \tag{80}\] The Barenco gate is defined by[46] \[U_{\text{Barenco}}\left(\alpha,\phi,\theta\right)\equiv\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&e^{i\alpha}\cos\theta&-ie^{i(\alpha-\phi)}\sin\theta\\ 0&0&-ie^{i(\alpha+\phi)}\sin\theta&e^{i\alpha}\cos\theta\end{array}\right). \tag{81}\] We consider a special case \[U_{\text{Barenco}}\left(0,\frac{\pi}{2},\frac{\pi}{2}\right)\equiv\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{array}\right), \tag{82}\] which is realized as \[U_{\text{Barenco}}\left(0,\frac{\pi}{2},\frac{\pi}{2}\right)=\mathcal{B}_{2}^{-1}\mathcal{B}_{1}^{-1}\mathcal{B}_{3}\mathcal{B}_{2}. \tag{83}\] We define the entangled Hadamard gate by \[U_{\text{H}}^{(2)}=\frac{1}{2}\left(\begin{array}{cccc}1&1&1&1\\ 1&-1&1&-1\\ 1&-1&-1&1\\ 1&1&-1&-1\end{array}\right), \tag{84}\] which is realized by \[U_{\text{H}}^{(2)}=-e^{-i\pi/4}\mathcal{B}_{5}\mathcal{B}_{4}\mathcal{B}_{3}\mathcal{B}_{2}\mathcal{B}_{1}. \tag{85}\] It is different from the tensor product of the Hadamard gates \[U_{\text{H}}^{(2)}\neq U_{\text{H}}\otimes U_{\text{H}}. \tag{86}\] We note that it is obtained by a permutation of the third and fourth rows of the tensor product of the Hadamard gates given by \[U_{\text{H}}\otimes U_{\text{H}}=\frac{1}{2}\left(\begin{array}{cccc}1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ 1&-1&-1&1\end{array}\right), \tag{87}\] which leads to a relation \[U_{\text{H}}\otimes U_{\text{H}}=U_{\text{CNOT}}U_{\text{H}}^{(2)}. \tag{88}\] Hence, it is realized by \[U_{\text{H}}\otimes U_{\text{H}}=-\mathcal{B}_{5}^{-1}\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{3}\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{1}\mathcal{B}_{5}\mathcal{B}_{4}\mathcal{B}_{3}\mathcal{B}_{2}\mathcal{B}_{1}. \tag{89}\] Both \(U_{\text{H}}^{(2)}\) and \(U_{\text{H}}\otimes U_{\text{H}}\) are Hadamard gates and they are useful for various quantum algorithms. ### Two logical qubit entangled states The Bell states are constructed as \[e^{i\frac{\pi}{2}}\mathcal{B}_{5}\mathcal{B}_{4}\left|0,0\right\rangle_{\text{logical}}=\frac{1}{\sqrt{2}}\left(\left|0,0\right\rangle_{\text{logical}}+\left|1,1\right\rangle_{\text{logical}}\right), \tag{90}\] \[-e^{-i\frac{\pi}{2}}\mathcal{B}_{5}^{-1}\mathcal{B}_{4}\left|0,0\right\rangle_{\text{logical}}=\frac{1}{\sqrt{2}}\left(\left|0,0\right\rangle_{\text{logical}}-\left|1,1\right\rangle_{\text{logical}}\right), \tag{91}\] \[\mathcal{B}_{5}\mathcal{B}_{4}\mathcal{B}_{2}^{2}\left|0,0\right\rangle_{\text{logical}}=\frac{1}{\sqrt{2}}\left(\left|0,1\right\rangle_{\text{logical}}+\left|1,0\right\rangle_{\text{logical}}\right), \tag{92}\] \[e^{i\frac{\pi}{2}}\mathcal{B}_{3}\mathcal{B}_{4}\mathcal{B}_{2}^{2}\left|0,0\right\rangle_{\text{logical}}=\frac{1}{\sqrt{2}}\left(\left|0,1\right\rangle_{\text{logical}}-\left|1,0\right\rangle_{\text{logical}}\right). \tag{93}\]
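The first Bell-state construction can be checked numerically up to a global phase. Since overall phase conventions are easy to get wrong, the sketch below (my own illustration) compares overlaps rather than amplitudes:

```python
import numpy as np

# Sketch (my illustration): up to a global phase, the double braiding
# B5 B4 applied to |00> yields the Bell state (|00> + |11>)/sqrt(2),
# cf. Eq. (90); logical-basis matrices from Eq. (50)-(51).
X = np.array([[0, 1], [1, 0]], dtype=complex)
B4 = (np.eye(4) - 1j * np.kron(X, X)) / np.sqrt(2)
B5 = np.exp(-1j * np.pi / 4) * np.diag([1, 1, 1j, 1j])

ket00 = np.array([1, 0, 0, 0], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

state = B5 @ B4 @ ket00
print(abs(np.vdot(bell, state)))  # 1.0 up to numerical error
```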
\tag{93}\] The equal-coefficient state is constructed as \[-e^{-i\pi/4}\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{3}\mathcal{B}_{4}\mathcal{B}_{5}\left|0,0\right\rangle_{\text{logical}}\] \[=\frac{1}{2}\left(\left|0,0\right\rangle_{\text{logical}}+\left|0,1\right\rangle_{\text{logical}}+\left|1,0\right\rangle_{\text{logical}}+\left|1,1\right\rangle_{\text{logical}}\right)\] \[\equiv\frac{1}{2}\left(\left|0\right\rangle_{\text{logical}}^{\text{decimal}}+\left|1\right\rangle_{\text{logical}}^{\text{decimal}}+\left|2\right\rangle_{\text{logical}}^{\text{decimal}}+\left|3\right\rangle_{\text{logical}}^{\text{decimal}}\right), \tag{94}\] where \(\left|j\right\rangle_{\text{logical}}^{\text{decimal}}\) is a decimal representation of qubits. It is a fundamental entangled state for two qubits. ## V Three logical qubits We use eight Majorana fermions in order to construct three logical qubits, \[c_{1}=\frac{1}{2}\left(\gamma_{1}+i\gamma_{2}\right), c_{2}=\frac{1}{2}\left(\gamma_{3}+i\gamma_{4}\right),\] \[c_{3}=\frac{1}{2}\left(\gamma_{5}+i\gamma_{6}\right), c_{4}=\frac{1}{2}\left(\gamma_{7}+i\gamma_{8}\right). \tag{95}\] The three logical qubits are constructed from four physical qubits as \[\left(\begin{array}{c}|0,0,0\rangle\\ |0,0,1\rangle\\ |0,1,0\rangle\\ |0,1,1\rangle\\ |1,0,0\rangle\\ |1,0,1\rangle\\ |1,1,0\rangle\\ |1,1,1\rangle\end{array}\right)_{\text{logical}}=\left(\begin{array}{c}|0,0,0,0\rangle\\ |0,0,1,1\rangle\\ |0,1,0,1\rangle\\ |0,1,1,0\rangle\\ |1,0,0,1\rangle\\ |1,0,1,0\rangle\\ |1,1,0,0\rangle\\ |1,1,1,1\rangle\end{array}\right)_{\text{physical}}. \tag{96}\] Explicit matrix representations for braidings are \[\mathcal{B}_{1} =e^{-i\pi/4}\text{diag.}\left(1,i,i,1,i,1,1,i\right), \tag{97}\] \[\mathcal{B}_{2} =\text{diag.}\left(U_{\text{Mix}}^{(1)},U_{\text{Mix}}^{(1)},U_{\text{Mix}}^{(1)},U_{\text{Mix}}^{(1)}\right),\] (98) \[\mathcal{B}_{3} =e^{-i\pi/4}\text{diag.}\left(1,i,1,i,1,i,1,i\right),\] (99) \[\mathcal{B}_{4} =\text{diag.}\left(U_{\text{Mix}}^{(2)},U_{\text{Mix}}^{(2)}\right),\] (100) \[\mathcal{B}_{5} =e^{-i\pi/4}\text{diag.}\left(1,1,i,i,1,1,i,i\right),\] (101) \[\mathcal{B}_{6} =\frac{1}{\sqrt{2}}\left(I_{8}-iU_{\text{X}}\otimes U_{\text{X}}\otimes I_{2}\right),\] (102) \[\mathcal{B}_{7} =e^{-i\pi/4}\text{diag.}\left(1,1,1,1,i,i,i,i\right). \tag{103}\] Their determinants are \[\det\left(\mathcal{B}_{n}\right)=1, \tag{104}\] for \(1\leq n\leq 7\). The unitary transformation \(U^{(3)}\) generated by braidings must satisfy \[\det\left(U^{(3)}\right)=1. \tag{105}\] It is impossible to construct various important quantum gates for three qubits solely by braidings although they are Clifford gates. Examples read as follows. The CCZ gate is \[U_{\text{CCZ}}\equiv\text{diag.}\left(1,1,1,1,1,1,1,-1\right), \tag{106}\] whose determinant is \[\det\left(U_{\text{CCZ}}\right)=-1. \tag{107}\] The CCNOT gate or the Toffoli gate is \[U_{\text{CCNOT}}\equiv\left(\begin{array}{cccccccc}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&1&0\end{array}\right), \tag{108}\] whose determinant is \[\det\left(U_{\text{CCNOT}}\right)=-1. \tag{109}\] The CSWAP gate or the Fredkin gate is \[U_{\text{CSWAP}}\equiv\left(\begin{array}{cccccccc}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&1\end{array}\right), \tag{110}\] whose determinant is \[\det\left(U_{\text{CSWAP}}\right)=-1.
\tag{111}\] ### One-qubit quantum gates embedded in three-qubit quantum gates The direct product of three one-qubit quantum gates \(U_{1}\), \(U_{2}\) and \(U_{3}\), denoted as \[U_{3}\otimes U_{2}\otimes U_{1}, \tag{112}\] acts on the three-qubit states \(\left|n_{3},n_{2},n_{1}\right\rangle\), where \(U_{1}\) acts on the first qubit \(n_{1}\), \(U_{2}\) acts on the second qubit \(n_{2}\) and \(U_{3}\) acts on the third qubit \(n_{3}\). #### vi.1.1 Pauli gates The three-qubit Pauli gates are defined by \[\sigma_{k_{3}}\otimes\sigma_{k_{2}}\otimes\sigma_{k_{1}}, \tag{113}\] where \(k_{1}\), \(k_{2}\) and \(k_{3}\) take \(0,x,y\) and \(z\). The Pauli Z gates are generated by braidings \(\mathcal{B}_{2k+1}\) with odd indices \[I_{2}\otimes I_{2}\otimes\sigma_{\text{Z}}=i\mathcal{B}_{3}^{2},\quad I_{2}\otimes\sigma_{\text{Z}}\otimes I_{2}=i\mathcal{B}_{5}^{2},\quad\sigma_{\text{Z}}\otimes I_{2}\otimes I_{2}=i\mathcal{B}_{7}^{2}. \tag{114}\] They are summarized as \[\left(\sigma_{\text{Z}}\right)^{n_{3}}\otimes\left(\sigma_{\text{Z}}\right)^{n_{2}}\otimes\left(\sigma_{\text{Z}}\right)^{n_{1}}=\left(i\mathcal{B}_{7}^{2}\right)^{n_{3}}\left(i\mathcal{B}_{5}^{2}\right)^{n_{2}}\left(i\mathcal{B}_{3}^{2}\right)^{n_{1}}, \tag{115}\] where \(n_{1}\), \(n_{2}\) and \(n_{3}\) take \(0\) or \(1\). The Pauli X gates are generated by braidings with even indices, \[I_{2}\otimes I_{2}\otimes\sigma_{\text{X}}=i\mathcal{B}_{2}^{2},\quad I_{2}\otimes\sigma_{\text{X}}\otimes\sigma_{\text{X}}=i\mathcal{B}_{4}^{2},\quad\sigma_{\text{X}}\otimes\sigma_{\text{X}}\otimes I_{2}=i\mathcal{B}_{6}^{2}. \tag{116}\] We show the corresponding braidings in Fig.4. The other Pauli gates can be generated by sequential applications of the above Pauli gates. #### vi.1.2 Diagonal braidings We first search braidings for the quantum gates generated by odd double braidings, \[U_{\text{diag}}=\left(\mathcal{B}_{7}^{2}\right)^{n_{3}}\left(\mathcal{B}_{5}^{2}\right)^{n_{2}}\left(\mathcal{B}_{3}^{2}\right)^{n_{1}}. \tag{117}\] There are eight patterns represented by the Pauli Z gates \[\text{diag.}\left(1,1,1,1,1,1,1,1\right) =I_{2}\otimes I_{2}\otimes I_{2}, \tag{118}\] \[\text{diag.}\left(1,-1,1,-1,1,-1,1,-1\right) =I_{2}\otimes I_{2}\otimes\sigma_{\text{Z}}=i\mathcal{B}_{3}^{2},\] (119) \[\text{diag.}\left(1,1,-1,-1,1,1,-1,-1\right) =I_{2}\otimes\sigma_{\text{Z}}\otimes I_{2}=i\mathcal{B}_{5}^{2},\] (120) \[\text{diag.}\left(1,1,1,1,-1,-1,-1,-1\right) =\sigma_{\text{Z}}\otimes I_{2}\otimes I_{2}=i\mathcal{B}_{7}^{2},\] (121) \[\text{diag.}\left(1,-1,-1,1,1,-1,-1,1\right) =I_{2}\otimes\sigma_{\text{Z}}\otimes\sigma_{\text{Z}}=-\mathcal{B}_{5}^{2}\mathcal{B}_{3}^{2},\] (122) \[\text{diag.}\left(1,1,-1,-1,-1,-1,1,1\right) =\sigma_{\text{Z}}\otimes\sigma_{\text{Z}}\otimes I_{2}=-\mathcal{B}_{7}^{2}\mathcal{B}_{5}^{2},\] (123) \[\text{diag.}\left(1,-1,1,-1,-1,1,-1,1\right) =\sigma_{\text{Z}}\otimes I_{2}\otimes\sigma_{\text{Z}}=-\mathcal{B}_{7}^{2}\mathcal{B}_{3}^{2},\] (124) \[\text{diag.}\left(1,-1,-1,1,-1,1,1,-1\right) =\sigma_{\text{Z}}\otimes\sigma_{\text{Z}}\otimes\sigma_{\text{Z}}=-i\mathcal{B}_{7}^{2}\mathcal{B}_{5}^{2}\mathcal{B}_{3}^{2}. \tag{125}\] Next, we search real and diagonal gates obtained by the following odd braidings \[\left(\mathcal{B}_{7}\right)^{n_{3}}\left(\mathcal{B}_{5}\right)^{n_{2}}\left(\mathcal{B}_{3}\right)^{n_{1}}. \tag{126}\] We search for gates whose components are \(\pm 1\).
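Such exhaustive searches are straightforward to reproduce numerically. The sketch below is a minimal illustration, not the code used in this work; the mixing braidings are assumed to take the form \((1/\sqrt{2})\left(I-iA\right)\) with \(A\) the corresponding Pauli-X string, consistent with Eqs. (114) and (116). It enumerates words of four braid generators and their inverses and collects those that are diagonal with entries \(\pm 1\) up to a global phase \(i^{n}\).

```python
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

ph = np.exp(-1j * np.pi / 4)
# Braid matrices of Eqs. (97)-(103); the mixing ones are an assumed
# (1/sqrt(2))(I - iA) form, consistent with Eqs. (114) and (116).
B = {
    1: ph * np.diag([1, 1j, 1j, 1, 1j, 1, 1, 1j]),
    2: (np.eye(8) - 1j * kron3(I2, I2, X)) / np.sqrt(2),
    3: ph * np.diag([1, 1j, 1, 1j, 1, 1j, 1, 1j]),
    4: (np.eye(8) - 1j * kron3(I2, X, X)) / np.sqrt(2),
    5: ph * np.diag([1, 1, 1j, 1j, 1, 1, 1j, 1j]),
    6: (np.eye(8) - 1j * kron3(X, X, I2)) / np.sqrt(2),
    7: ph * np.diag([1, 1, 1, 1, 1j, 1j, 1j, 1j]),
}

# Brute-force search over length-4 braid words for diagonal gates with
# +-1 entries (up to a global phase i^n), as described in the text.
gens = [(k, s) for k in range(1, 8) for s in (+1, -1)]
found = set()
for word in itertools.product(gens, repeat=4):
    U = np.eye(8, dtype=complex)
    for k, s in word:
        U = U @ (B[k] if s > 0 else B[k].conj().T)  # inverse = dagger
    d = np.diag(U)
    if np.allclose(U, np.diag(d)) and abs(d[0]) > 0.5:
        d = d / d[0]                                # strip global phase
        if np.allclose(d.imag, 0) and np.allclose(np.abs(d.real), 1):
            found.add(tuple(np.round(d.real).astype(int)))
print(len(found), "distinct diagonal gates with +-1 entries")
```

The search space is small (fourteen generators, words of length four), so the enumeration finishes in seconds and reproduces the kind of classification listed next.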
There are four additional quantum gates, whose traces are zero \(\text{Tr}U_{\text{diag}}=0\), \[\text{diag.}\left(1,-1,-1,-1,1,1,1,-1\right) =i\mathcal{B}_{4}^{-1}\mathcal{B}_{3}\mathcal{B}_{2}\mathcal{B}_{1}, \tag{127}\] \[\text{diag.}\left(1,-1,1,1,-1,-1,1,-1\right) =i\mathcal{B}_{3}^{-1}\mathcal{B}_{4}\mathcal{B}_{2}\mathcal{B}_{1},\] (128) \[\text{diag.}\left(1,1,-1,1,-1,1,-1,-1\right) =i\mathcal{B}_{2}^{-1}\mathcal{B}_{3}\mathcal{B}_{4}\mathcal{B}_{1},\] (129) \[\text{diag.}\left(1,1,1,-1,1,-1,-1,-1\right) =-i\mathcal{B}_{4}^{-1}\mathcal{B}_{3}^{-1}\mathcal{B}_{2}^{-1}\mathcal{B}_{1}. \tag{130}\] In addition, there are additional quantum gates, whose traces are nonzero \(\text{Tr}U_{\text{diag}}\neq 0\), \[\text{diag.}\left(1,-1,-1,-1,-1,-1,-1,1\right) =-\mathcal{B}_{4}\mathcal{B}_{3}\mathcal{B}_{2}\mathcal{B}_{1}, \tag{131}\] \[\text{diag.}\left(1,-1,1,1,1,1,-1,1\right) =\mathcal{B}_{4}^{-1}\mathcal{B}_{3}^{-1}\mathcal{B}_{2}\mathcal{B}_{1},\] (132) \[\text{diag.}\left(1,1,-1,1,1,-1,1,1\right) =\mathcal{B}_{4}^{-1}\mathcal{B}_{2}^{-1}\mathcal{B}_{3}\mathcal{B}_{1},\] (133) \[\text{diag.}\left(1,1,1,-1,-1,1,1,1\right) =\mathcal{B}_{3}^{-1}\mathcal{B}_{2}^{-1}\mathcal{B}_{4}\mathcal{B}_{1}. \tag{134}\] It is natural to anticipate that the CZ gate and the CCZ gate are generated by odd braidings because they are diagonal gates. However, checking all \(4^{3}\) patterns of braidings shows that this is not the case. As a result, the odd braidings do not generate the CZ gates \[I_{2}\otimes U_{\text{CZ}} =\text{diag.}\left(1,1,1,-1,1,1,1,-1\right), \tag{135}\] \[U_{\text{CZ}}\otimes I_{2} =\text{diag.}\left(1,1,1,1,1,1,-1,-1\right), \tag{136}\] and the CCZ gate \[U_{\text{CCZ}}=\text{diag.}\left(1,1,1,1,1,1,1,-1\right). \tag{137}\] #### v.1.3 Hadamard gates The Hadamard gate can be embedded in the first qubit as \[I_{2}\otimes I_{2}\otimes U_{\text{H}}=i\mathcal{B}_{2}\mathcal{B}_{3}\mathcal{B}_{2}, \tag{138}\] as in the case of (61). We also find that the Hadamard gate can be embedded in the third qubit as \[U_{\text{H}}\otimes I_{2}\otimes I_{2}=-i\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{3}\mathcal{B}_{4}\mathcal{B}_{5}\mathcal{B}_{6}\mathcal{B}_{5}\mathcal{B}_{4}\mathcal{B}_{3}\mathcal{B}_{2}\mathcal{B}_{1}. \tag{139}\] ### Two-qubit quantum gates embedded in three-qubit quantum gates The \(i\)SWAP gate can be embedded into a three-qubit topological gate because it does not involve \(\mathcal{B}_{1}\) and is given by \[I_{2}\otimes U_{i\text{SWAP}}=-\mathcal{B}_{3}\mathcal{B}_{4}\mathcal{B}_{5}\mathcal{B}_{3}\mathcal{B}_{4}\mathcal{B}_{3}. \tag{140}\] We also find the \(i\)SWAP gate can be embedded as \[U_{i\text{SWAP}}\otimes I_{2}=-\mathcal{B}_{5}\mathcal{B}_{6}\mathcal{B}_{7}\mathcal{B}_{5}\mathcal{B}_{6}\mathcal{B}_{5}. \tag{141}\] ### Three-qubit quantum gates We find that the three-qubit Hadamard transformation is generated as \[U_{\text{H}}^{(3)} =-\mathcal{B}_{7}\mathcal{B}_{6}\mathcal{B}_{5}\mathcal{B}_{4}\mathcal{B}_{3}\mathcal{B}_{2}\mathcal{B}_{1} \tag{142}\] \[=\frac{1}{2\sqrt{2}}\left(\begin{array}{cccccccc}1&1&1&1&1&1&1&1\\ 1&-1&1&-1&1&-1&1&-1\\ 1&-1&-1&1&1&-1&-1&1\\ 1&1&-1&-1&1&1&-1&-1\\ 1&-1&-1&1&-1&1&1&-1\\ 1&1&-1&-1&-1&-1&1&1\\ 1&1&1&1&-1&-1&-1&-1\\ 1&-1&1&-1&-1&1&-1&1\end{array}\right). \tag{143}\] Figure 4: Pauli gates embedded in three qubits.
(a) Pauli Z gate embedded into the first qubit, (b) Pauli Z gate embedded into the second qubit, (c) Pauli Z gate embedded into the third qubit, (d) Pauli X gate embedded into the first qubit, (e) Two Pauli X gates are embedded into the first and second qubits and (f) Two Pauli X gates are embedded into the second and third qubits. It is different from the cross-product of the Hadamard gates \[U_{\text{H}}\otimes U_{\text{H}}\otimes U_{\text{H}}=\frac{1}{2\sqrt{2}}\left(\begin{array}{cccccccc}1&1&1&1&1&1&1&1\\ 1&-1&1&-1&1&-1&1&-1\\ 1&1&-1&-1&1&1&-1&-1\\ 1&-1&-1&1&1&-1&-1&1\\ 1&1&1&1&-1&-1&-1&-1\\ 1&-1&1&-1&-1&1&-1&1\\ 1&1&-1&-1&-1&-1&1&1\\ 1&-1&-1&1&-1&1&1&-1\end{array}\right). \tag{144}\] There is a relation \[U_{\text{H}}\otimes U_{\text{H}}\otimes U_{\text{H}}=U_{\text{3p}}U_{\text{H}}^{(3)}, \tag{145}\] where \(U_{\text{3p}}\) is defined by \[U_{\text{3p}}\equiv\left(\begin{array}{cccccccc}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&1\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&1&0&0&0\end{array}\right). \tag{146}\] It is impossible to generate the W state by braidings \[\left|\text{W}\right\rangle_{\text{logical}}=\frac{1}{\sqrt{3}}\left(\left|001\right\rangle_{\text{logical}}+\left|010\right\rangle_{\text{logical}}+\left|100\right\rangle_{\text{logical}}\right), \tag{147}\] because the number of the nonzero terms of the W state is 3, which contradicts the fact that the number of the nonzero terms must be \(1\), \(2\), \(4\) or \(8\) for three-qubit states generated by braidings. ## VI \(N\) logical qubits The braid representation of \(2N+2\) Majorana fermions is equivalent to the \(\pi/2\) rotation in SO\((2N+2)\), as suggested by the fact that braid operators are represented by the Gamma matrices[38, 39]. The order of the image of the braid group is given by[18] \[\left|\text{Image}\left(\mathcal{B}_{2n}\right)\right|=\left\{\begin{array}{cc}2^{2n-1}\left(2n\right)!&\text{for}&n=\text{even}\\ 2^{2n}\left(2n\right)!&\text{for}&n=\text{odd}\end{array}\right.. \tag{148}\] The number of nonzero components is \(2^{j}\) for \(0\leq j\leq N\). The fully entangled state, where all components are nonzero, is generated by \(N\) braidings. The determinant of the braidings is \[\det\left(\mathcal{B}_{n}^{(N)}\right)=1 \tag{149}\] for \(N\geq 3\). It is impossible to generate the C\({}^{k}\)Z gates, C\({}^{k}\)NOT gates with \(k\geq 2\) and C\({}^{k}\)SWAP gates with \(k\geq 1\) because their determinants are \(-1\). #### vi.0.1 Diagonal braidings We consider odd braidings defined by \[\mathcal{B}_{\text{odd}}\left(n_{1},n_{2},\cdots,n_{k}\right)\equiv\mathcal{B}_{2n_{k}-1}\mathcal{B}_{2n_{k-1}-1}\cdots\mathcal{B}_{2n_{1}-1}, \tag{150}\] where \(n_{k}\) is an integer satisfying \(1\leq n_{k}\leq N+1\). They are Abelian braidings because there are no adjacent braidings. Then, there are only \(4^{k}\) patterns. Especially, odd double braidings, defined by \[\left(\mathcal{B}_{\text{odd}}\right)^{2}\equiv\left(i\mathcal{B}_{2n_{k}-1}^{2}\right)\left(i\mathcal{B}_{2n_{k-1}-1}^{2}\right)\cdots\left(i\mathcal{B}_{2n_{1}-1}^{2}\right), \tag{151}\] are interesting because they are identical to \[\left(\mathcal{B}_{\text{odd}}\right)^{2}=\left(\sigma_{\text{Z}}\right)^{m_{k}}\otimes\cdots\otimes\left(\sigma_{\text{Z}}\right)^{m_{2}}\otimes\left(\sigma_{\text{Z}}\right)^{m_{1}}, \tag{152}\] where \(m_{k}=0,1\). Namely, every Pauli gate constructed from the Pauli Z gate can be generated.
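This Abelian structure is easy to check numerically. The sketch below is an illustration, not the paper's code: it assumes that the odd braidings \(\mathcal{B}_{2k+1}\) with \(k\geq 1\) act, up to the common phase \(e^{-i\pi/4}\), as S gates on qubit \(k\), generalizing Eqs. (99), (101) and (103), and then builds an arbitrary Pauli Z string from odd double braidings.

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def z_string(bits):
    """Tensor product (sigma_Z)^{n_N} x ... x (sigma_Z)^{n_1}, cf. Eq. (152)."""
    out = np.array([[1.0 + 0j]])
    for n in bits:                    # bits[0] = n_N (leftmost qubit)
        out = np.kron(out, Z if n else I2)
    return out

def odd_braiding(k, N):
    """Assumed diagonal form of B_{2k+1}: an S gate on qubit k, times
    the common phase e^{-i pi/4}, generalizing Eqs. (99), (101), (103)."""
    S = np.diag([1.0, 1j])
    out = np.array([[1.0 + 0j]])
    for j in range(N, 0, -1):         # leftmost tensor factor = qubit N
        out = np.kron(out, S if j == k else I2)
    return np.exp(-1j * np.pi / 4) * out

N = 4
bits = (1, 0, 1, 1)                   # target string Z x I x Z x Z
U = np.eye(2 ** N, dtype=complex)
for k, n in zip(range(N, 0, -1), bits):
    if n:                             # i B^2 = Z on qubit k
        U = U @ (1j * odd_braiding(k, N) @ odd_braiding(k, N))
assert np.allclose(U, z_string(bits))  # every Z-string is reached
```

Because the factors are diagonal and mutually commuting, the order of the double braidings is irrelevant, which is exactly the Abelian property stated above.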
Next, we consider even braidings defined by \[\mathcal{B}_{\text{even}}\left(n_{1},n_{2},\cdots,n_{k}\right)\equiv\mathcal{B}_{2n_{k}}\mathcal{B}_{2n_{k-1}}\cdots\mathcal{B}_{2n_{1}}. \tag{153}\] They are also Abelian braidings, where the braidings commute with each other. We also consider even double braidings defined by \[\left(\mathcal{B}_{\text{even}}\right)^{2}\equiv\left(i\mathcal{B}_{2n_{k}}^{2}\right)\left(i\mathcal{B}_{2n_{k-1}}^{2}\right)\cdots\left(i\mathcal{B}_{2n_{1}}^{2}\right). \tag{154}\] They generate Pauli gates based on the Pauli X gate, \[\left(\mathcal{B}_{\text{even}}\right)^{2}=\left(\sigma_{\text{X}}\right)^{m_{k}}\otimes\cdots\otimes\left(\sigma_{\text{X}}\right)^{m_{2}}\otimes\left(\sigma_{\text{X}}\right)^{m_{1}}. \tag{155}\] Then, all of the Pauli gates can be generated by using \(\left(\mathcal{B}_{\text{odd}}\right)^{2}\) and \(\left(\mathcal{B}_{\text{even}}\right)^{2}\). Figure 5: (a) Three-qubit Hadamard transformation. (b) and (c) \(i\)SWAP gate embedded into three qubit systems. ### Hadamard transformation The Hadamard transformation is used for the initial process of various quantum algorithms such as the Kitaev phase estimation algorithm[47], the Deutsch algorithm[4], the Deutsch-Jozsa algorithm[48], the Simon algorithm[49], the Bernstein-Vazirani algorithm[50], the Grover algorithm[51] and the Shor algorithm[52]. It is generated by the braiding \[U_{\text{H}}^{(N)}\propto\mathcal{B}_{2N+1}\mathcal{B}_{2N}\cdots\mathcal{B}_{2}\mathcal{B}_{1}. \tag{156}\] The equal-coefficient state is generated as \[U_{\text{H}}^{(N)}\left|0,\cdots,0\right\rangle_{\text{logical}}\propto\sum_{j=0}^{2^{N}-1}\left|j\right\rangle_{\text{logical}}, \tag{157}\] where \(\left|j\right\rangle_{\text{logical}}\) is the decimal representation of the qubit. ### Embedding The embedding of a quantum gate defined for \(M\) qubits into an \(N\)-qubit system with \(M<N\) is a nontrivial problem in Majorana systems. There are two solutions. One is setting additional qubits to be \(0\) as ancilla qubits, where every quantum gate can be embedded. The other is not to use the braiding \(\mathcal{B}_{1}\). We discuss both of these in what follows. #### vii.1.1 Ancilla embedding The braiding for \(N-1\) logical qubits is embedded in \(N\) logical qubits if the additional qubit is \(0\), \[\left|0n_{N-1}\cdots n_{2}n_{1}\right\rangle_{\text{logical}}. \tag{158}\] This is because the correspondence between the physical and logical qubits is identical if the \(N\)-th qubit is \(0\). It is assured by the fact that we can use the same even-parity basis of the \(N-1\) qubits because the \(N\)-th qubit is \(0\). On the other hand, the action is different if the additional qubit is \(1\), \[\left|1n_{N-1}\cdots n_{2}n_{1}\right\rangle_{\text{logical}}. \tag{159}\] This is because it is necessary to use the odd-parity basis in the \(N-1\) physical qubits so that the total parity is even in the presence of the \(N\)-th qubit. It is still useful because there are many quantum algorithms where ancilla qubits are \(0\). #### vii.1.2 Braid construction In order to determine which \(M\)-qubit quantum gates can be embedded into an \(N\)-qubit quantum gate with \(M<N\), we check which fundamental braidings act identically, in the physical-qubit and logical-qubit correspondence, on the even and odd bases.
The braiding \(\mathcal{B}_{1}\) acts differently on the even and odd bases, \[\mathcal{B}_{1}^{\text{even}}\neq\mathcal{B}_{1}^{\text{odd}}, \tag{160}\] where \[\mathcal{B}_{1}^{\text{even}} =e^{-i\pi/4}\left(\begin{array}{cc}1&0\\ 0&i\end{array}\right), \tag{161}\] \[\mathcal{B}_{1}^{\text{odd}} =e^{-i\pi/4}\left(\begin{array}{cc}i&0\\ 0&1\end{array}\right). \tag{162}\] On the other hand, \(\mathcal{B}_{2}\) and \(\mathcal{B}_{3}\) act identically on the even and odd bases, \[\mathcal{B}_{2}^{\text{even}} =\mathcal{B}_{2}^{\text{odd}}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&-i\\ -i&1\end{array}\right), \tag{163}\] \[\mathcal{B}_{3}^{\text{even}} =\mathcal{B}_{3}^{\text{odd}}=e^{-i\pi/4}\left(\begin{array}{cc}1&0\\ 0&i\end{array}\right). \tag{164}\] In the same way, \(\mathcal{B}_{k}\) for \(k\geq 4\) has the same action on the even and odd bases. We find that embedding is possible if we do not use the braiding \(\mathcal{B}_{1}\). Hence, all of the Pauli gates, the Hadamard transformation and the \(i\)SWAP gate can be embedded into the \(N\)-qubit quantum gates. On the other hand, the quantum gates which use the braiding \(\mathcal{B}_{1}\) cannot be embedded into a larger qubit system as they are. For example, the CZ gate is given by the braiding \(e^{-i\pi/4}\mathcal{B}_{5}^{-1}\left(\mathcal{B}_{3}\right)^{-1}\mathcal{B}_{1}\), whose matrix representation is \[\text{diag.}\left(1,1,1,-1,i,-i,-i,-i\right), \tag{165}\] once it is embedded into three logical qubits. It is different from \[e^{-i\pi/4}\mathcal{B}_{5}^{-1}\left(\mathcal{B}_{3}\right)^{-1}\mathcal{B}_{1}\neq I_{2}\otimes U_{\text{CZ}}=\text{diag.}\left(1,1,1,-1,1,1,1,-1\right), \tag{166}\] although the action on the first four bits \(\left|000\right\rangle_{\text{logical}}\), \(\left|001\right\rangle_{\text{logical}}\), \(\left|010\right\rangle_{\text{logical}}\) and \(\left|011\right\rangle_{\text{logical}}\) is correct because the third (left-most) qubit is \(0\). The Hadamard gate for the \(N\)-th qubit is given by \[U_{\text{H}}\otimes I_{2}^{\otimes\left(N-1\right)}\propto\mathcal{B}_{1}\mathcal{B}_{2}\cdots\mathcal{B}_{2N-1}\mathcal{B}_{2N}\mathcal{B}_{2N-1}\cdots\mathcal{B}_{2}\mathcal{B}_{1}. \tag{167}\] The \(i\)SWAP gate is embedded as \[I_{2}^{\otimes\left(N-k-1\right)}\otimes U_{i\text{SWAP}}\otimes I_{2}^{\otimes\left(k-1\right)}\propto\mathcal{B}_{2k+1}\mathcal{B}_{2k+2}\mathcal{B}_{2k+3}\mathcal{B}_{2k+1}\mathcal{B}_{2k+2}\mathcal{B}_{2k+1}. \tag{168}\] ## VII Quantum Fourier transformation The quantum Fourier transformation is defined by[53] \[U_{\text{QFT}}\left|k\right\rangle=\frac{1}{\sqrt{2^{N}}}\sum_{j=0}^{2^{N}-1}\omega_{N}^{jk}\left|j\right\rangle \tag{169}\] with \(\omega_{N}=e^{2\pi i/2^{N}}\). In the matrix representation, it is given by \[U_{\text{QFT}}^{(N)}=\frac{1}{\sqrt{2^{N}}}\left(\begin{array}{cccccc}1&1&1&\cdots&1\\ 1&\omega_{N}&\omega_{N}^{2}&\cdots&\omega_{N}^{2^{N}-1}\\ 1&\omega_{N}^{2}&\omega_{N}^{4}&\cdots&\omega_{N}^{2\left(2^{N}-1\right)}\\ 1&\omega_{N}^{3}&\omega_{N}^{6}&\cdots&\omega_{N}^{3\left(2^{N}-1\right)}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&\omega_{N}^{2^{N}-1}&\omega_{N}^{2\left(2^{N}-1\right)}&\cdots&\omega_{N}^{\left(2^{N}-1\right)\left(2^{N}-1\right)}\end{array}\right). \tag{170}\] It is constructed by successive operations of the Hadamard and \(2\pi/2^{N}\) phase-shift gates. In what follows, we show that it is impossible to construct the quantum Fourier transformation with \(N\geq 2\) except for the Hadamard gate. The quantum Fourier transformation for \(N=1\) is the Hadamard gate \[U_{\text{QFT}}^{\left(1\right)}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right).
\tag{171}\] The quantum Fourier transformation for \(N=2\) is explicitly given by \[U_{\text{QFT}}^{\left(2\right)}=\frac{1}{2}\left(\begin{array}{cccc}1&1&1&1\\ 1&i&-1&-i\\ 1&-1&1&-1\\ 1&-i&-1&i\end{array}\right), \tag{172}\] whose determinant is \[\det\left(U_{\text{QFT}}^{\left(2\right)}\right)=-i. \tag{173}\] It is impossible to construct it because the determinant must satisfy \(\det\left(U^{\left(2\right)}\right)=\pm 1\). The components of the quantum Fourier transformation for \(N\geq 3\) include values other than \(i^{n}\) with \(n=0,1,2,3\). Moreover, we have \[\det\left(U_{\text{QFT}}^{\left(N\right)}\right)=i, \tag{174}\] for \(N\geq 3\), which contradicts the condition \(\det\left(U^{\left(N\right)}\right)=1\) for \(N\geq 3\). Hence, it is impossible to construct the quantum Fourier transformation for \(N\geq 2\). ## VIII Topological quantum algorithm ### Deutsch algorithm The Deutsch algorithm is the first quantum algorithm[4] that has been proved to be faster than any classical algorithm. We consider a quantum oracle function \(f\left(x\right)\), which induces a unitary transformation \[\left|x\right\rangle\left|y\right\rangle\mapsto\left|x\right\rangle\left|y\oplus f\left(x\right)\right\rangle, \tag{175}\] where \(\oplus\) denotes the exclusive sum (XOR) satisfying \[0\oplus 0=1\oplus 1=0,\qquad 0\oplus 1=1\oplus 0=1. \tag{176}\] It determines whether a quantum oracle function is constant or balanced. The constant function satisfies \(f\left(0\right)=f\left(1\right)\), while the balanced function satisfies \(f\left(0\right)\neq f\left(1\right)\). The Deutsch algorithm is executed by the Deutsch gate \(U_{\text{Deutsch}}\) given by \[U_{\text{Deutsch}}\equiv\left(U_{\text{H}}\otimes U_{\text{H}}\right)U_{f}\left(U_{\text{H}}\otimes U_{\text{H}}\right)\left(I_{2}\otimes U_{\text{X}}\right), \tag{177}\] whose action is \[U_{\text{Deutsch}}\left|0\right\rangle\left|0\right\rangle=\pm\left|f\left(0\right)\oplus f\left(1\right)\right\rangle\left|1\right\rangle, \tag{178}\] where the bar \(\overline{f}\) means \[\overline{0}=1,\qquad\overline{1}=0. \tag{179}\] The first qubit gives \[\left|f\left(0\right)\oplus f\left(1\right)\right\rangle=\left|0\right\rangle \tag{180}\] for the constant functions, and \[\left|f\left(0\right)\oplus f\left(1\right)\right\rangle=\left|1\right\rangle \tag{181}\] for the balanced functions. Namely, we can determine whether the quantum oracle function is constant or balanced by observing the first qubit. There are only four types of the quantum oracle functions. 1) The constant case \(f\left(x\right)=0\): The action is \[\left|x\right\rangle\left|y\right\rangle\mapsto\left|x\right\rangle\left|y\oplus 0\right\rangle=\left|x\right\rangle\left|y\right\rangle, \tag{182}\] which is the identity action. The quantum circuit is shown in Fig.6(b). The unitary transformation is \[U_{\text{Deutsch}}\left|0\right\rangle\left|0\right\rangle=\left|0\right\rangle\left|1\right\rangle, \tag{183}\] with \[U_{f}=I_{2}\otimes I_{2}. \tag{184}\] 2) The constant case \(f\left(x\right)=1\): The action is \[\left|x\right\rangle\left|y\right\rangle\mapsto\left|x\right\rangle\left|y\oplus 1\right\rangle=\left|x\right\rangle\left|\overline{y}\right\rangle. \tag{185}\] We use the Pauli X gate in the first qubit for the quantum oracle function. The quantum circuit is shown in Fig.6(c). The unitary transformation is \[U_{\text{Deutsch}}\left|0\right\rangle\left|0\right\rangle=-\left|0\right\rangle\left|1\right\rangle, \tag{186}\] with \[U_{f}=I_{2}\otimes U_{\text{X}}.
\tag{187}\] 3) The balanced case \(f\left(x\right)=x\): The action is \[\left|x\right\rangle\left|y\right\rangle\mapsto\left|x\right\rangle\left|y\oplus x\right\rangle. \tag{188}\] We use the CNOT gate for the quantum oracle function. The quantum circuit is shown in Fig.6(d). The unitary transformation is \[U_{\text{Deutsch}}\left|0\right\rangle\left|0\right\rangle=\left|1\right\rangle\left|1\right\rangle, \tag{189}\] with \[U_{f}=U_{\text{CNOT}}. \tag{190}\] 4) The balanced case \(f\left(x\right)=\overline{x}\): The action is \[\left|x\right\rangle\left|y\right\rangle\mapsto\left|x\right\rangle\left|y\oplus\overline{x}\right\rangle. \tag{191}\] We use the anti-CX gate for the quantum oracle function. The quantum circuit is shown in Fig.6(e). The unitary transformation is \[U_{\text{Deutsch}}\left|0\right\rangle\left|0\right\rangle=-\left|1\right\rangle\left|1\right\rangle, \tag{192}\] with \[U_{f}=U_{\text{CX}}. \tag{193}\] The Hadamard transformation \(U_{\text{H}}\otimes U_{\text{H}}\), the Pauli X gate \(I_{2}\otimes U_{\text{X}}\) and all of the four quantum oracle functions are generated by braidings. Thus, the Deutsch algorithm is realized by braidings. Explicit braidings are shown in Fig.6(b)-(e). ### Simplified Deutsch-Jozsa algorithm The Deutsch-Jozsa algorithm is the second quantum algorithm[48] after the Deutsch algorithm. It is a generalization of the Deutsch algorithm to \(N\) qubits, where the quantum oracle function is defined by \[f:\left\{0,1\right\}^{N}\mapsto\left\{0,1\right\}. \tag{194}\] It also determines whether the quantum oracle functions are constant or balanced. The constant function is defined by \(f\left(x\right)=0\) for all \(x\), or \(f\left(x\right)=1\) for all \(x\). On the other hand, the balanced function returns \(0\) and \(1\) for an equal number of inputs. It needs \(N+1\) qubits. The quantum oracle function needs CNOT gates embedded in \(N\) qubits. However, it is a nontrivial problem to embed CNOT gates in \(N\) qubits. On the other hand, a simplified Deutsch-Jozsa algorithm is proposed based on braidings of Majorana fermions[35], which needs only two qubits. The Deutsch-Jozsa algorithm determines four quantum oracle functions including one constant function, \[I_{2}\otimes I_{2}=\left\{1,1,1,1\right\} \tag{195}\] and three balanced functions, \[U_{\text{Z}}\otimes I_{2} =\left\{1,1,-1,-1\right\}, \tag{196}\] \[I_{2}\otimes U_{\text{Z}} =\left\{1,-1,1,-1\right\},\] (197) \[U_{\text{Z}}\otimes U_{\text{Z}} =\left\{1,-1,-1,1\right\}. \tag{198}\] The quantum circuits are shown in Fig.7. We generalize it to \(N\)-qubit systems. We consider quantum oracle functions \[U_{\text{oracle}}^{\left(N\right)}\equiv\bigotimes_{k=1}^{N}U_{\text{Z}}^{n_{k}} \tag{199}\] and determine whether they are constant or balanced. There is one constant function and \(2^{N}-1\) balanced functions. The corresponding quantum circuit is \[U_{\text{H}}^{\left(N\right)}U_{\text{oracle}}^{\left(N\right)}U_{\text{H}}^{\left(N\right)}\left|0\right\rangle^{N}. \tag{200}\] The output is \(\left|0\right\rangle^{N}\) if and only if \(U_{\text{oracle}}^{\left(N\right)}\) is the constant function.
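This classification can be verified end-to-end in a few lines. The following minimal sketch (not the code used in this work; the mixing braidings are assumed to take the form \((1/\sqrt{2})\left(I-iA\right)\) with \(A\) the corresponding Pauli-X string, consistent with Eqs. (114) and (116)) builds \(U_{\text{H}}^{(3)}\) from the braid matrices of Eqs. (97)-(103) and runs the circuit (200) for all eight three-qubit oracles.

```python
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
ph = np.exp(-1j * np.pi / 4)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# Braid matrices of Eqs. (97)-(103), with the assumed mixing form.
B1 = ph * np.diag([1, 1j, 1j, 1, 1j, 1, 1, 1j])
B2 = (np.eye(8) - 1j * kron3(I2, I2, X)) / np.sqrt(2)
B3 = ph * np.diag([1, 1j, 1, 1j, 1, 1j, 1, 1j])
B4 = (np.eye(8) - 1j * kron3(I2, X, X)) / np.sqrt(2)
B5 = ph * np.diag([1, 1, 1j, 1j, 1, 1, 1j, 1j])
B6 = (np.eye(8) - 1j * kron3(X, X, I2)) / np.sqrt(2)
B7 = ph * np.diag([1, 1, 1, 1, 1j, 1j, 1j, 1j])

UH3 = -B7 @ B6 @ B5 @ B4 @ B3 @ B2 @ B1      # Eq. (142)

psi0 = np.zeros(8, dtype=complex)
psi0[0] = 1.0                                # |0,0,0>
for n3, n2, n1 in itertools.product((0, 1), repeat=3):
    oracle = kron3(Z if n3 else I2, Z if n2 else I2, Z if n1 else I2)
    amp = (UH3 @ oracle @ UH3 @ psi0)[0]     # amplitude of |0,0,0>
    kind = "constant" if abs(amp) > 0.5 else "balanced"
    print(f"oracle Z^{n3} x Z^{n2} x Z^{n1}: {kind}")
```

The constant oracle leaves the full amplitude on \(\left|0,0,0\right\rangle\), while every balanced oracle yields exactly zero amplitude there, because the braiding-built Hadamard transformation simply sums the oracle signs.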
We show quantum circuits for three-qubit systems, where balanced quantum oracle functions are \[I_{2}\otimes I_{2}\otimes U_{\text{Z}},\quad I_{2}\otimes U_{\text{Z}}\otimes I_{2}, \tag{201}\] \[I_{2}\otimes U_{\text{Z}}\otimes U_{\text{Z}},\quad U_{\text{Z}}\otimes I_{2}\otimes I_{2},\] (202) \[U_{\text{Z}}\otimes I_{2}\otimes U_{\text{Z}},\quad U_{\text{Z}}\otimes U_{\text{Z}}\otimes I_{2},\] (203) \[U_{\text{Z}}\otimes U_{\text{Z}}\otimes U_{\text{Z}}. \tag{204}\] If the third bit is \(0\), the quantum oracle function is the constant function. On the other hand, if the third qubit is \(1\), the quantum oracle function is the balanced function. The direct product of the Hadamard gates \(U_{\text{H}}\otimes U_{\text{H}}\otimes U_{\text{H}}\) is not necessary but the entangled Hadamard gate \(U_{\text{H}}^{\left(3\right)}\) is enough for the Deutsch-Jozsa algorithm because it is enough to differentiate whether the sum of the coefficients of the state is nonzero for the constant function or zero for the balanced function. Similarly, quantum circuits are designed by using \(U_{\text{H}}^{\left(N\right)}\) and the Pauli gates for an arbitrary number of qubits. The measurement on the \(N\)-th qubit dictates that the quantum oracle function is the constant (balanced) function if the measurement result is \(0\) (\(1\)). ## IX Digital quantum computation Topological quantum computation is robust against decoherence because there is no infinitesimal braiding operator. For example, there may be slight modulation in a phase shift gate in a nontopological quantum computer because a phase can take a continuous value. On the other hand, only the S gate is possible as the minimum phase-shift gate in the Majorana system. Next, we show that the states generated from the state \(\left|0\right\rangle_{\text{logical}}\) by braidings are connected by a certain operator. If two states \(\left|A\right\rangle_{\text{logical}}\) and \(\left|B\right\rangle_{\text{logical}}\) are connected to the initial state \(\left|0\right\rangle_{\text{logical}}\) by \[\left|A\right\rangle_{\text{logical}}=\mathcal{B}_{A}\left|0\right\rangle_{\text{logical}},\quad\left|B\right\rangle_{\text{logical}}=\mathcal{B}_{B}\left|0\right\rangle_{\text{logical}}, \tag{205}\] Figure 6: Quantum circuit for the Deutsch algorithm. (a) Quantum circuit. (b) The constant function \(f\left(x\right)=0\), (c) the constant function \(f\left(x\right)=1\), (d) the balanced function \(f\left(x\right)=x\) and (e) the balanced function \(f\left(x\right)=\overline{x}\). Figure 7: Simplified Deutsch-Jozsa algorithm with three qubits. (a) Quantum circuit representation and (b) braiding representation, where the quantum oracle function is given by \(U_{f}=I_{2}\otimes U_{\text{Z}}\otimes U_{\text{Z}}\). they are connected by \[\left|B\right\rangle_{\text{logical}}=\mathcal{B}_{B}\mathcal{B}_{A}^{-1}\left|A\right\rangle_{\text{logical}} \tag{206}\] because there is always an inverse operator for braid operators. Only a part of the Clifford gates is generated by braidings of Majorana fermions. On the other hand, a quantum computer possessing only the Clifford gates is efficiently simulated by a classical computer, which is known as the Gottesman-Knill theorem. However, the number of quantum states is exponentially large, which means that an exponentially large amount of quantum data can be encoded. Hence, quantum computation based on Majorana fermions retains a merit in comparison to classical computation. ## X Discussions We have constructed various two-qubit quantum gates.
We have also constructed entangled Hadamard gates and all of the Pauli gates for an arbitrary number \(N\) of qubits. The entangled states constructed from braidings of Majorana fermions are topologically protected because the coefficients are quantized. We have also presented a no-go theorem dictating which quantum gates cannot be constructed solely by braidings of Majorana fermions, based on the determinant of the braiding operators. Furthermore, we have presented some quantum algorithms executable by braidings. Finally, it is to be emphasized that the explicit braidings shown in this paper are the minimal constructions because we have checked all patterns of braidings. Digital quantum computation is different from universal quantum computation, where the coefficients of the states are complex. The latter is similar to an analogue computer, where the variables are continuous. On the other hand, the coefficients take only the four values \(i^{n}\) with \(n=0,1,2,3\) in the present system, which is similar to a digital computer. Hence, we call it digital quantum computation. A quantum computer suffers from decoherence because a coefficient can take a continuous value and hence it is not robust against an infinitesimal perturbation. This resembles the fact that analogue classical computers have not been commercialized due to their weakness against small fluctuations. On the other hand, a coefficient cannot change its value when it is quantized. It implies that a digital quantum computer is robust against decoherence, as in the case of a digital classical computer. It is a merit of topological quantum computation. A comment is in order. Digital quantum computation is a different notion from universal topological computation based on Fibonacci anyons[16; 54]. In the latter case, universal quantum computation is possible in the common sense because the braiding of Fibonacci anyons necessarily makes an irrational rotation on the Bloch sphere. However, its demerit is that one needs an infinite number of gate operations in order to make a conventional quantum gate whose components are simple numbers such as \(i^{n}\) with \(n=0,1,2,3\). For example, 30 braidings are necessary for the Hadamard gate with the error 0.00657[55]. A recent experiment[56] shows that 15 braidings realize the Hadamard gate with the fidelity 0.9718. It is naturally understood that we need an infinite number of irrational rotations to realize a rational rotation exactly. On the other hand, every state can be reached by a finite number of quantum gates in digital quantum computation based on Majorana fermions. This work is supported by CREST, JST (Grants No. JPMJCR20T2).
2307.00341
Effective temperatures of classical Cepheids from line-depth ratios in the H-band
The technique of line depth ratios (LDR) is one of the methods to determine the effective temperature of a star. They are crucial in the spectroscopic studies of variable stars like Cepheids since no simultaneous photometry is usually available. A good number of LDR-temperature relations are already available in the optical domain, here we want to expand the number of relations available in the near-infrared in order to fully exploit the capabilities of current and upcoming near-infrared spectrographs. We used 115 simultaneous spectroscopic observations in the optical and the near-infrared for six Cepheids and optical line depth ratios to find new pairs of lines sensitive to temperature and to calibrate LDR-temperature relations in the near-infrared spectral range. We have derived 87 temperature calibrations valid in the [4800-6500] K range of temperatures. The typical uncertainty for a given relation is 60-70 K, and combining many of them provides a final precision within 30-50 K. We found a discrepancy between temperatures derived from optical or near-infrared LDR for pulsations phases close to phi ~ 0.0 and we discuss the possible causes for these differences. Line depth ratios in the near-infrared will allow us to spectroscopically investigate highly reddened Cepheids in the Galactic centre or in the far side of the disk.
V. Kovtyukh, B. Lemasle, N. Nardetto, G. Bono, R. da Silva, N. Matsunaga, A. Yushchenko, K. Fukue, E. K. Grebel
2023-07-01T13:38:24Z
http://arxiv.org/abs/2307.00341v1
# Effective temperatures of classical Cepheids from line-depth ratios in the \(H\)-band ###### Abstract The technique of line depth ratios (LDR) is one of the methods to determine the effective temperature of a star. They are crucial in the spectroscopic studies of variable stars like Cepheids since no simultaneous photometry is usually available. A good number of LDR-temperature relations are already available in the optical domain; here we want to expand the number of relations available in the near-infrared in order to fully exploit the capabilities of current and upcoming near-infrared spectrographs. We used 115 simultaneous spectroscopic observations in the optical and the near-infrared for six Cepheids and optical line depth ratios to find new pairs of lines sensitive to temperature and to calibrate LDR-temperature relations in the near-infrared spectral range. We have derived 87 temperature calibrations valid in the [4 800-6 500] K range of temperatures. The typical uncertainty for a given relation is 60-70 K, and combining many of them provides a final precision within 30-50 K. We found a discrepancy between temperatures derived from optical or near-infrared LDR for pulsation phases close to \(\phi\)\(\approx\)0.0 and we discuss the possible causes for these differences. Line depth ratios in the near-infrared will allow us to spectroscopically investigate highly reddened Cepheids in the Galactic centre or in the far side of the disk. keywords: Stars: fundamental parameters - stars: late-type - stars: supergiants - stars: variables: Cepheids ## 1 Introduction The effective temperature \(T_{\rm eff}\) is a fundamental parameter of stellar atmospheres. Therefore, deriving the effective temperature of a star is the most important step in the analysis of a stellar spectrum, which enables the determination of the chemical composition of the star and of its evolutionary status. Since by definition, the effective temperature is the temperature of a black body that produces the same total power per unit area as the observed star, \(T_{\rm eff}\) can be derived directly by knowing the stellar luminosity and radius (e.g., Davis and Webb, 1974), making interferometric techniques the best tool at our disposal. Unfortunately, interferometric measurements are currently limited to nearby stars that do not yet cover the entire parameter space. Alternatively, the infrared flux method (IRFM, Blackwell and Shallis, 1977) also provides \(T_{\rm eff}\) and the angular radius of the star by combining its integrated flux and the infrared flux in a given band. Also, the Surface Brightness color relation is used to derive the angular diameter variation and the distance of Cepheids (Nardetto et al., 2023). It then becomes possible to calibrate (spectro-)photometric techniques, for instance measuring the Paschen continuum (3647-8206 Å) to determine \(T_{\rm eff}\) from stellar fluxes. Another (robust) method relies on \(T_{\rm eff}\)-color calibrations (e.g., Alonso et al., 1996; Bessell et al., 1998). Photometric techniques to derive \(T_{\rm eff}\) are however sensitive to the other atmospheric parameters of the star (for instance its metallicity [Fe/H] or its surface gravity log \(g\)). Moreover, it is always difficult to obtain an accurate determination of the interstellar reddening, especially for faint, distant objects in highly extincted regions, for which we have started to obtain high-resolution spectra, in particular in the near-infrared (NIR) spectral domain.
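To make the direct method concrete, the Stefan-Boltzmann law gives \(T_{\rm eff}\) immediately once \(L\) and \(R\) are known. As a purely illustrative example with round, assumed numbers typical of a classical Cepheid, \(L=2\,000\,L_{\odot}\) and \(R=50\,R_{\odot}\): \[T_{\rm eff}=\left(\frac{L}{4\pi\sigma R^{2}}\right)^{1/4}=T_{{\rm eff},\odot}\left(\frac{L}{L_{\odot}}\right)^{1/4}\left(\frac{R}{R_{\odot}}\right)^{-1/2}\approx 5772\,{\rm K}\times\frac{2000^{1/4}}{\sqrt{50}}\approx 5460\ {\rm K},\] well inside the range of temperatures spanned by Cepheids along their pulsation cycle.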
Purely spectroscopic methods might then be preferred. For instance, fitting the profile of Balmer lines (e.g., Gehren, 1981) provides a good \(T_{\rm eff}\) diagnostic (although only below \(\sim\) 8 000 K) thanks to their low sensitivity to log \(g\). Metal line diagnostics enable us to determine simultaneously the atmospheric parameters \(T_{\rm eff}\), log \(g\), and [Fe/H] (and microturbulent velocity \(V_{\rm t}\) for 1D-analyses), either by the means of their curves of growth (e.g., Cayrel & Cayrel, 1963) or by ensuring that abundances of various lines from the same element show no trend with their excitation potentials (to constrain \(T_{\rm eff}\)) or with their equivalent width (to constrain \(V_{\rm t}\)). They can be applied even in the case of pulsating variable stars like classical Cepheids, see for instance Kovtyukh & Andrievsky (1999). Such techniques require however accurate determinations of the atomic parameters of the line (e.g., their oscillator strengths and damping constants) and they are sensitive to departures from the Local Thermodynamical Equilibrium (LTE). Continuous progress in our knowledge of the physics of stellar atmospheres and increased computing power now allows us to directly compare an observed spectrum with grids of synthetic (e.g., Recio-Blanco et al., 2006) or empirical (e.g., Ness et al., 2015) spectra. The line depth ratios (LDR) method, which is based on the ratio of the depths of two lines having different sensitivity to \(T_{\rm eff}\)(Gray & Johanson, 1991; Gray, 1994) presents the advantage of being free from reddening effects and provides a high internal precision (\(\approx\)10 K). In FGK stars, the depths of low-excitation lines of neutral atoms are highly responsive to \(T_{\rm eff}\), while those of high-excitation lines are relatively insensitive to \(T_{\rm eff}\)(Gray, 2005). LDR calibrations are available for dwarf and giant stars (e.g., Strassmeier & Schordan, 2000; Caccin et al., 2002; Kovtyukh et al., 2003; Biazzo et al., 2004, 2006; Kovtyukh et al., 2006; Biazzo et al., 2007). Combining a large number of calibrations improves the precision of the temperature determination significantly. The concept of LDR has recently been expanded to flux ratios (FR) by Hanke et al. (2018), focussing on small wavelength domains rather than the core of absorption lines, and with exquisite absolute calibration. They have been adapted to the specifics of Cepheids by Lemasle et al. (2020). Kovtyukh & Gorlova (2000); Kovtyukh (2007); Proxauf et al. (2018) (see also Biazzo et al., 2004, 2006) calibrated LDR for Cepheids in the optical domain. Vasilyev et al. (2017, 2018) have confirmed the validity of the line depth ratios approach using 2D numerical models of Cepheid-like variable stars, where non-local, time-dependent convection is included from first principles. Line depth ratios of Cepheids have paved the way for studying the distribution of metals in the Milky Way thin disk (Andrievsky et al., 2002a,b,c, 2004; da Silva et al., 2016, 2022; Genovali et al., 2013, 2014, 2015; Kovtyukh, Wallerstein, & Andrievsky, 2005; Kovtyukh et al., 2022; Lemasle et al., 2007, 2008, 2013; Luck et al., 2003, 2006, 2011; Luck & Lambert, 2011; Luck, 2018; Martin et al., 2015; Pedicelli et al., 2010). Cepheids in the Magellanic Clouds also allow us to investigate the distribution of metals in the young population of these galaxies (Lemasle et al., 2017; Romaniello et al., 2022).
Moreover, since the Large Magellanic Cloud is used to calibrate period-luminosity (PL) relations, LDR play a crucial role in investigating the possible metallicity dependence of PL relations (Romaniello et al., 2008). Finally, LDR have also been applied to old (\(>\)10 Gyr) type II Cepheids (Lemasle et al., 2015; Kovtyukh et al., 2018), opening a new path to investigate thick disk and halo stars. Cepheids' LDR also allowed us to trace temperature variations over the pulsation cycle (Luck & Andrievsky, 2004; Luck et al., 2008; Kovtyukh et al., 2005; Andrievsky et al., 2005), to discover peculiar Cepheids with high lithium content, presumably crossing the instability strip for the first time (e.g., Kovtyukh, Wallerstein, & Andrievsky, 2005; Kovtyukh et al., 2019), and to investigate Cepheids pulsating in two modes simultaneously (Kovtyukh et al., 2016; Lemasle et al., 2018). The LDR method proved to be effective when applied to optical spectra, but it is only high-resolution IR spectroscopy that makes it possible to access the most distant stars in the Galactic disk and thereby understand the structure and evolution of the Milky Way in its innermost region, where interstellar extinction presents a serious problem (Matsunaga, 2017). The primary objects of surveys in this region usually have high luminosity - namely, giants and supergiants. Recently, Fukue et al. (2015) found 9 LDR-\(T_{\rm eff}\) relations using spectra of 8 stars (mainly giants) in the \(H\)-band (14 000-18 000 Å) for \(T_{\rm eff}\) ranging from 4 000 to 5 800 K with uncertainties of \(\sim\)60 K. Later, Jian et al. (2019) increased the number of calibrations to 11 and achieved a precision of 35 K for the range 3 700 \(<\)\(T_{\rm eff}\)\(<\) 5 000 K. Recently, Afsar et al. (2023) reported five new LDR-\(T_{\rm eff}\) relations found in the \(H\)-band region and 21 new relations in the \(K\)-band. Taniguchi et al. (2018) found 81 calibrations for \(T_{\rm eff}\) within 3 700 \(<\)\(T_{\rm eff}\)\(<\) 5 400 K, using spectra of 9 giants in the \(Y\)- and \(J\)-band, and Jian et al. (2020) investigated the correlation between those calibrations and log \(g\). Subsequently, Taniguchi et al. (2021) obtained new LDR pairs of Fe i-Fe i lines for red giants and supergiants with \(T_{\rm eff}\) of 3 500-5 500 K. For spectra in the \(Y\)- and \(J\)-band, Matsunaga et al. (2021) developed a method for simultaneously determining \(T_{\rm eff}\) and log \(g\) for FGK stars of all luminosity classes; in so doing, they used 13 calibrations to deduce \(T_{\rm eff}\) and 9 calibrations to derive log \(g\). All those calibrations were originally obtained in the IR range for the \(Y\)-, \(J\)-, \(H\)- and \(K\)-bands; however, they were only valid for rather low temperatures, while classical Cepheids reach \(T_{\rm eff}\) above 6 000 K. In this paper we want to expand the number of relations available for Cepheid studies in the near-infrared range. Sect. 2 describes the near-infrared spectra we used to search for new pairs of lines well-suited as temperature indicators, as described in Sect. 3. The new LDR-\(T_{\rm eff}\) calibrations are then investigated in Sect. 4. Sect. 5 summarizes our results. ## 2 Spectroscopic material A large number of high-resolution spectra of six well-known bright classical Cepheids (Table 1) were obtained with GIANO (Origlia et al., 2014), a NIR cross-dispersed echelle spectrograph, operating at the 3.6m Telescopio Nazionale Galileo (TNG).
It covers the wavelength range 9 500-24 500 Å and operates at a very high resolving power (R\(\approx\)50 000). Optical spectra were obtained in parallel with the High Accuracy Radial velocity Planet Searcher North spectrograph (HARPS-N, Cosentino et al., 2012). HARPS-N covers a large fraction of the optical range (\(\Delta\)\(\lambda\)=3 900-6 900 Å) at very high resolving power (R\(\approx\)100 000). The observing log is given in Table 1. Five additional spectra (3 of them for the calibrating Cepheids) were obtained in the \(H\)-band with the Infrared Camera and Spectrograph (IRCS) at the Subaru 8.2m telescope with a resolving power of R\(\approx\)20 000 (Kobayashi et al., 2000, see Table 2). Since we have no means to derive a priori their \(T_{\rm eff}\) as we do not have simultaneous optical spectra for those stars, they were only used for testing the newly obtained relations. The spectral analysis (setting the continuum position, measuring line depths and equivalent widths) was carried out using the DECH software package 1. The absorption lines of Cepheids are usually fairly broad due to pressure and Doppler broadening together with a moderate rotation (\(\omega\)\(\leq\)10 km/s), and their Voigt profile can be approximated by a Gaussian. However, they may become strongly asymmetric at some phases (e.g., Nardetto et al., 2006, 2008). For this reason, we did not fit the entire profile but measured the line depths R\({}_{\lambda}\) (that is, between the continuum and a parabola fit of the line core) as described by Gray (1994). The typical number of data points on which we performed the parabolic fit is 4-5. Footnote 1: [http://www.gazinur.com/DECH-software.html](http://www.gazinur.com/DECH-software.html) The \(H\)-band spectra, in particular, are heavily contaminated by the absorption features caused by the Earth's atmosphere when observed from ground-based facilities. We did not perform a telluric correction, which consists in removing telluric features from the spectra. Instead, we used only wavelength ranges known to be practically free of telluric lines. We cannot exclude, however, that a few spectral lines are slightly contaminated by telluric lines. ## 3 Searching for temperature-sensitive line pairs With the recent development of near-infrared spectrographs, it has become possible to extend the use of line depth ratios as \(T_{\rm eff}\) indicators to this domain. Fukue et al. (2015) were the first to provide calibration relations, in the \(H\)-band (1.50-1.65 \(\mu\)m). However, the paucity of low-excitation lines in this wavelength range, together with the strong molecular bands and numerous telluric lines, limited the number of useful LDR pairs to nine. Such a small number limits the precision in \(T_{\rm eff}\) to \(\approx\)50 K in the most favorable cases, while precisions of the order of 5-15 K can be routinely achieved in the optical thanks to a large number of available LDRs (e.g., Kovtyukh, 2007; Proxauf et al., 2018). Later on, Taniguchi et al. (2018) extended this number to 81 covering the \(Y\)- and \(J\)-band. ### Searching for useful lines In this study, we adopted a new approach: first, we selected two spectra of classical Cepheids with temperatures of about 5000 and 6200 K, representative of the range of temperatures reached by this class of stars. Line depths were then measured for all the spectra, regardless of whether the lines were blended or not, also including lines that are not reliably identified.
Only lines that could be measured both in the stars with \(T_{\rm eff}\) of 5000 K and 6200 K were kept, in order to ensure that the final relations will be applicable over a broad \(T_{\rm eff}\) range. For these lines, we then computed the ratios of their depths, R\({}_{\rm 6200}\)/R\({}_{\rm 5000}\), and split them into three groups showing significant (1), moderate (2), or slight (3) variations with \(T_{\rm eff}\). Pairs most likely suitable for further testing were chosen from the first and third groups. We set as an additional condition that the distance between two lines composing a given pair should not exceed 300 Å. This algorithm yielded 1500 potentially useful line pairs. Finally, the selected lines were measured in all the spectra. The 1500 potential relations were visually inspected and fitted with polynomial relations. Ultimately, only the 87 best calibrations, accurate to within 150 K, were retained. They are shown in the Appendix (Figures 11-15). Examining the atomic parameters of the lines in the selected calibrations allowed us to draw the two following conclusions: - Even lines with similar excitation potentials of the lower level (EPL) can show a good correlation with temperature. This unexpected conclusion can be explained as follows: if we consider two lines with close EPLs, but different oscillator strengths (log \(gf\)), then at a given \(T_{\rm eff}\) the weaker line may be located on the linear part of the curve of growth, while the stronger line lies on the horizontal part. Thus the ratio of the depths of these two lines will be sensitive to \(T_{\rm eff}\). An example for such a pair of lines is given in Fig. 1. As a consequence, such a calibration can only be used for a limited range of \(T_{\rm eff}\); it presents however the advantage of being independent of luminosity (or log \(g\)). Indeed, lines with different EPLs respond differently to log \(g\) variations. - Although one would expect that only unblended lines should be considered (leaving only a small number of them available in the \(H\)-band, for instance), it is nevertheless possible to use strong blends to derive \(T_{\rm eff}\) calibrations, provided that these blends change monotonically, gradually and unequivocally with the \(T_{\rm eff}\) variations. An example for such a line pair is shown in Fig. 2, corresponding to the calibration relation 76 (see Fig. 11). - We note in passing that if a telluric line accidentally superimposes on a stellar line (which is more likely to happen in the \(H\)-band), the stellar line is discarded. Spectral lines of supergiants are usually considerably wider than the telluric lines, as shown in Fig. 3. We did not use lines distorted by the influence of telluric lines. ### Calibrating relations For the TNG sample, the \(T_{\rm eff}\) values used to calibrate the LDR relations have been derived from the optical HARPS-N spectra obtained quasi simultaneously. Indeed, the beginning of the exposures is shifted by only a few minutes, which is negligible since our 6 calibrating Cepheids have periods of \(\approx\) 5 days or longer. We used the LDR from Kovtyukh (2007) (typically 50-60 of them are available in a given HARPS-N spectrum). This ensures that the
This ensures that the \begin{table} \begin{tabular}{c c c c c c} \hline Cepheid & P & \(<\)V\(>\) & \(<\)H\(>\) & [Fe/H] & \(M_{V}\) \\ & day & mag & mag & dex & mag \\ \hline \(\delta\) Cep & 5.3662 & 3.950 & 2.479 & 0.07 & \(-\)3.23 \\ X Cyg & 16.3512 & 6.399 & 3.947 & 0.09 & \(-\)4.52 \\ S Sge & 8.3823 & 5.618 & 3.845 & 0.08 & \(-\)3.75 \\ T Vul & 4.4355 & 5.751 & 4.237 & \(-\)0.05 & \(-\)3.01 \\ S Vul & 68.6510 & 8.972 & 4.806 & 0.09 & \(-\)6.19 \\ SV Vul & 44.8942 & 7.230 & 4.051 & 0.11 & \(-\)5.70 \\ \hline \end{tabular} \end{table} Table 1: Parameters of the calibrating classical Cepheids. new NIR LDR will fall on the exact same scale as those derived in the optical. We retained as calibrating \(T_{\rm eff}\) the mean value of each temperature derived from a single calibrating relation in the optical, and the uncertainty on \(T_{\rm eff}\) is the standard deviation of these measurements, usually around 10-30 K. With both line depth ratios and \(T_{\rm eff}\) values at hand, it is possible to derive analytical formulae for new calibrating relations in the near-infrared. Polynomials offer a simple way to derive analytical relations, but a number of our calibrating relations show specific features such as breaks that cannot be adequately described even by polynomials of the 5\({}^{th}\) or higher degree (see Figs. 11-15). Therefore, we also tried more complicated relations, such as exponential fits, logarithmic fits, power fits, the Hoerf function (\(y=ab^{\rm r}x^{\rm s}\)) and others. The type of function (and the corresponding coefficients) that yielded the lowest root-mean-square deviation \(\sigma\) for a given calibration was ultimately selected. In many cases, the precision of an individual calibration relation varies with \(T_{\rm eff}\). This is related to the fact that the line strengths vary with temperature. For instance, at high \(T_{\rm eff}\), absorption lines with low EPL become weaker, leading to greater uncertainties in the measurement of their depths, until they eventually disappear from the spectrum. To take this effect into account, we have defined as an optimum range for a given ratio the \(T_{\rm eff}\) range within which the mean precision (\(\sigma\)) of the calibration relation remains within 160 K. Figure 1: Variation of the line profile with \(T_{\rm eff}\) for two lines with similar excitation potential of the lower level EPL, but with different oscillator strengths log \(gf\): the Fe i line at 15665.240 Å (EPL=5.979 eV, log \(gf\)=\(-\)336) and the Fe i line at 15677.519 Å (EPL=6.246 eV, log \(gf\)=0.220). These two lines correspond to the calibration relation 80 (see Fig. 11). Figure 2: Variation of the line profile with \(T_{\rm eff}\) for a calibration relation in which one of the lines forming the pair is blended: the first line of the ratio is the Ti Ti line at 15658.545 Å (EPL=5.314 eV, log \(gf\)=\(-\)0.934), while the second line of the ratio is a blend of two lines at 15661.898 Å (Ti i, EPL=5.172 eV, log \(gf\)=\(-\)0.550) and 15662.013 Å (Fe i, EPL=5.828 eV, log \(gf\)=0.371). Together, they form the calibration relation 76 (see Fig. 11). Since various relations have various optimum ranges, we note that only a (large) subset of the 87 relations can be used for a Cepheid at a given temperature. This also holds for optical spectra and explains why the number of optical relations used to determine \(T_{\rm eff}\) from the HARPS-N spectra varies from star to star. 
Uncertainties on the line-depth measurements mainly arise from uncertainties in setting the continuum position, hence the presence of noise or telluric lines. It can be determined from lines that fall twice on adjacent orders of the echelle spectra. This uncertainty is about 2-6% for spectra with a signal-to-noise ratio of about 100. A complete analysis of the errors associated with measuring line depths in spectra is given in Catalano et al. (2002). Besides, individual stellar parameters such as metallicity, rotation, convection, NLTE effects, magnetic fields, binarity, etc., add to the scatter of the individual calibrations. An analysis of such effects was presented in the studies by, e.g., Gray (1989, 1994), Strassmeier & Schordan (2000), and Fukue et al. (2015). The list of the calibrating relations, including the values for the coefficients, the intrinsic dispersion, and the applicability range, is given in Table 2 (Appendix). They are displayed in Figs. 15-16. and the largest surface gravity), the NIR temperatures near the \(T_{\rm eff}\) peak are systematically lower than those deduced from optical spectra. Conversely, for S Vul, the long-period classical Cepheid with the highest luminosity (lowest surface gravity) in the calibrating sample, the NIR temperatures are higher than those deduced from optical LDR. This points toward a luminosity (or log \(g\)) effect on the line depths ratios. Several (related) explanations can be proposed for such behavior. Jian et al. (2020) already detected the effect of surface gravity on LDR. Indeed, for several pairs of lines, they noticed that the LDR-\(T_{\rm eff}\) relations were offset between dwarfs on one hand, and giants and supergiants on the other hand. They found that the difference between the ionization potentials of lines in a given pair correlates with the sensitivity of this pair to log \(g\). A detailed theoretical analysis of this effect can be found in Gray (2005) and Jian et al. (2020). In order to circumvent this drawback, they suggested calibrating separately dwarfs and giants/supergiants. However, in contrast with Jian et al. (2020), who report no log \(g\) effect within the giants-supergiants group, we find here significant log \(g\) effects (for a narrow range of pulsation phases) for Cepheids, that is, for stars within the giants/supergiants luminosity class. We note however that the range of luminosities for the six Cepheids in our calibrating sample is very wide and amounts to three magnitudes (their absolute magnitudes vary from -3 to -6 \(M_{V}\), see Table 1). This may indicate that the effect in Cepheids is not, or not only, a log \(g\) effect. For instance, \(T_{\rm eff}\) values may also differ due to the differences in the optical depths of the line-forming regions for the optical and IR ranges. These differences can be significant at given pulsation phases, for instance, due to the shock wave passing through the upper atmospheric layers of Cepheids near the maximum compression. Indeed, Nardetto et al. (2018) investigated CRIRES observations of the long-period Cepheid \(\ell\) Car and found, using a hydrodynamical model of this star (Nardetto et al., 2007), that the core of the Na i line at 22 089.69 Å is formed at the top of the atmosphere, while the iron lines in the visible are formed much deeper in the atmosphere.
They report additional evidence that lines in the infrared are formed closer to the surface of the star than lines in the optical: for instance, the infrared radial velocity curve is shifted with respect to its optical counterpart, which they interpret as a manifestation of the Van Hoof effect (van Hoof and Struve, 1953), the delay in the velocities between lines forming in the lower and upper atmosphere. Similarly, the mean radial velocity derived from infrared data differs by 0.53\(\pm\)0.30 km s\({}^{-1}\) from the optical one, which they interpret as a different impact of granulation on line-forming regions in the upper and lower atmosphere (the deeper the line-forming region, the more the radial velocity is blueshifted, see Nardetto et al., 2008; Vasilyev et al., 2017).

Figure 5: Temperature variations for the six Cepheids in the calibrating sample. Red squares: HARPS-N (optical) temperatures. A Fourier smoothing through the data points is indicated as a thin red line to guide the eye. Green circles: GIANO (near-infrared) temperatures. Blue squares: SUBARU (near-infrared) temperatures. A typical uncertainty (\(\sigma\)) is shown in the lower-left corner of the figure. The standard errors \(\sigma/\sqrt{N}\) on individual \(T_{\rm eff}\) measurements are smaller than the symbol sizes.

In summary, it seems established that visible and infrared lines are formed at different depths in the atmosphere of a Cepheid, and thus in environments in which not only temperature and pressure are different, but also the velocity fields (due to the propagation of the compression wave). The latter is clearly visible in the different behaviour of line asymmetries for optical and infrared lines over the pulsation period (Nardetto et al., 2018, their Fig. 6). Since we measure the line depths directly, without fitting a line profile, we assume that the differences between short- and long-period Cepheids at phases \(\phi\approx 0.0\) we observe in Fig. 5 mostly reflect temperature differences rather than uncertainties on measuring line depths related to different line asymmetries. Furthermore, we note that the theoretical analyses described in Gray (2005) and Jian et al. (2020) are made under the Local Thermodynamical Equilibrium (LTE) assumption, while Vasilyev et al. (2018, 2019) have shown that NLTE effects are important in the atmospheres of Cepheids and maximal at the same phases (\(\phi\approx 0.0\)) where the discrepancy between optical and NIR line depth ratios is significant. Finally, it is worth mentioning that long-period Cepheids are known to exhibit cycle-to-cycle variations (e.g., Anderson, 2016), including in their line profiles. However, this phenomenon cannot be invoked here since our optical and NIR spectra have been obtained simultaneously. Should long-period Cepheids be excluded from the calibration of the LDR, then their \(T_{\rm eff}\) could not be determined and hence their chemical composition would remain unknown.

## 5 Summary and conclusion

In the present study, we have derived 87 temperature calibrations, LDR-\(T_{\rm eff}\), using GIANO high-dispersion near-IR \(H\)-band spectra covering the wavelength range from 14 000 to 16 500 Å, which contains numerous atomic lines and molecular bands. The temperatures inferred from the optical spectra obtained in parallel with the HARPS-N spectrograph were adopted as original temperatures to derive the calibration relations. The resulting temperature relations are based on 115 spectra of six classical Cepheids.
The calibrations are valid for supergiants with a near-solar metallicity, \(T_{\rm eff}\) ranging from 4800 to 6500 K, and \(M_{V}\) from \(-\)3 to \(-\)6 mag. The uncertainties due to the effect of luminosity at temperatures above 6200 K are within 150 K. The typical mean uncertainty per calibration relation is 60-70 K (40-45 K for the most precise ones and 140-160 K for the least precise ones). Using about 60-70 calibrations improves the intrinsic precision to within 30-50 K (for spectra with an S/N of 100-150). Employing this method, we can derive the temperatures of highly reddened objects (such as stars towards the Galactic centre). Adopting these calibrations has already enabled us to determine the temperatures of four Cepheids in the Galactic centre discovered by Matsunaga et al. (2011, 2013, 2015) in order to derive their chemical composition (Kovtyukh et al., 2022). Since many Cepheids have been detected in highly reddened regions, for instance beyond the Galactic centre on the far side of the disk (e.g., Feast et al., 2014; Matsunaga et al., 2016; Chen et al., 2018), the newly determined LDR will allow us to derive their chemical composition using NIR spectra. To our knowledge, only Inno et al. (2019) have tackled this problem so far, determining the metallicity of five Cepheid candidates in the inner disk by comparing low-resolution (R\(\approx\)3 000) NIR spectra to a pre-computed grid of synthetic spectra.

Obtaining spectroscopic time-series for a given star would make it possible to track tiny \(T_{\rm eff}\) variations, potentially related to rotational modulation, such as those that have already been detected for dwarf stars - namely, the G8 dwarf \(\xi\) Bootis A (Toner & Gray, 1988) and the K0 dwarf \(\sigma\) Dra (Gray et al., 1992). This technique is already being used to study spot activity in giants (Berdyugina et al., 2005; Frasca et al., 2005, 2008). In this respect, hemisphere-averaged temperatures of stars with surface inhomogeneities derived from NIR lines simultaneously with optical lines can be of great help for starspot modelling. Indeed, one expects different average temperatures at different wavelengths due to the wavelength dependence of the contribution of starspots to the total flux. It would be interesting to search simultaneously for systematic variations in spectral line asymmetries in order to better understand the physics of pulsations in Cepheids. As far as Cepheids are concerned, simultaneous time-series spectroscopy in the optical and infrared domains is crucial to refine our understanding of the Cepheids' atmosphere dynamics. In the present paper, the calibration sample is confined to objects with a near-solar metallicity [Fe/H] to circumvent the issue of the dependence of the calibrations on [Fe/H]. Investigating such a dependence will be the goal of further studies.

## 6 Data availability

This research used the facilities of the Italian Center for Astronomical Archive (IA2) operated by INAF at the Astronomical Observatory of Trieste, programme OPT19A5 (PI: N. Nardetto). The Subaru/IRCS spectra are available at the SMOKA Science Archive [https://smoka.nao.ac.jp/](https://smoka.nao.ac.jp/).

## Acknowledgements

We thank our referee, Dr. Antonio Frasca, for his important comments, which improved our manuscript. VK is grateful to the Vector-Stiftung at Stuttgart, Germany, for support within the program "2022-Immediate help for Ukrainian refugee scientists" under grant P2022-0064.
2308.08198
DeSCo: Towards Generalizable and Scalable Deep Subgraph Counting
We introduce DeSCo, a scalable neural deep subgraph counting pipeline, designed to accurately predict both the count and occurrence position of queries on target graphs after a single training. First, DeSCo uses a novel canonical partition to divide the large target graph into small neighborhood graphs, greatly reducing the count variation while guaranteeing no missing or double counting. Second, neighborhood counting uses an expressive subgraph-based heterogeneous graph neural network to accurately count in each neighborhood. Finally, gossip propagation propagates neighborhood counts with learnable gates to harness the inductive biases of motif counts. DeSCo is evaluated on eight real-world datasets from various domains. It outperforms state-of-the-art neural methods with a 137x improvement in the mean squared error of count prediction, while maintaining polynomial runtime complexity. Our open source project is at https://github.com/fuvty/DeSCo.
Tianyu Fu, Chiyue Wei, Yu Wang, Rex Ying
2023-08-16T07:58:02Z
http://arxiv.org/abs/2308.08198v2
# DeSCo: Towards Generalizable and Scalable Deep Subgraph Counting

###### Abstract.

Subgraph counting is the problem of counting the occurrences of a given query graph in a large target graph. Large-scale subgraph counting is useful in various domains, such as motif counting for social network analysis and loop counting for money laundering detection on transaction networks. Recently, to address the exponential runtime complexity of scalable subgraph counting, neural methods have been proposed. However, existing neural counting approaches fall short in three aspects. Firstly, the counts of the same query can vary from zero to millions on different target graphs, posing a much larger challenge than most graph regression tasks. Secondly, current scalable graph neural networks have limited expressive power and fail to efficiently distinguish graphs in count prediction. Furthermore, existing neural approaches cannot predict the occurrence position of queries in the target graph. Here we design DeSCo, a scalable neural deep subgraph counting pipeline, which aims to accurately predict the query count and occurrence position on any target graph after one-time training. Firstly, DeSCo uses a novel _canonical partition_ and divides the large target graph into small neighborhood graphs. The technique greatly reduces the count variation while guaranteeing no missing or double-counting. Secondly, _neighborhood counting_ uses an expressive subgraph-based heterogeneous graph neural network to accurately perform counting in each neighborhood. Finally, _gossip propagation_ propagates neighborhood counts with learnable gates to harness the inductive biases of motif counts. DeSCo is evaluated on eight real-world datasets from various domains. It outperforms state-of-the-art neural methods with \(137\times\) improvement in the mean squared error of count prediction, while maintaining the polynomial runtime complexity.

Keywords: subgraph counting, graph mining, graph neural network

## 1. Introduction

Existing approximate heuristic and GNN methods only focus on estimating the total count of a query in the target graph (Hamilton et al., 2017; Wang et al., 2018; Wang et al., 2019), but not the occurrence positions of the patterns, as shown in Figure 2. Yet such position distribution information is crucial in various applications (Hamilton et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).

**Proposed work.** To resolve the above challenges, we propose DeSCo, a GNN-based model that learns to predict both pattern counts and occurrence positions on any target graph. The main idea of DeSCo is to leverage the local information of neighborhood patterns to predict query counts and occurrences in the entire target graph. DeSCo first uses _canonical partition_ to decompose the target graph into small neighborhoods. The local information is then encoded using a GNN with _subgraph-based heterogeneous message passing_. Finally, we perform _gossip propagation_ to use inductive biases to improve counting accuracy over the entire graph. Our contributions are four-fold.

**Canonical partition**. Firstly, we propose _canonical partition_, which divides the problem into subgraph counting for individual neighborhoods. We theoretically prove that no pattern will be double counted or missed across all neighborhoods. The algorithm allows the model to make accurate predictions on large target graphs with high count variation.
Furthermore, we can predict the pattern position distribution for the first time, as shown in Figure 2. In this citation network, the hotspots represent overlapped linear citation chains, indicating original publications that motivate multiple future directions of incremental contributions (Wang et al., 2019; Wang et al., 2019), which sheds light on the research impact of works in this network.

**Subgraph-based heterogeneous message passing**. Secondly, we propose a general approach to enhance the expressive power of any MPGNN by encoding the subgraph structure through heterogeneous message passing. The message type is determined by whether the edge is present in a certain subgraph, e.g., a triangle. We show that this architecture outperforms expressive GNNs, including GIN (Wang et al., 2019) and ID-GNN (Wang et al., 2019), while maintaining the polynomial runtime complexity needed for scalable subgraph counting.

**Gossip propagation**. We further improve the count prediction accuracy by utilizing two inductive biases of the counting problem: homophily and antisymmetry. Real-world graphs share similar patterns among adjacent nodes, as shown in Figure 2. Furthermore, since the canonical count depends on node indices, there exists antisymmetry due to canonical partition. Therefore, we propose a _gossip propagation_ phase featuring a learnable gate for propagation to leverage these inductive biases.

**Generalization Framework**. We propose a generalization framework that uses a carefully designed synthetic dataset to enable model generalization to different real-world datasets. After training on the synthetic dataset, the model can directly perform subgraph counting inference with high accuracy on real-world datasets.

To demonstrate the effectiveness of DeSCo, we compare it against state-of-the-art GNN-based subgraph counting methods (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019), as well as an approximate heuristic method (Hamilton et al., 2017), on eight real-world datasets from various domains. DeSCo achieves a \(137\times\) mean square error reduction of count predictions for both small and large targets, as shown in Figure 1. To the best of our knowledge, it is also the first approximate method to accurately predict the pattern position distribution, as illustrated in Figure 2. DeSCo also maintains polynomial runtime efficiency, demonstrating orders of magnitude of speedup over the heuristic (Hamilton et al., 2017) and exact methods (Wang et al., 2019; Wang et al., 2019).

## 2. Related Works

There has been an extensive line of work to solve the subgraph counting problem.

**Exact counting algorithms**. Exact methods generally count subgraphs by searching through all possible node combinations and finding the matching patterns. Early methods usually focus on improving the matching phase (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). Recent approaches emphasize the importance of pruning the search space and avoiding double counting (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). However, exact methods still scale poorly in terms of query size (often no more than five nodes) despite much effort (Wang et al., 2019; Wang et al., 2019).

**Approximate heuristic methods**. To further scale up the counting problem, approximate counting algorithms sample from the target graph to estimate pattern counts.
Strategies like path sampling (Wang et al., 2019; Wang et al., 2019), random walk (Wang et al., 2019; Wang et al., 2019), substructure sampling (Wang et al., 2019; Wang et al., 2019), and color coding (Hamilton et al., 2017; Wang et al., 2019) are used to narrow the sample space and provide better error bounds. However, large and rare queries are still hard to find in the vast sample space, leading to large approximation errors (Hamilton et al., 2017).

**GNN-based approaches**. Recently, GNNs have been used to attempt counting large queries. (Wang et al., 2019; Wang et al., 2019) use GNNs to embed the query and target graph, and predict subgraph counts via embeddings. (Wang et al., 2019) theoretically analyzes the expressive power of GNNs for counting and proposes an expressive GNN architecture. (Wang et al., 2019) proposes an active learning scheme for the problem. (Wang et al., 2019) proposes an expensive edge-to-vertex dual graph transformation to enhance the model's expressive power for subgraph counting. Unfortunately, large target graphs have extremely complex structures and a high variation of pattern count, so accurate prediction remains challenging.

Figure 2. The total count and the position distribution of the query graph over the CiteSeer Citation Network. The figure compares between ground truth and DeSCo predictions. The hotspots are where the 4-chain patterns appear most often in CiteSeer.

## 3. Preliminary

Let \(G_{t}=(V_{t},E_{t})\) be a large _target_ graph with vertices \(V_{t}\) and edges \(E_{t}\). Let \(G_{q}=(V_{q},E_{q})\) be the _query_ graph of interest. The _subgraph counting problem_ \(C(G_{q},G_{t})\) is to calculate the size of the _set of patterns_ \(\mathcal{P}=\{G_{p}\,|\,G_{p}\subseteq G_{t}\}\) in the target graph \(G_{t}\) that are isomorphic to the query graph \(G_{q}\), that is, for which there exists a bijection \(f:V_{p}\mapsto V_{q}\) such that \((f(v),f(u))\in E_{q}\) iff \((v,u)\in E_{p}\), denoted as \(G_{p}\cong G_{q}\). Subgraph counting includes induced and non-induced counting, depending on whether the pattern \(G_{p}\) is restricted to induced subgraphs (Wang et al., 2019). A graph \(G_{p}=(V_{p},E_{p})\) with \(V_{p}\subseteq V_{t}\) is an induced subgraph of \(G_{t}\) if, for all \(u,v\in V_{p}\), \((u,v)\in E_{t}\leftrightarrow(u,v)\in E_{p}\), denoted as \(G_{p}\subseteq G_{t}\). Without loss of generality, we focus on the connected, induced subgraph counting problem, following modern mainstream graph processing frameworks (Srivastava et al., 2017; Wang et al., 2018) and real-world applications (Wang et al., 2018; Wang et al., 2018). It is also possible to obtain non-induced occurrences from induced ones with a transformation (Wang et al., 2018). Our GNN approach can natively support graphs with node features and edge directions, but in alignment with exact and heuristic methods, we use undirected graphs without node features in experiments to investigate the ability to capture graph topology.
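For concreteness, the induced counting problem just defined can be written as a brute-force enumeration over node subsets. The sketch below uses networkx as an assumed convenience (the paper's exact baselines are VF2 and IMSM); its exponential cost in the query size is what motivates the learned pipeline that follows.

```python
import itertools
import networkx as nx

def exact_subgraph_count(target: nx.Graph, query: nx.Graph) -> int:
    """Count induced subgraphs of `target` isomorphic to `query`.

    Enumerates every |V_q|-node subset of the target, so the runtime is
    O(|V_t|^|V_q|) isomorphism checks -- tractable only for tiny graphs.
    """
    k, count = query.number_of_nodes(), 0
    for nodes in itertools.combinations(target.nodes, k):
        if nx.is_isomorphic(target.subgraph(nodes), query):  # induced match
            count += 1
    return count
```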
## 4. DeSCo Pipeline

In this section, we introduce the pipeline of DeSCo as shown in Figure 3. To perform subgraph counting, DeSCo first performs **canonical partition** to decompose the target graph into many canonical neighborhood graphs. Then, **neighborhood counting** uses the subgraph-based heterogeneous GNN to embed the query and neighborhood graphs and performs a regression task to predict the canonical count on each neighborhood. Finally, **gossip propagation** propagates neighborhood count predictions over the target graph with learnable gates to further improve counting accuracy. We will first introduce the model objective before elaborating on each step.

### Canonical Count Objective

**Motivation**. For commonly seen node-level tasks such as node classification, each node is responsible for predicting its own node value. However, for subgraph counting, since each pattern contains multiple nodes, it is unclear which node should be responsible for predicting the pattern's occurrence. As illustrated in Figure 4, this ambiguity can lead to missing or double-counting of the motif, especially for queries with symmetric nodes, e.g., a triangle. So we propose the canonical count objective to eliminate the ambiguity by assigning a specific canonical node responsible for each pattern. The canonical node is used to represent the pattern position. The canonical count is used as the local count prediction objective for the GNN and gossip propagation. To break the symmetry, we randomly assign node indices on the target graph and define the _canonical node_.

Definition 4.1 (canonical node).: Canonical node \(v_{c}\) _is the node with the largest node index in the pattern._

\[v_{c}=\max_{I}V_{p} \tag{1}\]

Based on the index, we assign the count of the k-node pattern to its _canonical node_ and define the _canonical count_.

Definition 4.2 (canonical count).: Canonical count \(C_{c}\) _equals the number of patterns that share the same canonical node._

\[C_{c}(G_{q},G_{t},v_{c})=|\{G_{p}\subseteq G_{t}\,|\,G_{p}\cong G_{q},\ v_{c}=\max_{I}V_{p}\}| \tag{2}\]

The canonical count \(C_{c}(G_{q},G_{t},v_{c})\) differs from the regular count \(C\), as it takes an additional variable - a node \(v_{c}\) from the target graph. As shown in Figure 4(c), a pattern is only counted by its canonical node in \(C_{c}\). So the summation of \(C_{c}\) over all nodes equals the count of all patterns, \(C\), as stated in Lemma 4.1 and proven in Appendix A.1.

Lemma 4.1.: _The subgraph count \(C\) of the query in the target equals the summation of the canonical count of the query in the target over all target nodes._

\[\mathcal{C}(G_{q},G_{t})=\sum_{v_{c}\in V_{t}}C_{c}(G_{q},G_{t},v_{c}) \tag{3}\]

**Advantage**. By predicting the canonical count of each node, DeSCo can naturally get the pattern position distribution. Lemma 4.1 allows the decomposition of the counting problem into multiple canonical count objectives. We use the following canonical partition to minimize the overhead of the decomposition.

Figure 4. When counting, (a) double-counts and (b) misses the triangle in the neighborhoods due to symmetry. (c) DeSCo uses the canonical node to break symmetry and correctly count the triangle. (i) are the node indices.

Figure 3. DeSCo Pipeline in 3 steps. (a) Step 1. Canonical Partition: Given _query_ and _target_, decompose _target_ into multiple node-induced subgraphs, i.e., _canonical neighborhoods_, based on node indices. Each neighborhood contains a _canonical node_ that has the greatest index in the neighborhood. (b) Step 2. Neighborhood Counting: Predict the _canonical counts_ of each neighborhood via an expressive GNN, and assign the count of the neighborhood to the corresponding _canonical node_. Neighborhood counting is the local count of queries. (c) Step 3. Gossip Propagation: Use GNN prediction results to estimate _canonical counts_ on the _target_ graph through learnable gates.
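Definitions 4.1-4.2 and Lemma 4.1 translate directly into a checkable sketch: credit each matched pattern to its largest-index node and verify that the per-node counts sum to the total. This reuses `exact_subgraph_count` from the sketch above; the karate-club toy check is ours, not the paper's.

```python
import itertools
import networkx as nx

def canonical_counts(target: nx.Graph, query: nx.Graph) -> dict:
    """Canonical count per node (Definition 4.2): each pattern is credited
    only to its largest-index node, so nothing is missed or counted twice."""
    k = query.number_of_nodes()
    counts = {v: 0 for v in target.nodes}
    for nodes in itertools.combinations(target.nodes, k):
        if nx.is_isomorphic(target.subgraph(nodes), query):
            counts[max(nodes)] += 1      # canonical node = max index (Definition 4.1)
    return counts

# Sanity check of Lemma 4.1 on a toy graph: triangles in Zachary's karate club.
G, Q = nx.karate_club_graph(), nx.cycle_graph(3)
assert sum(canonical_counts(G, Q).values()) == exact_subgraph_count(G, Q)
```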
### Canonical Partition

**Motivation**. In Lemma 4.1, each canonical count \(C_{c}\) is obtained with the entire target graph \(G_{t}\). In order to overcome the high computational complexity, we partition the target to reduce the graph size for the canonical count. We observe that each canonical count only depends on some local neighborhood structure, as shown in Figure 5(c). So we propose _canonical partition_ to efficiently obtain this small neighborhood.

**Unique challenges of partition for canonical count**. Commonly used graph partition strategies include cutting edges (Beng et al., 2015) and taking d-hop neighborhoods (Shi et al., 2017). However, edge-cutting breaks the pattern structure, leading to incorrect counts; d-hop neighborhoods guarantee correctness, yet are unnecessarily large since patterns exist in many overlapping neighborhoods. Thus, we define _canonical partition_. It neglects the neighborhood structure that does not influence the canonical count of each node. Canonical partition uses node indices to filter nodes, as illustrated in Figure 5(a), (b).

**Definition 4.3** (canonical partition).: Canonical partition \(\mathcal{P}\) _crops the index-restricted d-hop neighborhood around the center node from the target graph. \(\mathcal{D}(G_{t},v_{i},v_{c})\) denotes the shortest distance between \(v_{i}\) and \(v_{c}\) on \(G_{t}\)._

\[\mathcal{P}(G_{t},v_{c},d)=G_{c},\ \mathrm{s.t.}\ G_{c}\subseteq G_{t},\ V_{c}=\{v_{i}\in V_{t}\,|\,\mathcal{D}(G_{t},v_{i},v_{c})\leq d,\ v_{i}\leq v_{c}\} \tag{4}\]

The graph \(G_{c}\) obtained by canonical partition is called the _canonical neighborhood_. Canonical neighborhoods can correctly substitute the target graph in the canonical count, as proven in Appendix A.2. Thus, we derive Theorem 1.

**Theorem 1**.: _The subgraph count of the query in the target equals the summation of the canonical count of the query in the canonical neighborhoods over all target nodes. Canonical neighborhoods are acquired with canonical partition \(\mathcal{P}\), given any \(d\) no smaller than the diameter of the query._

\[\mathcal{C}(G_{q},G_{t})=\sum_{v_{c}\in V_{t}}C_{c}(G_{q},\mathcal{P}(G_{t},v_{c},d),v_{c}),\qquad d\geq\max_{v_{i},v_{j}\in V_{q}}\mathcal{D}(G_{q},v_{i},v_{j}) \tag{5}\]

Given the target graph \(G_{t}\), DeSCo iterates over all nodes \(v_{c}\) of the target and divides it into a set of canonical neighborhoods \(G_{v_{c}}\) with **canonical partition**. In practice, we set \(d\) as the maximum diameter of the query graphs to meet the requirements of Theorem 1. See Appendix A.3 for the implementation of \(\mathcal{P}(G_{t},v_{c},d)\).

**Advantage**. Canonical partition dramatically reduces the worst-case and average complexity of the real-world subgraph counting problem by factors of \(1/10^{70}\) and \(1/10^{11}\), as discussed in Appendix A.4. Furthermore, diverse target graphs can have similar and limited kinds of canonical neighborhoods, which boosts the generalization power of DeSCo, as shown in Section 5.4. This divide-and-conquer scheme not only greatly reduces the complexity of each GNN prediction, but also makes it possible to predict the count distribution over the entire graph. After the canonical partition, DeSCo uses the following model to predict the canonical count for each decomposed neighborhood.

Figure 5. An example of canonical partition and canonical count. (a) Choose node 5 from the target graph as the _canonical node_ (red circle). (b) _Canonical partition_ generates the corresponding _canonical neighborhood_ graph. It performs an ID-restricted breadth-first search to find the induced neighborhood that complies with both Rule 1 and Rule 2. (c) The corresponding _canonical count_ is defined by the number of patterns containing the canonical node in the canonical neighborhood. DeSCo's _neighborhood counting_ phase predicts the canonical count for each canonical neighborhood.
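Definition 4.3 amounts to a distance cutoff plus an index filter; a minimal sketch follows (the paper's Appendix A.3 uses an ID-restricted BFS, which may differ in detail; integer node indices are assumed).

```python
import networkx as nx

def canonical_partition(target: nx.Graph, v_c: int, d: int) -> nx.Graph:
    """Canonical neighborhood of v_c (Definition 4.3): the subgraph induced
    by nodes within distance d of v_c whose index does not exceed v_c."""
    dist = nx.single_source_shortest_path_length(target, v_c, cutoff=d)
    keep = [v for v in dist if v <= v_c]    # index restriction v_i <= v_c
    return target.subgraph(keep).copy()
```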
### Neighborhood Counting

After canonical partition, GNNs are used to predict the _canonical count_ \(C_{c}(G_{q},G_{v_{c}},v_{c})\) on any canonical neighborhood \(G_{v_{c}}\) in the **neighborhood counting** stage. The canonical neighborhood and the query are separately embedded using GNNs. The embeddings are passed to a multilayer perceptron to predict the canonical count.

**Motivation**. Previous work (Kang et al., 2017) shows that message passing (MP) GNNs confuse certain graph structures, which harms counting accuracy. To enhance the GNN's expressive power while remaining scalable, we propose the Subgraph-based Heterogeneous Message Passing (SHMP) framework. Inspired by (Shi et al., 2017), SHMP incorporates subgraph information to boost the expressive power. In the meantime, SHMP avoids using super-nodes (Shi et al., 2017) or message permutation (Kang et al., 2017), which are computationally expensive during message passing.

**Neighborhood counting with SHMP**. To embed the input graph, SHMP uses small subgraph structures to categorize edges into different edge types, and uses different learnable weights for each edge type.

**Definition 4.4** (subgraph-based heterogeneous message passing).: _The SHMP computes each node's representation with Equation 6. Here \(k\) denotes the layer; \(\phi_{h}^{(k)}\) denotes the message function of the \(h\)-th edge type; \(N_{h}(i)\) denotes the nodes that connect to node \(i\) with the \(h\)-th edge type; AGG and AGG\({}^{\prime}\) are permutation-invariant aggregation functions such as sum, mean, or max._

\[\begin{split}\mathbf{x}_{i}^{(k)}&=\gamma^{(k)}\Big(\mathbf{x}_{i}^{(k-1)},\ \mathrm{AGG}^{\prime}_{h\in H}\big(M_{h}\big)\Big)\\ M_{h}&=\mathrm{AGG}_{j\in N_{h}(i)}\Big(\phi_{h}^{(k)}\big(\mathbf{x}_{i}^{(k-1)},\mathbf{x}_{j}^{(k-1)},\mathbf{e}_{j,i}\big)\Big)\end{split} \tag{6}\]

Note that the MP defined by GNN frameworks (Zhou et al., 2017; Zhang et al., 2018) is just a special case of SHMP in which only one edge type is derived from the subgraph structure. We prove that SHMP can exceed the upper bound of MP in terms of expressiveness in Appendix B.1. For example, Figure 6 demonstrates that triangle-based heterogeneous message passing has better expressive power. Regular MPGNNs fail to distinguish the different d-regular graphs \(G_{1}\) and \(G_{2}\) because of their identical type I messages and embeddings, which is a common problem of MPGNNs (Zhou et al., 2017). SHMP, however, can discriminate the two graphs by giving them different embeddings. The edges are first categorized into two edge types based on whether they exist in any triangle (edges are colored purple if they do). Since no triangles exist in \(G_{2}\), all of its nodes still receive type I messages, while some nodes of \(G_{1}\) now receive type II messages with two purple messages and one gray message in each layer. As a result, the model acquires not only the adjacency information between the message sender and receiver, but also information among their neighbors. Such subgraph structural information improves expressiveness by incorporating high-order information in both the query and the target.
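A dense numpy sketch of one SHMP layer with the triangle-based edge typing of Figure 6. The linear message functions, sum aggregators, and residual ReLU update stand in for the learnable \(\phi_{h}\), AGG, and \(\gamma\); they are illustrative assumptions, not the paper's exact layer.

```python
import numpy as np

def shmp_layer(x, adj, w_tri, w_plain):
    """One subgraph-based heterogeneous message-passing layer (Equation 6).

    x              : (n, f) node features
    adj            : (n, n) 0/1 symmetric adjacency matrix
    w_tri, w_plain : (f, f) weights of the per-edge-type message functions
    """
    tri = (adj @ adj) * adj                              # triangles through each edge
    masks = {"triangle": (tri > 0).astype(float),        # type II edges
             "plain": ((tri == 0) * adj).astype(float)}  # type I edges
    weights = {"triangle": w_tri, "plain": w_plain}
    agg = np.zeros_like(x)
    for h in masks:                                      # AGG' over edge types (sum)
        agg += masks[h] @ (x @ weights[h])               # AGG over j in N_h(i) (sum)
    return np.maximum(x + agg, 0.0)                      # gamma: residual update + ReLU
```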
In DeSCo, the canonical node of the neighborhood is also treated as a special node type in the heterogeneous message passing.

**Advantage.** The triangle-based SHMP reduces the typical error of MPGNNs by 68%, as discussed in Appendix B.2, while retaining a polynomial runtime complexity of \(O(V+E^{3/2})\), as discussed in Appendix F. The comparison with other expressive GNNs is shown in Table 5 and Appendix B.3. The summation of the neighborhood counts (the predicted canonical counts of all canonical neighborhoods) can serve as the final subgraph count prediction. The counts also show the position of patterns. But to further improve counting accuracy, we pass the neighborhood counts to the _gossip propagation_ stage.

### Gossip Propagation

Given the count predictions \(\hat{C}_{c}\) output by the GNN, DeSCo uses **gossip propagation** to improve the prediction quality, enforcing different homophily and antisymmetry inductive biases for different queries. Gossip propagation uses another GNN to model the error of the neighborhood count. It uses the predicted \(\hat{C}_{c}\) as input, and the canonical counts \(C_{c}\) as the supervision for the corresponding nodes in the target graph.

**Motivation**. To further improve the counting accuracy, we identify two inductive biases. 1) _Homophily_. Adjacent nodes within graphs share similar graph structures, resulting in analogous canonical counts (Figure 2); we term this the _homophily_ of canonical counts. 2) _Antisymmetry_. Among nodes with similar neighborhood structures, those with larger node indices exhibit higher canonical counts, per Definition 4.2. See the right part of Figure 3 for an example. Details are in Appendix C. We observe a negative correlation between the _antisymmetry_ ratio and _homophily_ for different queries, as depicted in Figure 14 in Appendix C. This observation inspires us to learn this relationship within the model. The direction of edges in message passing can control the _homophily_ and _antisymmetry_ properties of the graph. With undirected edges, message propagation is a special low-pass filter (Zhou et al., 2017), enhancing the homophily property of the node values. With directed edges pointing from small-index nodes to large-index nodes, message propagation accumulates value in large-index nodes, which enhances the antisymmetry property.

**Gossip propagation with learnable gates**. To learn the edge direction that correctly emphasizes homophily or antisymmetry, we propose the gossip propagation model shown in Figure 7. It multiplies the message sent from the node with the smaller index by a learnable gate \(P\), and the reversed message by \(1-P\). \(P\) is learned from the query embedding. For different queries, \(P\) ranges from 0 to 1 to balance the influence of _homophily_ and _antisymmetry_. When \(P\to 0.5\), messages from the smaller-indexed node and the reversed ones are weighed equally, so the model simulates undirected message passing that stresses _homophily_ by taking the average of adjacent node values. When the gate value moves away from 0.5, the message from one end of the edge is strengthened. For example, when \(P\to 1\), node values only accumulate from nodes with smaller indices to nodes with larger ones, so the model simulates directed message passing that stresses the _antisymmetry_ of the transitive partial order of node indices. The messages of the MPGNN are multiplied by the gate value \(g_{ji}\) on both edge directions.
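As a scalar sketch of this gated step: each undirected edge carries a small-to-large-index message weighted by \(P\) and a reverse message weighted by \(1-P\). The mean-style normalization is our assumption; in DeSCo, \(P\) is produced from the query embedding by a learnable network.

```python
import numpy as np

def gossip_step(counts, edges, p):
    """One gossip-propagation step over predicted canonical counts.

    counts : (n,) neighborhood count predictions
    edges  : list of (u, v) pairs with u < v (smaller index first)
    p      : gate in [0, 1]; p ~ 0.5 averages both directions (homophily),
             p -> 1 accumulates from small to large indices (antisymmetry).
    """
    counts = np.asarray(counts, float)
    deg = np.bincount(np.asarray(edges).ravel(), minlength=len(counts))
    out = counts.copy()
    for u, v in edges:
        out[v] += p * counts[u] / max(deg[u], 1)          # gated message u -> v
        out[u] += (1.0 - p) * counts[v] / max(deg[v], 1)  # gated reverse message
    return out
```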
With learnable gates, the model can balance the effects of homophily and antisymmetry for further performance improvement.

Figure 6. Proposed SHMP. Embedded with regular MP, graphs \(G_{1}\) and \(G_{2}\) are indistinguishable. While embedded with SHMP, \(G_{2}\) is successfully distinguished with six type II node embeddings, demonstrating the better expressive power of SHMP.

Figure 7. Proposed learnable gates in the gossip propagation model balance the influence of _homophily_ and _antisymmetry_ by controlling message directions.

## 5. Experiments

We compare the performance of DeSCo with state-of-the-art neural subgraph counting methods, as well as the approximate heuristic method. Our evaluation showcases the scalability and generalization capabilities of DeSCo across diverse and larger target datasets, contrasting with prior neural methods that mostly focused on smaller datasets. We also demonstrate the runtime advantage of DeSCo compared to recent exact and approximate heuristic methods. Extensive ablation studies further show the benefit of each component of DeSCo.

### Experimental Setup

**Datasets.** While previous neural approaches primarily targeted smaller datasets with limited scalability and generalization, we go beyond those limitations by evaluating on larger real-world datasets from various domains such as chemistry (MUTAG (Kumar et al., 2017), COX2 (Zhou et al., 2017)), biology (ENZYMES (Zhou et al., 2017)), social networks (IMDB-BINARY (Zhou et al., 2017)), computer vision (MSRC-21 (Zhou et al., 2017), FIRSTMM-DB (Zhou et al., 2017)), and citation networks (CiteSeer (Zhou et al., 2017), Cora (Zhou et al., 2017)). To ensure a comprehensive evaluation, we also construct a synthetic dataset using mixed graph generators, which covers diverse graph characteristics. See Table 1 for the statistics of the datasets. For detailed information on target dataset and query characteristics, please refer to Appendix D.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Dataset** & \#graphs & Avg. \#nodes & Avg. \#edges \\ \hline Synthetic & 1827 & 134.91 & 381.58 \\ \hline MUTAG & 188 & 17.93 & 19.79 \\ COX2 & 467 & 41.22 & 43.45 \\ ENZYMES & 600 & 32.63 & 62.14 \\ \hline IMDB-BINARY & 1000 & 19.77 & 96.53 \\ MSRC-21 & 563 & 77.52 & 198.32 \\ \hline FIRSTMM-DB & 41 & 1.3K & 3.0K \\ CiteSeer & 1 & 3.3K & 4.5K \\ Cora & 1 & 2.7K & 5.4K \\ \hline \hline \end{tabular} \end{table} Table 1. Graph statistics of datasets used in experiments.

**Generalization framework**. To enable subgraph counting on diverse target graphs, we initially train all neural methods with target-query pairs from the Synthetic dataset and standard queries of size \(3-5\). Neural methods are trained and assessed using the subgraph count objective. Notably, DeSCo only requires a single training phase across various datasets and tasks.

**Baselines**. We compare DeSCo with state-of-the-art subgraph counting GNNs, including LRP (Krishnan et al., 2017), DIAMNet (Zhou et al., 2017), and DMPNN (Zhou et al., 2017). We ensure a fair comparison by adopting optimal configurations for both the baselines and DeSCo. For the approximate heuristic counting method, we choose the state-of-the-art MOTIVO (Zhou et al., 2017), which employs color-based sampling with an efficient C++ implementation. For exact counting methods, we consider VF2 (Krishnan et al., 2017) and IMSM (Zhou et al., 2017). Refer to Appendix D.4 and F for the specific baseline configurations.

**Evaluation metric**. Our evaluation employs the mean square error (MSE) and mean absolute error (MAE) of the total subgraph count prediction. The MSE is normalized by dividing by the variance of the ground-truth counts. These metrics allow for a comprehensive assessment of DeSCo's performance across different query sizes and datasets.
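The normalization used throughout the result tables, written out as a short sketch:

```python
import numpy as np

def normalized_mse(pred, truth):
    """MSE divided by the variance of the ground-truth counts; a value of
    1.0 matches a trivial predictor that always outputs the mean count."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return np.mean((pred - truth) ** 2) / np.var(truth)

def mae(pred, truth):
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(truth, float))))
```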
### Neural Counting

**Subgraph counting.** Table 2 summarizes the normalized MSE and MAE for predicting the subgraph count of twenty-nine standard query graphs on diverse target graph datasets. Utilizing canonical partition, neighborhood counting, and gossip propagation, DeSCo demonstrates \(49.7\times\) and \(8.4\times\) improvements in normalized MSE and MAE over the best neural baseline on average, and \(17.5\times\) and \(4.1\times\) improvements in normalized MSE and MAE over the approximate heuristic method on average. These results underscore the reliability of DeSCo in real-world scenarios, even for smaller queries of size \(3-5\). Further analysis of the relative count error using the q-error metric is provided in Appendix G.1. Note that IMDB-BINARY contains graphs with high density and is thus challenging for all neural methods. Despite lower accuracy than heuristic methods on it, DeSCo still achieves robust and significant improvements over neural methods, while maintaining linear runtime efficiency.

**Position distribution.** DeSCo introduces accurate pattern position prediction for the first time. The pattern position prediction achieves a normalized MSE of \(3.8\times 10^{-3}\), as discussed in Appendix E.2.

### Scalability

**Setup**. Obtaining ground truth for large queries and targets via exact counting is extremely expensive and can take months, so we only test scalable queries and targets with the following setting in Section 5.3. This allows us to demonstrate that, with minimal pre-training, the model efficiently scales up and provides reliable predictions for larger queries and targets.

**Large queries.** We select two frequently appearing queries for each query size from 6 to 13 from ENZYMES. See Appendix D.2 for more details. All the models are pre-trained with standard queries. Except for DeSCo (zero-shot), all models are fine-tuned with larger queries on the synthetic dataset. DeSCo (zero-shot) showcases the generalization power of DeSCo on unseen queries. Figure 8 shows the distributions of the square error of each query-target pair, normalized with the variance of all ground-truth counts. Numeric results can be found in Appendix G.2.

**Large targets.** We also evaluate the models on large target graphs, as shown in Table 3. The maximum ground-truth count of standard queries reaches up to \(3.8\times 10^{6}\) and \(3.3\times 10^{7}\) on CiteSeer and Cora, respectively, making this a challenging task. DeSCo outperforms the other neural methods, as shown in Table 3. Notably, the count predictions of LRP diverge to infinity and are thus not included in the table.

### Generalization Ability

**Synthetic Dataset**. We leverage the Synthetic dataset to demonstrate DeSCo's strong generalization ability. We consider a vector of graph statistics, including the number of nodes, number of edges, average degree, clustering coefficient, shortest path length, diameter, and density. As shown in Figure 9 (a), real-world graphs have diverse statistics, especially those from different domains. Fortunately, despite the wide variation in real-world graphs, they often exhibit similarities in terms of small local substructures.
This observation is depicted in Figure 9 (b), where we observe similar characteristics in the canonical neighborhoods of graphs from different datasets. Furthermore, Figure 9 (c) demonstrates that our synthetic dataset, consisting of only 1827 graphs, successfully covers most real-world graphs. This finding reinforces the fact that DeSCo can be effectively trained on the synthetic dataset and exhibits strong generalization capabilities across a diverse set of real-world graphs.

**Generalization**. To evaluate the generalization performance, we employ the proposed framework and pre-train DeSCo on the Synthetic dataset, followed by direct testing on various real-world graph datasets. For comparison, we adhere to the regular scheme of training DeSCo on existing datasets. Table 4 presents the results, clearly indicating that DeSCo, pre-trained on the synthetic dataset, achieves remarkable accuracy and generalizability across different test datasets. In contrast, pre-training on existing datasets only yields limited generalization ability within the same domain.

\begin{table} \begin{tabular}{l c c c|c c c|c c c} \hline Test-Set & \multicolumn{3}{c|}{MUTAG} & \multicolumn{3}{c|}{MSRC-21} & \multicolumn{3}{c}{FIRSTMM-DB} \\ Query-Size & 3 & 4 & 5 & 3 & 4 & 5 & 3 & 4 & 5 \\ \hline \hline Existing & 6.5E-3 & 3.4E-3 & 8.7E-2 & 1.1E+1 & 1.9E+0 & 1.1E+0 & 1.1E-1 & 1.1E-1 & 1.6E-1 \\ Synthetic & **2.3E-3** & **8.4E-4** & **6.5E-3** & **2.5E-3** & **3.8E-3** & **8.7E-2** & **2.1E-3** & **3.6E-2** & **5.4E-2** \\ \hline \end{tabular} \end{table} Table 4. Normalized MSE performance with different training datasets. When pre-training on existing datasets, MSRC-21 uses MUTAG; CiteSeer uses Cora; FIRSTMM-DB uses CiteSeer.

\begin{table} \begin{tabular}{l c c c|c c c|c c c|c c c|c c c} \hline \hline Dataset & \multicolumn{3}{c|}{MUTAG} & \multicolumn{3}{c|}{COX2} & \multicolumn{3}{c|}{ENZYMES} & \multicolumn{3}{c|}{IMDB-BINARY} & \multicolumn{3}{c}{MSRC-21} \\ Query-Size & 3 & 4 & 5 & 3 & 4 & 5 & 3 & 4 & 5 & 3 & 4 & 5 & 3 & 4 & 5 \\ \hline \multicolumn{16}{c}{normalized MSE} \\ MOTIVO & 2.9E-1 & 6.7E-1 & 1.2E+0 & 1.6E-1 & 3.4E-1 & 5.9E-1 & 1.6E-1 & 1.9E-1 & 3.0E-1 & 2.7E-2 & **3.9E-2** & **5.0E-2** & 4.8E-2 & 7.2E-2 & 9.5E-2 \\ LRP & 1.5E-1 & 2.7E-1 & 3.5E-1 & 1.4E-1 & 2.9E-2 & 1.1E-1 & 8.5E-1 & 5.4E-1 & 6.2E-1 & inf & inf & inf & 2.4E+0 & 1.4E+0 & 1.1E+0 \\ DIAMNet & 4.1E-1 & 5.6E-1 & 4.7E-1 & 1.1E+0 & 7.8E-1 & 7.2E-1 & 1.4E+0 & 1.1E+0 & 1.0E+0 & 1.1E+0 & 1.0E+0 & 1.0E+0 & 2.7E+0 & 1.6E+0 & 1.3E+0 \\ DMPNN & 6.1E+2 & 6.6E+2 & 3.0E+2 & 2.6E+3 & 3.2E+3 & 3.0E+3 & 2.9E+3 & 1.4E+3 & 1.2E+3 & 2.1E+4 & 1.3E+2 & 1.4E+2 & 1.1E+4 & 1.3E+3 & 4.1E+2 \\ \hline DeSCo & **2.2E-3** & **7.5E-4** & **6.0E-3** & **6.6E-4** & **6.3E-4** & **4.9E-3** & **5.4E-3** & **5.9E-2** & **5.3E-2** & **8.5E-3** & 2.1E-1 & 4.5E-1 & **2.5E-3** & **3.8E-3** & **8.7E-2** \\ \hline \multicolumn{16}{c}{MAE} \\ MOTIVO & 4.9E-0 & 5.1E+0 & 3.3E+0 & 8.3E+0 & 9.4E+0 & 7.3E+0 & 1.7E+1 & 2.3E+1 & 2.6E+1 & 4.7E+1 & **1.6E+2** & **6.1E+2** & 4.1E+1 & 9.5E+1 & 1.7E+2 \\ LRP & 3.8E+0 & 5.1E+0 & 4.5E+0 & 9.5E+0 & 4.0E+0 & 6.3E+0 & 4.3E+1 & 4.0E+1 & 3.7E+1 & inf & inf & inf & 3.2E+2 & 4.6E+2 & 5.9E+2 \\ DIAMNet & 8.3E+0 & 7.9E+0 & 4.2E+0 & 3.0E+1 & 1.7E+1 & 1.2E+1 & 5.4E+1 & 5.1E+1 & 4.0E+1 & 2.9E+2 & 8.3E+2 & 2.6E+3 & 3.4E+2 & 4.9E+2 & 6.3E+2 \\ DMPNN & 6.8E+2 & 6.9E+2 & 2.4E+2 & 3.6E+3 & 4.3E+3 & 3.8E+3 & 4.8E+3 & 5.8E+3 & 6.0E+3 & 1.7E+5 & 2.2E+5 & 2.8E+5 & 3.4E+4 & 4.6E+4 & 5.7E+4 \\ \hline DeSCo & **5.0E-1** & **1.8E-1** & **2.9E-1** & **6.1E-1** & **4.4E-1** & **7.7E-1** & **3.6E+0** & **1.1E+1** & **9.9E+0** & **2.4E+1** & 3.0E+2 & 1.6E+3 & **1.0E+1** & **2.5E+1** & **1.3E+2** \\ \hline \hline \end{tabular} \end{table} Table 2. Normalized MSE and MAE performance of approximate heuristic and neural methods on subgraph counting of twenty-nine standard queries.

\begin{table} \begin{tabular}{l c c c|c c c} \hline \hline Dataset & \multicolumn{3}{c|}{CiteSeer} & \multicolumn{3}{c}{Cora} \\ Query-Size & 3 & 4 & 5 & 3 & 4 & 5 \\ \hline \multicolumn{7}{c}{normalized MSE} \\ DIAMNet & 2.0E+0 & 1.5E+0 & 1.2E+0 & 1.0E+10 & 3.2E+7 & 3.7E+4 \\ DMPNN & 9.5E+4 & 2.5E+2 & 6.8E+1 & 1.8E+5 & 1.1E+2 & 6.7E+1 \\ \hline DeSCo & **3.5E-5** & **9.7E-2** & **1.6E-1** & **4.2E-3** & **2.1E-1** & **6.3E-2** \\ \hline \multicolumn{7}{c}{MAE} \\ DIAMNet & 1.1E+4 & 6.0E+4 & 3.6E+0 & 2.1E+9 & 1.6E+9 & 8.3E+8 \\ DMPNN & 6.1E+6 & 7.6E+6 & 8.7E+6 & 1.8E+7 & 2.4E+7 & 3.0E+7 \\ \hline DeSCo & **6.0E+1** & **1.2E+4** & **1.1E+5** & **1.3E+3** & **7.3E+4** & **5.4E+5** \\ \hline \hline \end{tabular} \end{table} Table 3. Normalized MSE and MAE performance of neural methods on large targets with standard queries.

Figure 8. The cumulative distributions of the normalized square error of large queries (size up to 13) on three target datasets. The x-axis is clipped at 5. Given any square error tolerance bound (x-axis), DeSCo has the highest percentage of predictions that meet the bound (y-axis). DeSCo (zero-shot) generalizes to unseen queries with competitive performance over specifically trained baselines.

### Ablation Study

We explore the effectiveness of each component of DeSCo through an ablation study, removing one component at a time. Figure 10 shows the MAE results on three datasets. The geometric mean of the normalized MSE on eight real-world datasets is shown in Figure 1. The numeric results are shown in Appendix E.

**Ablation of canonical partition.** We remove the canonical partition of DeSCo and instead train it with the objective of the subgraph count on the entire target, mirroring the approach of the other neural baselines. The divide-and-conquer approach of canonical partition substantially reduces errors, allowing DeSCo to surpass the costly transformations used by other SOTA neural baselines (Figure 1).

**Ablation of subgraph-based heterogeneous message passing.** We use our proposed SHMP to improve the performance of GraphSAGE by transforming its standard message passing into heterogeneous message passing. We use the triangle as the subgraph to categorize heterogeneous edges, as shown in Figure 6. Figure 10(b) shows the accuracy improvement brought by SHMP to GraphSAGE. Table 5 shows the superiority over expressive GNNs, including GIN and ID-GNN.

**Ablation of gossip propagation.** The normalized MSE of the direct summation of neighborhood counts and of the summation after gossip propagation are compared to show the effectiveness of gossip propagation. With gossip propagation, the normalized MSE and MAE further decrease by factors of 1.8 and 1.4, respectively.

### Runtime Comparison

Figure 11 shows the runtime of each method under a four-minute time bound. For the exact methods, VF2 and IMSM, the runtime grows exponentially because of the #P-hard nature of the subgraph counting problem. For the approximate heuristic method MOTIVO, the exponential growth is primarily attributed to the coloring phase before sampling. In contrast, the neural methods, LRP and DeSCo, demonstrate polynomial scalability.
Unlike LRP, DeSCo does not need the heavy permutation of node features, so it achieves a further 5.3\(\times\) speedup. More runtime analysis is discussed in Appendix F.

## 6. Conclusion

We propose DeSCo, a neural-network-based pipeline for generalizable and scalable subgraph counting. With canonical partition, subgraph-based heterogeneous message passing, and gossip propagation, DeSCo accurately predicts counts for both large queries and targets. It demonstrates orders-of-magnitude improvements in mean square error and runtime efficiency. It additionally provides the important position distribution of patterns that previous works cannot.

Figure 11. The runtime comparison among exact, approximate heuristic, and neural methods, including DeSCo. All are tested on the ENZYMES dataset.

Figure 10. MAE performance with and without canonical partition, SHMP, and gossip propagation.

Figure 9. Visualization of statistics of diverse graph datasets. The embedding is obtained by projecting the vectors of graph statistics via t-SNE. (a) Each point represents a graph. (b) Each point represents a canonical neighborhood. (c) Canonical neighborhoods of the synthetic dataset cover most canonical neighborhoods of real-world graphs in terms of data distribution.

\begin{table} \begin{tabular}{l c c c|c c c|c c c} \hline \hline Dataset & \multicolumn{3}{c|}{MUTAG} & \multicolumn{3}{c|}{MSRC-21} & \multicolumn{3}{c}{Cora} \\ Query-Size & 3 & 4 & 5 & 3 & 4 & 5 & 3 & 4 & 5 \\ \hline GCN & inf & inf & inf & inf & inf & inf & inf & inf & inf \\ SAGE & 4.0E-1 & 9.6E-2 & 5.6E-1 & 1.9E-1 & 6.3E-1 & 3.4E-1 & 9.4E-0 & 5.9E-1 & 1.1E-0 \\ GIN & 8.4E-2 & 7.3E-2 & 1.6E-1 & 2.4E-0 & 1.4E-0 & 1.3E-0 & 1.9E-0 & 1.3E-0 & 1.1E-0 \\ ID-GNN & 3.3E-2 & 2.9E-2 & 1.5E-2 & 3.0E-0 & 1.7E-0 & 1.3E-0 & 2.1E-0 & 1.3E-0 & 1.1E-0 \\ \hline SHMP-SAGE & **1.8E-2** & **1.6E-2** & 1.7E-2 & 7.5E-3 & 6.0E-3 & 6.7E-2 & **3.3E-3** & **2.2E-1** & **7.0E-2** \\ \hline \hline \end{tabular} \end{table} Table 5. Normalized MSE performance with different GNN models for neighborhood counting.
2305.01626
Basic syntax from speech: Spontaneous concatenation in unsupervised deep neural networks
Computational models of syntax are predominantly text-based. Here we propose that the most basic syntactic operations can be modeled directly from raw speech in a fully unsupervised way. We focus on one of the most ubiquitous and elementary properties of syntax -- concatenation. We introduce spontaneous concatenation: a phenomenon where convolutional neural networks (CNNs) trained on acoustic recordings of individual words start generating outputs with two or even three words concatenated without ever accessing data with multiple words in the input. We replicate this finding in several independently trained models with different hyperparameters and training data. Additionally, networks trained on two words learn to embed words into novel unobserved word combinations. To our knowledge, this is a previously unreported property of CNNs trained in the ciwGAN/fiwGAN setting on raw speech and has implications both for our understanding of how these architectures learn as well as for modeling syntax and its evolution from raw acoustic inputs.
Gašper Beguš, Thomas Lu, Zili Wang
2023-05-02T17:38:21Z
http://arxiv.org/abs/2305.01626v2
# Basic syntax from speech: Spontaneous concatenation in unsupervised deep neural networks

###### Abstract

Computational models of syntax are predominantly text-based. Here we propose that basic syntax can be modeled directly from raw speech in a fully unsupervised way. We focus on one of the most ubiquitous and basic properties of syntax--concatenation. We introduce _spontaneous concatenation_: a phenomenon where convolutional neural networks (CNNs) trained on acoustic recordings of individual words start generating outputs with two or even three words concatenated without ever accessing data with multiple words in the input. Additionally, networks trained on two words learn to embed words into novel unobserved word combinations. To our knowledge, this is a previously unreported property of CNNs trained on raw speech in the Generative Adversarial Network setting and has implications both for our understanding of how these architectures learn as well as for modeling syntax and its evolution from raw acoustic inputs.

## 1 Introduction

Concatenation (or compounding/conjoining elements) is one of the most basic operations in human syntax. Many animal communication systems use simple symbols (call/sign\(\sim\)meaning pairs) that are not concatenated (termed "elementary signals" by Nowak and Komarova 2001). In human syntax, on the other hand, individual elements such as words can combine into "compound signals" (Nowak and Komarova, 2001) with compositional meaning. The evolution of concatenation (Progovac, 2015) as well as the existence of related operations that are presumably uniquely human and domain-specific (such as the proposed _Merge_; Chomsky 2014) have been the focus of debates in linguistics and cognitive science.

Models of human syntax are predominantly text-based, but hearing human learners acquire syntax from acoustic inputs. Modeling syntax from raw speech also has engineering applications: speech processing increasingly bypasses text (Lakhotia et al., 2021). Understanding the syntactic capabilities and limitations of spoken language models can inform architectural choices. Here, we model how compound signals or concatenated words can arise spontaneously in deep neural networks trained on raw speech in a fully unsupervised manner.

The sounds of human speech are a measurable, physical property, but they also encode abstract linguistic information such as syntactic, phonological, morphological, and semantic properties. For these reasons, we model basic syntactic dependencies from raw acoustic inputs with CNNs. We train CNNs in the Generative Adversarial Network (GAN) setting. CNNs and GANs are uniquely appropriate for modeling linguistic dependencies from raw speech without supervision. These models have been demonstrated to learn disentangled near-categorical representations of linguistically meaningful units at the phonetic, phonological, morphological, and lexical semantic levels from raw acoustic inputs (Begus, 2020, 2021a, 2021b).

To test whether CNNs can spontaneously concatenate, we conduct two sets of experiments. In the two _one-word_ experiments, we train the networks on single-word inputs. Because the networks are trained in the GAN setting, the Generator never accesses the training data directly, but generates innovative outputs. It has been shown that GANs innovate in highly interpretable ways that produce novel words or sound sequences (Begus, 2021a). Here we test whether these innovations can produce spontaneously concatenated words.
In the second experiment, we train the networks on one-word and two-word inputs (the _two-word_ experiment) and withhold a subset of two-word combinations. We then test whether words can be embedded into novel unobserved word combinations in the output. Such a design also mimics the one-word and two-word stages in language acquisition (Berk and Lillo-Martin, 2012).

## 2 Methods

### The model

We train the ciwGAN and modified fiwGAN models [1]. ciwGAN/fiwGAN models are information-theoretic extensions of GANs (based on InfoGAN, WaveGAN, and DCGAN; Chen et al. 2016; Donahue et al. 2019; Radford et al. 2015) designed to learn from audio inputs. The ciwGAN/fiwGAN architectures involve three networks (Figure 1): the Generator, which takes latent codes \(c\) (either one-hot vectors or binary codes) and a random latent space variable \(z\) (\(z\sim\mathcal{U}(-1,1)\)) and, through six upconvolutional layers, generates 2.048s of audio (32,768 samples). The audio is then fed to the Discriminator, which evaluates the realness of the output via the Wasserstein loss [1]. The unique aspect of the ciwGAN/fiwGAN architecture is a separate Q-network, which is trained to estimate the Generator's hidden code \(c\). During training, the Generator learns to generate data such that it increases the Discriminator's error rate and decreases the Q-network's error rate. In other words, the Generator needs to learn to encode unique information into its acoustic outputs, such that the Q-network is able to decode unique information from its generated sounds.

The training between the Generator and the Q-network mimics the production-perception loop in speech communication: after the training, the Generator learns to generate individual words given a latent code \(c\), and the Q-network learns to classify unobserved words with the same corresponding codes [1]. Since learning is completely unsupervised, the Generator could in principle encode any information about speech into its latent space, but the requirement to be maximally informative causes it to encode linguistically meaningful properties (both lexical and sublexical information; 1). Such a setting not only replicates the production-perception loop, but is also one of the few architectures featuring traces of communicative intent (between the Generator and the Q-network). Unlike in generative models trained on next sequence prediction or data replication, where no communicative intent exists, the training objective between the Generator and the Q-network is to increase the mutual information between the latent space and the data such that the Q-network can retrieve the information (latent code) encoded into the speech signal by the Generator.

ciwGAN and fiwGAN have been shown to be highly innovative in linguistically interpretable ways. For example, the Generator produces new words or new sound sequences that it never accesses during training. Crucially, the Generator never directly accesses the data: it learns by generating data from noise such that the Discriminator fails to distinguish real and generated data. In this respect, it mimics learning by imitation in human language (rather than replication, as is the case with variational autoencoders).

Figure 1: The architecture of ciwGAN used in the two-second one-word experiment.
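A PyTorch sketch of the Generator just described: a one-hot code \(c\) and uniform noise \(z\) pass through a dense projection and six transposed convolutions to produce 2.048s of audio (32,768 samples). The channel widths, kernel size, and 95-dimensional \(z\) follow the WaveGAN family but are assumptions here, not the exact ciwGAN/fiwGAN configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """ciwGAN-style Generator sketch: (z, c) -> 2.048 s waveform at 16 kHz."""

    def __init__(self, z_dim=95, c_dim=5):
        super().__init__()
        self.fc = nn.Linear(z_dim + c_dim, 1024 * 8)   # project, then reshape to (1024, 8)
        chans = [1024, 512, 256, 128, 64, 32, 1]
        self.ups = nn.ModuleList(
            # kernel 24, stride 4, padding 10 exactly quadruples the length:
            # 8 -> 32 -> 128 -> 512 -> 2048 -> 8192 -> 32768 samples
            nn.ConvTranspose1d(chans[i], chans[i + 1], kernel_size=24, stride=4, padding=10)
            for i in range(6)
        )

    def forward(self, z, c):
        x = self.fc(torch.cat([c, z], dim=1)).view(-1, 1024, 8)
        for i, up in enumerate(self.ups):
            x = up(x) if i == 5 else torch.relu(up(x))
        return torch.tanh(x)                           # waveform in [-1, 1]
```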
### Data

The training dataset consists of sliced lexical items from the TIMIT database of spoken English [1], such that each item is a single spoken word. In the first, _one-second one-word_ experiment, we use 5 lexical items: _oily_, _rag_, _suit_, _year_, and _water_. In this experiment, based on a pre-trained model, the 5-layer Generator outputs only 1.024s of audio, and the data is never left-padded (only right-padded), which controls for the effect of padding on concatenation. We replicate the results with another one-word experiment trained on _box_, _greasy_, _suit_, _under_, and _water_. Here, each item is randomly padded with silence to a length of 2s to produce 100 distinct data points for each class, for a total of 500 data points used in training (the _two-second one-word_ experiment).

In the third experiment (_two-second two-word_), we use 3 lexical items: _greasy_, _suit_, and _water_. 100 data points, each 2s in length, are generated in a process analogous to the first experiment, but for each combination of two items (i.e., _greasy_, _suit_, and _water_ alone, _greasy_ followed by _water_, _water_ followed by _greasy_, and so on). However, we withhold the combination _suit/greasy_, such that the two words do not appear together in the training set in any order, to produce a final training set of 700 data points.

For the one-word experiments, we use the ciwGAN model with one-hot encoding and five levels, such that each of the five unique words can be represented with a unique one-hot vector. In the two-word experiment, we use a modified fiwGAN (binary code). The binary code is limited to three bits, but each code can have up to two values of 1 (e.g., [1,0,0] and [1,1,0]). We also train an additional two-word two-second model with the same data but with a 6-level one-hot \(c\) in the ciwGAN architecture.

## 3 Results

To test whether the models can spontaneously concatenate, we train the networks for 8,011 (pre-trained one-second one-word), 8,956 (two-second one-word), 9,166 (two-second two-word fiwGAN), and 18,247 steps (two-second two-word ciwGAN) and analyze the generated data. We use the technique proposed in Begus (2020) to analyze the relationship between linguistically meaningful units and the latent space. According to this technique, setting individual latent space variables to values outside of the training range reveals the underlying linguistic value of each variable.

### One-word model

In the one-second one-word model, the Generator learns to associate each unique one-hot code with a unique lexical item in a fully unsupervised and unlabeled manner. The Generator's input during training is a one-hot vector with values 0 or 1. For example, the network learns to represent _suit_ with \([1,0,0,0,0]\). To test this observation, we set the one-hot vector to values outside the training range (e.g., \([5,0,0,0,0]\)), which overrides lower-level interactions in the latent space. This causes the Generator to output _suit_ at near-categorical levels (9 times out of 10), revealing the underlying value of the code. This further reveals that \([0,1,0,0,0]\) encodes _year_ (8 times out of 10) and \([0,0,1,0,0]\) encodes _water_ (10 times out of 10), since setting latent codes to values greater than 1 results in the model almost categorically outputting the associated word.

In addition to lexical learning, we observe a robust pattern: the networks trained on one-word inputs generate two-word outputs when the one-hot values are set to negative values outside of the training range. For example, when the latent code is set to \([0,-2,-2,-2,0]\), the Generator consistently outputs the two-word output _suit year_ (8 times out of 10).
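The probing technique just used — fixing the latent code \(c\), including values outside the training range, and generating a handful of outputs with fresh \(z\) — can be outlined as follows, reusing the Generator sketch above; the function and dimensions are illustrative assumptions.

```python
import torch

def sample_outputs(g, code, n=10, z_dim=95):
    """Generate n waveforms for a fixed latent code c (values may lie far
    outside the training range, e.g. negative), drawing fresh uniform z
    each time; the outputs are then transcribed and tallied by hand."""
    c = torch.tensor([code], dtype=torch.float32).repeat(n, 1)
    z = torch.rand(n, z_dim) * 2 - 1
    with torch.no_grad():
        return g(z, c)          # (n, 1, 32768) waveforms

# e.g. the code reported above to yield the two-word output "suit year":
# waves = sample_outputs(generator, [0, -2, -2, -2, 0])
```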
For \([-3,-2,-3,-2,2]\), the network consistently outputs _rag year_ (8 times out of 10; Figure 3). These concatenations occur despite the fact that the training data is always left-aligned, the Generator never accesses the data directly, and the Discriminator only sees single words. To show that this outcome is not an idiosyncratic property of one model and that it is indeed the negative values that encode concatenated outputs, we also analyze a separately trained two-second one-word model. The inputs to the Discriminator in this case are also single words only, but they are longer (2s) and randomly padded with silence on the left and right. While high positive values occasionally yield two-word outputs in this model, negative values are consistently associated with two-word outputs. For example, \([-50,-50,0,-50,0]\) (with extreme values) consistently encodes _box greasy_ (9 times out of 10), and \([-50,-50,-50,0,0]\) consistently encodes _greasy under_ (10 times out of 10). Positive values of the same codes produce completely unintelligible outputs (noise). In addition to several two-word concatenated outputs, the network even occasionally generates a three-word concatenated output _box under water_ for the latent code \(c\) with all negative values \([-3,-1,-1,-1,-1]\) (2 times out of 10). Figure 2 illustrates the three-word sequence.

Figure 2: The three-word concatenated output _box under water_. Independently, the second word (_under_) is somewhat difficult to analyze, but given only five training words, it is clearly the closest output to _under_.

### Two-word model In the two-word experiment, the models get one-word and two-word inputs (thus mimicking the two-word stage). The models are only trained on three words and their combinations, except for the withheld _suit/greasy_ combination. In the ciwGAN two-word model, the Generator consistently outputs the unobserved _greasy suit_ for \([15,0,0,0,0,0]\) (17 times out of 20), which suggests the network learned this unobserved combination as one of the possible sequences and encoded it with a one-hot value. For the code \([-1,4,4]\) (modified fiwGAN), the Generator occasionally outputs the three-word output _suit greasy water_ (1 time out of 20; Fig. 4), which contains the unseen _suit greasy_ pair. It appears that the negative values of the latent code \(c\) again encode unobserved novel combinations. We also observe repeated three-word outputs such as _water water suit_ as a consistent output of \([0,50,-50]\) (20 times out of 20). ### Repetition In addition to two-word concatenation and the embedding of words into novel combinations, we also observe outputs with repeated words in all our trained models. The training data never includes the same word repeated, yet the models frequently include repeated words. For example, the two-second one-word model consistently outputs _greasy greasy_ for \([0,0,-40,0,0]\) (7 times out of 10; Fig. 4). This is significant because repetition or reduplication is one of the most common processes in human language and language acquisition (Berent et al., 2016; Dolatian and Heinz, 2020). Additionally, full or total reduplication (where the entire word is repeated) is among the most computationally complex morphophonological processes (Dolatian and Heinz, 2020) because it represents unbound copying at the segmental level.
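For reference, the code space used in the two-word fiwGAN model is small enough to enumerate. Under the stated restriction (three bits, at most two values of 1) there are exactly seven codes, which happens to match the three single words plus the four attested two-word orders; whether the all-zero code is actually assigned to a class is not stated, so the snippet below is only a count, not the learned assignment.

```python
from itertools import product

# 3-bit binary codes restricted to at most two 1s (the fiwGAN setting above)
codes = [c for c in product([0, 1], repeat=3) if sum(c) <= 2]
print(codes)
# [(0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0)]
print(len(codes))  # 7
```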
It has been shown elsewhere that deep convolutional neural networks can learn partial reduplication (where only a part of the word is repeated) from raw speech and extend it to novel unobserved data (Begus, 2021). Our results suggest that total reduplication (or unbound copying; Dolatian and Heinz 2020) can arise spontaneously in these models. ### Why negative values? The Generator makes use of the unutilized space in the latent code to encode unobserved but linguistically interpretable outputs. During training, the Generator is trained on only two values of the latent code: 0 and 1. In the one-second one-word model, individual codes represent unique individual words, which suggests that lexical learning emerges in the positive values in these models. The network never accesses two-word inputs and it never gets negative values in the latent code \(c\) during training. It appears that the network is biased to concatenate and that it uses the unobserved latent code space to encode unobserved concatenated outputs. ## 4 Conclusion Our results suggest that the Generator network in the ciwGAN architecture not only learns to encode information that corresponds to lexical items in its audio outputs, but also spontaneously concatenates those lexical items into novel unobserved two-word or three-word sequences. The ability of unsupervised deep neural networks trained on raw speech to concatenate words into novel unobserved combinations has far-reaching consequences. It means that we can model basic syntactic properties directly from raw acoustic inputs of spoken language, which opens up the potential to model several other syntactic properties directly from speech with deep convolutional neural networks as well as with other architectures. From the perspective of the evolution of syntax, the results suggest that a deep neural network architecture with no language-specific properties can spontaneously begin generating concatenated signals from simple signals. The step from the one-word stage to the two-word stage is necessary both in the evolution of human language and during language acquisition. Our second experiment mimics the two-word stage. We argue that unsupervised deep learning models not only concatenate single words into multi-word outputs, but are also able to embed words into novel unobserved combinations once the model is trained on multiple-word inputs. Further research into the relationship between the basic syntactic properties that spontaneously emerge in these fully unsupervised models trained on raw speech and the structure of the latent space has the potential to yield insights for the study of syntactic theory, language acquisition, and language evolution. By evaluating these models on syntactic properties of spoken language, we should also get a better understanding of the computational limits of unsupervised CNNs. ### Limitations This paper models concatenation of acoustic lexical items. Syntax is substantially more complex than concatenation [1]. Exploration of other syntactic properties as well as of compositionality in these models is left for future work. We also train the network on a relatively small number of lexical items (5) and a small number of tokens (100). The small number of lexical items is representative of the earliest stages of language acquisition, when the number of lexical items is highly limited [1]. ## Ethics Statement Two models are trained for the purpose of this paper, and one model is pretrained. The three models were trained for 16 hrs on a single GPU (NVIDIA 1080ti).
We use the TIMIT database (Garofolo et al., 1993) for training. The number of parameters is given in Appendix A. We take the standard hyperparameters (from Donahue et al. 2019 and Begus 2021a). Because the outputs are salient and rarely ambiguous, all transcriptions are performed by the authors. Generated audio files and model checkpoints are available at the anonymous link: [https://osf.io/przuq/?view_only=9d19a26f0bb84a3ea4db8e6844b37985](https://osf.io/przuq/?view_only=9d19a26f0bb84a3ea4db8e6844b37985).
2302.11492
Hybrid integrated near UV lasers using the deep-UV Al2O3 platform
Hybrid integrated diode lasers have so far been realized using silicon, polymer, and silicon nitride (Si3N4) waveguide platforms for extending on-chip tunable light engines from the infrared throughout the visible range. Here we demonstrate the first hybrid integrated laser using the aluminum oxide (Al2O3) deep-UV capable waveguide platform. By permanently coupling low-loss Al2O3 frequency-tunable Vernier feedback circuits with GaN double-pass amplifiers in a hermetically sealed housing, we demonstrate the first extended cavity diode laser (ECDL) in the near UV. The laser shows a maximum fiber-coupled output power of 0.74 mW, corresponding to about 3.5 mW on chip, and tunes more than 4.4 nm in wavelength from 408.1 nm to 403.7 nm. Integrating stable, single-mode and tunable lasers into a deep-UV platform opens a new path for chip-integrated photonic applications.
C. A. A. Franken, W. A. P. M. Hendriks, L. V. Winkler, M. Dijkstra, A. R. do Nascimento Jr, A. van Rees, M. R. S. Mardani, R. Dekker, J. van Kerkhof, P. J. M. van der Slot, S. M. García-Blanco, K. -J. Boller
2023-02-22T17:00:09Z
http://arxiv.org/abs/2302.11492v1
# Hybrid integrated near UV lasers using the deep-UV Al\({}_{2}\)O\({}_{3}\) platform ###### Abstract Hybrid integrated diode lasers have so far been realized using silicon, polymer, and silicon nitride (Si\({}_{3}\)N\({}_{4}\)) waveguide platforms for extending on-chip tunable light engines from the infrared throughout the visible range. Here we demonstrate the first hybrid integrated laser using the aluminum oxide (Al\({}_{2}\)O\({}_{3}\)) deep-UV capable waveguide platform. By permanently coupling low-loss Al\({}_{2}\)O\({}_{3}\) frequency-tunable Vernier feedback circuits with GaN double-pass amplifiers in a hermetically sealed housing, we demonstrate the first extended cavity diode laser (ECDL) in the near UV. The laser shows a maximum fiber-coupled output power of 0.74 mW, corresponding to about 3.5 mW on chip, and tunes more than 4.4 nm in wavelength from 408.1 nm to 403.7 nm. Integrating stable, single-mode and tunable lasers into a deep-UV platform opens a new path for chip-integrated photonic applications. A wide range of emerging photonic applications requires chip-sized integrated laser sources in the ultraviolet with wide tunability and high coherence. Specifically, such lasers would unlock increased integration density and upscaling in integrated quantum photonics [1]-[3], UV spectroscopy [4], [5], UV biophotonics [6], and multiple UV optical clock transitions [7]. Heterogeneous and hybrid integrated lasers provide a wide tuning range and high coherence at longer wavelengths, while maintaining small size and high efficiency. In these devices, photonic integrated circuits (PICs) are used to spectrally filter and couple back light from III-V semiconductor optical amplifiers to impose tunability and improve coherence. In the infrared telecom range, such lasers now provide tunability across the entire gain bandwidth, and allow for coherence levels that are becoming comparable to advanced bulk laser systems. This evolution in performance became possible by introducing a change of paradigm in choosing the optical materials for PIC fabrication. The initially used semiconductor feedback circuits, intended to address the infrared telecom range, offered only small material bandgaps, such as in Si waveguides (1.1 eV) or InP waveguides (1.3 eV). The situation changed dramatically when low-loss waveguides with a much wider bandgap became available. An example of such a waveguide platform is silicon nitride (Si\({}_{3}\)N\({}_{4}\)) embedded in silicon oxide (SiO\({}_{2}\)) [8], [9], which provides a bandgap as high as 3.3 eV [10]. This reduced both linear and nonlinear material losses, enabled highly frequency selective waveguide circuits, and allowed laser coherence to be maximized with extended on-chip photon lifetimes. Recently, the wide bandgap also enabled the first realization of hybrid integrated lasers in the visible range with mW-level fiber-coupled output [11], [12]. Employing separate feedback chips for spectral narrowing and frequency pulling with Fabry-Perot lasers resulted in visible tunable output as well. Milliwatt-level fiber-coupled powers were obtained in the blue range (450-460 nm) and about 500 \(\mu\)W in the violet [13]; however, at the very end of the visible the power was restricted to the 100 and 10-\(\mu\)W regimes [14], [15], respectively. For reaching out to shorter wavelengths in the UV range, it has recently been discussed whether employing silicon nitride would still be appropriate [16].
However, a straightforward extension to the UV range using Si\({}_{3}\)N\({}_{4}\) feedback circuits, especially combined with 5-eV GaN diodes at 250 nm, can be excluded [17]. Moving into these ranges would increase the waveguide propagation losses strongly, as the photon energy approaches, matches and finally exceeds the 3.3-eV silicon nitride bandgap (corresponding to 380 nm) [18], [19]. Ultimately, if losses become too high, the feedback circuit loses its filter and feedback function, and control over tuning and coherence fails. Even below the bandgap energy, Si\({}_{3}\)N\({}_{4}\) has been shown to be susceptible to nonlinear excitation [20], [21], while UV irradiation creates defects that introduce absorption also at below-bandgap wavelengths [22]. Clearly, these considerations and observations require the introduction of a deep-UV capable platform to extend the complex functionalities of laser feedback circuits into the UV, such that the material bandgap remains much wider than the targeted UV wavelength. The most promising materials for this task are aluminum nitride (AlN) and aluminum oxide (Al\({}_{2}\)O\({}_{3}\)) because of their huge bandgaps of 6 eV (200 nm) and 7.6 eV (165 nm), respectively [23], [24]. As the Al\({}_{2}\)O\({}_{3}\) platform does not suffer from the anisotropy and high sidewall roughness due to AlN crystallinity, and provides a wider spectral coverage, it seems more suitable for deep-UV applications. Further indicators of suitability as a laser feedback platform are that simple low-loss waveguide components have already been fabricated [6], [23], [25], extending the established range of fabrication tools for Al\({}_{2}\)O\({}_{3}\)-based infrared waveguide lasers [26], [27]. Tight guiding in Al\({}_{2}\)O\({}_{3}\) high-Q ridge waveguide resonators has been demonstrated so far only in the near-UV, showing intrinsic quality factors of Q \(>\) 470,000 [23]. Comparably low straight propagation losses were found in the ultraviolet as well (3 dB/cm at 360 nm [6]). These features underline the potential of Al\({}_{2}\)O\({}_{3}\) waveguides for spectral control of chip-integrated UV lasers; however, in this wavelength range there are additional challenges. Due to the high photon energy, losses and damage can occur at all UV-exposed surfaces and interfaces, such as the facets of semiconductor optical amplifiers, i.e., GaN. The reasons are photo-induced chemical processes with atmospheric moisture and gases that form surface defects or absorptive layers [28], [29]. This degradation needs to be counteracted, such as by hermetic sealing for shielding from ambient humidity. In the infrared, hybrid integration with transparent epoxy for permanent chip-to-chip bonding, heterogeneous integration, and packaging solely aim at long-term frequency stability, robustness and electro-optical connectivity. With UV lasers, however, chip-to-chip bonding, chip-to-fiber bonding, and photonic packaging [30] have to be UV compatible as well. Similarly, photonic wirebonding [31] and heterogeneous integration via direct bonding [32] or transfer printing [33] would have to involve UV-suitable materials, such as AlN as buffer layers [34]. Otherwise, stable long-term laser operation cannot be achieved. Here we present a chip-scale hybrid integrated laser, where for the first time the deep-UV capable Al\({}_{2}\)O\({}_{3}\) platform is employed (see Fig. 1) [35].
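The bandgap-to-wavelength figures quoted above follow from the standard photon-energy conversion \(\lambda\,[\mathrm{nm}]\approx 1239.84/E\,[\mathrm{eV}]\); a quick check (the values quoted in the text are rounded):

```python
def bandgap_to_cutoff_nm(e_gap_ev: float) -> float:
    """Wavelength corresponding to a photon energy: lambda [nm] = 1239.84 / E [eV]."""
    return 1239.84 / e_gap_ev

for material, e_gap in [("Si3N4", 3.3), ("AlN", 6.0), ("Al2O3", 7.6)]:
    print(material, round(bandgap_to_cutoff_nm(e_gap)), "nm")
# Si3N4 376 nm, AlN 207 nm, Al2O3 163 nm -- matching the ~380/200/165 nm quoted above
```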
Using CMOS compatible technology, Al\({}_{2}\)O\({}_{3}\) waveguides embedded in a SiO\({}_{2}\) cladding are fabricated for tightly guided single-mode propagation, minimized side-wall scattering, and minimized bend radiation loss. For obtaining single-wavelength laser oscillation, two sequentially coupled Al\({}_{2}\)O\({}_{3}\) microring resonators are fabricated and connected in Vernier configuration [36], which provides frequency selective feedback to a near UV InGaN double-pass semiconductor amplifier. Thin-film micro-heaters enable thermo-optical laser tuning and provide tunable output coupling. Hybrid integration serves for permanent optical coupling of amplifier, feedback chip and output fibers. The packaged chip assembly is based on hermetic sealing to yield protection against degradation for long-term operation. The laser generates a fiber-coupled output power of 0.74 mW, which corresponds to an on-chip power of 3.5 mW, and is tunable in wavelength over more than 4.4 nm from 408.1 nm to 403.7 nm.

Figure 1: _a) Schematic showing the functional design of the laser, with the InGaN amplifier edge-coupled to the \(Al_{2}O_{3}\) feedback chip. b) Cross-section of the \(Al_{2}O_{3}\) waveguide embedded in \(SiO_{2}\) showing the calculated intensity profile of the optical mode (at 405 nm), with a horizontal and vertical mode-field diameter (MFD) of 0.28 \(\mu\)m and 0.14 \(\mu\)m respectively. c) Transmission function (solid line) for light resonating with both rings in the Vernier filter. A smoothed curve for the measured amplified spontaneous emission (ASE) of the amplifier driven at 55 mA (dashed line) shows the Vernier filter can select a single wavelength in the amplifier emission bandwidth. d) Light at 405 nm injected in an 11.3-cm spiral structure. A series of spirals and paths of different length were used to measure the straight propagation loss of \(\alpha\) = 2.8 \(\pm\) 0.3 dB/cm. e) Top-down microscopic picture of the hybrid laser, showing the waveguides, heaters, wirebonds and location of the InGaN amplifier and fiber array in detail. f) The hybrid integrated and packaged laser in a hermetically sealed nitrogen environment. Here, the laser is driven with a pump current above threshold. The white substance on the bottom of the glass lid is a getter material which absorbs any remaining volatile gases and moisture in the package._

### Waveguide cross-section and fabrication We chose to make use of Al\({}_{2}\)O\({}_{3}\) cores embedded in a SiO\({}_{2}\) cladding to provide a maximum electronic bandgap (7.3 eV [23] and 9.3 eV [37], respectively). To identify a suitable core cross-section for fabrication, preceding infrared and UV measurements were consulted and used for scaling. With 400 nm thick and 2000 nm wide straight Al\({}_{2}\)O\({}_{3}\) waveguides we measured losses of 0.05 dB/cm at a wavelength of 1550 nm. In the UV at 377 nm, for a 170 nm thick Al\({}_{2}\)O\({}_{3}\) film deposited with a reactive sputtering process and followed by chemical mechanical polishing, we measured slab propagation losses of 0.6 \(\pm\) 0.3 dB/cm [38]. Choosing a conservative approach for this first-time Al\({}_{2}\)O\({}_{3}\)-based hybrid integrated diode laser, we decided for a near UV target wavelength of 405 nm. In that range, suitable double-pass gallium nitride amplifiers, as well as equipment for laser output characterization, were available. Scaling the measured UV and IR losses to the target wavelength range predicts losses between 1 and 4 dB/cm for 400 nm thick waveguides.
Selecting a thinner waveguide would reduce the propagation loss via reduced sidewall scattering while maintaining tight guiding for UV wavelengths. Choosing a proper waveguide cross-section is of central importance for designing low-loss and fabrication-tolerant waveguide components. This applies to curved waveguides in small ring resonators for spectral filtering with wide free spectral range, where the cross-section and ring radius determine the bending loss and thus the filter \(Q\)-factor. The design of directional couplers also depends on the cross-section, as the coupling strength is influenced by the coupler length and by the fabrication tolerance of the gap between the waveguide cores. The cross-section to be selected also needs to restrict propagation to a single transverse mode with a polarization matching that of the amplifier (TE\({}_{00}\)), while efficient coupling to the diode amplifier and output fibers should be enabled by mode matching through inverse tapering at the selected waveguide thickness. Using measured losses, scaling and numerical simulations (see Methods), a waveguide cross-section of a 400 nm wide by 100 nm thick Al\({}_{2}\)O\({}_{3}\) core embedded in a SiO\({}_{2}\) cladding is chosen (see Fig. 1b). The fabrication employs common CMOS compatible techniques on a wafer scale (see Methods for details). With a mask design for a 10-cm wafer, 127 chips are fabricated, carrying an extensive range of test structures, from waveguide spirals (Fig. 1d) for loss measurements to Mach-Zehnder interferometers (MZIs) for testing the heater functionality. Various Vernier feedback circuits as shown in Fig. 1a are fabricated with variations in ring radii, tapers and coupling coefficients between ring resonators and bus waveguides. Characterizing the individual test structures and considering the measured loss at the nominal wavelength allows us to fabricate and select suitable feedback circuits for hybrid integration with amplifiers, while requiring only a single wafer run. ### Design of Al\({}_{2}\)O\({}_{3}\) feedback chip Employing Al\({}_{2}\)O\({}_{3}\) waveguides for novel near UV laser feedback circuits requires adapting the design of crucial circuit functions. One function is providing tunable optical filtering to the amplifier, with the goal to impose wavelength-tunable laser output with high side-mode suppression. Tuning reliably across the entire gain bandwidth is easily facilitated if the free spectral range of the Vernier filter is wider than or equal to the gain bandwidth. To obtain single-frequency output with high side-mode suppression, the side peaks of the Vernier filter need to be sufficiently suppressed and the main filter peak should be narrowband. A main condition always remains that the feedback of the Al\({}_{2}\)O\({}_{3}\) circuit at the main filter peak is high enough for reaching laser threshold with the available small-signal gain. Fulfilling all of these conditions requires a functional circuit design where all circuit parameters are properly set. Such parameters are the free spectral ranges of the micro-ring resonators used in the Vernier filter, the power coupling coefficients of the ring resonators (\(\kappa^{2}\)), the mode matching between the amplifier and feedback chips, and the variable strength of the optical output coupling from the laser resonator.
The amplifier's gain bandwidth determines the Vernier filter's free spectral range and the overall allowed feedback loss, while the amplifier's mode profile determines what minimum loss can be achieved in edge coupling and hybrid integration of the chips. Measuring the propagation loss of fabricated test structures determines the optimal coupling strength to the ring resonators that would maximize the peak filter transmission. The selected amplifier is an InGaN/GaN superluminescent diode (SLED) with a center wavelength of 405 nm. Experimental data for the amplifiers are available from the manufacturer (Exalos AG). To work as a double-pass amplifier, the diode is coated highly reflective (vs. air \(>\)95%) on its back facet and anti-reflective (vs. air \(<\)0.1%) on the facet facing the Al\({}_{2}\)O\({}_{3}\) chip. When operated without feedback and driven with a current of 78 mA, the output is 10 mW of amplified spontaneous emission (ASE) with a 3-dB emission bandwidth of approximately 3.4 nm. The manufacturer-specified mode-field diameter is 1.87\(\times\)0.6 \(\mu\)m. For quantifying the propagation loss in the fabricated waveguides, we perform single-pass transmission measurements, using a set of waveguide spirals with weak curvature and different lengths (see Methods). The measurements yield a propagation loss of \(\alpha\) = 2.8 \(\pm\) 0.3 dB/cm at a wavelength of 405 nm for the cross-section depicted in Fig. 1b. The ASE emission bandwidth was used to set the radii of the Vernier resonators (radius R\({}_{1}\) = 150 \(\mu\)m and radius R\({}_{2}\) = 153 \(\mu\)m) to obtain a Vernier free spectral range of 4.84 nm (see Fig. 1c). As a direct transmission measurement for fabricated Vernier filters is difficult, we instead measure the fabricated cross-sections and tunnel gaps of the couplers using a scanning electron microscope (SEM) and use the retrieved geometry with index data to simulate the strength of the couplers and the Q-factors of the ring resonators. For the Vernier filter that was selected for most of the laser characterization experiments, we obtain a power coupling of \(\kappa^{2}\) = 5.9%. This Vernier filter would result in the narrowest spectral filtering, while still providing sufficient feedback to the amplifier to bring the laser above threshold. Using both parameters, \(\alpha\) and \(\kappa\), we conclude that both resonators are near-critically coupled, with intrinsic and loaded Q-factors of 427,000 and 143,000, respectively, yielding a feedback filter resolution of approximately 3.3 GHz (full width at half maximum of the Vernier resonance). The transmission function of this Vernier filter is plotted in Fig. 1c. It can be seen that the individual resonators, with a free spectral range of about 97 pm, open a Vernier free spectral range that is much wider, about 4.84 nm. This ensures that the filter passes only a single wavelength within the amplifier emission spectrum, qualitatively represented by the dashed outline of a measured ASE spectrum in Fig. 1c. It can also be seen that the next-adjacent Vernier filter side mode (VFSM, see inset in Fig. 1c) is well suppressed (3.3 dB below the main peak), to maximize the side-mode suppression during laser operation. For light passing resonantly through the Vernier filter, we calculate an optical roundtrip length of the laser cavity of 55.9 mm, which means the nearest cavity mode is at 5.36 GHz from the central lasing mode.
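These quoted numbers are mutually consistent, as a short back-of-the-envelope script shows; the group index \(n_{g}\approx 1.8\) is an assumption for illustration, not a value given in the paper.

```python
import math

c = 299_792_458.0   # m/s
lam = 405e-9        # m
n_g = 1.8           # assumed group index (not stated in the paper)

def ring_fsr(radius_m):
    """Free spectral range (in metres of wavelength) of a ring resonator."""
    return lam**2 / (n_g * 2 * math.pi * radius_m)

fsr1, fsr2 = ring_fsr(150e-6), ring_fsr(153e-6)
vernier_fsr = fsr1 * fsr2 / abs(fsr1 - fsr2)
cavity_mode_spacing = c / 55.9e-3   # 55.9 mm optical roundtrip length

print(f"ring FSR ~ {fsr1*1e12:.0f} pm")                            # ~97 pm
print(f"Vernier FSR ~ {vernier_fsr*1e9:.2f} nm")                   # ~4.8 nm
print(f"cavity mode spacing ~ {cavity_mode_spacing/1e9:.2f} GHz")  # ~5.36 GHz
```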
For optimum coupling between the modes in the InGaN amplifier chip and the Al\({}_{2}\)O\({}_{3}\) feedback chip, lateral inverse tapers are designed near the facets of the feedback chip. Limited by the elliptical shape of the amplifier output mode, the maximum theoretical coupling between both chip modes is calculated to be 91%. Fresnel reflections from the interface of the chips into the amplifier are suppressed by letting the waveguides form a slight, index-matched angle with regard to the facet normal. For efficient coupling to output fibers, inverse tapers were designed at the output facet of the feedback chip. Other functionalities included in the design of the feedback chips are an adjustable phase section (PS), phase shifters for fine-tuning the optical length of the ring resonators (R\({}_{1}\) and R\({}_{2}\)), an adjustable Mach-Zehnder interferometer (MZI) outcoupler (OC) formed by two 50% directional couplers, and a phase section behind the outcoupler. Tuning of these circuit components is realized via the thermo-optic effect by placing thin-film resistive microheaters on the chip (as indicated in Fig. 1a). Tuning the phase section allows for spectrally aligning a laser cavity mode with a resonance of the Vernier filter. Tuning of the ring resonators (R\({}_{1}\) and R\({}_{2}\)) selects a particular feedback wavelength. The main purpose of variable outcoupling is to enable maximum output power at each drive current of the laser diode and each setting of the Vernier filter heaters. Varying the coupling is achieved by applying a current to one of the heaters at the arms of the MZI in the outcoupler. Characterization of test MZI structures showed that the outcoupling of the laser can be adjusted between approximately 10 and 90%. To reduce ASE noise in the output, the output is taken from the filtered light from the Vernier loop mirror that is directed back to the laser amplifier. The phase section behind the outcoupler may be used to prevent injection locking of the laser frequency via unwanted reflections, such as from chip and fiber facets. ### Hybrid integration and packaging Characterization of separate waveguide circuits can be carried out with manually aligned stages, such as in waveguide loss measurements using free-space fiber-to-chip coupling. Manual alignment of edge-coupled feedback chips for spectral narrowing via self-injection locking can reveal ultra-low, Hertz-level intrinsic linewidths; however, vibration and drift of alignment stages typically limit stability to time scales shorter than milliseconds [39]. In contrast, long-term frequency and power stability, and inherent robustness to external perturbations, require mutual bonding of chips, hermetic sealing and temperature stabilization [30]. In infrared lasers, the laser cavity mode is allowed to propagate through bonding materials, and hermetic sealing is not essential. This is demonstrated with hybrid integration [40] and heterogeneous integration [16]. However, at the much higher UV photon energies, bonding materials need to be kept out of the laser mode, and hermetic sealing is instrumental for long-term stable output. In total, three lasers were hybrid integrated and packaged with hermetic sealing in a standard 14-pin butterfly housing with electrical wirebonding and with feed-throughs for the output fibers and electrical signals (see Methods). The package contains a thermistor and a Peltier element for temperature control of the laser. One packaged variant is sealed in a nitrogen atmosphere and equipped with a glass lid.
This allows for visual inspection of the circuit in operation, as seen in Fig. 1e, 1f and 2a. For the other variants, improved hermetic sealing is realized by closing the butterfly housing with a seam-welded metal lid. In addition to the improved sealing, this variant uses an argon atmosphere, which might give an extended lifetime over a nitrogen buffer gas [29]. Feedback chips with two different coupling strengths between bus waveguides and microring resonators were packaged (\(\kappa^{2}\) = 5.9% and 2.9%). ### Laser characterization When the InGaN-Al\({}_{2}\)O\({}_{3}\) laser is successfully brought into operation, the intracavity waveguides in the circuit light up brightly, see Fig. 1f and 2a. To record the fiber-coupled output power, for each measurement we make small readjustments of the heaters on the micro-resonators (R\({}_{1}\), R\({}_{2}\)), the phase section (PS) and the MZI-based outcoupler (OC). This optimization process is iterative and usually requires one to three passes over all parameters to find the maximum output power while maintaining single-wavelength operation with high sidemode suppression. All heater adjustments can be carried out automatically via multichannel computer control of the heaters. The laser temperature control, a PID control loop using the thermistor and Peltier element in the package, is set to 20 \({}^{\circ}\)C for all measurements. The fiber-coupled output power is shown in Fig. 2b; we find a laser threshold current of 55.0 mA, above which the output increases linearly with pump current. The fiber-coupled optical power reaches a value of 0.74 \(\pm\) 0.04 mW for the maximum drive current that we apply (90.0 mA). Correcting for losses at the chip-to-fiber coupling interface (estimated at 6.7 \(\pm\) 2.1 dB, see Methods), the laser generates about 3.5 mW (\(\pm\) 50%) on-chip power in its output waveguide. At maximum output power the monitored wavelength is 405.5 nm. We note that the fiber-coupled, near UV output levels are up to two orders of magnitude higher than what has been reported recently with self-injection of stage-coupled Fabry-Perot lasers to silicon nitride feedback chips [15]. Such a large difference in output can have various causes; however, we expect that the main contributors to the high output power are permanent bonding and much better mode matching between amplifier and feedback chip. Wide-range near UV output spectra are recorded with an optical spectrum analyzer (OSA). Typical emission spectra as in Fig. 2c display single-wavelength operation with high sidemode suppression ratios (SMSR) of about 42 dB and 43 dB, which is two orders of magnitude more than previously reported around this wavelength range [14], [15]. These recordings indicate a full width at half maximum laser linewidth below the resolution limit of the optical spectrum analyzer (about 90 GHz). We perform coarse tuning of the laser by varying the heater power for one of the microring resonators (R\({}_{1}\)). Figure 2d shows superimposed spectra of the laser at 14 coarse steps along its entire tuning range, with the inset showing the tunability of the center wavelength from 408.1 nm to 403.7 nm. The data shows a wavelength coverage of 4.4 nm, not exceeding the designed free spectral range of the Vernier filter (4.84 nm, Fig. 1c). To verify the promise of long-term passive stability inherent to hybrid integration of lasers [41], we characterized the passive wavelength stability of the laser.
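Above threshold, the measured power-current relation is the usual linear diode-laser law; a tiny sketch using the numbers from Fig. 2b (threshold 55 mA, slope 20.7 \(\mu\)W/mA, 6.7 dB fiber-to-chip conversion) reproduces the quoted output levels.

```python
def fiber_power_mw(i_ma, i_th_ma=55.0, slope_uw_per_ma=20.7):
    """Fiber-coupled power above threshold: P = slope * (I - I_th)."""
    return max(0.0, (i_ma - i_th_ma) * slope_uw_per_ma / 1000.0)

p_fiber = fiber_power_mw(90.0)
print(f"{p_fiber:.2f} mW fiber-coupled")           # ~0.72 mW, cf. measured 0.74 +/- 0.04 mW
print(f"{p_fiber * 10**(6.7/10):.1f} mW on chip")  # ~3.4 mW, cf. quoted ~3.5 mW
```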
Besides residual thermal drift, an important question is whether the laser exhibits longitudinal mode hops. Such mode hops can impose limitations on electronic frequency locking to reference cavities or to absolute reference absorption lines. A mode hop changes the laser frequency by at least one free spectral range of the laser cavity, which is about 5.36 GHz for this laser. Immunity from environmental perturbations can thus be judged by comparing the laser's longer-term frequency drift with one cavity free spectral range. To measure the laser stability, the wavelength was recorded over time using a high-resolution wavelength meter. The result of such a measurement, at an average sampling time of 12 ms, is shown in Fig. 2e. The figure shows that the laser was mode-hop-free for at least 84 minutes, which was also the entire duration of the recording, while drifting less than 1.6 GHz. In the final 24 minutes the laser had reached thermal equilibrium with the environment, here a standard optical laboratory, and showed a drift of less than 30 MHz. Recording longer time traces does not appear problematic; however, the degradation times (inherent to all GaN-type diode lasers) are not known and presently not subject to systematic investigations.

Figure 2: _a) Microscopic picture of the \(\mathrm{Al_{2}O_{3}}\)-based hybrid laser in operation, showing scattered light from intracavity waveguides. Some parts of the waveguides are less bright due to the metal heater layers obscuring parts of the waveguide circuit. b) Fiber-coupled output power versus drive current, showing a laser threshold and slope efficiency of 55 mA and 20.7 \(\pm\) 1 \(\mu\)W/mA, respectively. The fiber-coupled output power reaches a maximum of 0.74 \(\pm\) 0.04 mW; correcting for fiber-to-chip coupling losses (6.7 \(\pm\) 2.1 dB, see Methods) this corresponds to about 3.5 mW (\(\pm\) 50%) on-chip power in the output waveguide. c) Optical spectrum showing single-wavelength operation with sidemode suppression ratios (SMSR) of 43 dB (\(\kappa^{2}\) = 5.9%) and 42 dB (\(\kappa^{2}\) = 2.9%). d) Superimposed laser spectra shown for various heater settings of ring \(R_{1}\); here, the fiber-coupled output power varies between 0.31 and 0.52 mW. The laser tunes in wavelength more than 4.4 nm from 408.1 nm to 403.7 nm, shown by the center wavelength versus heating power plotted in the inset. Each tuning step is followed by automated optimization of the output power. The slope of the linear fit is -21.5 \(\pm\) 0.5 pm/mW. e) High-resolution wavelength meter recording of the laser wavelength over time. Recording begins right after start-up, when the laser is still far from thermal equilibrium. After an initial drift over 1.5 GHz, the frequency becomes constant with small residual fluctuations in the order of a few tens of MHz. No mode-hops are observed and the drift remains well below the cavity FSR. We attribute the high stability to the hybrid integration of the chips and hermetic sealing. f) RF beat signal from a delayed self-heterodyne set-up for our laser, measured at 405 nm wavelength using a 7.9 m fiber delay (purple). The noise floor is recorded by blocking the photodiode from any input (blue). From the fringes in the RF beat signal we can extract a laser linewidth in the order of 25 MHz or lower._

In order to determine the laser linewidth with higher resolution than 90 GHz (from the optical spectrum analyzer measurements), we use a delayed self-heterodyne experiment with a 7.9 meter long optical fiber as delay line (see Methods).
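The delay line sets both the fringe period and the resolution limit of this measurement; a quick check (the fiber group index \(n_{g}\approx 1.47\) is an assumed typical value, not quoted in the paper):

```python
c = 299_792_458.0   # m/s
L = 7.9             # m of delay fiber
n_g = 1.47          # assumed fiber group index

tau = n_g * L / c
print(f"delay ~ {tau*1e9:.0f} ns")          # ~39 ns, cf. the ~40 ns fringe period in Fig. 2f
print(f"resolution ~ {1/tau/1e6:.0f} MHz")  # ~26 MHz -> the '25 MHz or lower' bound
```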
A typical RF spectrum is displayed in Fig. 2f (purple trace). The presence of fringes proves that the laser coherence time is comparable to or longer than the delay time provided by the fiber. However, the fringe contrast in the line wings is strongly masked by the noise floor (blue trace). This prevents an evaluation of Lorentzian line shape components [42] and thus requires that the signal-to-noise ratio be increased in further experiments. Nevertheless, the fringe period can be extracted reliably as 40 ns, which corresponds to the 7.9 m fiber delay. This gives a lower bound for the coherence time, i.e., the full width of the laser spectrum is in the order of 25 MHz or lower. ### Summary In this work we have shown the design, fabrication and operation of the first near UV hybrid integrated diode laser using the deep-UV capable Al\({}_{2}\)O\({}_{3}\) material platform. The fabricated Al\({}_{2}\)O\({}_{3}\) chips show a propagation loss of 2.8 \(\pm\) 0.3 dB/cm. For the first time, a 405 nm InGaN amplifier was hybrid integrated with an Al\({}_{2}\)O\({}_{3}\) waveguide circuit, ensuring UV-compatible bonding between both chips in a hermetically sealed environment. This approach maximizes the frequency stability and durability of the device. The laser shows a maximum output power of 0.74 mW, on-chip about 3.5 mW, and tunes more than 4.4 nm from 408.1 nm to 403.7 nm. The laser shows high passive frequency stability, operating without mode hops over more than an hour and settling to residual frequency deviations of a few tens of MHz per 20 minutes. The delayed self-heterodyne measurement indicates that the full laser linewidth is in the order of the inverse delay time (25 MHz) or below; presently, a low signal-to-noise ratio in heterodyne detection precludes a closer quantification. Continuing this work, we aim at improved linewidth measurements via higher signal-to-noise detection. Our experiments show that the Al\({}_{2}\)O\({}_{3}\) platform, newly engaged here for near UV, complex and tunable feedback circuits, opens the path for chip-integrated UV applications based on fully functional hybrid integrated diode lasers. ## Methods ### Design and fabrication Simulations of optical modes (2D finite element methods, Lumerical), sidewall scattering integrals, and mode overlap integrals were performed to select the core cross-section shown in Fig. 1b. With this cross-section, the expected propagation loss from scaling is \(<\)4 dB/cm, and calculations show that tight bend radii down to 80 \(\mu\)m result in negligible bend radiation loss. The named cross-section was used to design directional couplers with nominal values \(\kappa^{2}\) of 1, 2, 5, 10, 20 and 50%. Manufacturer specifications for the mode-field diameters of the diode amplifier and UV fibers were consulted to design linear inverse tapers. Fabrication of the Al\({}_{2}\)O\({}_{3}\) waveguides, carried out by the Integrated Optical Systems group in the MESA\({}^{+}\) cleanroom (University of Twente), starts by depositing a 110 nm Al\({}_{2}\)O\({}_{3}\) layer, using an optimized RF reactive sputter deposition process [43], onto an 8 \(\mu\)m thick thermally oxidized 10 cm diameter silicon wafer. A chemical mechanical polishing step is used to reduce the surface roughness of the deposited Al\({}_{2}\)O\({}_{3}\) layer, reducing the layer thickness to the targeted 100 nm. Next, the substrate is coated with negative e-beam resist (AR-N 7520). Using a Raith EBPG5150 e-beam lithography system, the waveguide layer is written in the resist.
After the e-beam write, the pattern is developed using AR-300-47 developer. The resulting patterns are etched into the Al\({}_{2}\)O\({}_{3}\) using an Oxford PlasmaPro 100 Cobra (reactive ion etching). Afterwards the resist is stripped by oxygen plasma using a TEPLA 300. The resulting waveguides are fully buried by an 8 \(\mu\)m thick SiO\({}_{2}\) cladding. To implement thermo-optic tuning at various locations on the feedback chips, resistive heaters are fabricated by deposition and structuring of a 10/10 nm Cr/Pt layer topped with a 300 nm Au layer. All three layers are patterned using a lift-off process. To create the heaters, the gold layer is etched away, leaving only the two thin, highly resistive Cr and Pt layers. For the Al\({}_{2}\)O\({}_{3}\) thin film, the material index was measured using an ellipsometer (Woollam M-2000UI) in the wavelength range from 600 to 1600 nm. Cauchy's equation, n(\(\lambda\)) = A + B/\(\lambda^{2}\) with \(\lambda\) in \(\mu\)m, was fitted to the ellipsometer data with A = 1.6848 \(\pm\) 0.009 and B = 0.0119 \(\pm\) 0.002. The function was extrapolated to obtain the material index in the near UV spectral range. ### Characterization of Al\({}_{2}\)O\({}_{3}\) feedback chip Several chips were fabricated with test structures to characterize individual building blocks of the waveguide feedback circuit. First, to characterize the straight propagation loss, several spirals and straight waveguides were investigated. These structures used a minimum bend radius of 150 \(\mu\)m, which ensures negligible bend radiation loss while maintaining a small-footprint circuit. Using a 405 nm Fabry-Perot diode laser (QPhotonics), light was fiber-coupled (PM-S405-XP) into the chip, passing through a spiral or path (Fig. 1d). At the output side of the chip the transmitted power was measured (Thorlabs, S150C photodiode). The transmittance (P\({}_{\mathrm{out}}\)/P\({}_{\mathrm{in}}\)) as a function of the on-chip propagation length is shown in Fig. 3 and yields an average propagation loss of 2.8 \(\pm\) 0.3 dB/cm and a fiber-to-chip coupling loss of 10.2 \(\pm\) 0.8 dB/facet. To estimate the loss of the inverse tapers, test structures were investigated where light propagates sequentially through a number of adiabatic tapers lined up as a waveguide; these showed a taper loss below the measurement accuracy. To determine the phase tuning of light using the fabricated heaters, another test structure with an MZI and a thermal heater on each arm revealed that about 350 mW of electrical power is needed for a phase shift of 2\(\pi\) in the guided light. Scanning electron microscope (SEM) images allowed verification of the cross-section, the gaps at the directional couplers and the tapers on various parts of the photonic chip. These cross-sectional images also verified that the waveguide sidewall is at the designed right angle with the base (approximately 90\({}^{\circ}\)). ### Hybrid integration and packaging The hybrid integration and packaging process was carried out in conjunction with PHIX B.V. and with support from Lionix International B.V. First, the individual Al\({}_{2}\)O\({}_{3}\) chips were diced from the wafer and polished. The right side of the feedback chip (Fig. 1d) was polished at an 8\({}^{\circ}\) vertical angle to match the angle of the fiber array, minimizing any back reflections into the laser mode from this interface.
Also here, the waveguide width is tapered down to maximize the mode matching with the large and circular 3.3 \(\mu\)m diameter mode of the PM-S405-XP fiber, giving a theoretical coupling efficiency of 59%, limited by the elliptical mode shape in the waveguide. Here, a fiber array with five of these fibers was used. For the active temperature control, a 10 k\(\Omega\) thermistor was added to the SLED submount. After preparing both chips, the SLED is aligned and butt-coupled to the Al\({}_{2}\)O\({}_{3}\) feedback chip. When optimum alignment is reached, the chips are hybrid integrated by bonding using a UV-curable epoxy. The epoxy is applied such that the near UV optical mode remains free from epoxy. On the output side of the feedback chip, the fiber array is aligned for maximum coupling and bonded using an epoxy, again ensuring a free optical path between the feedback chip and the fibers. Subsequently a Peltier element is mounted to the bottom of a shared substrate, to be used with the thermistor for the thermal control of the laser. Afterwards the full assembly is placed in a standard 14-pin butterfly package. The electrical connections for the cathode and anode of the SLED and the heaters on the Al\({}_{2}\)O\({}_{3}\) feedback chip are wirebonded to the pins of the butterfly package. The Peltier connectors are soldered to the remaining two pins of the package. The final step is hermetic sealing of the butterfly package. For each package, a getter material is attached to the inside of the lid before hermetically sealing the package. The getter material absorbs any residual gases after sealing (from outgassing of the epoxy, such as volatile organic compounds, and moisture). The laser in the pictures (Fig. 1e, 1f, 2a) is a package with a glass lid in a nitrogen environment (\(\kappa^{2}\) = 5.9%). The measurements shown in this work are carried out with a seam-welded, argon-atmosphere laser (\(\kappa^{2}\) = 5.9%), except the measurement for one of the optical spectra shown in Fig. 2c (seam-welded, argon atmosphere, \(\kappa^{2}\) = 2.9%). ### Experimental methods The waveguide heaters are driven with a high-precision, low-noise, multichannel power supply (Chilas B.V., Tunable Laser Controller, TLC). The TLC also contains a PID control loop which controls the temperature of the laser with the thermistor and Peltier element in the butterfly housing. The TLC is equipped with a USB interface to receive serial commands from a PC. Together with in-house software, photodiodes and other lab equipment, optimization and control of the laser can be fully automated. Depending on availability, several current sources were used: Toptica DLC Pro, ILX Lightwave LDX-3620a and Thorlabs LDC205B. Output powers are measured by connecting the output fiber to a calibrated photodiode (Thorlabs, S150C photodiode). The calibration accuracy of the photodiode is 5% in this spectral range. A 405-nm 90/10 custom fiber splitter (Thorlabs, with S405-XP fiber) is used for simultaneous monitoring of spectral properties and output power. To find the conversion from the fiber-coupled output power to the on-chip power, we first note the measured fiber-to-chip coupling loss, which is 10.2 \(\pm\) 0.8 dB (Fig. 3).

Figure 3: _Transmittance (\(P_{out}/P_{in}\)) of 405 nm light through various paths and spirals of different propagation length. The linear slope indicates a propagation loss of 2.8 \(\pm\) 0.3 dB/cm and from the y-axis crossing a 10.2 \(\pm\) 0.8 dB fiber-to-chip coupling loss is obtained._
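The fit behind Fig. 3 is a straight line in dB versus on-chip length, with the slope giving the propagation loss and the intercept the total fiber-to-chip coupling loss. A minimal sketch (the data points below are illustrative placeholders, not the measured values; the per-facet figure assumes two identical facets):

```python
import numpy as np

lengths_cm = np.array([1.0, 3.0, 5.0, 11.3])       # illustrative path lengths
trans_db = np.array([-23.3, -28.7, -34.5, -52.0])  # illustrative transmittances

slope, intercept = np.polyfit(lengths_cm, trans_db, 1)
print(f"propagation loss ~ {-slope:.1f} dB/cm")        # cf. 2.8 +/- 0.3 dB/cm
print(f"coupling loss ~ {-intercept/2:.1f} dB/facet")  # cf. 10.2 +/- 0.8 dB/facet
```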
This value is measured with an unpolished chip facet, whereas the integrated laser facets are finely polished, which should reduce the fiber-to-chip coupling loss by an estimated 3.5 \(\pm\) 1.1 dB [44]. Finally, the 5% (\(\sim\) 0.2 dB) measurement error of the photodiode can be included in this conversion error. These estimates bring the total conversion factor from fiber-coupled output to on-chip power to +6.7 \(\pm\) 2.1 dB. Therefore, we find an on-chip power of about 3.5 mW (\(\pm\) 50%). The measurements for the spectra, wavelength stability and spectral linewidth are all recorded at the maximum current of 90 mA. Laser output spectra are recorded with an optical spectrum analyzer (Ando AQ6315A, approximately 50 pm or 90 GHz resolution at 405 nm). To measure the optical spectra, the 90/10 fiber splitter is used to simultaneously monitor the output power of the laser with a photodiode, which enables the use of our optimization software for controlling the laser. The nominally 90% arm of the fiber splitter is connected to the spectrum analyzer, and the spectra are corrected in post-processing (for the splitting ratio and loss of the fiber splitter) to the photodiode-measured, fiber-coupled output power of the laser. Long-term measurements of the passive laser frequency stability rely on a high-resolution wavelength meter (HighFinesse WS-U 1645, wavelength deviation sensitivity of 0.5 MHz). Delayed self-heterodyne measurements, using existing methods [42], are carried out with a fiber delay length of 7.9 m (S405-XP fiber), an acousto-optic modulator (at 200 MHz, G&H FiberQ) and a balanced photodiode (Thorlabs PD435A-AC), with signals being recorded with an electrical spectrum analyzer (Keysight CXA N9000B). To match the polarization of the signals from both arms of the set-up, a fiber polarization controller is used (Thorlabs FPC560 with an S405-XP fiber). The recorded RF spectrum of the beatnote is shown as the purple trace in Fig. 2f. The measurement time of an RF frequency sweep is 67 ms and the raw data is averaged 20 times. The fringe period is extracted from the RF signal as 40 ns, which corresponds well with the 7.9 m fiber delay length. The noise floor is recorded when the light to the sensor is blocked (blue trace, Fig. 2f).
2308.15877
ABA Learning via ASP
Recently, ABA Learning has been proposed as a form of symbolic machine learning for drawing Assumption-Based Argumentation frameworks from background knowledge and positive and negative examples. We propose a novel method for implementing ABA Learning using Answer Set Programming as a way to help guide Rote Learning and generalisation in ABA Learning.
Emanuele De Angelis, Maurizio Proietti, Francesca Toni
2023-08-30T09:02:29Z
http://arxiv.org/abs/2308.15877v1
# ABA Learning via ASP ###### Abstract Recently, ABA Learning has been proposed as a form of symbolic machine learning for drawing Assumption-Based Argumentation frameworks from background knowledge and positive and negative examples. We propose a novel method for implementing ABA Learning using Answer Set Programming as a way to help guide Rote Learning and generalisation in ABA Learning. ## 1 Introduction Recently, _ABA Learning_ has been proposed [13] as a methodology for learning Assumption-Based Argumentation (ABA) frameworks [2, 3] from background knowledge, in the form of an ABA framework, and positive and negative examples, in the form of sentences in the language of the background knowledge. The goal of ABA Learning is to build a larger ABA framework than the background knowledge, from which arguments for all positive examples can be "accepted" and no arguments for any of the negative examples can be "accepted". In this paper, for a specific form of ABA frameworks corresponding to logic programs [2], we focus on a specific form of "acceptance", given by cautious (or sceptical) reasoning under the argumentation semantics of stable extensions [2, 3]. We then leverage the well-known correspondence between stable extensions in the logic programming instance of ABA and answer set programs [6] to outline a novel implementation strategy for the form of ABA Learning we consider, pointing out along the way restrictions on ABA Learning enabling the use of Answer Set Programming (ASP). **Related Work** Our strategy for ABA Learning differs from other works learning argumentation frameworks, e.g. [4, 12], in that it learns a different type of argumentation framework and it uses ASP. ABA can be seen as performing abductive reasoning (as assumptions are hypotheses open for debate). Other approaches combine abductive and inductive learning [14], but they do not learn ABA frameworks. Some approaches learn abductive logic programs [7], which rely upon assumptions, like ABA frameworks. A formal comparison with these methods is left for future work. ABA captures several non-monotonic reasoning formalisms, thus ABA Learning is related to other methods learning non-monotonic formalisms. Some of these methods, e.g. [8, 15, 18], do not make use of ASP. Some others, e.g. [9, 16, 17], do. While our use of ASP to help guide some aspects of ABA Learning (e.g. its Rote Learning transformation rule) is unique, a formal and empirical comparison with these methods is left for future work. ## 2 Background **ASP** In this paper we use _answer set programs_ (ASPs) [6] consisting of rules of the form \[\mathtt{p}\;\texttt{:-}\;\mathtt{q}_{1},\ldots,\mathtt{q}_{k},\;\texttt{not}\;\mathtt{q}_{k+1},\ldots,\;\texttt{not}\;\mathtt{q}_{n}\qquad\text{or}\qquad\texttt{:-}\;\mathtt{q}_{1},\ldots,\mathtt{q}_{k},\;\texttt{not}\;\mathtt{q}_{k+1},\ldots,\;\texttt{not}\;\mathtt{q}_{m}\] where \(\mathtt{p},\mathtt{q}_{1},\ldots,\mathtt{q}_{n}\) (resp. \(\mathtt{q}_{1},\ldots,\mathtt{q}_{m}\)) are atoms, \(k\geq 0\), \(n\geq 0\), \(m\geq 1\), and not denotes negation as failure. Given any ASP program \(P\), by \(\mathit{ans}(P)\), called an _answer set_ of \(P\), we denote a set of ground atoms assigned to \(P\) by the answer set semantics. Let \(\mathit{ans}_{1}(P),\ldots,\mathit{ans}_{l}(P)\) be the answer sets of \(P\), for \(l\geq 1\) (if \(l=0\), then \(P\) is _unsatisfiable_). By \(\mathcal{C}(P)=\bigcap_{i}\mathit{ans}_{i}(P)\), we denote the set of _cautious consequences_ of \(P\).
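These two notions are directly computable with an ASP solver; the following minimal sketch uses the clingo Python API on a toy program (an illustration of the definitions, not the paper's implementation):

```python
import clingo

program = """
p :- not q.
q :- not p.
r.
"""

ctl = clingo.Control(["0"])   # "0" = enumerate all answer sets
ctl.add("base", [], program)
ctl.ground([("base", [])])

answer_sets = []
ctl.solve(on_model=lambda m: answer_sets.append({str(a) for a in m.symbols(shown=True)}))

# Cautious consequences C(P): atoms true in every answer set
cautious = set.intersection(*answer_sets) if answer_sets else None
print(answer_sets)   # [{'q', 'r'}, {'p', 'r'}] (order may vary)
print(cautious)      # {'r'}
```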
**ABA** An _ABA framework_ (as originally proposed in [1], but presented here following recent accounts in [4, 18] and [2]) is a tuple \(\langle\mathcal{L},\mathcal{R},\mathcal{A},\overline{\ \ }\rangle\) such that * \(\langle\mathcal{L},\mathcal{R}\rangle\) is a deductive system, where \(\mathcal{L}\) is a _language_ and \(\mathcal{R}\) is a set of _(inference) rules_ of the form \(s_{0}\gets s_{1},\ldots,s_{m}\) (\(m\geq 0\), \(s_{i}\in\mathcal{L}\), for \(1\leq i\leq m\)); * \(\mathcal{A}\subseteq\mathcal{L}\) is a (non-empty) set of _assumptions_;1 Footnote 1: The non-emptiness requirement can always be satisfied by including in \(\mathcal{A}\) a _bogus assumption_, with its own contrary, neither occurring elsewhere in the ABA framework. For conciseness, we will leave this assumption and its contrary implicit. * \(\overline{\ \ }\) is a total mapping from \(\mathcal{A}\) into \(\mathcal{L}\), where \(\overline{a}\) is the _contrary_ of \(a\), for \(a\in\mathcal{A}\). Given a rule \(s_{0}\gets s_{1},\ldots,s_{m}\), \(s_{0}\) is the _head_ and \(s_{1},\ldots,s_{m}\) is the _body_; if \(m=0\) then the body is said to be _empty_ (represented as \(s_{0}\leftarrow\) or \(s_{0}\leftarrow\mathit{true}\)) and the rule is called a _fact_. If assumptions are not heads of rules, then the ABA framework is called _flat_. In this paper we focus on flat ABA frameworks. Elements of \(\mathcal{L}\) can be any sentences, but in this paper we focus on (flat) ABA frameworks where \(\mathcal{L}\) is a set of ground atoms. However, in the spirit of logic programming, we will use _schemata_ for rules, assumptions and contraries, using variables to represent compactly all instances over some underlying universe. **Example 1**.: _The following is a flat ABA framework with \(\mathcal{L}\) a set of atoms._ \[\mathcal{R} = \{\mathit{innocent}(X)\leftarrow\mathit{person}(X),\mathit{not\_guilty}(X),\quad\mathit{guilty}(X)\leftarrow\mathit{witness\_con}(X,Y),\] \[\mathit{person}(\mathit{mary})\leftarrow,\quad\mathit{person}(\mathit{alex})\leftarrow,\quad\mathit{witness\_con}(\mathit{mary},\mathit{alex})\leftarrow\}\] \[\mathcal{L} = \{\mathit{innocent}(X),\mathit{person}(X),\mathit{not\_guilty}(X),\mathit{guilty}(X),\mathit{witness\_con}(X,Y)\mid X,Y\in\{\mathit{mary},\mathit{alex}\}\}\] \[\mathcal{A} = \{\mathit{not\_guilty}(\mathit{mary})\}\quad\text{where}\quad\overline{\mathit{not\_guilty}(\mathit{mary})}=\mathit{guilty}(\mathit{mary}).\] The semantics of flat ABA frameworks is given in terms of "acceptable" extensions, i.e. sets of _arguments_ able to "defend" themselves against _attacks_, in some sense, as determined by the chosen semantics. Intuitively, arguments are deductions of claims using rules and supported by assumptions, and attacks are directed at the assumptions in the support of arguments. For illustration, in the case of Example 1, there are, amongst others, the arguments \(\{\mathit{not\_guilty}(\mathit{mary})\}\vdash\mathit{innocent}(\mathit{mary})\) and \(\emptyset\vdash\mathit{guilty}(\mathit{mary})\), where the latter attacks the former on its assumption \(\mathit{not\_guilty}(\mathit{mary})\).
Here, we assume that (i) it is restricted so that each assumption occurs in the body of at most one rule schema in \(\mathcal{R}\), (ii) for each non-ground \(\alpha(X)\in\mathcal{A}\) in the body of any \(\rho\in\mathcal{R}\), for \(X\) a tuple of variables, for each variable \(X^{\prime}\) in \(X\), there is at least one _non-assumption_ (in \(\mathcal{L}\setminus\mathcal{A}\)) \(p(Y)\) in the body of \(\rho\) with \(X^{\prime}\) in \(Y\), and (iii) each fact in \(\mathcal{R}\) is ground. Restriction (i) is without loss of generality; the other two derive from the use of schemata. In [12], _positive/negative examples_ are ground atoms of the form \(p(c)\), for \(p\) a predicate with arity \(n\geq 0\) and \(c\) a tuple of \(n\) constants. Here, we impose that examples are non-assumptions (in the background knowledge \(\langle\mathcal{R},\mathcal{A},\overline{\ \ }\rangle\)). So, for \(\mathcal{L}\) as in Example 1, _not_guilty_ cannot appear in examples (but contraries, e.g. _guilty_(_mary_), can be examples). The exclusion of assumptions from examples is derived from the flatness restriction. We also assume that for each example \(p(c)\) and \(c^{\prime}\) in \(c\), \(\exists\,q(d)\leftarrow\in\mathcal{R}\) such that \(c^{\prime}\) is in \(d\). We impose the same restriction on constants \(c^{\prime}\) anywhere in the background knowledge. Given background knowledge \(\langle\mathcal{R},\mathcal{A},\overline{\ \ }\rangle\), positive examples \(\mathcal{E}^{+}\) and negative examples \(\mathcal{E}^{-}\) with \(\mathcal{E}^{+}\cap\mathcal{E}^{-}=\emptyset\), the _goal of ABA Learning_ is to construct (a flat ABA framework) \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \ }\rangle\) such that \(\mathcal{R}\subseteq\mathcal{R}^{\prime}\), \(\mathcal{A}\subseteq\mathcal{A}^{\prime}\), and \(\overline{\alpha}^{\prime}=\overline{\alpha}\) for all \(\alpha\in\mathcal{A}\), so that \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \ }\rangle\) _entails_ \(\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle\), that is: (_Existence_) \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \ }\rangle\) admits at least one stable extension, (_Completeness_) for all \(e\in\mathcal{E}^{+}\), \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \ }\rangle\models e\), and (_Consistency_) for all \(e\in\mathcal{E}^{-}\), \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \ }\rangle\not\models e\). \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \ }\rangle\) is called a _solution_ of the ABA Learning problem \((\langle\mathcal{R},\mathcal{A},\overline{\ \ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle)\). The second condition implies, when \(\mathcal{E}^{+}\neq\emptyset\), that the set of cautious consequences of a solution is non-empty. In this paper we strive towards what we may call _intensional solutions_, namely such that \(\mathcal{R}^{\prime}\setminus\mathcal{R}\) comprises _intensional rules_ (i.e. non-ground rule schemata), to avoid or limit "lazy" learning of facts covering the positive examples alone and none of the negative examples, leading to poor generalisation.
**Example 2**.: _Consider the background knowledge:_ \[\mathcal{R}= \{\mathit{innocent}(X)\leftarrow\mathit{defendant}(X),\mathit{not\_guilty}(X),\] \[\mathit{witness\_con}(\mathit{mary},\mathit{alex})\leftarrow,\quad\mathit{witness\_con}(\mathit{david},\mathit{carol})\leftarrow,\quad\mathit{witness\_con}(\mathit{john},\mathit{carol})\leftarrow,\] \[\mathit{defendant}(\mathit{mary})\leftarrow,\ \mathit{defendant}(\mathit{david})\leftarrow,\ \mathit{defendant}(\mathit{john})\leftarrow,\ \mathit{liar}(\mathit{alex})\leftarrow,\ \mathit{away}(\mathit{bob})\leftarrow,\] \[\mathit{person}(\mathit{alex})\leftarrow,\quad\mathit{person}(\mathit{bob})\leftarrow,\quad\mathit{person}(\mathit{carol})\leftarrow,\] \[\mathit{person}(\mathit{mary})\leftarrow,\quad\mathit{person}(\mathit{david})\leftarrow,\quad\mathit{person}(\mathit{john})\leftarrow\}\] \[\mathcal{A}= \{\mathit{not\_guilty}(X)\mid X\in\{\mathit{mary},\mathit{david},\mathit{john}\}\}\quad\text{where}\quad\overline{\mathit{not\_guilty}(X)}=\mathit{guilty}(X),\] _and the examples_ \(\mathcal{E}^{+}=\{\mathit{innocent}(\mathit{mary}),\,\mathit{innocent}(\mathit{bob})\}\) _and_ \(\mathcal{E}^{-}=\{\mathit{innocent}(\mathit{david}),\,\mathit{innocent}(\mathit{john})\}\).
## 4 Learning ABA Frameworks via Transformation Rules and ASP Solving In order to learn ABA frameworks from examples, we follow the approach based on _transformation rules_ presented in [12], but only consider a subset of those rules: _Rote Learning_, _Folding_, _Assumption Introduction_, and (a special case of) _Subsumption_ (thus ignoring _Equality Removal_). Some rules (Folding and Subsumption) are borrowed from logic program transformation [10], while others (Rote Learning and Assumption Introduction) are specific to ABA. Given an ABA framework \(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle\), a transformation rule constructs a new ABA framework \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle\) (in the remainder, we will mention explicitly only the modified components). The application of the transformation rules is guided by the _ASP-ABAlearn_ strategy (see Figure 1), a variant of the strategy in [12] amenable to implementation via an ASP solver, towards the goal of deriving an intensional solution of the given ABA Learning problem. **Strategy** _ASP-ABAlearn_.
**Input:** An ABA Learning problem \((\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle)\); **Output:** an intensional solution \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle\) of the problem. Figure 1: The _ASP-ABAlearn_ strategy. The _ASP-ABAlearn_ strategy is the composition of two procedures: (1) _RoLe_, which has the goal of adding suitable facts to the initial background knowledge \(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle\) so that the new ABA framework \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle\) is a (non-intensional) solution of the learning problem \((\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle)\) given in input, and (2) \(GEN\), which has the objective of transforming \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle\) into an intensional solution. In general, it is not obvious which fact should be added to \(\mathcal{R}\) in _RoLe_ and how to generalise a non-intensional solution to obtain an intensional one in \(GEN\). For these purposes, we use various encodings into ASP defined in Fig. 2 to obtain the following sets of ASP rules: * The set of ASP rules at points (a) and (b.1) of Fig. 2 is denoted by \(ASP(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle)\). For a claim \(s\in\mathcal{L}\), we can check that \(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle\models s\) (i.e., \(s\) is a cautious consequence of \(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle\) under stable extensions) by checking that \(\mathsf{s}\in\mathcal{C}(ASP(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle))\). * The set of ASP rules at points (a)-(d) is denoted \(ASP^{+}(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\mathcal{K})\).
By computing \(\mathcal{C}(ASP^{+}(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\mathcal{K}))\) we generate facts, if at all possible, for contraries of assumptions in \(\mathcal{K}\) that, when added to \(\mathcal{R}\), enable the entailment of the examples in \(\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle\). * The set of ASP rules at points (a)-(e) is denoted \(ASP^{*}(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\mathcal{K})\). It can be used similarly to \(ASP^{+}(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\mathcal{K})\), but it also generates facts for positive examples that cannot be obtained by \(ASP^{+}(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\mathcal{K})\). The _RoLe_ procedure repeatedly applies Rote Learning, which adds a fact \(\rho:p(X)\gets X=t\), where \(t\) is a tuple of constants, to \(\mathcal{R}\) (thus, \(\mathcal{R}^{\prime}=\mathcal{R}\cup\{\rho\}\)). We illustrate with the _innocent_ running example. Figure 2: ASP-encodings for a given ABA Learning problem \((\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle)\) and a set \(\mathcal{K}\subseteq\mathcal{A}\) of assumptions. Here, \(\mathsf{dom}\) is chosen so that \(\mathsf{dom}(\mathbb{X})\) holds for all \(\mathbb{X}\) encoding tuples of constants of \(\mathcal{L}\). Note that, without loss of generality, in (b.1)-(b.2) we can assume \(vars(\alpha_{1},\ldots,\alpha_{n})\subseteq vars(B)\). So, we could replace \(\mathsf{dom}(\mathbb{X})\) by any subset of \(B\) that contains all variables of \(\alpha_{1}\), and \(\mathsf{dom}\) may already occur in \(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle\) (this is an optimisation, as fewer ground instances of the rule may be given by the ASP solver). Also, in (e) \(\mathsf{dom}\) may already occur in \(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle\). **Example 3**.: _Let us consider the learning problem \((\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle)\) of Example 2. In this case, \(ASP^{*}(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\mathcal{A})\) consists of \(\mathcal{R}\) rewritten in ASP syntax and of the following ASP rules:_
not_guilty(X) :- person(X), not guilty(X).
guilty(X) :- person(X), not not_guilty(X).
:- not_guilty(X), guilty(X).
innocent(X) :- person(X), not neg_innocent(X).
neg_innocent(X) :- person(X), not innocent(X).
:- innocent(X), neg_innocent(X).
:- not innocent(mary).
:- not innocent(bob).
:- innocent(david).
:- innocent(john).
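Cautious consequences of such encodings can be computed with an off-the-shelf ASP solver. The following sketch is ours; it assumes the `clingo` Python package and its cautious enumeration mode, and runs the \(ASP^{*}\) encoding of Example 3 (including \(\mathcal{R}\) rewritten as ASP facts and rules):

```python
import clingo  # assumed available, e.g. via `pip install clingo`

PROGRAM = """
person(mary). person(alex). person(bob). person(carol). person(david). person(john).
defendant(mary). defendant(david). defendant(john).
witness_con(mary,alex). witness_con(david,carol). witness_con(john,carol).
liar(alex). away(bob).
innocent(X) :- defendant(X), not_guilty(X).
not_guilty(X) :- person(X), not guilty(X).
guilty(X) :- person(X), not not_guilty(X).
:- not_guilty(X), guilty(X).
innocent(X) :- person(X), not neg_innocent(X).
neg_innocent(X) :- person(X), not innocent(X).
:- innocent(X), neg_innocent(X).
:- not innocent(mary).  :- not innocent(bob).
:- innocent(david).     :- innocent(john).
"""

# In cautious enumeration, each reported model is a shrinking over-approximation
# of the intersection of all stable models; the last one is C(.) itself.
ctl = clingo.Control(["--enum-mode=cautious", "0"])
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
cautious = []
with ctl.solve(yield_=True) as handle:
    for model in handle:
        cautious = model.symbols(shown=True)
print(sorted(map(str, cautious)))
# Expected to include guilty(david), guilty(john) and innocent(bob).
```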
_In this example, \(\mathcal{C}(ASP^{*}(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\mathcal{A}))\setminus\mathcal{C}(ASP(\langle\mathcal{R},\mathcal{A},\overline{\ \cdot\ }\rangle))\) includes innocent(bob), guilty(david), guilty(john), and by Rote Learning, we obtain:_ \(\mathcal{R}_{1}=\mathcal{R}\cup\{\mathit{innocent}(X)\gets X=\mathit{bob},\ \mathit{guilty}(X)\gets X=\mathit{david},\ \mathit{guilty}(X)\gets X=\mathit{john}\}\). \(\langle\mathcal{R}_{1},\mathcal{A},\overline{\ \cdot\ }\rangle\) _is a (non-intensional, due to the added ground facts) solution of our ABA Learning problem._ Now, the _ASP-ABAlearn_ strategy proceeds by applying the \(GEN\) procedure, for transforming non-intensional rules into intensional ones. First, \(GEN\) applies (once or more times) Folding, which, given rules \(\rho_{1}\): \(H\gets Eqs_{1},B_{1},B_{2}\) and \(\rho_{2}\): \(K\gets Eqs_{1},Eqs_{2},B_{1}\) in \(\mathcal{R}\), replaces \(\rho_{1}\) by \(\rho_{3}\): \(H\gets Eqs_{2},K,B_{2}\) (hence, \(\mathcal{R}^{\prime}=(\mathcal{R}\setminus\{\rho_{1}\})\cup\{\rho_{3}\}\)). Folding is a form of _inverse resolution_ [9], which generalises a rule by replacing some atoms in its body with their 'consequence' using a rule in \(\mathcal{R}\). **Example 4**.: _By applying Folding to \(\mathit{innocent}(X)\gets X=\mathit{bob}\) and \(\mathit{guilty}(X)\gets X=\mathit{david}\), using \(\mathit{away}(X)\gets X=\mathit{bob}\), \(\mathit{witness\_con}(X,Y)\gets X=\mathit{david},\ Y=\mathit{carol}\), and \(\mathit{person}(Y)\gets Y=\mathit{carol}\) in \(\mathcal{R}_{1}\), we get:_ \(\mathcal{R}_{2}=\mathcal{R}\cup\{\,\mathit{innocent}(X)\gets\mathit{away}(X),\ \mathit{guilty}(X)\gets\mathit{witness\_con}(X,Y),\ \mathit{person}(Y),\ \mathit{guilty}(X)\gets X=\mathit{john}\}\)_._ _The ABA framework \(\langle\mathcal{R}_{2},\mathcal{A},\overline{\ \cdot\ }\rangle\), though, is not a solution, as it no longer entails innocent(mary)._ If the effect of Folding a rule is that the resulting ABA framework is no longer a solution of the given learning problem, \(GEN\) applies Assumption Introduction and Rote Learning with the goal of deriving a new ABA framework which is again a (non-intensional) solution. Assumption Introduction replaces a rule \(\rho_{1}:H\gets B\) in \(\mathcal{R}\) by \(\rho_{2}:H\gets B,\alpha(X)\), where \(X=\mathit{vars}(H)\cup\mathit{vars}(B)\) and \(\alpha(X)\) is a new assumption with contrary \(c\_\alpha(X)\) (thus, \(\mathcal{R}^{\prime}=(\mathcal{R}\setminus\{\rho_{1}\})\cup\{\rho_{2}\}\), \(\mathcal{A}^{\prime}=\mathcal{A}\cup\{\alpha(X)\}\), \(\overline{\alpha(X)}^{\prime}=c\_\alpha(X)\), and \(\overline{\beta}^{\prime}=\overline{\beta}\) for all \(\beta\in\mathcal{A}\)). New facts for \(c\_\alpha(X)\) are learnt by Rote Learning by using \(ASP^{+}(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\{\alpha(X)\})\). The facts for \(c\_\alpha(X)\) can be seen as the _exceptions_ to the _defeasible_ rule \(\rho_{2}\).
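Both Folding and Assumption Introduction are purely syntactic and easy to prototype. In the sketch below (ours), rules are (head, body) pairs of strings, and folding uses plain syntactic inclusion of bodies rather than the full \(Eqs_{1}/Eqs_{2}\) split of the definition — a simplification that suffices for the first folding step of Example 4.

```python
def fold(rho1, rho2):
    """Fold rho2 into rho1: if rho2's body occurs in rho1's body, replace
    that occurrence with rho2's head (a simple form of inverse resolution)."""
    (h1, b1), (h2, b2) = rho1, rho2
    if set(b2) <= set(b1):
        return (h1, [lit for lit in b1 if lit not in set(b2)] + [h2])
    return None  # not foldable under this simplified matching

def assumption_introduction(rho, alpha, c_alpha):
    """Turn rho: H <- B into the defeasible H <- B, alpha; also return the
    (assumption, contrary) pair to be added to the framework."""
    head, body = rho
    return (head, body + [alpha]), (alpha, c_alpha)

# Folding step of Example 4: innocent(X) <- X=bob folded with away(X) <- X=bob.
print(fold(("innocent(X)", ["X=bob"]), ("away(X)", ["X=bob"])))
# -> ('innocent(X)', ['away(X)'])

# Assumption Introduction of Example 5 on the guilty rule.
rule = ("guilty(X)", ["witness_con(X,Y)", "person(Y)"])
print(assumption_introduction(rule, "a(X,Y)", "c_a(X,Y)"))
```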
**Example 5**.: _By Assumption Introduction, we get:_ \(\mathcal{R}_{3}=\mathcal{R}\cup\{\mathit{innocent}(X)\gets\mathit{away}(X),\quad\mathit{guilty}(X)\gets\mathit{witness\_con}(X,Y),\ \mathit{person}(Y),\ a(X,Y),\quad\mathit{guilty}(X)\gets X=\mathit{john}\}\) _with \(\mathcal{A}^{\prime}=\mathcal{A}\cup\{a(X,Y)\ |\ X,Y\in\{\mathit{alex},\mathit{bob},\mathit{carol},\mathit{david},\mathit{john},\mathit{mary}\}\}\) and \(\overline{a(X,Y)}^{\prime}=c\_a(X,Y)\). To determine the facts for \(c\_a(X,Y)\), we use \(ASP^{+}(\langle\mathcal{R}_{3},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\{a(X,Y)\})\) (we omit the encoding of the background knowledge \(\mathcal{R}\); the assumption \(a\) and its contrary \(c\_a\) are encoded as alpha1 and c_alpha1):_
innocent(X) :- away(X).
guilty(X) :- X=john.
guilty(X) :- witness_con(X,Y), person(Y), alpha1(X,Y).
alpha1(X,Y) :- witness_con(X,Y), not c_alpha1(X,Y).
c_alpha1(X,Y) :- witness_con(X,Y), not alpha1(X,Y).
:- alpha1(X,Y), c_alpha1(X,Y).
:- not innocent(mary).
:- not innocent(bob).
:- innocent(david).
:- innocent(john).
\(\mathcal{C}(ASP^{+}(\langle\mathcal{R}_{3},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle,\langle\mathcal{E}^{+},\mathcal{E}^{-}\rangle,\{a(X,Y)\}))\) contains the atom c_alpha1(mary,alex) and thus, by Rote Learning, we obtain again a (non-intensional) solution, by adding a fact for predicate \(c\_a(X,Y)\): \(\mathcal{R}_{4}=\mathcal{R}_{3}\cup\{c\_a(X,Y)\gets X=\mathit{mary},\,Y=\mathit{alex}\}\). \(GEN\) proceeds by applying the Subsumption rule, which gets rid of redundant facts. Indeed, suppose that \(\mathcal{R}\) contains \(\rho:p(X)\gets X=t\) and let \(\mathcal{R}^{\prime}=\mathcal{R}\setminus\{\rho\}\). If \(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle\models p(t)\), then, by Subsumption, \(\rho\) can be deleted from \(\mathcal{R}\). Subsumption is applicable if \(\mathtt{p}(\mathtt{t})\in\mathcal{C}(ASP(\langle\mathcal{R}^{\prime},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle))\). **Example 6**.: _The rule \(\rho=\mathit{guilty}(X)\gets X=\mathit{john}\) can be deleted, as \(\mathtt{guilty}(\mathtt{john})\in\mathcal{C}(ASP(\langle\mathcal{R}_{4}\setminus\{\rho\},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle))\)._ _ASP-ABAlearn_ halts when \(GEN\) generates no new contrary, as Folding yields an intensional solution. **Example 7**.: _By two final applications of the Folding rule, \(GEN\) gets:_ \(\mathcal{R}_{5}=\mathcal{R}\cup\{\,\mathit{innocent}(X)\gets\mathit{away}(X),\quad\mathit{guilty}(X)\gets\mathit{witness\_con}(X,Y),\,\mathit{person}(Y),\,a(X,Y),\quad c\_a(X,Y)\gets\mathit{defendant}(X),\mathit{liar}(Y)\}\) _Now, \(\langle\mathcal{R}_{5},\mathcal{A}^{\prime},\overline{\ \cdot\ }^{\prime}\rangle\) is an intensional solution of the given learning problem._ ## 5 Discussion and Conclusion We have revisited a strategy recently proposed for learning ABA frameworks based on transformation rules [12], and we have shown that, in the case of the stable extension semantics, many of the reasoning tasks used by that strategy can be implemented through an ASP solver. A proof-of-concept implementation of our _ASP-ABAlearn_ strategy is ongoing using SWI-Prolog (v. 9.0.4) and the Clingo ASP solver (v. 5.6.2). It consists of two Prolog modules implementing _RoLe_ and _GEN_ and two further modules implementing (i) the \(ASP\), \(ASP^{+}\), and \(ASP^{*}\) encodings, and (ii) the API to invoke Clingo from SWI-Prolog and collect the cautious consequences to be used by _RoLe_ and _GEN_. The most critical issue for implementing \(GEN\) is that the application of Folding is non-deterministic, as there may be different choices for the rules to be used for applying that transformation. Currently, we simply make use of a bound to limit the number of alternatives. The design of more sophisticated mechanisms to control Folding, e.g., based on the notion of _information gain_ [17], is left as future work. In addition to refining the implementation, we also plan to perform an experimental comparison with non-monotonic ILP systems (such as Fold [17] and ILASP [8]).
On the theoretical side, further work is needed to investigate conditions under which _ASP-ABAlearn_ is complete, in the sense that it terminates and finds a solution if one exists. A simple way of guaranteeing termination is based on a mechanism for avoiding the generation of contraries that are "equivalent" to previously generated ones. However, the solution obtained in this way is not guaranteed to be intensional. ## Acknowledgments We thank support from the Royal Society, UK (IEC\R2\222045 - International Exchanges 2022). De Angelis and Proietti are members of the INdAM-GNCS research group; they were partially supported by the PNRR MUR project PE0000013-FAIR, Italy. Toni was partially funded by the ERC under the EU's Horizon 2020 research and innovation programme (grant agreement No. 101020934) and by J.P. Morgan and the Royal Academy of Engineering, UK, under the Research Chairs and Senior Research Fellowships scheme.
2306.03561
CIN++: Enhancing Topological Message Passing
Graph Neural Networks (GNNs) have demonstrated remarkable success in learning from graph-structured data. However, they face significant limitations in expressive power, struggling with long-range interactions and lacking a principled approach to modeling higher-order structures and group interactions. Cellular Isomorphism Networks (CINs) recently addressed most of these challenges with a message passing scheme based on cell complexes. Despite their advantages, CINs make use only of boundary and upper messages which do not consider a direct interaction between the rings present in the underlying complex. Accounting for these interactions might be crucial for learning representations of many real-world complex phenomena such as the dynamics of supramolecular assemblies, neural activity within the brain, and gene regulation processes. In this work, we propose CIN++, an enhancement of the topological message passing scheme introduced in CINs. Our message passing scheme accounts for the aforementioned limitations by letting the cells also receive lower messages within each layer. By providing a more comprehensive representation of higher-order and long-range interactions, our enhanced topological message passing scheme achieves state-of-the-art results on large-scale and long-range chemistry benchmarks.
Lorenzo Giusti, Teodora Reu, Francesco Ceccarelli, Cristian Bodnar, Pietro Liò
2023-06-06T10:25:10Z
http://arxiv.org/abs/2306.03561v1
# CIN++: Enhancing Topological Message Passing ###### Abstract Graph Neural Networks (GNNs) have demonstrated remarkable success in learning from graph-structured data. However, they face significant limitations in expressive power, struggling with long-range interactions and lacking a principled approach to modeling higher-order structures and group interactions. Cellular Isomorphism Networks (CINs) recently addressed most of these challenges with a message passing scheme based on cell complexes. Despite their advantages, CINs make use only of boundary and upper messages which do not consider a direct interaction between the rings present in the underlying complex. Accounting for these interactions might be crucial for learning representations of many real-world complex phenomena such as the dynamics of supramolecular assemblies, neural activity within the brain, and gene regulation processes. In this work, we propose CIN++, an enhancement of the topological message passing scheme introduced in CINs. Our message passing scheme accounts for the aforementioned limitations by letting the cells also receive lower messages within each layer. By providing a more comprehensive representation of higher-order and long-range interactions, our enhanced topological message passing 1 scheme achieves state-of-the-art results on large-scale and long-range chemistry benchmarks. Footnote 1: The code implementation can be found at: [https://github.com/twitter-research/cwn](https://github.com/twitter-research/cwn) ## 1 Introduction Graph Neural Networks (GNNs) find applications in a plethora of fields, like computational chemistry [1], social networks [2] and physics simulations [3]. Since their introduction [4; 5], GNNs have shown remarkable results in learning tasks when data are defined over a graph domain, where the flexibility of neural networks is coupled with prior knowledge about data relationships, expressed in terms of the underlying topology [6]. The idea behind GNNs is learning representations of node features using local aggregation, where the neighbourhood is formally represented by the underlying graph, which can be seen as a simple instance of a _topological space_, able to capture _pairwise_ interactions through the presence of an edge between any pair of directly interacting nodes. By leveraging this simple but powerful idea, outstanding performance has been achieved in many traditional tasks such as classification for nodes or entire graphs [7] or link prediction [8], as well as more specialized ones such as _protein folding_ [9] and _algorithmic reasoning_ [10]. While Graph Neural Networks (GNNs) have advanced the modelling of pairwise interactions on graph-structured data, their inability to accurately capture long-range and group interactions, along with their struggles to manage higher-order structures, are significant shortcomings. These limitations critically restrict the application of GNNs in understanding real-world complex systems. To cope with these limitations, a major performance boost to GNN algorithms has been offered by considering more complex topological spaces such as simplicial complexes [11] or cell complexes [12], introduced to handle tasks for data that are naturally defined on higher-order elements. Then, these ideas were combined with provably powerful message-passing schemes on simplicial [13] and cell complexes [14], achieving remarkable results.
Nonetheless, the aforementioned models are unable to consistently discover long-range and group interactions, which play a crucial role in many practical applications such as network neuroscience [15], physics of complex systems [16] or gene regulatory networks [17], where some reactions occur only when a group of more than two entities interact. **Contribution** In this work, we leverage the advantages of complex topological spaces to introduce a novel message-passing scheme on cell complexes. Motivated by the fact that cell complexes provide a natural framework to represent higher-dimensional structures and topological features that are inherent in the realm of chemistry, throughout this work, we will mostly focus on this domain. In particular, we enhance the Topological Message Passing scheme defined in [14] by including messages that flow within the lower neighbourhood of the underlying cell complex. These are messages exchanged between edges that share a common vertex and between rings that are glued through an edge, to better capture group interactions and escape potential bottlenecks. In the experimental section, we show that with respect to other models, this representation allows for a more natural and comprehensive understanding of chemical systems and their properties, resulting in state-of-the-art performance in both a large-scale molecular benchmark (ZINC) and a long-range graph benchmark (Peptides). We see that the ability of our model to understand higher-dimensional structures and topological features could have an immediate and significant impact in the areas of computational chemistry and drug discovery. ## 2 Background In this section, we recall the basics of regular cell complexes. These are topological spaces that enable efficient representation of high-order interaction systems, generalizing graphs and simplicial complexes. In particular, we first introduce the definition of a regular cell complex and then we recall a few additional properties enabling the representation of cell complexes via boundary operators. **Definition 1** (Regular Cell Complex).: _[_18_]_ _A regular cell complex is a topological space \(\mathcal{C}\) together with a partition \(\{\mathcal{C}_{\sigma}\}_{\sigma\in\mathcal{P}_{\mathcal{C}}}\) of subspaces \(\mathcal{C}_{\sigma}\) of \(\mathcal{C}\) called_ **cells**_, where \(\mathcal{P}_{\mathcal{C}}\) is the indexing set of \(\mathcal{C}\), such that_ 1. _For each_ \(x\in\mathcal{C}\)_, every sufficiently small neighbourhood of_ \(x\) _intersects finitely many_ \(\mathcal{C}_{\sigma}\)_;_ 2. _For all_ \(\tau,\sigma\) _we have that_ \(\mathcal{C}_{\tau}\cap\overline{\mathcal{C}}_{\sigma}\neq\varnothing\) _iff_ \(\mathcal{C}_{\tau}\subseteq\overline{\mathcal{C}}_{\sigma}\)_, where_ \(\overline{\mathcal{C}}_{\sigma}\) _is the closure of the cell;_ 3. _Every_ \(\mathcal{C}_{\sigma}\) _is homeomorphic to_ \(\mathbb{R}^{k}\) _for some_ \(k\)_;_ 4. _For every_ \(\sigma\in\mathcal{P}_{\mathcal{C}}\) _there is a homeomorphism_ \(\phi\) _of a closed ball in_ \(\mathbb{R}^{k}\) _to_ \(\overline{\mathcal{C}}_{\sigma}\) _such that the restriction of_ \(\phi\) _to the interior of the ball is a homeomorphism onto_ \(\mathcal{C}_{\sigma}\)_._ Figure 1: Visual representation of adjacencies within cell complexes. The reference cell, \(\sigma\), is showcased in blue, with adjacent cells \(\tau\) highlighted in green. Any intermediary cells, \(\delta\), facilitating connectivity are depicted in yellow.
Condition 2 implies that the indexing set \(\mathcal{P}_{\mathcal{C}}\) has a poset structure, given by \(\tau\leq\sigma\) iff \(\mathcal{C}_{\tau}\subseteq\overline{\mathcal{C}_{\sigma}}\). This is known as the face poset of \(\mathcal{C}\). The regularity condition (4) implies that all topological information about \(\mathcal{C}\) is encoded in the poset structure of \(\mathcal{P}_{\mathcal{C}}\). Then, a regular cell complex can be identified with its face poset. For this reason, from now on we will indicate the cell \(\mathcal{C}_{\sigma}\) with its corresponding face poset element \(\sigma\), whose dimension \(\dim(\sigma)\) is equal to the dimension of the space homeomorphic to \(\mathcal{C}_{\sigma}\). In this study, we focus on cell complexes with cells of maximum dimension equal to 2. In this context, a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) can be viewed as a particular case of a regular cell complex \(\mathcal{C}\). Specifically, a graph is a cell complex where the set of 2-cells is the empty set. In this context, the vertices of the graph correspond to the 0-cells in \(\mathcal{C}\), while the edges of the graph are then represented by its 1-cells, connecting pairs of vertices. Throughout this work, we will consider regular cell complexes \(\mathcal{C}\) built using _skeleton-preserving cellular lifting maps_ [14] from an input graph \(\mathcal{G}\). A pictorial example of this operation is provided in Fig. 2, where filled rings are attached to closed paths of edges having no internal chords. **Definition 2** (Boundary Relation).: _Given two cells \(\sigma,\tau\in\mathcal{C}\), we have the boundary relation \(\sigma\triangleleft\tau\) iff \(\sigma<\tau\) in the face poset and \(\nexists\,\delta\in\mathcal{C}:\sigma<\delta<\tau\)._ We can leverage the previous definitions to characterize the different types of neighbourhoods present in cell complexes. **Boundary Neighbourhood** For a cell \(\sigma\), the boundary is a set \(\mathcal{B}(\sigma)=\{\tau\,|\,\tau\triangleleft\sigma\}\) composed by the lower-dimensional cells that respect Definition 2. In the first column of Fig. 1 we depicted a glossary of the boundary neighbourhoods of a regular cell complex \(\mathcal{C}\). In particular, a vertex does not have a boundary neighbourhood, an edge has a boundary composed of the nodes attached to its endpoints, while the boundary cells of a ring are the edges that enclose the ring itself. **Co-Boundary Neighbourhood** For a cell \(\sigma\), the co-boundary neighbourhood is a set \(\mathcal{C}o(\sigma)=\{\tau\,|\,\sigma\triangleleft\tau\}\) of higher-dimensional cells with \(\sigma\) on their boundary. For a node, its co-boundary is composed of the edges that have that node as an endpoint. For an edge, it is the set of rings that have that edge as one of their sides. In our case, rings do not have a co-boundary neighbourhood. We show a pictorial example of the various co-boundary neighbourhoods in the second column of Fig. 1. **Upper Neighbourhood** These are the cells of the same dimension as \(\sigma\) that are on the boundary of the same higher-dimensional cell as \(\sigma\): \(\mathcal{N}^{\uparrow}(\sigma)=\{\tau\,|\,\exists\delta:\sigma\triangleleft\delta\wedge\tau\triangleleft\delta\}\). For instance, as shown in the third column of Fig. 1, the upper adjacent cells of a vertex \(v_{i}\) are the vertices connected to \(v_{i}\) via an edge (i.e., the canonical graph adjacency). The upper adjacent cells of an edge \(e_{i}\) are the edges that surround the rings for which \(e_{i}\) is a boundary element. However, in a 2-complex, the rings do not have upper adjacent cells. **Lower Neighbourhood** These are the cells of the same dimension as \(\sigma\) that share a lower-dimensional cell on their boundary: \(\mathcal{N}^{\downarrow}(\sigma)=\{\tau\,|\,\exists\delta:\delta\triangleleft\sigma\wedge\delta\triangleleft\tau\}\). For instance, as shown in the fourth column of Fig. 1, the lower adjacent cells of an edge \(e_{i}\) are the edges that share a common vertex with \(e_{i}\), and the lower adjacent cells of a ring \(r_{i}\) are the rings that have a common edge on their boundary. In any case, the vertices of a regular cell complex do not have a lower neighbourhood.
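All four neighbourhoods can be derived from the boundary relation alone, as the following sketch shows (ours; the toy complex — a single ring \(r\) glued to four edges over four vertices — and all identifiers are placeholders):

```python
from collections import defaultdict

# Toy 2-complex: ring r glued to edges e1..e4 over vertices v1..v4.
boundary = {
    "e1": ["v1", "v2"], "e2": ["v2", "v3"], "e3": ["v3", "v4"], "e4": ["v4", "v1"],
    "r":  ["e1", "e2", "e3", "e4"],
    "v1": [], "v2": [], "v3": [], "v4": [],
}

# Co-boundary: invert the boundary map.
coboundary = defaultdict(list)
for cell, faces in boundary.items():
    for f in faces:
        coboundary[f].append(cell)

def upper(sigma):
    """Cells that share a co-boundary cell with sigma."""
    return {t for d in coboundary[sigma] for t in boundary[d] if t != sigma}

def lower(sigma):
    """Cells that share a boundary cell with sigma."""
    return {t for d in boundary[sigma] for t in coboundary[d] if t != sigma}

print(upper("v1"))  # {'v2', 'v4'}: the canonical graph adjacency
print(upper("e1"))  # {'e2', 'e3', 'e4'}: edges around the same ring
print(lower("e1"))  # {'e2', 'e4'}: edges sharing a vertex with e1
```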
After defining the structural and neighbourhood elements of a cell complex, we now address how to represent signals over it. A cell signal is defined as a mapping from the set of all the cells contained in the complex to a multi-dimensional feature vector [19]. **Definition 3** (Signals Over Cell Complexes).: _Let \(\mathcal{C}\) be a cell complex and let \(\mathcal{P}_{\mathcal{C}}\) be its indexing set. A cell signal is defined as a map \(h:\mathcal{P}_{\mathcal{C}}\rightarrow\mathbb{R}^{d}\) that assigns a \(d\)-dimensional feature vector \(h_{\sigma}\) to each cell \(\sigma\) of the complex._ Figure 2: Cellular lifting process. Given an input graph \(\mathcal{G}\), we attach closed two-dimensional rings to the boundary of the induced cycles of \(\mathcal{G}\). The result is a 2D regular cell complex \(\mathcal{C}\). ## 3 Supramolecular Chemistry Supramolecular chemistry [20], often referred to as the _"chemistry beyond the molecule"_, explores the intricacies of molecules connected through various weak bonds of differing strengths. These spontaneous secondary interactions include hydrogen bonding, dipole-dipole, charge transfer, van der Waals, and \(\pi-\pi\) stacking interactions. The presence of numerous \(\pi-\pi\) stacking interactions is particularly significant in this context, as the overall system can be seen as a regular cell complex structure. Supramolecular assemblies often exhibit complex chemical architectures and high-order self-assembly, giving rise to molecular machines [21], gas absorption [22], high-tech molecular sensing systems [23], nanoreactors [24], chemical catalysis [25] and drug delivery systems [26]. Intriguingly, molecular shape serves as a design principle, thanks to the self-assembly [27] and self-healing [28] properties of supramolecules. ### Long-Range Interactions Long-range interactions play a key role in supramolecular chemistry. They can be understood as the dependency of certain molecular properties on elements that are _"far away"_ within a chemical system [29]. Of particular interest in this context are the long-range interactions that arise in oxygenic photosynthesis. This is the process by which light energy is converted into chemical energy in the form of glucose or other sugars [30]. This process is mediated by Chlorophyll-a (Fig. 3), a cyclic tetrapyrrole molecule. Through its extensive conjugated \(\pi\)-system, Chlorophyll-a represents the basic building block of a photosystem. During photosynthesis, when a photon strikes a molecule of Chlorophyll-a, it excites an electron to a higher energy state. The energy produced is transferred from molecule to molecule within the light-harvesting complex via resonance energy transfer.
Throughout the whole process, the energy transfer is materialised as a quantum-coherent phenomenon [31], and that is where long-range interactions become crucial. Being able to capture them could positively impact the development of efficient artificial photosynthetic systems [32] and enhance solar energy technologies [33]. **On Oversquashing in Molecular Graphs** To capture interactions in molecular graphs, in recent years _Message Passing Neural Networks_ (MPNNs) [1] have been taken as a reference model, with remarkable results. These are a class of Graph Neural Networks (GNNs) that update the spatial representation of a node \(u\) with layers of the form: \[\mathbf{h}_{u}^{l+1}=\text{U}\Big{(}\mathbf{h}_{u}^{l},\underset{v\in\mathcal{N}(u)}{\text{AGG}}\Big{(}\mathbf{h}_{v}^{l}\Big{)}\Big{)}, \tag{1}\] where U is a function that _updates_ the node's current features with messages from its neighbours and AGG is a _permutation-invariant aggregation function_. When it is required to aggregate information between nodes located in remote parts of the graph, MPNNs as in Eq. 1 are susceptible to bottlenecks. These bottlenecks are manifested as an exponentially increasing amount of information constrained into vectors with a fixed representation. This is known in the literature as _Oversquashing_ [34, 35], a phenomenon that leads to sub-optimal performance when the prediction task is highly reliant on long-range interactions. Oversquashing arises in MPNNs because the propagation of information happens between nodes that are connected through edges, which induces a computational graph directly mirroring the input graph structure. Message passing schemes on complex topological spaces, or _Topological Message Passing_ [36, 14, 12], mitigate this issue by not just considering nodes (0-cells) and edges (1-cells), but also involving higher-dimensional elements such as rings (2-cells). With a richer topological structure, the messages can be propagated through these higher-dimensional cells, effectively providing shortcuts or additional routes for information flow. With this construction, the underlying computational graph is no longer coupled with the input graph structure. Figure 3: Structure of Chlorophyll-a, the most common molecule in photosynthetic organisms. ### Group Interactions Like long-range interactions, group interactions play a fundamental role in chemical and biological processes. One example is the case of aromatic stacking. Aromatic stacking refers to the non-covalent interactions between aromatic rings, such as those found in the amino acid tryptophan or the nucleotide bases of DNA [37]. These interactions are essential in various biological processes, _including protein folding, DNA/RNA structure, and ligand-receptor interactions_ [38]. Another example is given by the Polycyclic Aromatic Hydrocarbons (PAHs), as they play a significant role in astrophysics and astrobiology. PAHs (Fig. 4) are thought to be among the most abundant and widespread organic molecules in the universe. They are identified in space via their unique infrared emission spectra [39] and can form in the extreme conditions of space. Studying them can potentially contribute to our understanding of the formation of life's essential building blocks. **On the convergence speed of CINs** Cellular Isomorphism Networks (CINs) [14] are known to be powerful architectures able to model higher-order signals using a hierarchical message-passing procedure on cell complexes.
Analysing the colouring procedure of CINs, the edges must first get the messages coming from the upper neighbourhood, and only at the next iteration can they refine the colour of the rings (Fig. 5 (left)). Although this colouring refinement procedure holds the same expressive power ([14], Thm. 7), it is possible to reach a _faster convergence_ by including messages coming from the lower neighbourhood of the cells. This allows for a direct interaction between the rings of the complex, which removes the bottleneck caused by edges waiting for upper messages before updating ring colours (Fig. 5 (right)). Figure 5: In molecular graphs featuring regions with a high concentration of rings, incorporating lower messages into cellular isomorphism networks expedites the convergence of the 2-cell colours. ## 4 Enhancing Topological Message Passing In this section, we will describe our enhanced topological message-passing scheme that broadens the exchange of information within the cell complex. In particular, our enhancement consists of the inclusion of lower messages in the scheme proposed in [14]. As we will show later in the section, including lower messages will let the information flow within a broader neighbourhood of the complex, enabling group interaction via the messages exchanged between the rings that are lower adjacent and escaping potential bottlenecks [34] via messages between lower adjacent edges. ### Boundary Messages These are the messages that each cell \(\sigma\) receives from its boundary elements \(\tau\in\mathcal{B}(\sigma)\). We denote the information coming from the boundary of \(\sigma\) as \(m_{\mathcal{B}}(\sigma)\); it consists of a permutation-invariant aggregation that takes as input all the _boundary messages_ \(M_{\mathcal{B}}\) between the feature vector \(h_{\sigma}\) and all the feature vectors of its boundary elements \(h_{\tau}\), as in Fig. 6. Formally: \[m_{\mathcal{B}}^{l+1}(\sigma)=\underset{\tau\in\mathcal{B}(\sigma)}{\text{AGG}}\Big{(}M_{\mathcal{B}}\big{(}h_{\sigma}^{l},h_{\tau}^{l}\big{)}\Big{)}.\] This operation lifts the information from lower cells to higher-order ones, facilitating effective bottom-up communication across the complex. Leveraging the theory developed in [40] for graphs and later on in [14] for regular cell complexes, to maximize the representational power of the underlying network, the function \(m_{\mathcal{B}}\) is implemented using a Multi-Layer Perceptron (MLP) with 2 layers. Figure 6: Boundary messages received by an edge (top) and a ring (bottom).
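As a concrete reading of the formula above, the sketch below (ours; the feature dimension, random weights and cell names are placeholders) computes \(m_{\mathcal{B}}^{l+1}(\sigma)\) with a sum aggregation and a two-layer MLP message function:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # feature dimension (placeholder)

# Hypothetical parameters of the two-layer MLP message function M_B.
W1 = rng.normal(size=(2 * d, d))
W2 = rng.normal(size=(d, d))

def M_B(h_sigma, h_tau):
    z = np.concatenate([h_sigma, h_tau]) @ W1
    return np.maximum(z, 0.0) @ W2  # ReLU between the two layers

def m_B(sigma, boundary, h):
    """Sum-aggregated boundary messages for cell sigma."""
    return sum(M_B(h[sigma], h[tau]) for tau in boundary[sigma])

boundary = {"e1": ["v1", "v2"]}        # an edge and its two endpoint vertices
h = {c: rng.normal(size=d) for c in ["e1", "v1", "v2"]}
print(m_B("e1", boundary, h).shape)    # (8,)
```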
### Upper Messages These are the messages that each cell \(\sigma\) receives from its upper neighbouring cells \(\tau\in\mathcal{N}^{\uparrow}(\sigma)\) (i.e., the blue arrows in Fig. 7) and from common co-boundary cells \(\delta\in\mathcal{C}o(\sigma,\tau)\) (i.e., the purple arrows in Fig. 7). We denote the information coming from the upper neighbourhood of \(\sigma\) and the common co-boundary elements as \(m_{\mathcal{N}^{\uparrow}}\). It consists of a permutation-invariant aggregation that takes as input all the _upper messages_ \(M_{\mathcal{N}^{\uparrow}}\) between the feature vector \(h_{\sigma}\), all the feature vectors in its upper neighbourhood \(h_{\tau}\) and all the cells in the common co-boundary neighbourhood, \(h_{\delta}\). Formally: \[m_{\mathcal{N}^{\uparrow}}^{l+1}(\sigma)=\underset{\begin{subarray}{c}\tau\in\mathcal{N}^{\uparrow}(\sigma)\\ \delta\in\mathcal{C}o(\sigma,\tau)\end{subarray}}{\text{AGG}}\Big{(}M_{\mathcal{N}^{\uparrow}}\big{(}h_{\sigma}^{l},h_{\tau}^{l},h_{\delta}^{l}\big{)}\Big{)}.\] This operation will let the information flow within a _narrow_ neighbourhood of \(\sigma\), ensuring consistency and coherence with respect to the underlying topology of the complex. We implement the function \(m_{\mathcal{N}^{\uparrow}}\) using a 2-layer MLP, and \(M_{\mathcal{N}^{\uparrow}}\) is represented as a single dense layer followed by a point-wise non-linearity. Figure 7: Upper messages are sent to a node (top) and to an edge (bottom). Co-boundary messages are shown in purple. ### Lower Messages These are the messages that each cell \(\sigma\) receives from its lower neighbouring cells \(\tau\in\mathcal{N}^{\downarrow}(\sigma)\) (i.e., the red arrows in Fig. 8) and from common boundary cells \(\delta\in\mathcal{B}(\sigma,\tau)\) (i.e., the green arrows in Fig. 8). We denote a function that aggregates the information coming from the lower neighbourhood of \(\sigma\) and the common boundary elements as \(m_{\mathcal{N}^{\downarrow}}\). It consists of a permutation-invariant aggregation that takes as input all the _lower messages_ \(M_{\mathcal{N}^{\downarrow}}\) between the feature vector \(h_{\sigma}\), all the feature vectors in its lower neighbourhood \(h_{\tau}\) and all the cells in the common boundary neighbourhood, \(h_{\delta}\). Formally: \[m_{\mathcal{N}^{\downarrow}}^{l+1}(\sigma)=\underset{\begin{subarray}{c}\tau\in\mathcal{N}^{\downarrow}(\sigma)\\ \delta\in\mathcal{B}(\sigma,\tau)\end{subarray}}{\text{AGG}}\Big{(}M_{\mathcal{N}^{\downarrow}}\big{(}h_{\sigma}^{l},h_{\tau}^{l},h_{\delta}^{l}\big{)}\Big{)}.\] As pictorially shown in Fig. 8 (top), this operation helps a _broader_ diffusion of the information between edges that are not necessarily part of a ring. Also, it lets the rings of the complex communicate directly (Fig. 8 (bottom)). Similarly to the upper messages, we implement \(m_{\mathcal{N}^{\downarrow}}\) using an MLP with 2 layers. The function \(M_{\mathcal{N}^{\downarrow}}\) is implemented using a single dense layer followed by a point-wise non-linearity. Figure 8: Lower messages are sent to an edge (top) and to a ring (bottom). Boundary messages are shown in green. ### Update and Readout Update and Readout operations are performed in line with [14]. The exception is that in our case, the update function receives additional information provided by the messages that a cell \(\sigma\) receives from its lower neighbourhood: \[h_{\sigma}^{l+1}=U\Big{(}h_{\sigma}^{l},m_{\mathcal{B}}^{l+1}(\sigma),m_{\mathcal{N}^{\uparrow}}^{l+1}(\sigma),m_{\mathcal{N}^{\downarrow}}^{l+1}(\sigma)\Big{)}. \tag{2}\] We represent the update function \(U\) using a single fully connected layer followed by a point-wise non-linearity that uses a different set of parameters for each layer of the model and for each dimension of the complex. After \(L\) layers, we compute the representation of the complex \(\mathcal{C}\) as: \[h_{\mathcal{C}}=R\Big{(}\{\{h_{\sigma}^{L}\}\}_{\dim(\sigma)=0}^{2}\Big{)}, \tag{3}\] where \(\{\{h_{\sigma}^{L}\}\}\) is the multi-set of cells' features at layer \(L\). In practice, the representation of the complex is computed in two stages: first, for each dimension of the complex, we compute the representation of the cells at dimension \(k\) by applying a mean or sum readout operation. This results in one representation for the vertices \(h_{\mathcal{V}}\), one for the edges \(h_{\mathcal{E}}\) and one for the rings \(h_{\mathcal{R}}\). Then, we compute a representation for the complex \(\mathcal{C}\) as: \(h_{\mathcal{C}}=\mathrm{MLP}_{R,\mathcal{V}}\big{(}h_{\mathcal{V}}\big{)}+\mathrm{MLP}_{R,\mathcal{E}}\big{(}h_{\mathcal{E}}\big{)}+\mathrm{MLP}_{R,\mathcal{R}}\big{(}h_{\mathcal{R}}\big{)}\), where each \(\mathrm{MLP}_{R,\cdot}\) is implemented as a single fully-connected layer followed by a non-linearity. Finally, \(h_{\mathcal{C}}\) is forwarded to a final dense layer to obtain the predictions.
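Putting the pieces together, the following sketch (ours; single random linear maps with ReLU stand in for the trained MLPs, and the toy complex is a placeholder) realises the update of Eq. 2 for one ring of a complex in which two rings share an edge, so that the ring's lower message arrives directly from the other ring:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # feature dimension (placeholder)
# One weight matrix per message/update function; input width = n concatenated features.
W = {k: rng.normal(size=(n * d, d)) for k, n in
     [("B", 2), ("up", 3), ("low", 3), ("U", 4)]}

def msg(key, *feats):
    return np.maximum(np.concatenate(feats) @ W[key], 0.0)

def layer_update(sigma, h, B, up, low):
    """Eq. 2: combine boundary, upper and lower messages with the cell's state.
    up/low map a cell to (neighbour, shared co-boundary/boundary cell) pairs."""
    m_B = sum((msg("B", h[sigma], h[t]) for t in B.get(sigma, [])), np.zeros(d))
    m_up = sum((msg("up", h[sigma], h[t], h[dl]) for t, dl in up.get(sigma, [])), np.zeros(d))
    m_low = sum((msg("low", h[sigma], h[t], h[dl]) for t, dl in low.get(sigma, [])), np.zeros(d))
    return np.maximum(np.concatenate([h[sigma], m_B, m_up, m_low]) @ W["U"], 0.0)

# Two rings r1, r2 glued along edge e: r1's lower message comes directly from r2.
h = {c: rng.normal(size=d) for c in ["r1", "r2", "e", "e1", "e2"]}
B = {"r1": ["e", "e1", "e2"]}
low = {"r1": [("r2", "e")]}   # (lower neighbour, shared boundary edge)
print(layer_update("r1", h, B, {}, low).shape)   # (8,)
```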
A neural architecture that updates the cell's representation using the message passing scheme defined in Eq. 2 and obtains complex-wise representations as in Eq. 3 takes the name of _Enhanced Cell Isomorphism Network_ (CIN++). The expressive power of CIN++ can then be directly inferred from the expressivity results reported in [14]. **Theorem 1**.: _Let \(\mathcal{F}:\mathcal{C}\rightarrow\mathbb{R}^{d}\) be a CIN++ network. With a sufficient number of layers and injective neighbourhood aggregators, \(\mathcal{F}\) maps any pair of complexes \((\mathcal{C}_{1},\mathcal{C}_{2})\) into an embedding space in which \(\mathcal{F}(\mathcal{C}_{1})\neq\mathcal{F}(\mathcal{C}_{2})\) whenever the Cellular Weisfeiler-Lehman (CWL) test determines that \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are non-isomorphic._ ## 5 Experiments In this section, we validate the properties of the proposed message-passing scheme in different real-world scenarios involving graph-structured data. We focus our experiments on a large-scale molecular benchmark (ZINC) [61] and a long-range graph benchmark (Peptides) [62]. Unless otherwise specified, in each Multi-Layer Perceptron we apply Batch Normalization [63] between the linear transformations and ReLU activations, and we use Adam [64] with a starting learning rate of 0.001 that is halved whenever the validation loss reaches a plateau, with a patience value set to 20. We used an early stopping criterion that terminates the training when the learning rate reaches a threshold. Unless stated otherwise, we employ \(1e^{-5}\) as the early stopping threshold. ### Large-Scale Molecular Benchmarks We evaluate topological message passing on a large-scale molecular benchmark from the _ZINC_ database [65]. The benchmark is composed of two datasets: _ZINC-Full_ (consisting of 250K molecular graphs) and _ZINC-Subset_ (an extract of 12k graphs from ZINC-Full) from [61]. In these experiments, we used the same experimental setup of [14], with the exception that we used 3 layers with a hidden dimension of 64. This restricts the parameter budget of our model to nearly 500K parameters. We follow the training and evaluation procedures in [61]. All results are illustrated in Tab. 1. _Without any use of feature augmentation_ such as positional encoding, our model exhibits particularly strong performance on these benchmarks: it attains state-of-the-art results on _ZINC-Subset_, outperforming other models by a significant margin, and is on par with the best baselines for _ZINC-Full_. ### Long-Range Graph Benchmarks To test the effectiveness of enhanced topological message passing for discovering long-range interactions, we evaluate our method on a long-range molecular benchmark [62].
The datasets used from the benchmark are derived from 15,535 peptides that compose the SATPdb database [66]. For this benchmark, we evaluate our method on the tasks of peptide structure prediction (Peptides-struct) and peptide function prediction (Peptides-func). For both datasets, we did not employ any feature augmentation such as positional encoding. We ensured that the parameter budget was constrained to 500K. We repeat the training with 4 different seeds and report the mean of the test AP and MAEs at the time of early stopping. For Peptides-struct, we used a cellular lifting map that considers all the induced cycles of dimension up to 8 as rings. We used 3 layers with 64 as a hidden dimension, a batch size of 128 and a sum aggregation to obtain complex-level embeddings. For Peptides-func, we attach 2-cells to all the induced cycles of dimension up to 6. We used 4 layers with an embedding \begin{table} \begin{tabular}{c l c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Model} & \multirow{2}{*}{Time (s)} & \multirow{2}{*}{Params} & \multicolumn{2}{c}{Test MAE} \\ & & & & ZINC-Subset & ZINC-Full \\ \hline \multirow{8}{*}{MPNNs} & GIN [41] & 8.05 & 509,549 & 0.526\(\pm\)0.051 & 0.088\(\pm\)0.002 \\ & GraphSAGE [42] & 6.02 & 505,341 & 0.398\(\pm\)0.002 & 0.126\(\pm\)0.003 \\ & GAT [43] & 8.28 & 531,345 & 0.384\(\pm\)0.007 & 0.111\(\pm\)0.002 \\ & GCN [44] & 5.85 & 505,079 & 0.367\(\pm\)0.011 & 0.113\(\pm\)0.002 \\ & MoNet [45] & 7.19 & 504,013 & 0.292\(\pm\)0.006 & 0.090\(\pm\)0.002 \\ & GatedGCN-PE [46] & 10.74 & 505,011 & 0.214\(\pm\)0.006 & - \\ & MPNN(sum) [1] & - & 480,805 & 0.145\(\pm\)0.007 & - \\ & PNA [47] & - & 387,155 & 0.142\(\pm\)0.010 & - \\ \hline Higher-order & RingGNN [48] & 178.03 & 527,283 & 0.353\(\pm\)0.019 & - \\ GNNs & 3WLGNN [49] & 179.35 & 507,603 & 0.303\(\pm\)0.068 & - \\ \hline Substructure GNNs & GSN [50] & - & \(\sim\)500k & 0.101\(\pm\)0.010 & - \\ \hline \multirow{4}{*}{Subgraph GNNs} & NGNN [51] & - & \(\sim\)500k & 0.111\(\pm\)0.003 & 0.029\(\pm\)0.001 \\ & DSS-GNN [52] & - & 445,709 & 0.097\(\pm\)0.006 & - \\ & GNN-AK [53] & - & \(\sim\)500k & 0.105\(\pm\)0.010 & - \\ & GNN-AK+ [53] & - & \(\sim\)500k & 0.091\(\pm\)0.011 & - \\ & SUN [54] & 15.04 & 526,489 & 0.083\(\pm\)0.003 & - \\ \hline \multirow{4}{*}{Graph Transformers} & GT [55] & - & 588,929 & 0.226\(\pm\)0.014 & - \\ & SAN [56] & - & 508,577 & 0.139\(\pm\)0.006 & - \\ & Graphormer [57] & 12.26 & 489,321 & 0.122\(\pm\)0.006 & 0.052\(\pm\)0.005 \\ & URPE [58] & 12.40 & 491,737 & 0.086\(\pm\)0.007 & 0.028\(\pm\)0.002 \\ \hline GD-WL & Graphormer-GD [59] & 12.52 & 502,793 & 0.081\(\pm\)0.009 & 0.025\(\pm\)0.004 \\ \hline Topological NNs & CIN-Small [60] & - & \(\sim\)100k & 0.094\(\pm\)0.004 & 0.044\(\pm\)0.003 \\ & CIN++ (ours) & 8.29 & 501,967 & 0.077\(\pm\)0.004 & 0.027\(\pm\)0.007 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance results on ZINC benchmark. We use gold, silver, and bronze colors to indicate the best performance.
\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{Peptides-func} & \multicolumn{2}{c}{Peptides-struct} \\ & **Train AP** & **Test AP** \(\uparrow\) & **Train MAE** & **Test MAE** \(\downarrow\) \\ \hline MLP & 0.4217\(\pm\)0.0049 & 0.4060\(\pm\)0.0021 & 0.4273\(\pm\)0.0011 & 0.4351\(\pm\)0.0008 \\ GCN & 0.8840\(\pm\)0.0131 & 0.5930\(\pm\)0.0023 & 0.2939\(\pm\)0.0055 & 0.3496\(\pm\)0.0013 \\ GCNII & 0.7271\(\pm\)0.0278 & 0.5543\(\pm\)0.0078 & 0.2957\(\pm\)0.0025 & 0.3471\(\pm\)0.0010 \\ GINE & 0.7682\(\pm\)0.0154 & 0.5498\(\pm\)0.0079 & 0.3116\(\pm\)0.0047 & 0.3547\(\pm\)0.0045 \\ GatedGCN & 0.8695\(\pm\)0.0402 & 0.5864\(\pm\)0.0077 & 0.2761\(\pm\)0.0032 & 0.3420\(\pm\)0.0013 \\ GatedGCN+RWSE & 0.9131\(\pm\)0.0321 & 0.6069\(\pm\)0.0035 & 0.2578\(\pm\)0.0116 & 0.3357\(\pm\)0.0006 \\ \hline Transformer+LapPE & 0.8438\(\pm\)0.0263 & 0.6326\(\pm\)0.0126 & 0.2403\(\pm\)0.0066 & 0.2529\(\pm\)0.0016 \\ SAN+LapPE & 0.8217\(\pm\)0.0280 & **0.6384\(\pm\)0.0121** & 0.2822\(\pm\)0.0108 & 0.2683\(\pm\)0.0043 \\ SAN+RWSE & 0.8612\(\pm\)0.0219 & 0.6439\(\pm\)0.0075 & 0.2680\(\pm\)0.0038 & 0.2545\(\pm\)0.0012 \\ \hline CIN++ (ours) & 0.8943\(\pm\)0.0226 & **0.6569\(\pm\)0.0117** & 0.229\(\pm\)0.0079 & **0.2523\(\pm\)0.0013** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance results for Peptides-func (graph classification) and Peptides-struct (graph regression). Best scores are highlighted using gold, silver, and bronze colors. dimension of 50, and a batch size of 64. A dropout [67] with a probability of 0.15 is inserted. In contrast with the other benchmarks, we set a starting learning rate of \(4e^{-4}\) and a weight decay of \(5e^{-5}\). The final readout is performed with a mean aggregation. As shown in Tab. 2, we achieve very high performance on these tasks even without any use of feature augmentation. ## 6 Conclusions, Related Works and Future Developments **Broader Impacts** This work provides evidence of how the enhanced topological message-passing scheme proposed in this work allows the integration of local and global information within a cell complex in the context of computational chemistry. In particular, our model captures complex dependencies and long-range interactions more effectively. We foresee the proposed work having a broad impact within the field of computational chemistry, as our scheme offers a robust and versatile approach to predict meaningful properties of chemical systems by accurately modelling complex dependencies and capturing long-range and group interactions. **Related Works** In light of Topological Deep Learning being an emerging research area that has been introduced quite recently [68], numerous pioneering works have appeared in this field. In [13] the authors proposed a Simplicial Weisfeiler-Lehman (SWL) colouring procedure for distinguishing non-isomorphic simplicial complexes and a provably powerful message passing scheme based on SWL, which generalises Graph Isomorphism Networks [40]. This was later refined in [14], where the authors introduced CW Networks (CWNs), a hierarchical message-passing scheme on cell complexes proven to be strictly more powerful than the WL test and not less powerful than the 3-WL test. In [12], the authors provide a general message-passing mechanism over cell complexes; however, they do not study the expressive power of the proposed scheme, nor its complexity. Furthermore, they did not experimentally validate its performance.
The works in [69; 70] introduced Neural Sheaf Diffusion Models, neural architectures that learn a sheaf structure on graphs to improve learning performance on transductive tasks in heterophilic graphs. Meanwhile, attentional schemes have appeared in topological deep learning in the context of simplicial complexes [71; 72], cellular complexes [19], sheaves [73] and combinatorial complexes [74]. For a more detailed examination of the architectures developed in the field of topological deep learning, we refer the reader to the survey of Papillon [75]. Recent works have also considered rings within the message passing scheme by means of Junction Trees (JT) [76] and by augmenting node features with information about cycles [77]. However, it is easy to see that these schemes have a different design than the one provided in this work. **Limitations** While our work demonstrates that topological message passing effectively models complex dependencies and long-range interactions in chemical systems, we acknowledge that the complexity of the proposed method inherently increases due to the cellular lifting maps and the additional messages sent throughout the complex. We mitigate this computational overhead by mapping all the graphs present in the datasets into cell complexes in a pre-processing stage and storing them for later use. Additionally, the overhead of our message-passing scheme is mitigated by the fact that the operations within the same layer are naturally decoupled. Efficient network implementations make it possible to update the representation of a cell \(\sigma\) in a concurrent execution [78], amortizing the cost to be proportional to the largest neighbourhood of \(\sigma\). **Conclusions** Our study has presented an innovative approach to neural networks operating on graph-structured data. Current state-of-the-art models do not naturally account for a principled way to model group interactions. We addressed this by introducing an enhancement of the Topological Message Passing scheme developed in [14]. The newly proposed Topological Message Passing scheme, named CIN++, enables a direct interaction within high-order structures of the underlying cell complex, by letting messages flow within its lower neighbourhood without sacrificing the model's expressivity. By allowing the exchange of messages between higher-order structures, we significantly enhance the model's capacity to capture multi-way relationships in the data. We have demonstrated that the ability to model long-range and group interactions is critical for capturing real-world chemistry-related problems. In particular, the natural affinity of cellular complexes for representing higher-dimensional structures and topological features will provide a more detailed understanding of complex chemical systems compared to traditional models.
2305.15653
Alternating Subgradient Methods for Convex-Concave Saddle-Point Problems
We propose an alternating subgradient method with non-constant step sizes for solving convex-concave saddle-point problems associated with general convex-concave functions. We assume that the sequence of our step sizes is not summable but square summable. Then under the popular assumption of uniformly bounded subgradients, we prove that a sequence of convex combinations of function values over our iterates converges to the value of the function at a saddle-point. Additionally, based on our result regarding the boundedness of the sequence of our iterates, we show that a sequence of the function evaluated at convex combinations of our iterates also converges to the value of the function over a saddle-point. We implement our algorithms in examples of a linear program in inequality form, a least-squares problem with $\ell_{1}$ regularization, a matrix game, and a robust Markowitz portfolio construction problem. To accelerate convergence, we reorder the sequence of step sizes in descending order, which turned out to work very well in our examples. Our convergence results are confirmed by our numerical experiments. Moreover, we also numerically compare our iterate scheme with iterate schemes associated with constant step sizes. Our numerical results support our choice of step sizes. Additionally, we observe the convergence of the sequence of function values over our iterates in multiple experiments, which currently lacks theoretical support.
Hui Ouyang
2023-05-25T02:01:34Z
http://arxiv.org/abs/2305.15653v1
# Alternating Subgradient Methods for Convex-Concave Saddle-Point Problems ###### Abstract We propose an alternating subgradient method with non-constant step sizes for solving convex-concave saddle-point problems associated with general convex-concave functions. We assume that the sequence of our step sizes is not summable but square summable. Then under the popular assumption of uniformly bounded subgradients, we prove that a sequence of convex combinations of function values over our iterates converges to the value of the function at a saddle-point. Additionally, based on our result regarding the boundedness of the sequence of our iterates, we show that a sequence of the function evaluated at convex combinations of our iterates also converges to the value of the function over a saddle-point. We implement our algorithms in examples of a linear program in inequality form, a least-squares problem with \(\ell_{1}\) regularization, a matrix game, and a robust Markowitz portfolio construction problem. To accelerate convergence, we reorder the sequence of step sizes in descending order, which turned out to work very well in our examples. Our convergence results are confirmed by our numerical experiments. Moreover, we also numerically compare our iterate scheme with iterate schemes associated with constant step sizes. Our numerical results support our choice of step sizes. Additionally, we observe the convergence of the sequence of function values over our iterates in multiple experiments, which currently lacks theoretical support. ## 1 Introduction In the whole work, \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) are Hilbert spaces and \(X\subseteq\mathcal{H}_{1}\) and \(Y\subseteq\mathcal{H}_{2}\) are nonempty closed and convex sets. The Hilbert direct sum \(\mathcal{H}_{1}\times\mathcal{H}_{2}\) of \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) is equipped with the inner product \[(\forall(x,y)\in X\times Y)(\forall(u,v)\in X\times Y)\quad\left\langle(x,y),(u,v)\right\rangle=\left\langle x,u\right\rangle+\left\langle y,v\right\rangle \tag{1.1}\] and the induced norm \[(\forall(x,y)\in X\times Y)\quad\left\|(x,y)\right\|^{2}=\left\langle(x,y),(x,y)\right\rangle=\left\langle x,x\right\rangle+\left\langle y,y\right\rangle=\left\|x\right\|^{2}+\left\|y\right\|^{2}. \tag{1.2}\] Throughout this work, \(f:X\times Y\to\mathbf{R}\cup\{-\infty,+\infty\}\) satisfies that \((\forall y\in Y)\)\(f(\cdot,y):X\to\mathbf{R}\cup\{+\infty\}\) is proper and convex, and that \((\forall x\in X)\)\(f(x,\cdot):Y\to\mathbf{R}\cup\{-\infty\}\) is proper and concave. (Such an \(f\) is referred to as a _convex-concave function_ from now on.) In this work, we aim to solve the following _convex-concave saddle-point problem_ \[\underset{x\in X}{\text{minimize}}\ \underset{y\in Y}{\text{maximize}}\ f(x,y). \tag{1.3}\] We assume that \((\forall(\bar{x},\bar{y})\in X\times Y)\)\(\partial_{x}f(\bar{x},\bar{y})\neq\varnothing\) and \(\partial_{y}(-f(\bar{x},\bar{y}))\neq\varnothing\). We also suppose that the solution set of (1.3) is nonempty, that is, there exists a _saddle-point_\((x^{*},y^{*})\in X\times Y\) of \(f\) satisfying \[(\forall x\in X)(\forall y\in Y)\quad f(x^{*},y)\leq f(x^{*},y^{*})\leq f(x,y^{*}).
\tag{1.4}\] ### Related work Convex-concave saddle-point problems arise in a wide range of applications such as resource allocation problems for networked systems, game theory, finance, robust and minimax optimization, image processing, generative adversarial networks, adversarial training, robust optimization, primal-dual reinforcement learning, and machine learning (see, e.g., [9], [4], [12], [6], [13], and [14] for details). Owing to its easy implementation, low memory requirements, and low barrier to usage, the subgradient method is one of the most popular methods for solving convex-concave saddle-point problems. For interested readers, we recommend [9, Section 1] with a summary of various subgradient methods for solving convex-concave saddle-point problems and [13, Section 3] with a list of different algorithms and literature on solving special cases or more general versions of convex-concave saddle-point problems. Among the literature on subgradient methods for solving convex-concave saddle-point problems, the author in [10] proposed a primal-dual subgradient method with two control sequences (one aggregates the support functions in the dual space, and the other establishes a dynamically updated scale between the primal and dual spaces) for different types of nonsmooth problems with convex structure. Moreover, the author provides a variant of the proposed subgradient scheme for convex-concave saddle-point problems in [10, Section 4], and shows an upper bound on a sequence constructed by their iterates under some assumptions, including the existence of certain strongly convex prox-functions and the uniform boundedness of subgradients. In addition, the authors of [12] worked on saddle problems including the partial supremum or infimum of convex-concave functions, which is a more general version of the convex-concave saddle-point problem considered in this work. They applied the language and methods presented in [8], used the idea of conic-representable saddle-point programs to automatically reduce a saddle problem to a single convex optimization problem, and developed an open-source package called DSP for users to easily formulate and solve saddle problems. ### Comparison with related work The main theoretical results in this work are inspired by [9, Section 3] by Nedic and Ozdaglar and [1, Section 6] by Boyd. In [9, Section 3], the authors worked on a subgradient algorithm with an averaging scheme for generating approximate saddle-points of a convex-concave function. They also showed the convergence of function values at iterate averages to the function value at a saddle-point, with the error level being a function of the step-size value, under some assumptions regarding the boundedness of the sequence of iterates or compactness of related sets. [1] is a lecture note by Boyd for the course EE364b at Stanford University, which covers various subgradient methods and techniques of their convergence proofs. Convex-concave saddle-point problems are, however, not considered in [1]. Here, we apply some convergence proof techniques and popular assumptions on sequences of step sizes, as presented in [1]. We state the main differences between this work and [9] below. * Our iterate scheme (3.1) replaces the constant step-size \(\alpha\) in the iterate scheme studied in [9] with a sequence \((t_{k})_{k\in\mathbf{N}}\).
* Under some boundedness of the sequence of iterates or compactness of related sets, the authors in [9, Section 3] considered the convergence of \(\frac{1}{k+1}\sum_{i=0}^{k}f(x^{i},y^{i})\to f(x^{*},y^{*})\) and \(f(\frac{1}{k}\sum_{i=0}^{k-1}x^{i},\frac{1}{k}\sum_{i=0}^{k-1}y^{i})\to f(x^{*},y^{*})\), where \((x^{*},y^{*})\) is a saddle-point of \(f\) and \(\big{(}(x^{k},y^{k})\big{)}_{k\in\mathbf{N}}\) is generated by the iterate scheme (3.1) below with \((\forall k\in\mathbf{N})\)\(t_{k}\equiv\alpha\in\mathbf{R}_{++}\). In this work, we directly establish the boundedness of the sequence \(((x^{k},y^{k}))_{k\in\mathbf{N}}\) of iterates generated by our scheme (3.1). So without any assumption on the boundedness of the sequence of iterates or compactness of related sets, we prove \(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\to f(x^{*},y^{*})\) in Theorem 3.1 and \(f\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}x^{i},\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}y^{i}\right)\to f(x^{*},y^{*})\) in Theorem 3.2. * In [9], after Section 3, by replacing the second projector in their original iterate scheme with a projector onto a compact convex set containing the set of dual optimal solutions, the authors introduced their primal-dual subgradient method; moreover, under a standard Slater constraint qualification and a uniform boundedness assumption on related subgradients, the authors theoretically studied the estimate on the convergence rate of generated primal sequences for finding approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function of a convex constrained optimization problem. Although we don't theoretically work on finding saddle-points of Lagrangians (which are special cases of convex-concave functions), we numerically apply our algorithm to Lagrangian functions of a linear program in inequality form and of a least-squares problem with \(\ell_{1}\) regularization in Sections 4.2 and 4.3. Furthermore, in Sections 4.4 and 4.5, we also numerically confirm our theoretical results with a matrix game and a robust Markowitz portfolio construction problem. In addition, we also apply our algorithm to another Lagrangian of an easy constrained convex problem in Section 4.1 to show the importance or benefit of replacing the constant step-size \(\alpha\) with a sequence \((t_{k})_{k\in\mathbf{N}}\) satisfying some popular constraints of step sizes. ### Outline We mainly present some examples of convex-concave saddle-point problems in Section 2. In Section 3, we introduce our alternating subgradient methods for convex-concave saddle-point problems associated with a general convex-concave function. Then in the same section, we prove two desired results on the convergence to the value of the function over a saddle-point. In Section 4, we implement our algorithm and compare it to some other schemes of iterates in examples of the following problems: linear program in inequality form, least-squares problem with \(\ell_{1}\) regularization, matrix game, and robust Markowitz portfolio construction problem. To accelerate convergence, in our numerical experiments, we reorder the sequence \((t_{k})_{k\in\mathbf{N}}\) of step sizes in descending order, which turns out to work very well in our examples. We sum up our work in Section 5. ## 2 Preliminaries We point out some notation used in this work below.
\(\mathbf{R}\), \(\mathbf{R}_{+}\), \(\mathbf{R}_{++}\), and \(\mathbf{N}\) are the set of all real numbers, the set of all nonnegative real numbers, the set of all positive real numbers, and the set of all nonnegative integers, respectively. Let \(\mathcal{H}\) be a Hilbert space and let \(C\) be a nonempty closed and convex subset of \(\mathcal{H}\). The _projector_ (or _projection operator_) onto \(C\) is the operator, denoted by \(\mathrm{P}_{C}\), that maps every point in \(\mathcal{H}\) to its unique projection onto \(C\), that is, \((\forall x\in\mathcal{H})\ \|x-\mathrm{P}_{C}\,x\|=\inf_{c\in C}\|x-c\|\). Let \(g:\mathcal{H}\to\left]-\infty,+\infty\right]\) be proper. The _subdifferential of \(g\)_ is the set-valued operator \[\partial g:\mathcal{H}\to 2^{\mathcal{H}}:x\mapsto\{u\in\mathcal{H}\ :\ (\forall y\in\mathcal{H})\ \left\langle u,y-x\right\rangle+g(x)\leq g(y)\}.\] The following result is well-known and will be used several times later. For completeness, we show some details below. **Fact 2.1**.: _Let \((\bar{x},\bar{y})\in X\times Y\). The following statements are equivalent._ * (i) \((\bar{x},\bar{y})\) _is a saddle-point of the function_ \(f\)_._ * (ii) \((\forall(x,y)\in X\times Y)\ f(\bar{x},y)-f(x,\bar{y})\leq 0\)_._ * (iii) \(0\in\partial_{x}f(\bar{x},\bar{y})\) _and_ \(0\in\partial_{y}(-f(\bar{x},\bar{y}))\)_._ Proof.: (i) \(\Leftrightarrow\) (ii): This is trivial by recalling the definition (1.4) of the saddle-point. (i) \(\Leftrightarrow\) (iii): According to the definition of saddle-point (1.4), we observe that (i) is equivalent to the following \[(\forall x\in X)\ \left\langle 0,x-\bar{x}\right\rangle+f(\bar{x},\bar{y})\leq f(x,\bar{y})\ \text{and}\ (\forall y\in Y)\ \left\langle 0,y-\bar{y}\right\rangle+(-f(\bar{x},\bar{y}))\leq-f(\bar{x},y),\] which is exactly (iii). We present some examples of saddle-points below. **Example 2.1**.: Let \((\forall i\in\{1,2\})\ f_{i}:\mathcal{H}_{i}\to\mathbf{R}\) be a proper, convex, and differentiable function. Define \(f:\mathcal{H}_{1}\times\mathcal{H}_{2}\to\mathbf{R}\) as \((\forall(x,y)\in\mathcal{H}_{1}\times\mathcal{H}_{2})\ f(x,y)=f_{1}(x)-f_{2}(y)\). Let \((\bar{x},\bar{y})\in\mathcal{H}_{1}\times\mathcal{H}_{2}\). Then, by Fact 2.1, \((\bar{x},\bar{y})\) is a saddle-point of the function \(f\) if and only if \(0=\nabla f_{1}(\bar{x})\) and \(0=\nabla f_{2}(\bar{y})\). We shall revisit the examples below in Section 4 on numerical experiments. **Example 2.2** (Lagrange duality).: Note that when strong duality holds, a pair of primal and dual optimal points turns out to be a saddle-point; and the value of the Lagrangian over a saddle-point equals the optimal values of the corresponding primal and dual problems. We display Lagrangians of some problems for which strong duality holds below. 1. [4, Exercise 5.1] Consider the primal problem \[\begin{array}{ll}&\mbox{minimize}\;\;x^{2}+1\\ \mbox{(P${}_{1}$)}&\mbox{subject to}\;\;(x-2)(x-4)\leq 0,\end{array}\] with variable \(x\in{\bf R}\). The corresponding Lagrangian \(L:{\bf R}\times{\bf R}_{+}\to{\bf R}\) is \[\mbox{(L${}_{1}$)}\qquad L(x,y)=x^{2}+1+y(x-2)(x-4)=x^{2}(1+y)-6xy+8y+1,\] and the dual problem is \[\begin{array}{ll}&\mbox{maximize}\;\;10-y-\frac{9}{y+1}\\ \mbox{(D${}_{1}$)}&\mbox{subject to}\;\;y\geq 0,\end{array}\] with variable \(y\in{\bf R}\). It is easy to verify that the primal and dual optimal points are \(x^{*}=2\) and \(y^{*}=2\), respectively, and that the primal and dual optimal values are \(p^{*}=d^{*}=5\).
Then we deduce that \((x^{*},y^{*})=(2,2)\) is a saddle-point of the function \(L(x,y)\) and that \[\sup_{y\in{\bf R}_{+}}\inf_{x\in{\bf R}}L(x,y)=L(2,2)=5=\inf_{x\in{\bf R}}\sup_{y\in{\bf R}_{+}}L(x,y).\] 2. [4, Section 5.2.1] Consider the inequality form LP \[\begin{array}{ll}&\mbox{minimize}\;\;c^{T}x\\ \mbox{(P${}_{2}$)}&\mbox{subject to}\;\;Ax\leq b,\end{array}\] where \(x\in{\bf R}^{n}\) is the variable, and \(A\in{\bf R}^{m\times n}\), \(c\in{\bf R}^{n}\), and \(b\in{\bf R}^{m}\). As it is stated on [4, Page 225], the Lagrangian \(L:{\bf R}^{n}\times{\bf R}^{m}_{+}\to{\bf R}\) is \[\mbox{(L${}_{2}$)}\qquad L(x,y)=c^{T}x+y^{T}(Ax-b),\] and the dual problem is \[\begin{array}{ll}&\mbox{maximize}\;\;-b^{T}y\\ \mbox{(D${}_{2}$)}&\mbox{subject to}\;\;y\geq 0,\;A^{T}y+c=0,\end{array}\] with variable \(y\in{\bf R}^{m}\). Suppose that \(x^{*}\) and \(y^{*}\) are the optimal points of (P\({}_{2}\)) and (D\({}_{2}\)), respectively. It is clear that Slater's condition holds, so \((x^{*},y^{*})\) is a saddle-point of the function \(L(x,y)\) and \[\sup_{y\in{\bf R}^{m}_{+}}\inf_{x\in{\bf R}^{n}}L(x,y)=L(x^{*},y^{*})=\inf_{x\in{\bf R}^{n}}\sup_{y\in{\bf R}^{m}_{+}}L(x,y).\] 3. [5, Exercise 5.34(b)] Consider the following least-squares problem with \(\ell_{1}\) regularization \[\mbox{minimize}\;\;\frac{1}{2}\left\|Ax-b\right\|_{2}^{2}+\gamma\left\|x\right\|_{1}\] where \(x\in\mathbf{R}^{n}\) is the variable, and \(A\in\mathbf{R}^{m\times n}\), \(b\in\mathbf{R}^{m}\), and \(\gamma\in\mathbf{R}_{++}\). Clearly, it is equivalent to \[\begin{split}\text{minimize}\ \frac{1}{2}\left\|u\right\|_{2}^{2}+\gamma\left\|x\right\|_{1}\\ \text{(P${}_{3}$)}\qquad\text{subject to}\ u=Ax-b.\end{split}\] The corresponding Lagrangian \(L:\mathbf{R}^{n}\times\mathbf{R}^{m}\times\mathbf{R}^{m}\to\mathbf{R}\) is \[\begin{split}\text{(L${}_{3}$)}\qquad L(x,u,y)=&\frac{1}{2}\left\|u\right\|_{2}^{2}+\gamma\left\|x\right\|_{1}+y^{T}(Ax-b-u)\\ =&y^{T}\left(A\ \ \ -I\right)\begin{pmatrix}x\\ u\end{pmatrix}+\frac{1}{2}\left\|u\right\|_{2}^{2}+\gamma\left\|x\right\|_{1}-y^{T}b.\end{split}\] Note that \[\begin{split}\inf_{x\in\mathbf{R}^{n},u\in\mathbf{R}^{m}}L(x,u,y)=&\inf_{x\in\mathbf{R}^{n},u\in\mathbf{R}^{m}}\frac{1}{2}\left\|u\right\|_{2}^{2}+\gamma\left\|x\right\|_{1}+y^{T}(Ax-b-u)\\ =&-b^{T}y+\inf_{x\in\mathbf{R}^{n}}\left((A^{T}y)^{T}x+\gamma\left\|x\right\|_{1}\right)+\inf_{u\in\mathbf{R}^{m}}\left(-y^{T}u+\frac{1}{2}\left\|u\right\|_{2}^{2}\right),\end{split}\] that, by [4, Section 5.1.6 and Example 3.26], \[\begin{split}\inf_{x\in\mathbf{R}^{n}}(A^{T}y)^{T}x+\gamma\left\|x\right\|_{1}=&-\gamma\sup_{x\in\mathbf{R}^{n}}-\frac{1}{\gamma}(A^{T}y)^{T}x-\left\|x\right\|_{1}\\ =&\begin{cases}0&\text{if}\ \left\|-\frac{1}{\gamma}A^{T}y\right\|_{\infty}\leq 1\\ -\infty&\text{otherwise},\end{cases}\end{split}\] and that \[\inf_{u\in\mathbf{R}^{m}}-y^{T}u+\frac{1}{2}\left\|u\right\|_{2}^{2}=\inf_{u\in\mathbf{R}^{m}}\frac{1}{2}\left\|u-y\right\|_{2}^{2}-\frac{1}{2}\left\|y\right\|_{2}^{2}=-\frac{1}{2}\left\|y\right\|_{2}^{2}.\] We derive that \[\inf_{x\in\mathbf{R}^{n},u\in\mathbf{R}^{m}}L(x,u,y)=-b^{T}y-\frac{1}{2}\left\|y\right\|_{2}^{2},\] with \(\left\|A^{T}y\right\|_{\infty}\leq\gamma\), and that the dual problem is \[\begin{split}\text{maximize}\ \ 
-b^{T}y-\frac{1}{2}\left\|y\right\|_{2}^{2}\\ \text{(D${}_{3}$)}\qquad\text{subject to}\ \ \left\|A^{T}y\right\|_{\infty}\leq\gamma,\end{split}\] with variable \(y\in\mathbf{R}^{m}\). Suppose that \((x^{*},u^{*})\) and \(y^{*}\) are the optimal points of (P\({}_{3}\)) and (D\({}_{3}\)), respectively. In view of Slater's theorem, \(((x^{*},u^{*}),y^{*})\) is a saddle-point of the function \(L((x,u),y)\) and \[\sup_{y\in\mathbf{R}^{m}}\inf_{x\in\mathbf{R}^{n},u\in\mathbf{R}^{m}}L((x,u),y)=L((x^{*},u^{*}),y^{*})=\inf_{x\in\mathbf{R}^{n},u\in\mathbf{R}^{m}}\sup_{y\in\mathbf{R}^{m}}L((x,u),y).\] ## 3 Alternating subgradient methods Results in this section are inspired by [9, Section 3] by Nedic and Ozdaglar and [1] by Boyd for the course EE364b at Stanford University. In particular, the technique in the proof of Theorem 3.1 mimics that in [9, Section 3]. The authors in [9, Section 3] considered the convergence of \(\frac{1}{k+1}\sum_{i=0}^{k}f(x^{i},y^{i})\to f(x^{*},y^{*})\) and \(f(\frac{1}{k}\sum_{i=0}^{k-1}x^{i},\frac{1}{k}\sum_{i=0}^{k-1}y^{i})\to f(x^{*},y^{*})\) under some assumptions including boundedness or compactness, where \((x^{*},y^{*})\) is a saddle-point of \(f\) and \(\big{(}(x^{k},y^{k})\big{)}_{k\in\mathbf{N}}\) is generated by (3.1) below with \((\forall k\in\mathbf{N})\,t_{k}\equiv\alpha\in\mathbf{R}_{++}\). In addition, the proof of the boundedness of the iteration sequence \(\big{(}(x^{k},y^{k})\big{)}_{k\in\mathbf{N}}\) in Lemma 3.4 is motivated by [1, Section 6], which is essential to our convergence result in Theorem 3.2. ### Algorithm Let \((x^{0},y^{0})\in X\times Y\). We consider the sequence of iterations generated by \[(\forall k\in\mathbf{N})\quad x^{k+1}=\mathrm{P}_{X}(x^{k}-t_{k}g_{k})\quad\text{and}\quad y^{k+1}=\mathrm{P}_{Y}(y^{k}-t_{k}h_{k}), \tag{3.1}\] where \(\mathrm{P}_{X}\) and \(\mathrm{P}_{Y}\) are projectors onto \(X\) and \(Y\), respectively, \((\forall k\in\mathbf{N})\,\,g_{k}\in\partial_{x}f(x^{k},y^{k})\) and \(h_{k}\in\partial_{y}(-f(x^{k},y^{k}))\), and \((\forall k\in\mathbf{N}\smallsetminus\{0\})\,\,t_{k}\in\mathbf{R}_{+}\) and \(t_{0}\in\mathbf{R}_{++}\). Let's revisit the Lagrangian function of a least-squares problem with \(\ell_{1}\) regularization, presented in Example 2.2(iii), and illustrate the algorithm (3.1) on this particular function.
**Example 3.1**.: Consider the convex-concave function \(f:\mathbf{R}^{n+m}\times\mathbf{R}^{m}\to\mathbf{R}\) defined as \[(\forall((x,u),y)\in\mathbf{R}^{n+m}\times\mathbf{R}^{m})\quad f((x,u),y)=\frac{1}{2}\left\|u\right\|_{2}^{2}+\gamma\left\|x\right\|_{1}+y^{T}(Ax-b-u).\] In view of [2, Section 3.4], \[(\forall x\in\mathbf{R}^{n})\quad\mathbf{sign}(x)\in\partial\left\|x\right\|_{1},\quad\text{where}\quad(\forall i\in\{1,\dots,n\})\,\,(\mathbf{sign}(x))_{i}=\begin{cases}1&\text{if }x_{i}>0\\ 0&\text{if }x_{i}=0\\ -1&\text{if }x_{i}<0.\end{cases}\] Therefore, it is easy to see that in this case, the algorithm (3.1) is simply that \(((x_{0},Ax_{0}-b),y_{0})\in\mathbf{R}^{n+m}\times\mathbf{R}^{m}\) and for every \(k\in\mathbf{N}\), \[(x^{k+1},u^{k+1})= (x^{k},u^{k})-t_{k}\left(A^{T}y^{k}+\gamma\mathbf{sign}(x^{k}),-y^{k}+u^{k}\right)\] \[y^{k+1}= y^{k}-t_{k}\left(-Ax^{k}+u^{k}+b\right).\] ### Convergence results Henceforth, \(((x^{k},y^{k}))_{k\in\mathbf{N}}\) is constructed by our iterate scheme (3.1); moreover, \[(\forall k\in\mathbf{N})\quad\hat{x}_{k}=\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}x^{i}\text{ and }\hat{y}_{k}=\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}y^{i},\] where \((\forall k\in\mathbf{N}\smallsetminus\{0\})\,\,t_{k}\in\mathbf{R}_{+}\) and \(t_{0}\in\mathbf{R}_{++}\) are step sizes used in our iterate scheme (3.1). Suppose \((x^{*},y^{*})\in X\times Y\) is a saddle-point of a convex-concave function \(f\). In this subsection, we present convergence results Theorems 3.1 and 3.2 on \(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\to f(x^{*},y^{*})\) and \(f(\hat{x}_{k},\hat{y}_{k})\to f(x^{*},y^{*})\), respectively. To this end, we need the following auxiliary results. **Lemma 3.1**.: _Let \(x\in X\) and \(y\in Y\). The following results hold._ 1. _For every_ \(k\in\mathbf{N}\)_,_ \[t_{k}(f(x^{k},y^{k})-f(x,y^{k})) \leq\tfrac{1}{2}\left(\left\|x^{k}-x\right\|^{2}-\left\|x^{k+1}-x\right\|^{2}\right)+\tfrac{1}{2}t_{k}^{2}\left\|g_{k}\right\|^{2};\] (3.2a) \[t_{k}(f(x^{k},y)-f(x^{k},y^{k})) \leq\tfrac{1}{2}\left(\left\|y^{k}-y\right\|^{2}-\left\|y^{k+1}-y\right\|^{2}\right)+\tfrac{1}{2}t_{k}^{2}\left\|h_{k}\right\|^{2}.\] (3.2b) 2. _For every_ \(k\in\mathbf{N}\)_,_ \[\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-f(x,\hat{y}_{k}) \leq\frac{\left\|x^{0}-x\right\|^{2}-\left\|x^{k+1}-x\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}};\] (3.3a) \[f(\hat{x}_{k},y)-\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i}) \leq\frac{\left\|y^{0}-y\right\|^{2}-\left\|y^{k+1}-y\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}.\] (3.3b) Proof.: (i): Let \(k\in\mathbf{N}\). Because \(g_{k}\in\partial_{x}f(x^{k},y^{k})\) and \(h_{k}\in\partial_{y}(-f(x^{k},y^{k}))\), we have that \[(\forall x\in X)\quad\left\langle g_{k},x-x^{k}\right\rangle \leq f(x,y^{k})-f(x^{k},y^{k}); \tag{3.4a}\] \[(\forall y\in Y)\quad\left\langle h_{k},y-y^{k}\right\rangle \leq-f(x^{k},y)+f(x^{k},y^{k}). \tag{3.4b}\] Note that \(x=\mathrm{P}_{X}\,x\) and \(\mathrm{P}_{X}\) is nonexpansive.
According to (3.1), \[\left\|x^{k+1}-x\right\|^{2} =\left\|\mathrm{P}_{X}(x^{k}-t_{k}g_{k})-\mathrm{P}_{X}\,x\right\|^{2}\] \[\leq\left\|x^{k}-t_{k}g_{k}-x\right\|^{2}\] \[=\left\|x^{k}-x\right\|^{2}-2t_{k}\left\langle g_{k},x^{k}-x\right\rangle+t_{k}^{2}\left\|g_{k}\right\|^{2}\] \[\overset{\text{(3.4a)}}{\leq}\left\|x^{k}-x\right\|^{2}-2t_{k}\left(f(x^{k},y^{k})-f(x,y^{k})\right)+t_{k}^{2}\left\|g_{k}\right\|^{2},\] which rearranges to (3.2a). The inequality (3.2b) follows analogously, using the identity \(y=\mathrm{P}_{Y}\,y\), the nonexpansiveness of \(\mathrm{P}_{Y}\), and (3.4b). (ii): Since \(\hat{y}_{k}\) is a convex combination of \(y^{0},\ldots,y^{k}\) and \(f(x,\cdot)\) is concave, and since \(\hat{x}_{k}\) is a convex combination of \(x^{0},\ldots,x^{k}\) and \(f(\cdot,y)\) is convex, we have that \[f(x,\hat{y}_{k})\geq\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x,y^{i}) \tag{3.5a}\] and \[f(\hat{x}_{k},y)\leq\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y). \tag{3.5b}\] Sum (3.2a) over \(i\in\{0,1,\ldots,k\}\) to get that \[\sum_{i=0}^{k}t_{i}(f(x^{i},y^{i})-f(x,y^{i}))\leq\tfrac{1}{2}(\left\|x^{0}-x\right\|^{2}-\left\|x^{k+1}-x\right\|^{2})+\tfrac{1}{2}\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}.\] Divide both sides of the inequality above by \(\sum_{j=0}^{k}t_{j}\) to see that \[\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x,y^{i})\leq\frac{\left\|x^{0}-x\right\|^{2}-\left\|x^{k+1}-x\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}},\]
which, combining with (3.5a), derives (3.3a). Similarly, sum (3.2b) over \(i\in\{0,1,\ldots,k\}\) to get that \[\sum_{i=0}^{k}t_{i}(f(x^{i},y)-f(x^{i},y^{i}))\leq\tfrac{1}{2}(\left\|y^{0}-y\right\|^{2}-\left\|y^{k+1}-y\right\|^{2})+\tfrac{1}{2}\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}.\] Divide also both sides of the inequality above by \(\sum_{j=0}^{k}t_{j}\) to get that \[\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y)-\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\leq\frac{\left\|y^{0}-y\right\|^{2}-\left\|y^{k+1}-y\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}},\] which, connecting with (3.5b), establishes (3.3b). **Lemma 3.2**.: _Let \((x^{*},y^{*})\in X\times Y\) be a solution of (1.3), and let \(k\in\mathbf{N}\). The following results hold._ 1. \(-\frac{\left\|x^{0}-\hat{x}_{k}\right\|^{2}-\left\|x^{k+1}-\hat{x}_{k}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}-\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}\leq f(\hat{x}_{k},\hat{y}_{k})-\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\leq\frac{\left\|y^{0}-\hat{y}_{k}\right\|^{2}-\left\|y^{k+1}-\hat{y}_{k}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}\)_. Consequently,_ \[-\frac{\left\|x^{0}-\hat{x}_{k}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}-\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}\leq f(\hat{x}_{k},\hat{y}_{k})-\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\leq\frac{\left\|y^{0}-\hat{y}_{k}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}.\] 2. \(-\frac{\left\|y^{0}-y^{*}\right\|^{2}-\left\|y^{k+1}-y^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}-\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}\leq\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-f(x^{*},y^{*})\leq\frac{\left\|x^{0}-x^{*}\right\|^{2}-\left\|x^{k+1}-x^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}\)_. Consequently,_ \[-\frac{\left\|y^{0}-y^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}-\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}\leq\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-f(x^{*},y^{*})\leq\frac{\left\|x^{0}-x^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}.\] 3.
\(-\frac{\left\|x^{0}-\hat{x}_{k}\right\|^{2}+\left\|y^{0}-y^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}-\frac{\sum_{i=0}^{k}t_{i}^{2}\left(\left\|g_{i}\right\|^{2}+\left\|h_{i}\right\|^{2}\right)}{2\sum_{j=0}^{k}t_{j}}\leq f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\leq\frac{\left\|x^{0}-x^{*}\right\|^{2}+\left\|y^{0}-\hat{y}_{k}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left(\left\|g_{i}\right\|^{2}+\left\|h_{i}\right\|^{2}\right)}{2\sum_{j=0}^{k}t_{j}}.\) Proof.: (i): Substitute \(x\) in (3.3a) of Lemma 3.1 with \(\hat{x}_{k}\) to observe that \[\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-f(\hat{x}_{k},\hat{y}_{k})\leq\frac{\left\|x^{0}-\hat{x}_{k}\right\|^{2}-\left\|x^{k+1}-\hat{x}_{k}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}.\] Replace \(y\) in (3.3b) of Lemma 3.1 by \(\hat{y}_{k}\) to get that \[f(\hat{x}_{k},\hat{y}_{k})-\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\leq\frac{\left\|y^{0}-\hat{y}_{k}\right\|^{2}-\left\|y^{k+1}-\hat{y}_{k}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}.\] Combine the last two inequalities to deduce the desired inequality in (i). (ii): Note that \((x^{k})_{k\in\mathbf{N}}\) and \((y^{k})_{k\in\mathbf{N}}\) are sequences in \(X\) and \(Y\), respectively, and that \(X\) and \(Y\) are convex. We know that \(\hat{x}_{k}\in X\) and \(\hat{y}_{k}\in Y\). Recall that \((x^{*},y^{*})\in X\times Y\) is a solution of (1.3). Employing (1.4), we obtain that \[f(x^{*},\hat{y}_{k}) \leq f(x^{*},y^{*}); \tag{3.6a}\] \[f(x^{*},y^{*}) \leq f(\hat{x}_{k},y^{*}). \tag{3.6b}\] Set \(x=x^{*}\) in (3.3a) to get that \[\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-f(x^{*},\hat{y}_{k})\leq\frac{\left\|x^{0}-x^{*}\right\|^{2}-\left\|x^{k+1}-x^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}},\] which, connecting with (3.6a), derives \[\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-f(x^{*},y^{*})\leq\frac{\left\|x^{0}-x^{*}\right\|^{2}-\left\|x^{k+1}-x^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}. \tag{3.7}\] Similarly, setting \(y=y^{*}\) in (3.3b), we have that \[f(\hat{x}_{k},y^{*})-\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\leq\frac{\left\|y^{0}-y^{*}\right\|^{2}-\left\|y^{k+1}-y^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}},\] which, combining with (3.6b), entails \[f(x^{*},y^{*})-\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\leq\frac{\left\|y^{0}-y^{*}\right\|^{2}-\left\|y^{k+1}-y^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}. \tag{3.8}\] Combine (3.7) and (3.8) to reach the required inequality in (ii). (iii): This follows immediately from (i) and (ii) above. The following result is inspired by [9, Proposition 3.1] in which, under some extra boundedness or compactness assumptions, the authors considered the convergence \(\frac{1}{k+1}\sum_{i=0}^{k}f(x^{i},y^{i})\to f(x^{*},y^{*})\) and \(f(\frac{1}{k}\sum_{i=0}^{k-1}x^{i},\frac{1}{k}\sum_{i=0}^{k-1}y^{i})\to f(x^{*},y^{*})\), where \((x^{*},y^{*})\) is a saddle-point of \(f\) and \(\big{(}(x^{k},y^{k})\big{)}_{k\in\mathbf{N}}\) is generated by (3.1) with \((\forall k\in\mathbf{N})\)\(t_{k}\equiv\alpha\in\mathbf{R}_{++}\). **Theorem 3.1**.: _Let \((x^{*},y^{*})\in X\times Y\) be a solution of (1.3). Let \(R\), \(G\), and \(S\) be in \(\mathbf{R}_{++}\).
Suppose that_ \[\big{\|}(x^{0},y^{0})\big{\|}\leq R,\quad\|(x^{*},y^{*})\|\leq R,\quad\text{and}\quad(\forall k\in\mathbf{N})\,\|(g_{k},h_{k})\|\leq G, \tag{3.9}\] _and that the step sizes satisfy that_ \[(\forall i\in\mathbf{N})\ t_{i}\geq 0\ \text{with}\ t_{0}>0,\quad\sum_{j=0}^{\infty}t_{j}=\infty,\quad\text{and}\quad\sum_{j=0}^{\infty}t_{j}^{2}=S<\infty. \tag{3.10}\] _Then \(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\) converges to \(f(x^{*},y^{*})\)._ Proof.: According to Lemma 3.2(ii), we have that \[-\frac{\left\|y^{0}-y^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}-\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|h_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}\leq\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-f(x^{*},y^{*})\leq\frac{\left\|x^{0}-x^{*}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{\sum_{i=0}^{k}t_{i}^{2}\left\|g_{i}\right\|^{2}}{2\sum_{j=0}^{k}t_{j}},\] which, combined with our assumptions (3.9) and (3.10), guarantees that \[-\frac{4R^{2}}{2\sum_{j=0}^{k}t_{j}}-\frac{G^{2}S}{2\sum_{j=0}^{k}t_{j}}\leq\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-f(x^{*},y^{*})\leq\frac{4R^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{G^{2}S}{2\sum_{j=0}^{k}t_{j}}.\] The inequalities above ensure \(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})-f(x^{*},y^{*})\to 0\) since \(\sum_{i=0}^{\infty}t_{i}=\infty\). Below we shall prove the boundedness of the sequence \(((x^{k},y^{k}))_{k\in\mathbf{N}}\), which plays a critical role in our proof of Theorem 3.2 below. **Lemma 3.3**.: _Let \((x^{*},y^{*})\in X\times Y\) be a solution of (1.3). Then_ \[(\forall k\in\mathbf{N})\quad\left\langle(x^{k},y^{k})-(x^{*},y^{*}),(g_{k},h_{k})\right\rangle\geq 0.\] Proof.: As a result of [11, Theorem 1], the operator \[T:X\times Y\to\mathcal{H}_{1}\times\mathcal{H}_{2}:(\bar{x},\bar{y})\mapsto(u,v),\quad\text{where }u\in\partial_{x}f(\bar{x},\bar{y})\text{ and }v\in\partial_{y}(-f(\bar{x},\bar{y}))\] is monotone. In view of our assumption and Fact 2.1, \[0\in\partial_{x}f(x^{*},y^{*})\quad\text{and}\quad 0\in\partial_{y}(-f(x^{*},y^{*})). \tag{3.11}\] Recall from the construction of the algorithm (3.1) that \((\forall k\in\mathbf{N})\ g_{k}\in\partial_{x}f(x^{k},y^{k})\) and \(h_{k}\in\partial_{y}(-f(x^{k},y^{k}))\). Combine this with (3.11) to derive that \[(\forall k\in\mathbf{N})\quad\left\langle(x^{k},y^{k})-(x^{*},y^{*}),(g_{k},h_{k})\right\rangle=\left\langle(x^{k},y^{k})-(x^{*},y^{*}),T(x^{k},y^{k})-T(x^{*},y^{*})\right\rangle\geq 0,\] where we use the monotonicity of \(T\) in the last inequality. **Lemma 3.4**.: _Let \((x^{*},y^{*})\in X\times Y\) be a solution of (1.3). Let \(R\), \(G\), and \(S\) be in \(\mathbf{R}_{++}\). Suppose that_ \[\left\|(x^{0},y^{0})\right\|\leq R,\quad\left\|(x^{*},y^{*})\right\|\leq R,\quad\text{and}\quad\left(\forall k\in\mathbf{N}\right)\left\|(g_{k},h_{k})\right\|\leq G,\] _and that the step sizes satisfy that_ \[(\forall i\in\mathbf{N})\ t_{i}\geq 0\ \text{with}\ t_{0}>0\quad\text{and}\quad\sum_{j=0}^{\infty}t_{j}^{2}=S<\infty.\] _Then \(((x^{k},y^{k}))_{k\in\mathbf{N}}\) is bounded._ Proof.: Let \(k\in\mathbf{N}\).
Applying (3.1) in the first equation, we have that \[\left\|(x^{k+1},y^{k+1})-(x^{*},y^{*})\right\|^{2}\] \[= \left\|(\mathrm{P}_{X}(x^{k}-t_{k}g_{k}),\mathrm{P}_{Y}(y^{k}-t _{k}h_{k}))-(x^{*},y^{*})\right\|^{2}\] \[= \left\|\mathrm{P}_{X}(x^{k}-t_{k}g_{k})-x^{*}\right\|^{2}+\left\| \mathrm{P}_{Y}(y^{k}-t_{k}h_{k})-y^{*}\right\|^{2}\] \[= \left\|\mathrm{P}_{X}(x^{k}-t_{k}g_{k})-\mathrm{P}_{X}\,x^{*} \right\|^{2}+\left\|\mathrm{P}_{Y}(y^{k}-t_{k}h_{k})-\mathrm{P}_{Y}\,y^{*} \right\|^{2}\] \[\leq \left\|x^{k}-t_{k}g_{k}-x^{*}\right\|^{2}+\left\|y^{k}-t_{k}h_{k} -y^{*}\right\|^{2}\] \[= \left\|x^{k}-x^{*}\right\|^{2}-2t_{k}\left\langle x^{k}-x^{*},g_{ k}\right\rangle+t_{k}^{2}\left\|g_{k}\right\|^{2}+\left\|y^{k}-y^{*}\right\|^{2}-2t_ {k}\left\langle y^{k}-y^{*},h_{k}\right\rangle+t_{k}^{2}\left\|h_{k}\right\|^ {2}\] \[= \left\|(x^{k},y^{k})-(x^{*},y^{*})\right\|^{2}-2t_{k}\left\langle (x^{k},y^{k})-(x^{*},y^{*}),(g_{k},h_{k})\right\rangle+t_{k}^{2}\left\|(g_{k},h_{k})\right\|^{2}.\] Note that we use (1.2) in the second equation, that we use the fact \(x^{*}\in X\), \(y^{*}\in Y\), \(x^{*}=\mathrm{P}_{X}\,x^{*}\), and \(y^{*}=\mathrm{P}_{Y}\,y^{*}\) in the third equation, that the nonexpansiveness of \(\mathrm{P}_{X}\) and \(\mathrm{P}_{Y}\) is used in the inequality above, and that we use both (1.2) and (1.1) in the last equation. The result above actually tells us that for every \(i\in\{0,1,\ldots,k\}\), \[\left\|(x^{i+1},y^{i+1})-(x^{*},y^{*})\right\|^{2}-\left\|(x^{i},y^{i})-(x^{* },y^{*})\right\|^{2}+2t_{i}\left\langle(x^{i},y^{i})-(x^{*},y^{*}),(g_{i},h_{i} )\right\rangle\leq t_{i}^{2}\left\|(g_{i},h_{i})\right\|^{2}.\] Sum the inequality above over \(i\in\{0,1,\ldots,k\}\) to obtain that \[\left\|(x^{k+1},y^{k+1})-(x^{*},y^{*})\right\|^{2}-\left\|(x^{0}, y^{0})-(x^{*},y^{*})\right\|^{2}+2\sum_{i=0}^{k}t_{i}\left\langle(x^{i},y^{i})-(x^{* },y^{*}),(g_{i},h_{i})\right\rangle\] \[\leq \sum_{i=0}^{k}t_{i}^{2}\left\|(g_{i},h_{i})\right\|^{2},\] which, combined with the assumptions, implies that \[\left\|(x^{k+1},y^{k+1})-(x^{*},y^{*})\right\|^{2}+2\sum_{i=0}^{k}t_{i}\left \langle(x^{i},y^{i})-(x^{*},y^{*}),(g_{i},h_{i})\right\rangle\leq 4R^{2}+SG^{2}. \tag{3.12}\] In view of Lemma 3.3, \(\sum_{i=0}^{k}t_{i}\left\langle(x^{i},y^{i})-(x^{*},y^{*}),(g_{i},h_{i}) \right\rangle\geq 0\). Hence, by (3.12), \[\left\|(x^{k+1},y^{k+1})-(x^{*},y^{*})\right\|^{2}\leq 4R^{2}+SG^{2},\] which yields the boundedness of \(((x^{k},y^{k}))_{k\in\mathbf{N}}\). **Lemma 3.5**.: _Let \((x^{*},y^{*})\in X\times Y\) be a solution of (1.3). Let \(R\), \(G\), and \(S\) be in \(\mathbf{R}_{++}\). Suppose that_ \[\left\|(x^{0},y^{0})\right\|\leq R,\quad\left\|(x^{*},y^{*})\right\|\leq R,\quad \text{and}\quad(\forall k\in\mathbf{N})\left\|(g_{k},h_{k})\right\|\leq G,\] _and that the step sizes satisfy that_ \[(\forall i\in\mathbf{N})\ t_{i}\geq 0\ \text{with}\ t_{0}>0\quad\text{and} \quad\sum_{j=0}^{\infty}t_{j}^{2}=S<\infty.\] _Then \(((\hat{x}_{k},\hat{y}_{k}))_{k\in\mathbf{N}}\) is bounded._ Proof.: The required result is clear from the definition of the sequence \(((\hat{x}_{k},\hat{y}_{k}))_{k\in\mathbf{N}}\) and the boundedness of \(((x^{k},y^{k}))_{k\in\mathbf{N}}\) proved in Lemma 3.4. **Theorem 3.2**.: _Let \((x^{*},y^{*})\in X\times Y\) be a solution of (1.3). Let \(R\), \(G\), and \(S\) be in \(\mathbf{R}_{++}\). 
Suppose that_ \[\left\|(x^{0},y^{0})\right\|\leq R,\quad\left\|(x^{*},y^{*})\right\|\leq R,\quad\text{and}\quad(\forall k\in\mathbf{N})\left\|(g_{k},h_{k})\right\|\leq G,\] _and that the step sizes satisfy that_ \[(\forall i\in\mathbf{N})\ t_{i}\geq 0\ \text{with}\ t_{0}>0,\quad\sum_{j=0}^{\infty}t_{j}=\infty,\quad\text{and}\quad\sum_{j=0}^{\infty}t_{j}^{2}=S<\infty.\] _Then \(f(\hat{x}_{k},\hat{y}_{k})\) converges to \(f(x^{*},y^{*})\)._ Proof.: As a result of our assumptions and Lemma 3.5, there exists a constant \(Q\in\mathbf{R}_{++}\) such that \[(\forall k\in\mathbf{N})\quad\left\|(\hat{x}_{k},\hat{y}_{k})\right\|\leq Q.\] Combine this with our assumptions and Lemma 3.2(iii) to establish that \[-\frac{4R^{2}+(R+Q)^{2}}{2\sum_{j=0}^{k}t_{j}}-\frac{SG^{2}}{2\sum_{j=0}^{k}t_{j}}\leq f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\leq\frac{4R^{2}+(R+Q)^{2}}{2\sum_{j=0}^{k}t_{j}}+\frac{SG^{2}}{2\sum_{j=0}^{k}t_{j}},\] which, combining with the assumption \(\sum_{j=0}^{\infty}t_{j}=\infty\), entails that \(f(\hat{x}_{k},\hat{y}_{k})\to f(x^{*},y^{*})\). ## 4 Numerical experiments In this section, we implement our alternating subgradient method on some particular examples to verify the convergence results presented in Section 3 and also to analyze the convergence rates of our algorithms. Moreover, we compare our iterate scheme with step sizes \((t_{k})_{k\in\mathbf{N}}\) to the iterate schemes considered in [9] with constant step sizes. Based on our numerical results presented in Figure 1, we see benefits of replacing constant step sizes by step sizes \((t_{k})_{k\in\mathbf{N}}\) satisfying \((\forall i\in\mathbf{N}\smallsetminus\{0\})\ t_{i}\geq 0\), \(t_{0}>0\), \(\sum_{j=0}^{\infty}t_{j}=\infty\), and \(\sum_{j=0}^{\infty}t_{j}^{2}<\infty\). In addition, in view of Figures 2, 3, 4, and 6 below, obtained from our numerical experiments, we observe the convergence \(f(x^{k},y^{k})\to f(x^{*},y^{*})\) in multiple examples, which doesn't have any theoretical support yet. We know that generally subgradient methods converge slowly. Normally, iterates with a larger iteration index exhibit better convergence behaviour. Therefore, to accelerate our subgradient methods for solving convex-concave saddle-point problems, in our experiments below we reorder the sequence \((t_{k})_{k\in\mathbf{N}}\) in descending order. Based on our numerical results, this trick works very well. In this section, unless stated otherwise, \(\left((x^{k},y^{k})\right)_{0\leq k\leq K}\) is generated by (3.1) with \(K\) being the number of iterates and step sizes \((\forall k\in\{1,\ldots,K\})\)\(t_{k}=\frac{1}{K+1-k}\). Moreover, we have \[(\forall k\in\mathbf{N})\quad(\hat{x}_{k},\hat{y}_{k})=\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}x^{i},\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}y^{i}\right).\] Note that the choice of step sizes \[(\forall k\in\{1,\ldots,K\})\quad t_{k}=\frac{1}{K+1-k}\] satisfies \((\forall i\in\mathbf{N}\smallsetminus\{0\})\)\(t_{i}\geq 0\), \(t_{0}>0\), \(\sum_{j=0}^{\infty}t_{j}=\infty\), and \(\sum_{j=0}^{\infty}t_{j}^{2}<\infty\), which is required in our convergence results \(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\to f(x^{*},y^{*})\) and \(f(\hat{x}_{k},\hat{y}_{k})\to f(x^{*},y^{*})\) provided in Theorem 3.1 and Theorem 3.2, respectively.
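To make the iterate scheme concrete, the following is a minimal NumPy sketch of (3.1) together with the step sizes \(t_{k}=\frac{1}{K+1-k}\) and the weighted averages \((\hat{x}_{k},\hat{y}_{k})\). It is our own illustration, not the paper's actual code: the function and variable names are hypothetical, and the caller is assumed to supply subgradient selections and projectors.

```python
import numpy as np

def alternating_subgradient(f, gx, hy, proj_X, proj_Y, x0, y0, K=500):
    """Iterate scheme (3.1) with step sizes t_k = 1/(K + 1 - k), tracking
    the weighted averages (x_hat_k, y_hat_k) of Section 3.2.

    gx(x, y) must return an element of the subdifferential of f(., y) at x,
    and hy(x, y) an element of the subdifferential of -f(x, .) at y.
    """
    x = np.asarray(x0, dtype=float)
    y = np.asarray(y0, dtype=float)
    t = np.array([1.0 / (K + 1 - k) for k in range(1, K + 1)])
    T = np.cumsum(t)                         # partial sums of the step sizes
    x_hat = np.zeros_like(x)
    y_hat = np.zeros_like(y)
    weighted_f, f_of_avg, running = [], [], 0.0
    for k in range(K):
        running += t[k] * f(x, y)            # accumulates sum_i t_i f(x^i, y^i)
        x_hat = x_hat + (t[k] / T[k]) * (x - x_hat)   # incremental weighted mean
        y_hat = y_hat + (t[k] / T[k]) * (y - y_hat)
        weighted_f.append(running / T[k])    # sum_i (t_i / T_k) f(x^i, y^i)
        f_of_avg.append(f(x_hat, y_hat))     # f(x_hat_k, y_hat_k)
        g, h = gx(x, y), hy(x, y)            # subgradients at the current pair
        x = proj_X(x - t[k] * g)             # the two projected subgradient steps
        y = proj_Y(y - t[k] * h)
    return x, y, weighted_f, f_of_avg
```

The averages are updated incrementally via \(\hat{x}_{k}=\hat{x}_{k-1}+\frac{t_{k}}{\sum_{j=0}^{k}t_{j}}(x^{k}-\hat{x}_{k-1})\), which avoids storing all iterates.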
### Toy example In [9, Proposition 3.1], the authors considered the convergence of \(\frac{1}{k+1}\sum_{i=0}^{k}f(x^{i},y^{i})\to f(x^{*},y^{*})\) and \(f(\frac{1}{k}\sum_{i=0}^{k-1}x^{i},\frac{1}{k}\sum_{i=0}^{k-1}y^{i})\to f(x^{*},y^{*})\) within a certain error level and under some boundedness and compactness assumptions, where \(f:X\times Y\to\mathbf{R}\) is a convex-concave function, \((x^{*},y^{*})\) is a saddle-point of \(f\), and \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\) is generated by (3.1) with \((\forall k\in\mathbf{N})\)\(t_{k}\equiv\alpha\in\mathbf{R}_{++}\). In this subsection, we compare our iterate scheme with iterate schemes associated with constant step sizes to show the drawbacks of constant step sizes. We consider the toy convex-concave function \(f:\mathbf{R}\times\mathbf{R}_{+}\to\mathbf{R}\) defined by \[(\forall(x,y)\in\mathbf{R}\times\mathbf{R}_{+})\quad f(x,y)=x^{2}(1+y)-6xy+8y+1,\] which is considered in Example 2.2(i). As a consequence of Fact 2.1, \((2,2)\) is a saddle-point of \(f\). So all desired sequences of iterates must converge to \(f(2,2)=5\). In our experiments related to Figure 1, we mainly consider \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{k\in\mathbf{N}}\) and \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{k\in\mathbf{N}}\) with \((\forall i\in\mathbf{N})\ t_{i}=\alpha\in\mathbf{R}_{++}\) and for different values of \(\alpha\). Note that although when \((\forall i\in\mathbf{N})\ t_{i}=\alpha\in\mathbf{R}_{++}\), \((\forall i\in\mathbf{N})\)\(\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}=\frac{1}{k+1}\) independent of \(\alpha\), the sequence of iterates \(((x^{k},y^{k}))_{k\in\mathbf{N}}\) generated by the iterate scheme (3.1) is dependent on the sequence \((t_{i})_{i\in\mathbf{N}}\). So different values of \(\alpha\) indeed induce different sequences of iterates in consideration. Based on our results, when \(\alpha>1\), both \(\left(\frac{1}{k+1}\sum_{i=0}^{k}f(x^{i},y^{i})\right)_{k\in\mathbf{N}}\) and \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{k\in\mathbf{N}}\) generally don't converge to the desired value \(5\). To get the following Figure 1, we calculate \(\left(\frac{1}{k+1}\sum_{i=0}^{k}f(x^{i},y^{i})\right)_{0\leq k\leq 200}\) and \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{0\leq k\leq 200}\) with a randomly chosen initial point and with \(\alpha=1,\alpha=0.8,\alpha=0.5,\alpha=0.1,\alpha=0.01\), and \(\alpha=0.0001\), respectively. Moreover, we also calculate \(\left(\frac{1}{k+1}\sum_{i=0}^{k}f(x^{i},y^{i})\right)_{0\leq k\leq 200}\) and \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{0\leq k\leq 200}\) with \((\forall k\in\{1,\ldots,200\})\)\(t_{k}=\frac{1}{200+1-k}\) as a reference. According to the first subplot of Figure 1, we observe that generally sequences of iterates associated with constant step sizes don't converge to the required optimal values. (In our numerous related experiments for this particular example, we found that only when \((\forall i\in\mathbf{N})\)\(t_{i}=0.1\), the sequences \(\left(\frac{1}{k+1}\sum_{i=0}^{k}f(x^{i},y^{i})\right)_{k\in\mathbf{N}}\) and \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{k\in\mathbf{N}}\) converge to the required value \(5\) consistently, regardless of the random initial points and problem data.) Moreover, the second subplot of Figure 1 shows that the convergence rate of our iterate scheme with \((\forall k\in\{1,\ldots,200\})\)\(t_{k}=\frac{1}{200+1-k}\) is faster than iterate schemes associated with constant step sizes.
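As an illustration only, the toy example can be run with the hypothetical `alternating_subgradient` helper sketched above; the subgradients here are just the partial derivatives of \(f\), and the initial point is our own choice.

```python
import numpy as np

# f(x, y) = x^2 (1 + y) - 6 x y + 8 y + 1 on R x R_+; saddle-point (2, 2), f(2, 2) = 5.
f  = lambda x, y: x**2 * (1 + y) - 6 * x * y + 8 * y + 1
gx = lambda x, y: 2 * x * (1 + y) - 6 * y      # gradient of f in x
hy = lambda x, y: -(x**2 - 6 * x + 8)          # gradient of -f in y
proj_X = lambda x: x                           # X = R: no projection needed
proj_Y = lambda y: np.maximum(y, 0.0)          # Y = R_+

x, y, weighted_f, f_of_avg = alternating_subgradient(
    f, gx, hy, proj_X, proj_Y, x0=0.0, y0=1.0, K=500)
print(weighted_f[-1], f_of_avg[-1])            # both should be close to 5
```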
Figure 1 shows some drawbacks of constant step sizes and explains why we should consider a sequence \((t_{k})_{k\in\mathbf{N}}\) that is not constant. In fact, in our experiments associated with this example, we calculated \((f(x^{k},y^{k}))_{1\leq k\leq 200}\) together with \(\left(\frac{1}{k+1}\sum_{i=0}^{k}f(x^{i},y^{i})\right)_{1\leq k\leq 200}\) and \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{1\leq k\leq 200}\). We don't present them in Figure 1 because when step sizes are constant, the performance of \((f(x^{k},y^{k}))_{1\leq k\leq 200}\) is much worse than the performances of the sequences presented in Figure 1 below. Then we randomly chose initial points and calculated the sequences \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 500}\), \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{1\leq k\leq 500}\), and \(\left(f(x^{k},y^{k})\right)_{1\leq k\leq 500}\) with \((\forall k\in\{1,\ldots,500\})\)\(t_{k}=\frac{1}{500+1-k}\). Our result is presented in Figure 2 below. Note that to get a clearer view of the convergence rate, we zoom in on the more interesting part and set the y-axis range to \([4.5,5.5]\). The theoretical convergence of \(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\to f(x^{*},y^{*})\) and \(f(\hat{x}_{k},\hat{y}_{k})\to f(x^{*},y^{*})\) is presented in Theorems 3.1 and 3.2. Although the convergence of \(f(x^{k},y^{k})\to f(x^{*},y^{*})\) is not provided theoretically yet, it is shown numerically in Figure 2, which motivates our future work on the convergence of \(f(x^{k},y^{k})\to f(x^{*},y^{*})\) or \((x^{k},y^{k})\to(x^{*},y^{*})\).

Figure 1: Comparison with iterate schemes associated with constant step sizes

Figure 2: Convergence result with randomly chosen initial points

### Linear program in inequality form Let \(A\in\mathbf{R}^{m\times n}\), \(b\in\mathbf{R}^{m}\), and \(c\in\mathbf{R}^{n}\). In this part, we consider the convex-concave function \(f:\mathbf{R}^{n}\times\mathbf{R}^{m}_{+}\to\mathbf{R}\) defined as \[(\forall(x,y)\in\mathbf{R}^{n}\times\mathbf{R}^{m}_{+})\quad f(x,y)=y^{T}Ax+c^{T}x-b^{T}y,\] which is presented in Example 2.2(ii). In our experiments, after randomly choosing \(A\in\mathbf{R}^{100\times 10}\), \(b\in\mathbf{R}^{100}\), and \(c\in\mathbf{R}^{10}\), we apply the Python-embedded modeling language CVXPY (see [7] for details) to find bounded and feasible problems. Recall from Example 2.2(ii) that the convex-concave function \(f\) above is the Lagrangian of a linear program with an inequality constraint and that the optimal values of the related primal and dual problems are both equal to the value of \(f\) over a saddle-point. We also solve the corresponding primal and dual problems by CVXPY to check the correctness of results from our algorithms. After finding problems with optimal solutions, we randomly choose initial points \((x^{0},y^{0})\in\mathbf{R}^{10}\times\mathbf{R}^{100}\) and calculate the sequences \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 100}\), \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{1\leq k\leq 100}\), and \(\left(f(x^{k},y^{k})\right)_{1\leq k\leq 100}\) with \(\left(\forall k\in\{1,\ldots,100\}\right)\)\(t_{k}=\frac{1}{100+1-k}\). We present one result in Figure 3 below. Note that to see only the important range of the y-axis, we zoom in and set the y-axis limits on the picture to \([f(x^{*},y^{*})-1,f(x^{*},y^{*})+1]\), where the optimal value \(f(x^{*},y^{*})\) is obtained from our CVXPY code.
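For concreteness, here is a minimal sketch of how such a CVXPY verification might look; the data-generation code and names below are our own illustration, not the paper's actual script.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 10
# As in the paper, one would regenerate (A, b, c) until the LP is bounded and feasible.
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

# Primal (P2): minimize c^T x subject to Ax <= b.
x = cp.Variable(n)
p_star = cp.Problem(cp.Minimize(c @ x), [A @ x <= b]).solve()

# Dual (D2): maximize -b^T y subject to y >= 0 and A^T y + c = 0.
y = cp.Variable(m)
d_star = cp.Problem(cp.Maximize(-b @ y), [y >= 0, A.T @ y + c == 0]).solve()

print(p_star, d_star)   # for a bounded, feasible instance, both equal f(x*, y*)
```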
Because in this case \(f\) is linear, it's not a surprise that \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 100}\) and \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{1\leq k\leq 100}\) on Figure 3 are consistent. In all our experiments, the convergent point of our sequence of iterates is identical to the optimal solutions obtained by CVXPY. The convergence of \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 100}\) and \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{1\leq k\leq 100}\) numerically confirms our theoretical results in Theorems 3.1 and 3.2. Again, \(\left(f(x^{k},y^{k})\right)_{1\leq k\leq 100}\) converges to our required point although we have no theoretical support of the convergence yet.

Figure 3: Lagrangian of the inequality form LP

### Least-squares problem with \(\ell_{1}\) regularization Let \(A\in\mathbf{R}^{m\times n}\), \(b\in\mathbf{R}^{m}\), and \(\gamma\in\mathbf{R}_{++}\). We consider the convex-concave function \(f:\mathbf{R}^{n+m}\times\mathbf{R}^{m}\to\mathbf{R}\) defined as \[\left(\forall((x,u),y)\in\mathbf{R}^{n+m}\times\mathbf{R}^{m}\right)\quad f((x,u),y)=\frac{1}{2}\left\|u\right\|_{2}^{2}+\gamma\left\|x\right\|_{1}+y^{T}(Ax-b-u),\] which is considered in Example 2.2(iii) and Example 3.1. As stated in Example 2.2(iii), \(f\) is the Lagrangian of a least-squares problem with \(\ell_{1}\) regularization. In our experiments, we set \(\gamma=1\) and randomly choose \(A\in\mathbf{R}^{100\times 50}\) and \(b\in\mathbf{R}^{100}\). Note that in this case the problem is always feasible and bounded. Similarly to Section 4.2, we apply CVXPY to solve the related primal and dual problems. Consider \(X=\mathbf{R}^{100+50}\) and \(Y=\{y\in\mathbf{R}^{100}\ :\ \left\|A^{T}y\right\|_{\infty}\leq\gamma\}\). We randomly choose initial points \((x^{0},y^{0})\in\mathbf{R}^{100+50}\times\mathbf{R}^{100}\) and implement the sequences \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 500}\), \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{1\leq k\leq 500}\), and \((f(x^{k},y^{k}))_{1\leq k\leq 500}\) with \((\forall k\in\{1,\ldots,500\})\)\(t_{k}=\frac{1}{500+1-k}\). We show one result in Figure 4, in which the convergent point is consistent with the optimal solution obtained by CVXPY for the corresponding primal and dual problems. With the optimal value \(f(x^{*},y^{*})\) obtained by our related CVXPY code, we zoom in and set the y-axis limits on the picture to \([f(x^{*},y^{*})-1,f(x^{*},y^{*})+1]\) to see only the important range of the y-axis. It's interesting that \((f(x^{k},y^{k}))_{1\leq k\leq 500}\) converges to the required optimal value in this example as well.

Figure 4: Lagrangian of the least-squares problem with \(\ell_{1}\) regularization

### Matrix game We consider an easy version of the game interpretation of saddle-point problems below (see, e.g., [4, Section 5.4.3] for details). Let \(C\in\mathbf{R}^{m\times n}\) and let \(X\subseteq\mathbf{R}^{n}\) and \(Y\subseteq\mathbf{R}^{m}\) be nonempty closed and convex subsets.
The payoff function in this case is \(f:\mathbf{R}^{n}\times\mathbf{R}^{m}\rightarrow\mathbf{R}\) defined as \[(\forall(x,y)\in X\times Y)\quad f(x,y)=x^{T}Cy.\] In our experiment, we use the example of the matrix game in [12, Sections 5.2 and 5.3] and set \[C=\begin{pmatrix}1&2\\ 3&1\end{pmatrix},\] \[X:=\{x\in\mathbf{R}^{2}\ :\ \sum_{i=1}^{2}x_{i}=1\text{ and }(\forall i\in\{1,2\})\ x_{i}\geq 0\},\text{ and}\] \[Y:=\{y\in\mathbf{R}^{2}\ :\ \sum_{i=1}^{2}y_{i}=1\text{ and }(\forall i\in\{1,2\})\ y_{i}\geq 0\}.\] In view of [12, Section 5.3], the optimal value (that is, the value of \(f\) over the saddle-point) is \(1.6667\), keeping \(4\) decimal places. In our experiments, we randomly choose initial points \((x^{0},y^{0})\in X\times Y\) and calculate the sequences \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 1000}\), \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{1\leq k\leq 1000}\), and \((f(x^{k},y^{k}))_{1\leq k\leq 1000}\) with multiple choices of \((t_{k})_{k\in\mathbf{N}}\), but we noticed that in our experiments \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 1000}\) and \((f(x^{k},y^{k}))_{1\leq k\leq 1000}\) converge very slowly. To get the following Figure 5, we randomly choose initial points \((x^{0},y^{0})\in X\times Y\) and implement \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{1\leq k\leq 1000}\) with two cases of the parameter \((t_{k})_{k\in\mathbf{N}}\): \((\forall i\in\mathbf{N})\ t_{i}=0.1\) and \((\forall i\in\mathbf{N})\ t_{i}=0.01\). (We also considered other choices of \((t_{k})_{k\in\mathbf{N}}\), including \((\forall k\in\{1,\ldots,1000\})\ t_{k}=\frac{1}{1000+1-k}\), but their convergence performances are not good. We also calculated \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 1000}\) and \((f(x^{k},y^{k}))_{1\leq k\leq 1000}\) in the experiment associated with Figure 5, but they converge very slowly.) We observe that, in Figure 5, the convergent point is indeed the optimal solution obtained by [12, Section 5.3].

Figure 5: Matrix game with constant step sizes

### Robust Markowitz portfolio construction problem As a last example in this section, we consider the robust Markowitz portfolio construction problem. (See, e.g., [12] or [3] for details.) We consider the robust Markowitz portfolio construction problem presented in [12, Sections 3.4 and 6.3] here. Let \(\bar{\mu}\in\mathbf{R}^{n}\) be the nominal mean, let \(\bar{\Sigma}\in\mathbf{S}^{n}_{++}\) be the nominal covariance, and let \(\gamma\in\mathbf{R}_{++}\). Let \(\mathcal{W}\subseteq\mathbf{R}^{n}\) be a convex set of feasible portfolios and let \[\mathcal{M}= \{\bar{\mu}+\delta\ :\ (\forall i\in\{1,2,\ldots,n\})\left|\delta_{i}\right|\leq\rho_{i}\},\] \[\mathcal{S}= \{\bar{\Sigma}+\Delta\ :\ \bar{\Sigma}+\Delta\in\mathbf{S}^{n}_{+}\text{ and }(\forall i,j\in\{1,2,\ldots,n\})\left|\Delta_{ij}\right|\leq\eta(\bar{\Sigma}_{ii}\bar{\Sigma}_{jj})^{\frac{1}{2}}\},\] where \(\rho\in\mathbf{R}^{n}_{++}\) is a vector of uncertainties in the forecast returns, \(\delta\in\mathbf{R}^{n}\) is the perturbation of the nominal mean \(\bar{\mu}\), \(\Delta\in\mathbf{S}^{n}\) is the perturbation of the nominal covariance \(\bar{\Sigma}\), and \(\eta\in(0,1)\) is a parameter scaling the perturbation to the forecast covariance matrix.
### Robust Markowitz portfolio construction problem

As a last example in this section, we consider the robust Markowitz portfolio construction problem presented in [12, Sections 3.4 and 6.3]. (See, e.g., [12] or [3] for details.) Let \(\bar{\mu}\in\mathbf{R}^{n}\) be the nominal mean, let \(\bar{\Sigma}\in\mathbf{S}^{n}_{++}\) be the nominal covariance, and let \(\gamma\in\mathbf{R}_{++}\). Let \(\mathcal{W}\subseteq\mathbf{R}^{n}\) be a convex set of feasible portfolios and let \[\mathcal{M}= \{\bar{\mu}+\delta\ :\ (\forall i\in\{1,2,\ldots,n\})\left|\delta_{i}\right|\leq\rho_{i}\},\] \[\mathcal{S}= \{\bar{\Sigma}+\Delta\ :\ \bar{\Sigma}+\Delta\in\mathbf{S}^{n}_{+}\text{ and }(\forall i,j\in\{1,2,\ldots,n\})\left|\Delta_{ij}\right|\leq\eta(\bar{\Sigma}_{ii}\bar{\Sigma}_{jj})^{\frac{1}{2}}\},\] where \(\rho\in\mathbf{R}^{n}_{++}\) is a vector of uncertainties in the forecast returns, \(\delta\in\mathbf{R}^{n}\) is the perturbation of the nominal mean \(\bar{\mu}\), \(\Delta\in\mathbf{S}^{n}\) is the perturbation of the nominal covariance \(\bar{\Sigma}\), and \(\eta\in(0,1)\) is a parameter scaling the perturbation to the forecast covariance matrix. In this case, the convex-concave function is \(f:(\mathcal{M}\times\mathcal{S})\times\mathcal{W}\to\mathbf{R}\) defined as \[(\forall(x,y)=((\mu,\Sigma),w)\in(\mathcal{M}\times\mathcal{S})\times\mathcal{W})\quad f(x,y)=f((\mu,\Sigma),w)=\mu^{T}w-\gamma w^{T}\Sigma w.\] We also use the data (\(\bar{\mu}\), \(\bar{\Sigma}\), \(\rho\), \(\eta\), and \(\gamma\)) used in [12, Section 6.3], where \(n=6\). In view of the result obtained therein, the optimal value (that is, the value \(f((\mu^{*},\Sigma^{*}),w^{*})\) at the saddle-point) is \(0.076\). We randomly choose \(x^{0}=(\mu^{0},\Sigma^{0})\in\mathcal{M}\times\mathcal{S}\) and \(y^{0}=w^{0}\in\mathcal{W}\) and calculate the sequences \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 1000}\) and \((f(x^{k},y^{k}))_{1\leq k\leq 1000}\) with \((\forall k\in\{1,\ldots,1000\})\) \(t_{k}=\frac{1}{1000+1-k}\). We present one result in Figure 6 and see clearly that \(\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\right)_{1\leq k\leq 1000}\) and \(\left(f(x^{k},y^{k})\right)_{k\in\mathbf{N}}\) converge to the required optimal value \(0.076\) within \(1000\) iterations. The performance of \(\left(f(\hat{x}_{k},\hat{y}_{k})\right)_{k\in\mathbf{N}}\) in our experiments for this example is poor, so we do not present it in Figure 6.

Figure 6: Convergence of the robust Markowitz portfolio construction problem

## Conclusion

In this work, we proved some convergence results for our alternating subgradient method for convex-concave saddle-point problems associated with general convex-concave functions. Let \((x^{*},y^{*})\in X\times Y\) be a saddle-point of a convex-concave function \(f\), and let \(((x^{k},y^{k}))_{k\in\mathbf{N}}\) be a sequence of iterates generated by our alternating subgradient method associated with the sequence \((t_{k})_{k\in\mathbf{N}}\) of step sizes. We presented the convergence \(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}f(x^{i},y^{i})\to f(x^{*},y^{*})\) under some popular assumptions on the step sizes. Under the same assumptions, we also showed \(f\left(\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}x^{i},\sum_{i=0}^{k}\frac{t_{i}}{\sum_{j=0}^{k}t_{j}}y^{i}\right)\to f(x^{*},y^{*})\). Our convergence results were confirmed by our numerical experiments on a linear program in inequality form, a least-squares problem with \(\ell_{1}\) regularization, a matrix game, and a robust Markowitz portfolio construction problem. We also compared our iteration scheme (associated with a not summable but square summable sequence \((t_{k})_{k\in\mathbf{N}}\) of step sizes) with iteration schemes associated with constant step sizes on a Lagrangian of an easy convex constrained optimization problem. Our numerical results showed some benefits of replacing constant step sizes with our step sizes \((t_{k})_{k\in\mathbf{N}}\). Additionally, in our numerical experiments, we displayed the convergence \(f(x^{k},y^{k})\to f(x^{*},y^{*})\) in multiple examples, which currently lacks theoretical support, to the best of our knowledge. This motivates future work on a theoretical proof of the convergence of \((x^{k},y^{k})\to(x^{*},y^{*})\) and \(f(x^{k},y^{k})\to f(x^{*},y^{*})\).

## Acknowledgments

Hui Ouyang acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number PDF - 567644 - 2022].
2310.00956
Semiframes: algebras of heterogeneous consensus
Semitopologies model consensus in distributed systems by equating the notion of a quorum -- a set of participants sufficient to make local progress -- with that of an open set. This yields a topology-like theory of consensus, but semitopologies generalise topologies, since the intersection of two quorums need not necessarily be a quorum. The semitopological model of consensus is naturally heterogeneous and local, just like topologies can be heterogeneous and local, and for the same reasons: points may have different quorums and there is no restriction that open sets / quorums be uniformly generated (e.g. open sets can be something other than two-thirds majorities of the points in the space). Semiframes are an algebraic abstraction of semitopologies. They are to semitopologies as frames are to topologies. We give a notion of semifilter, which plays a role analogous to filters, and show how to build a semiframe out of the open sets of a semitopology, and a semitopology out of the semifilters of a semiframe. We define suitable notions of category and morphism and prove a categorical duality between (sober) semitopologies and (spatial) semiframes, and investigate well-behavedness properties on semitopologies and semiframes across the duality. Surprisingly, the structure of semiframes is not what one might initially expect just from looking at semitopologies, and the canonical structure required for the duality result -- a compatibility relation *, generalising sets intersection -- is also canonical for expressing well-behavedness properties. Overall, we deliver a new categorical, algebraic, abstract framework within which to study consensus on distributed systems, and one which is also simply interesting to consider as a mathematical theory in its own right.
Murdoch Gabbay, Giuliano Losa
2023-10-02T07:48:55Z
http://arxiv.org/abs/2310.00956v2
# Semiframes: algebras of heterogeneous consensus

###### Abstract

Semitopologies model consensus in distributed systems by equating the notion of a _quorum_ -- a set of participants sufficient to make local progress -- with that of an _open set_. This yields a topology-like theory of consensus, but semitopologies generalise topologies, since the intersection of two quorums need not necessarily be a quorum. The semitopological model of consensus is naturally heterogeneous and local, just like topologies can be heterogeneous and local, and for the same reasons: points may have different quorums and there is no restriction that open sets / quorums be uniformly generated (e.g. open sets can be something other than two-thirds majorities of the points in the space). Semiframes are an algebraic abstraction of semitopologies. They are to semitopologies as frames are to topologies. We give a notion of semifilter, which plays a role analogous to filters, and show how to build a semiframe out of the open sets of a semitopology, and a semitopology out of the semifilters of a semiframe. We define suitable notions of category and morphism and prove a categorical duality between (sober) semitopologies and (spatial) semiframes, and investigate well-behavedness properties on semitopologies and semiframes across the duality. Surprisingly, the structure of semiframes is not what one might initially expect just from looking at semitopologies, and the canonical structure required for the duality result -- a _compatibility relation_ \(*\), generalising sets intersection -- is also canonical for expressing well-behavedness properties. Overall, we deliver a new categorical, algebraic, abstract framework within which to study consensus on distributed systems, and one which is also simply interesting to consider as a mathematical theory in its own right.

This paper may appear theoretical -- it is! -- but the definitions and results are still motivated by and in some cases correspond to behaviour in practical systems.

Keywords: Frames, Semitopology, Semiframes, Consensus.
###### Contents

* 1 Introduction
* 1.1 Modern consensus
* 1.2 Map of the paper
* 1.3 This paper in context
* 2 Semitopology
* 2.1 Definitions, examples, and some discussion
* 2.1.1 Definition
* 2.1.2 Semitopologies are not topologies
* 2.1.3 Examples
* 2.1.4 Some discussion
* 2.2 Continuity, and interpretation of continuity as consensus
* 2.3 Neighbourhoods of a point
* 3 Transitive sets and topens
* 3.1 Some background on sets intersection
* 3.2 Transitive open sets and value assignments
* 3.3 Examples and discussion of transitive sets and topens
* 3.4 Closure properties of transitive sets
* 3.5 Closure properties of topens
* 3.6 Intertwined points
* 3.6.1 The basic definition, and some lemmas
* 3.6.2 Pointwise characterisation of transitive sets
* 4 Open interiors, communities, and regular points
* 4.1 Community of a (regular) point
* 4.2 Further exploration of (quasi-/weak) regularity and topen sets
* 4.3 Intersection and partition properties of regular spaces
* 4.4 Examples of communities and (ir)regular points
* 5 Closed sets
* 5.1 Closed sets
* 5.2 Closed neighbourhoods and intertwined points
* 5.3 Regularity, maximal topens, and minimal closed neighbourhoods
* 5.4 Relation between \(p_{\hat{0}}\) and \(|p|\)
* 5.5 (Un)conflicted points: transitivity of \(\between\)
* 6 Semiframes: compatible complete semilattices
* 6.1 Complete join-semilattices, and morphisms between them
* 6.2 The compatibility relation
* 6.3 The definition of a semiframe
* 7 Semifilters and abstract points
* 7.1 The basic definition, and discussion
* 7.2 Properties of semifilters
* 7.2.1 Things that are familiar from filters
* 7.2.2 Things that are different from filters
* 7.3 Sets of abstract points
* 7.4 \(\operatorname{St}(\mathsf{X},\leq,*)\): the semitopology of abstract points
* 8 Spatial semiframes, and sober semitopologies
* 8.1 Definition of spatial semiframes
* 8.2 The neighbourhood semifilter \(nbhd(p)\)
* 8.2.1 The definition and basic lemma
* 8.2.2 Application to semiframes of open sets
* 8.2.3 Application to characterise \(T_{0}\) spaces
* 8.3 Sober semitopologies
* 8.3.1 The definition and a key result
* 8.3.2 Sober topologies contrasted with sober semitopologies
* 9 Four categories, and functors between them
* 9.1 The categories \(\mathsf{SemiTop}/\mathsf{Sober}\) of semitopologies/sober semitopologies
* 9.2 The categories \(\mathsf{SemiFrame}/\mathsf{Spatial}\) of semiframes/spatial semiframes
* 9.3 Functoriality of the maps
* 9.4 Sober semitopologies are categorically dual to spatial semiframes
* 10 Semifilters and their well-behavedness conditions, dually
* 10.1 (Maximal) semifilters and transitive elements
* 10.2 The compatibility system \(x^{*}\)
* 10.3 The compatibility system \(F^{*}\)
* 10.3.1 Basic definitions and results
* 10.3.2 Strong compatibility: when \(F^{*}\) is a semifilter
* 10.4 Semiframe characterisation of community
* 10.5 Semiframe characterisation of regularity
* 10.6 Semiframe characterisation of (quasi/weak)regularity
* 10.7 Characterisation of being intertwined
* 10.8 Strong compatibility in semitopologies
* 11 Graph representation of semitopologies
* 11.1 From a semitopology to its intersection graph
* 11.1.1 The basic definition
* 11.1.2 The preorder \(\leq\)
* 11.1.3 Transitive elements
* 11.2 From a semiframe to its straddling graph
* 11.2.1 The straddling relation \(\ltimes\)
* 11.2.2 Recovering \(\leq\) and \(*\) from \(\ltimes\)
* 12 Future work and conclusions
* 12.1 Future work
* 12.2 Topology vs. semitopology
* 12.3 Related work
* 12.4 Final comments

## 1 Introduction

### 1.1 Modern consensus

Consensus requires a set of participants to agree on a common output starting from possibly conflicting individual inputs. Algorithms to attain consensus often use a notion of _quorum_ [13]: a set of participants whose unanimous adoption of a value guarantees that other (typically all other) participants will eventually also adopt this value. Social choice theorists have a similar notion called a _winning coalition_ [15, Item 5, page 40]. A typical simple example of a quorum is a set of participants whose cardinality exceeds a certain threshold, e.g. more than one half, or two-thirds, of the total system size. This two-thirds majority is what is used, for example, for the consensus algorithm presented in a classic paper [17] (see e.g. Theorem 2 of that paper). However, talking about a quorum as being (for example) a 'two-thirds majority' assumes that this concept is \(a\). well-defined, \(b\). practical, and \(c\). meaningful even as a hypothetical. For a modern breed of distributed systems, we may need to reject any or all of these assumptions: systems now exist that are radically more distributed, driven by an exponential democratisation of computation and networks and a corresponding imperative to maximise reliability and performance whilst minimising coordination.

Consider an example outside of computer systems: _social consensus_. People in the real world form opinions based on quorums -- the opinions of their sets of friends and family and trusted media sources. These sets can and do differ from one person to the next, and it is certainly not the case that people form opinions based on e.g. polling a two-thirds majority of _all humans on the planet_. Even if we tried to compute such a poll, it would be \(a\). out-of-date (by the time we have even announced the poll, the population has changed!), \(b\). impossible in practice, and \(c\). politically unacceptable. And yet, societies do manage to organise themselves (some of the time). Some other notion of consensus is clearly at play here.

Computing offers a great many more examples: Bitcoin (a blockchain) and Napster (a peer-to-peer file-sharing protocol) are two that have entered our collective social conscience (though their notions of quorum are a bit implicit). The XRP Ledger [18] and Stellar network [19] have explicit notions of quorum in the sense we will study in this paper (see also the discussion of _fail-prone systems and quorum systems_ in Subsection 12.3).

The basic idea of semitopologies is to equate the notion of 'quorum' above with the notion of 'open set' from topology. Semitopologies/semiframes generalise topology/frames, because there is no requirement that the intersection of two opens be open, nor (correspondingly) that a semiframe be closed under meets. It turns out that this idea is fruitful. For example, continuity at a point corresponds to consensus at that point (Remark 2.2.5). And, as a model of consensus, semitopologies/semiframes are already naturally _heterogeneous_:
* There is no requirement that opens be necessarily uniformly generated.
* There is no requirement that opens necessarily intersect, nor that intersections of open sets necessarily be open.

We would not even normally mention that topologies are 'heterogeneous'; it is taken for granted that the open neighbourhoods of a point \(p\) need not _a priori_ have much in common with those of a distinct point \(p^{\prime}\).
However, historically in the theory of consensus this is a relatively new idea -- albeit one which is now being taken up by numerous authors [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20].

**Remark 1.1.2**.: **(This paper in a nutshell)** Our starting point in this paper is to use topology to get a mathematical handle on local quorums, by identifying them with open sets. As mentioned above, the intersection of two quorums need not be a quorum, so this leads us to generalise to _semitopology_ -- which is like topology, but intersections of opens need not be open. We then
1. study semitopological well-behavedness properties,
2. abstract semitopologies to semiframes,
3. prove a categorical duality result, and then
4. transfer semitopological well-behavedness properties to semiframes.

**Remark 1.1.3**.: This is a maths paper, inspired by practical systems. To the mathematically inclined reader we say: enjoy the maths, but remember that we are aiming for a broader audience where possible, so please be patient if we spell some things out for clarity and precision. To the more practical reader we say: this is a paper written for mathematicians of a particular kind (notably, the kind interested in algebras and duality results), so please be patient with that -- but there may be less distance between practical systems and what is in this paper than you think. The underlying semitopological definitions and results do correspond to behaviour in practical systems: in particular, semitopologies are a fairly accurate high-level account of how the XRP Ledger [11] and Stellar network [18] work, and semiframes just shift up another level of abstraction. Getting distributed systems to work is a problem that requires both good theory _and_ good practice, working together. This paper is one step towards this larger goal.

**Remark 1.1.4**.: A classic text on topology [12] justifies topology as follows:
1. Logically, open sets model _affirmations_:1 an open set \(O\) corresponds to an affirmation (of \(O\)) [12, page 10].
Footnote 1: Affirmation: Something declared to be true; a positive statement or judgment.
2. Computationally, open sets model _semidecidable properties_. See the first page of the preface in [12].

In the same style we can justify semitopologies in terms of _collaboration_:
1. Logically, open sets model _collaborative_ affirmations: an open set \(O\) corresponds to an outcome that is agreed on by collaborations within \(O\).
2. Computationally, open sets model _collaborative_ outcomes.

Of course, this raises the question of what collaboration is. Informally, collaboration is when participants work together to arrive at a shared outcome. Slightly more formally, a _collaboration_ is some set \(C\) of participants with the power to progress \(C\) to a next state, for some notion of transition.2
Footnote 2: ...and possibly more than \(C\); but definitely \(C\).

So, consider a set of _participants_ \(\mathsf{P}\) and let _collaborations_ \(\mathcal{C}\subseteq\mathit{pow}(\mathsf{P})\) be a subset of the powerset of \(\mathsf{P}\) subject to just one condition: every \(p\in\mathsf{P}\) is contained in at least one (and possibly more than one) collaboration \(C\in\mathcal{C}\). Then a set \(O\subseteq\mathsf{P}\) is considered to be open when every \(p\in O\) is accompanied by (at least one) of its collaborations \(p\in C\subseteq O\); or equivalently, we can just say: \(O\) is a union of collaborations. Note that this does not mean that every point in \(O\) must proceed in lockstep with every other point (cf. the discrete semitopology in Definition 2.1.6(1), in which every point need collaborate only with itself and so can progress individually), but it does mean that any outcome represented by \(O\) is collaborative in the weaker sense that every \(p\in O\) has a collaboration \(p\in C\subseteq O\). (A small code sketch of this characterisation of opens follows below.)
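The characterisation of opens as unions of collaborations is easy to make concrete for finite \(\mathsf{P}\). The following is a minimal sketch (the function name is ours, not the paper's):

```python
def opens_from_collaborations(points, collaborations):
    """All unions of sub-collections of `collaborations` (including the empty union).

    Every point must lie in at least one collaboration, so the union of all
    of them is the whole space and the result is a semitopology.
    """
    assert all(any(p in C for C in collaborations) for p in points)
    opens = {frozenset()}                    # the empty union
    for C in map(frozenset, collaborations):
        opens |= {O | C for O in opens}      # add C to every union found so far
    return opens

# Example: three participants; 0 and 1 collaborate, and 1 and 2 collaborate.
print(sorted(map(sorted, opens_from_collaborations({0, 1, 2}, [{0, 1}, {1, 2}]))))
# [[], [0, 1], [0, 1, 2], [1, 2]]
```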
The reader who wants to see concrete examples of semitopologies and semiframes will find these in the body of the paper, starting with Subsection 2.1.3. Or: the reader can read this paper in a spirit of pure mathematics, where a test for an interesting definition is that we get interestingly more structure out than we put in; and as we shall see, this is indeed the case.

### 1.2 Map of the paper

1. Section 1 is the Introduction. You Are Here.
2. In Section 2 we define semitopologies and show how continuity corresponds to local consensus (Definition 2.1.2 and Lemma 2.2.4).
3. In Section 3 we introduce _transitive sets_, _topens_, and _intertwined points_. These are all different views on the anti-separation well-behavedness properties that will interest us in this paper. Transitive sets are guaranteed to be in consensus (in a sense made precise in Theorem 3.2.3 and Corollary 3.2.5), and we take a first step towards understanding the fine structure of semitopologies by proving that every semitopology partitions into maximal topens (Corollary 3.5.3 and Theorem 3.5.4), plus other kinds of points, which we classify in the next Section.
4. In Section 4 we start to classify points in more detail, introducing notions of _regular_, _weakly regular_, and _quasiregular_ points (Definition 4.1.3).3
Footnote 3: The other main classification is _conflicted_ points, in Definition 5.5.1. These properties are connected by an equation: regular = weakly regular + unconflicted; see Theorem 5.5.4.
Regular points are those contained in some topen set, and they display particularly good consensus behaviour. Regularity will be very important to us because it implies consensus (see Theorems 3.2.3 and 3.5.4), and we will characterise it in multiple ways: see Remark 4.2.5. (A survey of characterisations of weak regularity requires more machinery and appears in Remark 5.2.5.)
5. In Section 5 we study _closed sets_ and _(un)conflicted points_. In particular we study the interaction between intertwined points, topens, and closures. Typical results are Proposition 5.2.3 and Theorem 5.3.2, which characterise sets of intertwined points as minimal closures. We also see Theorem 5.5.4, which is a key result connecting various well-behavedness conditions.
6. In Section 6 we introduce _semiframes_. These are the algebraic version of semitopologies, and they are to semitopologies as frames are to topologies. The innovation here is that semiframes are not just join-semilattices; they include a _compatibility relation_ \(*\), which abstracts the property of sets intersection \(\between\) (see Remark 1.3.6).
7. In Section 7 we introduce _semifilters_. These play a similar role as filters do in topologies, except that semifilters have a _compatibility condition_ instead of closure under finite meets. We develop the notion of abstract points (completely prime semifilters), and show how to build a semitopology out of the abstract points of a semiframe.
8. In Section 8 we introduce _sober semitopologies_ and _spatial semiframes_. The reader familiar with categorical duality will know these conditions.
Some of the details are significantly different (see for instance the discussion in Subsection 8.3.2), but at a high level these conditions work in the proofs just as they do for the topological duality.
9. In Section 9 we consider the _duality_ between suitable categories of (sober) semitopologies and (spatial) semiframes.
10. In Section 10 we dualise the semitopological _well-behavedness conditions_ from Section 3 to algebraic versions. The correspondence is good (Proposition 10.6.2) but also imperfect in some interesting ways (Remark 10.8.11).
11. In Section 11 we briefly consider two _other ways to represent semitopologies_, based on graphs.
12. In Section 12 we conclude and discuss related and future work.

### 1.3 This paper in context

**Remark 1.3.1**.: **(A paper on semitopologies)** We point the reader to a sister paper to this one, which at time of writing is available online as an unpublished draft [10]. It focusses on point-set semitopologies and we refer the interested reader there for more on semitopologies as sets of points. Where there is overlap, it is so that this paper can be self-contained and self-motivated. Even where material is in common (such as examples and some basic theorems and definitions), the presentation here has likely been edited and specialised for the intended application of semiframes.

**Remark 1.3.2**.: **(Algorithms)** This is not a paper about computation, so the reader should not expect any algorithms. In fact semitopologies do have algorithmic content; see the preliminary but rigorous development of _witnessed sets_ and _witness semitopologies_ in [11, Section 8], which we will also expand on in future work. But, semiframes and semitopologies are subtle and interesting mathematical objects in their own right, just as frames and topologies are. If semiframes do turn out to be algorithmically useful then this would certainly be a plus, but it is not why we wrote this paper.

**Remark 1.3.3**.: **(Algebraic topology \(\neq\) semitopology and semiframes)** Note that algebraic topology has been applied to the solvability of distributed-computing tasks in computational models (e.g. the impossibility of \(k\)-set consensus and the Asynchronous Computability Theorem [14, 15, 16]; see [17] for a survey). This paper is not that! This paper is algebraic and topological, but in different senses and to different ends. We use semitopologies to study the notion of consensus itself, rather than the solvability of consensus and other tasks in computation models; and, we use algebra to abstract semitopologies.

**Remark 1.3.4**.: **(Where the interesting properties are)** Topology often studies spaces with strong separability properties between points, like Hausdorff separability. Note that we will go the other way: for our applications, it seems particularly interesting to study clusters of points that _cannot_ be separated. For example, we state and discuss a novel 'anti-Hausdorff' anti-separation property which we call _being intertwined_ (see Definition 3.6.1 and Remark 3.6.7). Within an intertwined set, continuity implies agreement in a particularly strong sense (see Corollary 3.6.6). This leads us to study classes of semitopologies and semiframes with various anti-separation well-behavedness conditions; see most notably _regularity_ (Sections 4 and 10).

**Remark 1.3.5**.: The reader may have heard of _frames_ and _locales_. These are the same thing: the category of locales is just the categorical opposite of the category of frames.
So every time we write 'semiframe', the reader can safely read 'semilocale'; these are two names for essentially the same structure up to reversing arrows. The literature on frames and locales is huge, as indeed is the literature on topology. Classic texts are [15, 16]. More recent and very readable presentations are in [13]. This literature is a rich source of ideas for things to do with semiframes, with respect to which we cannot possibly be comprehensive in this single paper: there are many things of interest that we simply have not done yet, or perhaps have not even (yet) realised could be done, and this is a feature, not a bug, since it reflects a wide scope for possible future research. A partial list of possible future work is in Subsection 12.1; and lists of properties and non-properties of semiframes/semitopologies vs. frames/topologies are in Subsections 7.2.1, 7.2.2, 8.3.2, and 12.2.

**Remark 1.3.6**.: Something amazing happens in this paper. We have the compatibility relation \(*\). This arises naturally in two ways: it is key to our categorical duality result, and it is also the algebraic version of \(\between\) in semitopologies (sets intersection), which we use to express well-behavedness properties such as regularity. This is amazing because these two motivations for \(*\) are independent: the categorical duality does not require regularity, and the regularity properties do not require a duality -- and yet when we study both well-behavedness and duality, the same structures emerge.

## 2 Semitopology

### 2.1 Definitions, examples, and some discussion

#### 2.1.1 Definition

**Notation 2.1.1**.: Suppose \(\mathsf{P}\) is a set. Write \(pow(\mathsf{P})\) for the powerset of \(\mathsf{P}\) (the set of subsets of \(\mathsf{P}\)).

**Definition 2.1.2**.: A **semitopological space**, or **semitopology** for short, consists of a pair \((\mathsf{P},\mathsf{Open}(\mathsf{P}))\) of
* a (possibly empty) set \(\mathsf{P}\) of **points**, and
* a set \(\mathsf{Open}(\mathsf{P})\subseteq pow(\mathsf{P})\) of **open sets**,
such that:
1. \(\varnothing\in\mathsf{Open}(\mathsf{P})\) and \(\mathsf{P}\in\mathsf{Open}(\mathsf{P})\).
2. If \(X\subseteq\mathsf{Open}(\mathsf{P})\) then \(\bigcup X\in\mathsf{Open}(\mathsf{P})\).4
Footnote 4: There is a little overlap between this clause and the first one: if \(X=\varnothing\) then by convention \(\bigcup X=\varnothing\). Thus, \(\varnothing\in\mathsf{Open}(\mathsf{P})\) follows from both clause 1 and clause 2. If desired, the reader can just remove the condition \(\varnothing\in\mathsf{Open}(\mathsf{P})\) from clause 1, and no harm would come of it.

We may write \(\mathsf{Open}(\mathsf{P})\) just as \(\mathsf{Open}\), if \(\mathsf{P}\) is irrelevant or understood, and we may write \(\mathsf{Open}_{\neq\varnothing}\) for the set of nonempty open sets.

**Remark 2.1.3**.: As a sets structure, a semitopology on \(\mathsf{P}\) is like a _topology_ on \(\mathsf{P}\), but without the condition that the intersection of two open sets be an open set. As a lattice structure, a semitopology on \(\mathsf{P}\) is a bounded complete join-subsemilattice of \(pow(\mathsf{P})\).5
Footnote 5: _Bounded_ means closed under empty intersections and unions, i.e. containing the empty and the full set of points. _Complete_ means closed under arbitrary (possibly empty, possibly infinite) sets unions.
The outcome of this paper is to show that a semitopology on \(\mathsf{P}\) is not _just_ a bounded complete join-subsemilattice of \(pow(\mathsf{P})\). It is in fact a subsemiframe (Definition 6.3.1) of \(pow(\mathsf{P})\). For that observation to be well-motivated, we need to explain more about what semitopologies are.
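For finite spaces, Definition 2.1.2 can be checked mechanically: when \(\mathsf{Open}\) is finite, closure under arbitrary unions reduces to containing the empty union and being closed under binary unions. A minimal sketch (names ours):

```python
def is_semitopology(points, opens):
    """Check Definition 2.1.2 for a finite space.

    `points` is a frozenset; `opens` is a set of frozensets. For finite
    `opens`, closure under arbitrary unions is equivalent to containing the
    empty set and being closed under binary unions.
    """
    if frozenset() not in opens or frozenset(points) not in opens:
        return False                                       # clause 1
    return all(A | B in opens for A in opens for B in opens)  # clause 2

P = frozenset({0, 1, 2})
Open = {frozenset(), frozenset({0, 1}), frozenset({1, 2}), P}
print(is_semitopology(P, Open))   # True; yet {0,1} & {1,2} = {1} is not
                                  # open, so this is not a topology
```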
#### 2.1.2 Semitopologies are not topologies

Semitopologies certainly have a topological flavour (by design), but they have their own distinct behaviour. We note here some of the differences that are close enough to the surface to see before we have even developed much of the theory.

**Remark 2.1.4**.: Every semitopology \((\mathsf{P},\mathsf{Open})\) gives rise to a topology just by closing opens under intersections. But there is more to semitopologies than being subbases for a corresponding topology, because:
1. We are explicitly interested in situations where intersections of open sets need _not_ be open (as discussed e.g. in the Introduction and in Remark 2.2.5).
2. Completing a semitopology to a topology by closing under intersections loses information. For example: the 'many', 'all-but-one', and 'more-than-one' semitopologies in Example 2.1.7 express three distinct notions of quorum, yet all three yield the discrete semitopology (Definition 2.1.6) if we close under intersections and \(\mathsf{P}\) is infinite. See also the overview in Subsection 12.2.
See also Remark 2.1.10.

**Lemma 2.1.5**.: In topologies, if a point \(p\) has a minimal open neighbourhood then it is least. In semitopologies, a point may have multiple distinct minimal open neighbourhoods.6
Footnote 6: In finite semitopologies, minimal open neighbourhoods always exist, and these are a good model for the _collaborations_ mentioned in Remark 1.1.4.

**Proof:** To see that in a topology every minimal open neighbourhood is least, just note that if \(p\in A\) and \(p\in B\) then \(p\in A\cap B\). So if \(A\) and \(B\) are two minimal open neighbourhoods then \(A\cap B\) is contained in both and by minimality is equal to both.
To see that in a semitopology a minimal open neighbourhood need not be least, it suffices to provide an example. Consider \((\mathsf{P},\mathsf{Open})\) defined as follows, as illustrated in Figure 1:
* \(\mathsf{P}=\{0,1,2\}\)
* \(\mathsf{Open}=\{\varnothing,\ \{0,1\},\ \{1,2\},\ \{0,1,2\}\}\)
Note that \(1\) has two minimal open neighbourhoods: \(\{0,1\}\) and \(\{1,2\}\). \(\sqcap\)\(\sqcup\)

Figure 1: An example of a point with two minimal open neighbourhoods (Lemma 2.1.5)

#### 2.1.3 Examples

As standard, we can make any set \(\mathsf{Val}\) into a semitopology (indeed, it is also a topology) just by letting open sets be the powerset:

**Definition 2.1.6**.:
1. Call \((\mathsf{P},\mathit{pow}(\mathsf{P}))\) the **discrete semitopology** on \(\mathsf{P}\). We may call a set with the discrete semitopology a **semitopology of values**, and when we do we will usually call it \(\mathsf{Val}\). We may identify \(\mathsf{Val}\)-the-set and \(\mathsf{Val}\)-the-discrete-semitopology; meaning will always be clear.
2. When \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(\mathsf{Val}\) is a semitopology of values, we may call a function \(f:\mathsf{P}\to\mathsf{Val}\) a **value assignment**.

**Example 2.1.7**.: We consider further examples of semitopologies:
1. Every topology is also a semitopology; intersections of open sets are allowed to be open in a semitopology, they are just not constrained to be open.
2. The **initial** semitopology \((\varnothing,\{\varnothing\})\) and the **final** semitopology \((\{*\},\{\varnothing,\{*\}\})\) are semitopologies.
3. An important discrete semitopological space is \[\mathbb{B}=\{\bot,\top\}\quad\text{with the discrete semitopology}\quad\mathsf{Open}(\mathbb{B})=\{\varnothing,\{\bot\},\{\top\},\{\bot,\top\}\}.\] We may silently treat \(\mathbb{B}\) as a (discrete) semitopological space henceforth.
4. Take \(\mathsf{P}\) to be any nonempty set. Let the **trivial** semitopology on \(\mathsf{P}\) have \[\mathsf{Open}=\{\varnothing,\mathsf{P}\}.\] So (as usual) there are only two open sets: the one containing nothing, and the one containing every point. The only nonempty quorum is \(\mathsf{P}\) itself, reflecting a notion of quorum that requires unanimous agreement.
5. Take \(\mathsf{P}=\{0,1,\ldots,41\}\). Let the **supermajority** semitopology have \[\mathsf{Open}=\{\varnothing\}\cup\{O\subseteq\mathsf{P}\mid\mathit{cardinality}(O)\geq 28\}.\] Since \(\mathsf{P}\) has 42 elements, \(O\) is open when it contains at least two-thirds of the points. The supermajority semitopology is not a topology, since it is not closed under intersections: that \(O\) and \(O^{\prime}\) each contain at least two-thirds of the points in \(\mathsf{P}\) does not mean that their intersection \(O\cap O^{\prime}\) does. Two-thirds is a typical quorum threshold used for making progress in consensus algorithms.
6. Take \(\mathsf{P}\) to be any nonempty set. Let the **many** semitopology have \[\mathsf{Open}=\{\varnothing\}\cup\{O\subseteq\mathsf{P}\mid\mathit{cardinality}(O)=\mathit{cardinality}(\mathsf{P})\}.\] For example, if \(\mathsf{P}=\mathbb{N}\) then open sets include \(\mathit{evens}=\{2n\mid n\in\mathbb{N}\}\) and \(\mathit{odds}=\{2n+1\mid n\in\mathbb{N}\}\). This semitopology is not a topology. Its notion of quorum captures an idea that a quorum is a set that may not be all of \(\mathsf{P}\), but does at least biject with it.
7. Take \(\mathsf{P}\) to be any nonempty set. Let the **all-but-one** semitopology have \[\mathsf{Open}=\{\varnothing,\;\mathsf{P}\}\cup\{\mathsf{P}\setminus\{p\}\mid p\in\mathsf{P}\}.\] This semitopology is not a topology. The notion of quorum here is that there may be at most one objector (but not two).
8. Take \(\mathsf{P}\) to be any set with cardinality at least \(2\). Let the **more-than-one** semitopology have \[\mathsf{Open}=\{\varnothing\}\cup\{O\subseteq\mathsf{P}\mid\textit{cardinality}(O)\geq 2\}.\] This semitopology is not a topology. This notion of quorum reflects a security principle in banking and accounting (and elsewhere) of _separation of duties_: that functional responsibilities be separated such that at least two people are required to complete an action -- so that errors (or worse) cannot be made without being discovered by another person.
9. Take \(\mathsf{P}=\mathbb{R}\) (the set of real numbers) and set \(O\subseteq\mathbb{R}\) to be open when it has the form \([0,r)\) or \((-r,0]\) for any strictly positive real number \(r>0\). This semitopology is not a topology, since (for example) \((-1,0]\) and \([0,1)\) are open, but their intersection \(\{0\}\) is not open.
10. Consider any \(L\)-labelled automaton \(A\) (by which here we mean: a rooted directed graph with labels from \(L\)), and let open sets consist of all possible finite traces of labels as we explore \(A\): so choose a (possibly infinite) path through \(A\) and take as an open set the set of initial segments from that path. To make this concrete, we can take \(A\) to have just one node and two edges labelled \(0\) and \(1\) respectively. Then an open set consists of a set of initial segments of any stream of \(0\)s and \(1\)s. For example, this open set is obtained from the alternating stream \([0,1,0,1,\dots]\): \[\{[],\;[0],\;[0,1],\;[0,1,0],\;[0,1,0,1],\;\dots\}.\]
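Several of these examples are small enough to express directly as membership tests rather than enumerated families. A minimal sketch of Examples 2.1.7(5), (7), and (8), together with a witness that the supermajority semitopology is not closed under intersections (names ours):

```python
def supermajority_open(O, n=42):
    """Example 2.1.7(5): O is open iff empty or >= two-thirds of the n points."""
    return len(O) == 0 or 3 * len(O) >= 2 * n

def all_but_one_open(O, P):
    """Example 2.1.7(7): empty, the whole space, or the whole space minus one point."""
    return len(O) == 0 or len(P - O) <= 1

def more_than_one_open(O):
    """Example 2.1.7(8): empty or of cardinality at least 2."""
    return len(O) == 0 or len(O) >= 2

# Non-closure under intersections, for the supermajority semitopology:
P = set(range(42))
O1, O2 = set(range(28)), set(range(14, 42))
assert supermajority_open(O1) and supermajority_open(O2)
assert not supermajority_open(O1 & O2)   # |O1 & O2| = 14 < 28, and nonempty
```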
**Remark 2.1.8**.: **(Logical models of semitopologies)** One class of examples of semitopologies deserves its own discussion. Consider an arbitrary logical system with predicates \(\mathsf{Pred}\) and entailment relation \(\vdash\).7 Call \(\Phi\subseteq\mathsf{Pred}\) **deductively closed** when \(\Phi\vdash\phi\) implies \(\phi\in\Phi\). Then take
Footnote 7: A validity relation \(\vDash\) would also work.
* \(\mathsf{P}=\mathsf{Pred}\), and
* let \(O\in\mathsf{Open}\) be \(\varnothing\) or the complement of a deductively closed set \(\Phi\), so \(O=\mathsf{Pred}\setminus\Phi\).
Note that an arbitrary union of open sets is open (because an arbitrary intersection of deductively closed sets is deductively closed), but an intersection of open sets need not be open (because the union of deductively closed sets need not be deductively closed). This is a semitopology. We can note further that:
1. There is a traditional, simple model of propositional logic whereby we let propositions be denoted by open sets. Intuitively, these points are 'worlds' at which the proposition is 'true'. The example of semitopologies given above is _not_ this. For a start, in the model above propositions are points, not sets of points.
2. Variations on our model above are possible, all with a theme that associates closed sets with consistency and deductive closure, and open sets with inconsistency. For example:
1. Call \(\Phi\subseteq\mathsf{Pred}\) **inconsistent** when \(\Phi\vdash\). Then we can take \(\mathsf{P}=\mathsf{Pred}\), and we can let \(\mathsf{Open}\) be the set of inconsistent sets of predicates. Note that an arbitrary union of inconsistent sets of predicates is inconsistent, but an intersection of inconsistent sets of predicates need not be inconsistent.
2. We can take open sets to be arbitrary unions of _minimal_ inconsistent sets of predicates; then the previous notion of 'open set' can be recovered as 'has a nonempty open interior'.
3. We can restrict \(\mathsf{P}\) to _atomic predicates_ (ones not created by logical connectives or quantifiers, such as \(\wedge\), \(\Rightarrow\), or \(\forall\)).8
Footnote 8: In a propositional logic these would be called _propositional constants_, such as it is raining; in a predicate logic these might take the form of predicate-formers applied to closed terms, such as \(\mathsf{mortal}(\mathsf{Socrates})\) or perhaps \(1+2=3\).

**Remark 2.1.9**.: **(Why the name 'semitopologies')** When we give the name 'semitopologies' to things that are like topologies but without intersections, this is a riff on
* 'semilattices', for things that are like lattices with joins but without meets (or vice versa), and
* 'semigroups', for things that are like groups but without inverses.
But this terminology also reflects a real mathematical connection, because semitopologies _are_ semilattices _are_ semigroups, in standard ways which we take a moment to spell out:
* A semitopology \((\mathsf{P},\mathsf{Open})\) is a bounded join-subsemilattice of the powerset \(\mathit{pow}(\mathsf{P})\), by taking the join \(\vee\) to be sets union \(\cup\) and the bounds \(\bot\) and \(\top\) to be \(\varnothing\) and \(\mathsf{P}\) respectively.
* A semilattice is an idempotent commutative monoid, which is an idempotent commutative semigroup with an identity, by taking the multiplication \(\circ\) to be \(\vee\) and the identity element to be \(\bot\) (\(\top\) becomes what is called a _zero_ or _absorbing_ element, such that \(\top\circ x=\top\) always).

**Remark 2.1.10**.: **(Semitopologies are not _just_ semilattices)** We noted in Remark 2.1.9 that every semitopology is a semilattice. This is true, but the reader should not read this statement as reductive: semitopologies are not _just_ semilattices. To see why, consider the following two simple semitopologies, as illustrated in Figure 2:
1. \((\mathsf{P},\mathsf{Open})\) where \(\mathsf{P}=\{0,1,2\}\) and \(\mathsf{Open}=\{\varnothing,\{0,1\},\{1,2\},\{0,1,2\}\}\).
2. \((\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) where \(\mathsf{P}^{\prime}=\{0,2\}\) and \(\mathsf{Open}^{\prime}=\{\varnothing,\{0\},\{2\},\{0,2\}\}\).
Note that the semilattices of open sets \(\mathsf{Open}\) and \(\mathsf{Open}^{\prime}\) are isomorphic. However, \((\mathsf{P},\mathsf{Open})\) is not the same semitopology as \((\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\). There is more than one way to see this, but perhaps the simplest is just to note that there is no inverse pair of continuous injections \(f:(\mathsf{P},\mathsf{Open})\to(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) and \(g:(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\to(\mathsf{P},\mathsf{Open})\) -- we will define continuity in Definition 2.2.1(2), but since the definition is just as for topologies, we will take the liberty of using it here. There are limited possibilities for \(f\) and \(g\), so we just enumerate them and check:
* If \(f(0)=0\) and \(f(2)=2\) then there is no continuous inverse \(g\): if \(g(1)=0\) then \(g^{-1}(\{2\})=\{2\}\not\in\mathsf{Open}\), and similarly if \(g(1)=1\) then \(g^{-1}(\{0\})=\{0\}\not\in\mathsf{Open}\).
* If \(f(0)=0\) and \(f(2)=1\) then there is still no continuous inverse \(g\): if \(g(1)=0\) then \(g^{-1}(\{2\})=\{1\}\not\in\mathsf{Open}\), and similarly if \(g(1)=2\) then \(g^{-1}(\{0\})=\{0\}\not\in\mathsf{Open}\).
* Other possibilities are no harder.

Figure 2: Two nonidentical semitopologies (Remark 2.1.10)

### 2.2 Continuity, and interpretation of continuity as consensus

The definitions and results in this Subsection are standard, and this is the point: we can import the topological notions discussed, and they work fine in semitopologies; the fact that there are no surprises here is a feature. In Remark 2.2.5 we explain how these notions matter to us.

**Definition 2.2.1**.: We import standard topological notions of inverse image and continuity:
1. Suppose \(\mathsf{P}\) and \(\mathsf{P}^{\prime}\) are any sets and \(f:\mathsf{P}\to\mathsf{P}^{\prime}\) is a function. Suppose \(O^{\prime}\subseteq\mathsf{P}^{\prime}\). Then write \(f^{-1}(O^{\prime})\) for the **inverse image** or **preimage** of \(O^{\prime}\), defined by \[f^{-1}(O^{\prime})=\{p\in\mathsf{P}\mid f(p)\in O^{\prime}\}.\]
2. Suppose \((\mathsf{P},\mathsf{Open})\) and \((\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) are semitopological spaces (Definition 2.1.2). Call a function \(f:\mathsf{P}\to\mathsf{P}^{\prime}\) **continuous** when the inverse image of an open set is open. In symbols: \[\forall O^{\prime}\in\mathsf{Open}^{\prime}.f^{-1}(O^{\prime})\in\mathsf{Open}.\]
3. Call a function \(f:\mathsf{P}\to\mathsf{P}^{\prime}\) **continuous at \(p\in\mathsf{P}\)** when \[\forall O^{\prime}\in\mathsf{Open}^{\prime}.f(p)\in O^{\prime}\Longrightarrow\exists O_{p,O^{\prime}}\in\mathsf{Open}.p\in O_{p,O^{\prime}}\wedge O_{p,O^{\prime}}\subseteq f^{-1}(O^{\prime}).\] In words: \(f\) is continuous at \(p\) when the inverse image of every open neighbourhood of \(f(p)\) contains an open neighbourhood of \(p\).
4. Call a function \(f:\mathsf{P}\to\mathsf{P}^{\prime}\) **continuous on \(P\subseteq\mathsf{P}\)** when \(f\) is continuous at every \(p\in P\). (It is routine to check that \(f\) is continuous on \(\mathsf{P}\) precisely when it is continuous in the sense of part 2 of this Definition.)

**Lemma 2.2.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) and \((\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) are semitopological spaces (Definition 2.1.2) and suppose \(f:\mathsf{P}\to\mathsf{P}^{\prime}\) is a function. Then the following are equivalent:
1. \(f\) is continuous (Definition 2.2.1(2)).
2. \(f\) is continuous at every \(p\in\mathsf{P}\) (Definition 2.2.1(3)).

**Proof:** The top-down implication is immediate, taking \(O=f^{-1}(O^{\prime})\). For the bottom-up implication, given an open set \(O^{\prime}\in\mathsf{Open}^{\prime}\), we write \[O=\bigcup\{O_{p,O^{\prime}}\in\mathsf{Open}\mid p\in\mathsf{P},\ f(p)\in O^{\prime}\}.\] Above, \(O_{p,O^{\prime}}\) is the open neighbourhood of \(p\) in the preimage of \(O^{\prime}\), which we know exists by Definition 2.2.1(3). It is routine to check that \(O=f^{-1}(O^{\prime})\), and since this is a union of open sets, it is open. \(\sqcap\)\(\sqcup\)

**Definition 2.2.3**.: Suppose that:
* \((\mathsf{P},\mathsf{Open})\) is a semitopology and
* \(\mathsf{Val}\) is a semitopology of values (Definition 2.1.6(1)) and
* \(f:\mathsf{P}\to\mathsf{Val}\) is a value assignment (Definition 2.1.6(2); an assignment of a value to each element in \(\mathsf{P}\)).
Then:
1. Call \(f\) **locally constant at \(p\in\mathsf{P}\)** when there exists \(p\in O_{p}\in\mathsf{Open}\) such that \(\forall p^{\prime}\in O_{p}.f(p)=f(p^{\prime})\). So \(f\) is locally constant at \(p\) when it is constant on some open neighbourhood \(O_{p}\) of \(p\).
2. Call \(f\) **locally constant** when it is locally constant at every \(p\in\mathsf{P}\).

**Lemma 2.2.4**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(\mathsf{Val}\) is a semitopology of values and \(f:\mathsf{P}\to\mathsf{Val}\) is a value assignment. Then the following are equivalent:
* \(f\) is locally constant / locally constant at \(p\in\mathsf{P}\) (Definition 2.2.3).
* \(f\) is continuous / continuous at \(p\in\mathsf{P}\) (Definition 2.2.1).

**Proof:** This is just by pushing around definitions, but we spell it out:
* Suppose \(f\) is continuous, consider \(p\in\mathsf{P}\), and write \(v=f(p)\). By our assumptions we know that \(f^{-1}(v)\) is open, and \(p\in f^{-1}(v)\). This is an open neighbourhood \(O_{p}\) on which \(f\) is constant, so we are done.
* Suppose \(f\) is locally constant, consider \(p\in\mathsf{P}\), and write \(v=f(p)\). By assumption we can find \(p\in O_{p}\in\mathsf{Open}\) on which \(f\) is constant, so that \(O_{p}\subseteq f^{-1}(v)\). \(\sqcap\)\(\sqcup\)
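For finite spaces, Definition 2.2.3 -- and hence, by Lemma 2.2.4, continuity of value assignments -- is directly executable. A minimal sketch (names ours), run on the semitopology from the proof of Lemma 2.1.5:

```python
def is_locally_constant_at(p, f, opens):
    """Definition 2.2.3(1): f is constant on some open neighbourhood of p."""
    return any(p in O and len({f(q) for q in O}) == 1 for O in opens)

def is_locally_constant(points, f, opens):
    """Definition 2.2.3(2): locally constant at every point."""
    return all(is_locally_constant_at(p, f, opens) for p in points)

opens = [frozenset(), frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 1, 2})]
f = {0: 'a', 1: 'a', 2: 'b'}.get
print(is_locally_constant({0, 1, 2}, f, opens))
# False: point 2 has no open neighbourhood on which f is constant,
# so this value assignment is not continuous (= not in consensus) at 2.
```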
**Remark 2.2.5**.: **(Continuity = consensus)** Lemma 2.2.4 tells us that
* we can identify having consensus with having a continuous value assignment, and
* we can view attaining consensus as computing a continuous value assignment.
To see why, consider a semitopology \((\mathsf{P},\mathsf{Open})\): view points \(p\in\mathsf{P}\) as _participants_; and view open neighbourhoods \(p\in O\in\mathsf{Open}\) as **quorums** of \(p\). Then to say "\(f\) is a value assignment that is continuous at \(p\)" is to say that:
* \(f\) assigns a value or belief to \(p\in\mathsf{P}\), and
* \(p\) is contained in a _quorum_ of peers (by Lemma 2.2.4 and continuity) that \(p\) trusts enough to justify its value.

**Example 2.2.6**.: We continue the list of examples in Example 2.1.7 with some specifically 'consensus-flavoured' examples:
1. Let \(\mathsf{P}\) be a set of _moviegoers_. Let \(\mathsf{Open}\) be the semitopology generated by all groups of friends willing to coordinate to go see a movie together. Then \(\mathsf{Open}\) describes the possible sets of people that can be found inside a movie theatre.
2. Let \(\mathsf{P}\) be a set of _voters_. Let \(\mathsf{Open}\) be the semitopology generated by a partition of \(\mathsf{P}\) into two disjoint sets \(T\) and \(B\): one of which watches a TV channel called Fax News, and the other reads a newspaper called the New Yank Times. Because \(T\cap B=\varnothing\), a locally constant function takes a single value on each of \(T\) and \(B\), but the values on the two parts may differ.
3. Let \(\mathsf{P}\) be a set of _senators_, and let \((\mathsf{P},\mathsf{Open})\) be the supermajority semitopology from Example 2.1.7(5) (generated by all sets containing at least two-thirds of the senate). A function is locally constant at a senator \(p\in\mathsf{P}\) when \(p\) votes with a two-thirds majority. This is arguably a model of a system where actions are determined by a two-thirds majority vote.

### 2.3 Neighbourhoods of a point

Definition 2.3.1 is a standard notion from topology, and Lemma 2.3.2 is a (standard) characterisation of openness, which will be useful later:

**Definition 2.3.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\) and \(O\in\mathsf{Open}\). Then call \(O\) an **open neighbourhood** of \(p\) when \(p\in O\). In other words: an open set is (by definition) an _open neighbourhood_ precisely for the points that it contains.

**Lemma 2.3.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and suppose \(P\subseteq\mathsf{P}\) is any set of points. Then the following are equivalent:
* \(P\in\mathsf{Open}\).
* Every point \(p\) in \(P\) has an open neighbourhood in \(P\).
In symbols we can write: \[\forall p\in P.\exists O\in\mathsf{Open}.(p\in O\wedge O\subseteq P)\quad\text{if and only if}\quad P\in\mathsf{Open}\]

**Proof:** If \(P\) is open then \(P\) itself is an open neighbourhood for every point that it contains. Conversely, if every \(p\in P\) has some open neighbourhood \(p\in O_{p}\subseteq P\) then \(P=\bigcup\{O_{p}\mid p\in P\}\), and this is open by condition 2 of Definition 2.1.2. \(\sqcap\)\(\sqcup\)

**Remark 2.3.3**.: An initial inspiration for modelling heterogeneous consensus using semitopologies came from noting that the standard topological property described above in Lemma 2.3.2 corresponds to the _quorum sharing_ property in [1, Property 1]; the connection to topological ideas had not been noticed in [1].

## 3 Transitive sets and topens

### 3.1 Some background on sets intersection

Some notation will be convenient:

**Notation 3.1.1**.: Suppose \(X\), \(Y\), and \(Z\) are sets.
1. Write \[X\between Y\quad\text{when}\quad X\cap Y\neq\varnothing.\] When \(X\between Y\) holds then we say (as standard) that \(X\) and \(Y\) **intersect**.
2. We may chain the \(\between\) notation, writing for example \[X\between Y\between Z\quad\text{for}\quad X\between Y\ \wedge\ Y\between Z.\]
3. We may write \(X\not\between Y\) for \(\neg(X\between Y)\); thus \(X\not\between Y\) means \(X\cap Y=\varnothing\).

**Remark 3.1.2**.: _Note on design in Notation 3.1.1:_ It is uncontroversial that if \(X\neq\varnothing\) and \(Y\neq\varnothing\) then \(X\between Y\) should hold precisely when \(X\cap Y\neq\varnothing\). But there is an edge case! What truth-value should \(X\between Y\) return when \(X\) or \(Y\) is empty?
1. It might be nice if \(X\subseteq Y\) implied \(X\between Y\). This argues for setting \[(X=\varnothing\lor Y=\varnothing)\Longrightarrow X\between Y.\]
2. It might be nice if \(X\between Y\) were monotone in both arguments (i.e. if \(X\between Y\) and \(X\subseteq X^{\prime}\) then \(X^{\prime}\between Y\)). This argues for setting \[(X=\varnothing\lor Y=\varnothing)\Longrightarrow X\not\between Y.\]
3. It might be nice if \(X\between X\) always held -- after all, should a set _not_ intersect itself? -- and this argues for setting \[\varnothing\between\varnothing,\] even if we also set \(\varnothing\not\between Y\) for nonempty \(Y\).
All three choices are defensible, and they are consistent with the following nice property: \[X\between Y\Longrightarrow(X\between X\lor Y\between Y).\] We choose the second -- if \(X\) or \(Y\) is empty then \(X\not\between Y\) -- because it gives the simplest definition: \(X\between Y\) precisely when \(X\cap Y\neq\varnothing\).

We list some elementary properties of \(\between\):

**Lemma 3.1.3**.:
1. \(X\between X\) if and only if \(X\neq\varnothing\).
2. \(X\between Y\) if and only if \(Y\between X\).
3. \(X\between(Y\cup Z)\) if and only if \((X\between Y)\vee(X\between Z)\).
4. If \(X\subseteq X^{\prime}\) and \(X\neq\varnothing\) then \(X\between X^{\prime}\).
5. Suppose \(X\between Y\). Then \(X\subseteq X^{\prime}\) implies \(X^{\prime}\between Y\), and \(Y\subseteq Y^{\prime}\) implies \(X\between Y^{\prime}\).
6. If \(X\between Y\) then \(X\neq\varnothing\) and \(Y\neq\varnothing\).

**Proof:** By facts of sets intersection. \(\sqcap\)\(\sqcup\)

### 3.2 Transitive open sets and value assignments

**Remark 3.2.1**.: **(Taking stock of topens)** Transitive sets are of interest because values of continuous functions are strongly correlated on them. This is Theorem 3.2.3, especially part 2 of Theorem 3.2.3. A transitive _open_ set -- a _topen_ -- is even more important, because an open set corresponds in our semitopological model to a _quorum_ (a collection of participants that can make progress), so a topen is a collection of participants that can make progress and -- where algorithms succeed -- are guaranteed to do so in consensus. For this and other reasons, we very much care about finding topens and understanding when points are associated with topen sets (e.g. by having topen neighbourhoods). As we develop the maths, this will lead us on to consider various regularity properties (Definition 4.1.3). But first, we start with transitive sets and topens:

**Definition 3.2.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Suppose \(T\subseteq\mathsf{P}\) is any set of points.
1. Call \(T\) **transitive** when \[\forall O,O^{\prime}\in\mathsf{Open}.O\between T\between O^{\prime}\Longrightarrow O\between O^{\prime}.\]
2. Call \(T\) **topen** when \(T\) is nonempty, transitive, and open.9
Footnote 9: The empty set is trivially transitive and open, so it would make sense to admit it as a (degenerate) topen.
However, it turns out that we mostly need the notion of 'topen' to refer to certain kinds of neighbourhoods of points (we will call them _communities_; see Definition 4.1.3). It is therefore convenient to exclude the empty set from being topen, because while it is the neighbourhood of every point that it contains, it is not a neighbourhood of any point.
We may write \[\mathsf{Topen}=\{T\in\mathsf{Open}\mid T\text{ is topen}\}.\]
3. Call \(S\) a **maximal topen** when \(S\) is a topen that is not a subset of any strictly larger topen.10
Footnote 10: 'Transitive open' \(\rightarrow\) 'topen', like 'closed and open' \(\rightarrow\) 'clopen'.

**Theorem 3.2.3**.: Suppose that:
* \((\mathsf{P},\mathsf{Open})\) is a semitopology.
* \(\mathsf{Val}\) is a semitopology of values (a nonempty set with the discrete semitopology; see Definition 2.1.6(1)).
* \(f:\mathsf{P}\rightarrow\mathsf{Val}\) is a value assignment (Definition 2.1.6(2)).
* \(T\subseteq\mathsf{P}\) is a transitive set (Definition 3.2.2) -- in particular this will hold if \(T\) is topen -- and \(p,p^{\prime}\in T\).
Then:
1. If \(f\) is continuous at \(p\) and \(p^{\prime}\) then \(f(p)=f(p^{\prime})\).
2. As a corollary, if \(f\) is continuous on \(T\), then \(f\) is constant on \(T\).
In words we can say: Continuous value assignments are constant across transitive sets.

**Proof:** Part 2 follows from part 1, since if \(f(p)=f(p^{\prime})\) for _any_ \(p,p^{\prime}\in T\), then by definition \(f\) is constant on \(T\). So we now just need to prove part 1 of this result.
Consider \(p,p^{\prime}\in T\). By continuity at \(p\) and \(p^{\prime}\), there exist open neighbourhoods \(p\in O\subseteq f^{-1}(f(p))\) and \(p^{\prime}\in O^{\prime}\subseteq f^{-1}(f(p^{\prime}))\). By construction \(O\between T\between O^{\prime}\) (because \(p\in O\cap T\) and \(p^{\prime}\in T\cap O^{\prime}\)). By transitivity of \(T\) it follows that \(O\between O^{\prime}\). Thus, there exists \(p^{\prime\prime}\in O\cap O^{\prime}\), and by construction \(f(p)=f(p^{\prime\prime})=f(p^{\prime})\). \(\sqcap\)\(\sqcup\)

A notation will be useful:

**Notation 3.2.4**.: Suppose \(\mathsf{X}\) is a set and \(f\) is some function on \(\mathsf{X}\) and \(X\subseteq\mathsf{X}\). Suppose further that it is known that \(f\) is constant on \(X\). In symbols: \[\exists c.\forall x\in X.f(x)=c.\] Then we may write \(f(X)\) for the unique constant value that \(f(x)\) takes as \(x\) ranges over \(X\).11
Footnote 11: We feel a bit guilty about this. A more principled approach might be to define \(f(X)=\{f(x)\mid x\in X\}\), and then write \(\{c\}\) for \(f(X)\) where \(f\) is known constant on \(X\). The reader is welcome to fill in the “\(\exists c.\forall x\in X.f(x)=c\wedge\ldots\)” as appropriate.

Corollary 3.2.5 is an easy and useful consequence of Theorem 3.2.3:

**Corollary 3.2.5**.: Suppose that:
* \((\mathsf{P},\mathsf{Open})\) is a semitopology.
* \(f:\mathsf{P}\to\mathsf{Val}\) is a value assignment to some set of values \(\mathsf{Val}\) (Definition 2.1.6).
* \(f\) is continuous on topen sets \(T,T^{\prime}\in\mathsf{Topen}\).
Then \[T\between T^{\prime}\quad\text{implies}\quad f(T)=f(T^{\prime}).\]

Proof.: By Theorem 3.2.3, \(f\) is constant on \(T\) and on \(T^{\prime}\), so we can write \(f(T)\) and \(f(T^{\prime})\) as per Notation 3.2.4. We assumed that \(T\) and \(T^{\prime}\) intersect, and the result follows.
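Definition 3.2.2 is brute-force checkable on finite spaces, which gives a concrete handle on the consensus guarantee of Theorem 3.2.3. A minimal sketch (names ours), run on the semitopology of Example 3.3.3(1) below, whose opens are generated by \(\{0\}\) and \(\{2\}\):

```python
def intersects(X, Y):                        # Notation 3.1.1: X between Y
    return bool(X & Y)

def is_transitive(T, opens):
    """Definition 3.2.2(1): O between T between O' implies O between O'."""
    hits = [O for O in opens if intersects(O, T)]
    return all(intersects(O1, O2) for O1 in hits for O2 in hits)

def is_topen(T, opens):
    """Definition 3.2.2(2): nonempty, transitive, and open."""
    return bool(T) and T in opens and is_transitive(T, opens)

opens = {frozenset(), frozenset({0}), frozenset({2}),
         frozenset({0, 2}), frozenset({0, 1, 2})}
print(is_topen(frozenset({0}), opens))          # True: a maximal topen
print(is_transitive(frozenset({0, 2}), opens))  # False: {0} and {2} both meet
                                                # {0,2} but are disjoint
```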
A converse to Theorem 3.2.3 also holds:

**Proposition 3.2.6**.: Suppose that:
* \((\mathsf{P},\mathsf{Open})\) is a semitopology.
* \(\mathsf{Val}\) is a semitopology of values with at least two elements (to exclude the degenerate cases where no value assignments exist, or where only one exists because there is only one value to map to).
* \(T\subseteq\mathsf{P}\) is any set.
Then
* _if_ for every \(p,p^{\prime}\in T\) and every value assignment \(f:\mathsf{P}\to\mathsf{Val}\), \(f\) continuous at \(p\) and \(p^{\prime}\) implies \(f(p)=f(p^{\prime})\),
* _then_ \(T\) is transitive.

Proof.: We prove the contrapositive. Suppose \(T\) is not transitive, so there exist \(O,O^{\prime}\in\mathsf{Open}\) such that \(O\between T\between O^{\prime}\) and yet \(O\cap O^{\prime}=\varnothing\). We choose two distinct values \(v\neq v^{\prime}\in\mathsf{Val}\) and define \(f\) to map any point in \(O\) to \(v\) and any point in \(\mathsf{P}\setminus O\) to \(v^{\prime}\). Choose some \(p\in O\cap T\) and \(p^{\prime}\in O^{\prime}\cap T\). (It does not matter which, and some such \(p\) and \(p^{\prime}\) exist, because \(O\between T\) and \(O^{\prime}\between T\).) We note that \(f(p)=v\) and \(f(p^{\prime})=v^{\prime}\) and \(f\) is continuous at \(p\in O\) and at \(p^{\prime}\in O^{\prime}\subseteq\mathsf{P}\setminus O\), yet \(f(p)\neq f(p^{\prime})\). \(\sqcap\)\(\sqcup\)

**Remark 3.2.7**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(\mathsf{Val}\) is a semitopology of values with at least two elements. Say that a value assignment \(f:\mathsf{P}\to\mathsf{Val}\) **splits** a set \(T\subseteq\mathsf{P}\) when there exist \(p,p^{\prime}\in T\) such that \(f(p)\neq f(p^{\prime})\). Then Theorem 3.2.3 and Proposition 3.2.6 together say, in words, that:
\(T\subseteq\mathsf{P}\) is transitive if and only if it cannot be split by a continuous value assignment.
Intuitively, transitive sets characterise areas of guaranteed consensus.

### 3.3 Examples and discussion of transitive sets and topens

We may routinely order sets by subset inclusion, including open sets, topens, closed sets, and so on, and we may talk about maximal, minimal, greatest, and least elements. We include the (standard) definition for reference:

**Notation 3.3.1**.: Suppose \((\mathsf{P},\leq)\) is a poset. Then:
1. Call \(p\in\mathsf{P}\) **maximal** when \(\forall p^{\prime}.p\leq p^{\prime}\Longrightarrow p^{\prime}=p\), and **minimal** when \(\forall p^{\prime}.p^{\prime}\leq p\Longrightarrow p^{\prime}=p\).
2. Call \(p\in\mathsf{P}\) **greatest** when \(\forall p^{\prime}.p^{\prime}\leq p\), and **least** when \(\forall p^{\prime}.p\leq p^{\prime}\).

**Example 3.3.2**.: **(Examples of transitive sets)**
1. \(\{p\}\) is transitive, for any single point \(p\in\mathsf{P}\).
2. The empty set \(\varnothing\) is (trivially) transitive and open, but not topen, because in Definition 3.2.2(2) we insist that topens be nonempty.
3. Call a set \(P\subseteq\mathsf{P}\) _topologically indistinguishable_ when (using Notation 3.1.1) for every open set \(O\), \[P\between O\Longleftrightarrow P\subseteq O.\] It is easy to check that if \(P\) is topologically indistinguishable, then it is transitive.

**Example 3.3.3**.: **(Examples of topens)**
1. Take \(\mathsf{P}=\{0,1,2\}\), with open sets \(\varnothing\), \(\mathsf{P}\), \(\{0\}\), \(\{2\}\), and \(\{0,2\}\). This has two maximal topens \(\{0\}\) and \(\{2\}\), and an isolated point \(1\), as illustrated in Figure 3 (top-left diagram).
2. Take \(\mathsf{P}=\{0,1,2\}\), with open sets \(\varnothing\), \(\mathsf{P}\), \(\{0\}\), \(\{2\}\), \(\{1,2\}\), and \(\{0,2\}\).
(Item 1 has two maximal topens \(\{0\}\) and \(\{2\}\), and an isolated point \(1\), as illustrated in Figure 3 (top-left diagram); item 2 likewise has two maximal topens \(\{0\}\) and \(\{2\}\), and an isolated point \(1\), as illustrated in Figure 3 (top-right diagram).)

3. Take \(\mathsf{P}=\{0,1,2,3,4\}\), with open sets generated by \(\{0,1\}\), \(\{1\}\), \(\{3\}\), and \(\{3,4\}\). This has two maximal topens \(\{0,1\}\) and \(\{3,4\}\), and an isolated point \(2\), as illustrated in Figure 3 (lower-left diagram).
4. Take \(\mathsf{P}=\{0,1,2,*\}\), with open sets generated by \(\{0\}\), \(\{1\}\), \(\{2\}\), \(\{0,1,*\}\), and \(\{1,2,*\}\). This has three maximal topens \(\{0\}\), \(\{1\}\), and \(\{2\}\), and an isolated point \(*\), as illustrated in Figure 3 (lower-right diagram).
5. Take the all-but-one semitopology from Example 2.1.7(7) for a set \(\mathsf{P}\) with at least three (and possibly infinitely many) elements; opens are \(\varnothing\), \(\mathsf{P}\), and \(\mathsf{P}\setminus\{x\}\) for every \(x\in\mathsf{P}\), as illustrated in Figure 3 (bottom diagram, for the three-element case). This has a single maximal topen equal to \(\mathsf{P}\) itself.
6. The semitopology in Figure 5 has no topen sets at all (\(\varnothing\) is transitive and open, but by definition in Definition 3.2.2(2) topens have to be nonempty).

Figure 3: Examples of topens (Example 3.3.3)

### Closure properties of transitive sets

**Remark 3.4.1**.: Transitive sets have some nice closure properties which we treat in this Subsection -- here we mean 'closure' in the sense of "the set of transitive sets is closed under various operations", and not in the topological sense of 'closed sets'. Topens -- nonempty transitive _open_ sets -- will have even better closure properties, which emanate from the requirement in Lemma 3.4.3 that at least one of the transitive sets \(T\) or \(T^{\prime}\) is open. We will explore the closure properties of topens in detail in Subsection 3.5, but for now we can just notice that the openness requirement hints at one view of, and motivation for, topens as being "a subset of the transitive sets having particularly good closure properties".

**Lemma 3.4.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(T\subseteq\mathsf{P}\). Then:

1. If \(T\) is transitive and \(T^{\prime}\subseteq T\), then \(T^{\prime}\) is transitive.
2. If \(T\) is topen and \(\varnothing\neq T^{\prime}\subseteq T\) is nonempty and open, then \(T^{\prime}\) is topen.

**Proof:**

1. By Definition 3.2.2 it suffices to consider open sets \(O\) and \(O^{\prime}\) such that \(O\between T^{\prime}\between O^{\prime}\), and prove that \(O\between O^{\prime}\). But this is simple: by Lemma 3.1.3(5) \(O\between T\between O^{\prime}\), so \(O\between O^{\prime}\) follows by transitivity of \(T\).
2. Direct from part 1 of this result and Definition 3.2.2(2). \(\sqcap\)\(\sqcup\)

**Lemma 3.4.3**.: Suppose that:

* \((\mathsf{P},\mathsf{Open})\) is a semitopology.
* \(T,T^{\prime}\subseteq\mathsf{P}\) are transitive.
* At least one of \(T\) and \(T^{\prime}\) is open.

Then:

1. \(\forall O,O^{\prime}\in\mathsf{Open}.O\between T\between T^{\prime}\between O^{\prime}\Longrightarrow O\between O^{\prime}\).
2. If \(T\between T^{\prime}\) then \(T\cup T^{\prime}\) is transitive.

**Proof:**

1. We simplify using Definition 3.2.2 and our assumption that one of \(T\) and \(T^{\prime}\) is open. We consider the case that \(T^{\prime}\) is open: \[O\between T\between T^{\prime}\between O^{\prime}\Longrightarrow O\between T^{\prime}\between O^{\prime}\quad(T\text{ transitive, }T^{\prime}\text{ open})\Longrightarrow O\between O^{\prime}\quad(T^{\prime}\text{ transitive}).\] The argument for when \(T\) is open is precisely similar.

2.
Suppose \(O\between T\cup T^{\prime}\between O^{\prime}\). By Lemma 3.1.3(3) (at least) one of the following four possibilities must hold: \[O\between T\wedge T\between O^{\prime},\quad O\between T^{\prime}\wedge T\between O^{\prime},\quad O\between T\wedge T^{\prime}\between O^{\prime},\quad\text{or}\quad O\between T^{\prime}\wedge T^{\prime}\between O^{\prime}.\] If \(O\between T\ \wedge\ T^{\prime}\between O^{\prime}\) then by part 1 of this result we have \(O\between O^{\prime}\) as required. The other possibilities are no harder. \(\sqcap\)\(\sqcup\)

### Closure properties of topens

Definition 3.5.1 will be useful in Lemma 3.5.2(2):

**Definition 3.5.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Call a nonempty set of nonempty open sets \(\mathcal{O}\subseteq\mathsf{Open}_{\neq\varnothing}\) a **clique** when its elements pairwise intersect.12 In symbols: \[\mathcal{O}\subseteq\mathsf{Open}\text{ is a clique}\quad\text{when}\quad\forall O,O^{\prime}\in\mathcal{O}.O\between O^{\prime}.\] Note that if \(\mathcal{O}\) is a clique then every \(O\in\mathcal{O}\) is nonempty, since \(\varnothing\between O\) is impossible (Notation 3.1.1).

Footnote 12: We call this a _clique_, because if we form the _intersection graph_ (see Definition 11.1.1) with nodes elements of \(\mathcal{O}\) and with an (undirected) edge between \(O\) and \(O^{\prime}\) when \(O\between O^{\prime}\), then \(\mathcal{O}\) is a clique precisely when its intersection graph is indeed a clique. We could also call \(\mathcal{O}\) a _connected_ or (echoing Definition 6.2.1) a _compatible_ set of opens.

**Lemma 3.5.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then:

1. If \(T\) and \(T^{\prime}\) are an intersecting pair of topens (i.e. \(T\between T^{\prime}\)), then \(T\cup T^{\prime}\) is topen.
2. If \(\mathcal{T}\) is a clique of topens (Definition 3.5.1), then \(\bigcup\mathcal{T}\) is topen.
3. If \(\mathcal{T}=(T_{\alpha}\mid\alpha<\beta)\) is an ascending chain of topens then \(\bigcup\mathcal{T}\) is topen.

**Proof:**

1. \(T\cup T^{\prime}\) is open because by Definition 2.1.2(2) open sets are closed under arbitrary unions, and by Lemma 3.4.3(2) \(T\cup T^{\prime}\) is transitive.
2. \(\bigcup\mathcal{T}\) is open by Definition 2.1.2(2). Also, if \(O\between\bigcup\mathcal{T}\between O^{\prime}\) then there exist \(T,T^{\prime}\in\mathcal{T}\) such that \(O\between T\) and \(T^{\prime}\between O^{\prime}\). Since \(\mathcal{T}\) is a clique we have \(T\between T^{\prime}\), so by Lemma 3.4.3(1) (since \(T\) and \(T^{\prime}\) are open) we have \(O\between O^{\prime}\) as required.
3. An ascending chain is pairwise intersecting. We use part 2 of this result. \(\sqcap\)\(\sqcup\)

**Corollary 3.5.3**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then every topen \(T\) is contained in a unique maximal topen.

**Proof:** Consider \(\mathcal{T}\) defined by \[\mathcal{T}=\{T\cup T^{\prime}\mid T^{\prime}\text{ topen}\wedge T\between T^{\prime}\}.\] By Lemma 3.5.2(1) this is a set of topens. By construction they all contain \(T\), and by our assumption that \(T\neq\varnothing\) they pairwise intersect (since they all contain \(T\), at least). By Lemma 3.5.2(2) therefore, \(\bigcup\mathcal{T}\) is a transitive open set. It is easy to check that this is the unique maximal transitive open set that contains \(T\). \(\sqcap\)\(\sqcup\)

**Theorem 3.5.4**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology.
Then any \(P\subseteq\mathsf{P}\), and in particular \(\mathsf{P}\) itself, can be partitioned into:

* Some disjoint collection of maximal topens.
* A set of other points, which are not contained in any topen. In Definition 4.1.3 we will call these points _irregular_. See also Corollary 4.3.3.

Proof.: Routine from Corollary 3.5.3.

**Remark 3.5.5**.: It may be useful to put Theorem 3.5.4 in the context of the terminology, results, and examples that will follow below. We will have Definition 4.1.3(3) and Theorem 4.2.6. These will allow us to call a point \(p\) contained in some maximal topen \(T\) _regular_, and to call the maximal topen \(T\) of a regular point its _community_. Then Theorem 3.5.4 says that a semitopology \(\mathsf{P}\) can be partitioned into:

* Regular points, which partition into disjoint communities -- each community is, in a sense made formal in Theorem 3.2.3, a coalition of strongly-correlated regular points acting together -- and
* a set of irregular points, which have no community and so are not members of any such coalition.

We saw examples in Example 3.3.3 and Figure 3, and we will see more elaborate examples below (see in particular the collection in Example 5.3.6). In the special case that the entire space consists of a single topen community, there are no irregular points and all participants are guaranteed to reach the _same_ consensus, if they reach consensus at all. For the application of a single blockchain trying to arrive at consensus, this discussion tells us that we want it to consist, semitopologically, of a single topen.

### Intertwined points

#### 3.6.1 The basic definition, and some lemmas

**Definition 3.6.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p,p^{\prime}\in\mathsf{P}\).

1. Call \(p\) and \(p^{\prime}\) **intertwined** when \(\{p,p^{\prime}\}\) is transitive. Unpacking Definition 3.2.2 this means: \[\forall O,O^{\prime}\in\mathsf{Open}.(p\in O\wedge p^{\prime}\in O^{\prime})\Longrightarrow O\between O^{\prime}.\] By a mild abuse of notation, write \[p\between p^{\prime}\quad\text{when}\quad p\text{ and }p^{\prime}\text{ are intertwined}.\]
2. Define \(p_{\between}\) (read '\(p\)-intertwined') to be the set of points intertwined with \(p\). In symbols: \[p_{\between}=\{p^{\prime}\in\mathsf{P}\mid p\between p^{\prime}\}.\]

**Example 3.6.2**.: We return to the examples in Example 3.3.3. There we note that:

1. In Example 3.3.3(1), \(1_{\between}=\{0,1,2\}\) and \(0_{\between}=\{0,1\}\) and \(2_{\between}=\{1,2\}\).
2. In Example 3.3.3(2), \(1_{\between}=\{1\}\) and \(0_{\between}=\{0\}\) and \(2_{\between}=\{2\}\).
3. In the all-but-one semitopology of Example 3.3.3(5), \(x_{\between}=\mathsf{P}\) for every \(x\).

It might be tempting to suppose that points being intertwined should be transitive. Lemma 3.6.3 shows that this is not necessarily the case:

**Lemma 3.6.3**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then the 'is intertwined' relation \(\between\) is not necessarily transitive. That is: \(p\between p^{\prime}\between p^{\prime\prime}\) does not necessarily imply \(p\between p^{\prime\prime}\).

Proof.: It suffices to provide a counterexample. The semitopology from Example 3.3.3(1) (illustrated in Figure 3, top-left diagram) will do. Take \[\mathsf{P}=\{0,1,2\}\quad\text{and}\quad\mathsf{Open}=\{\varnothing,\{0\},\{2\},\{0,2\},\mathsf{P}\}.\] Then \[0\between 1\;\;\text{and}\;\;1\between 2,\quad\text{but}\quad\neg(0\between 2).\]

There is more to be said about Lemma 3.6.3, but we will need more machinery to express it; we will pick up this thread again in Definition 5.5.1.
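The counterexample in Lemma 3.6.3 is small enough to check mechanically. The following is a minimal sketch of how one might do so -- our own illustration, not part of the paper's development, and the helper names (`close_under_unions`, `intertwined`) are ours -- encoding a finite semitopology as a family of frozensets closed under unions.

```python
from itertools import combinations

def close_under_unions(generators, points):
    """All unions of the generating opens, plus the empty set and the whole space."""
    opens = {frozenset(), frozenset(points)}
    gens = [frozenset(g) for g in generators]
    for r in range(1, len(gens) + 1):
        for combo in combinations(gens, r):
            opens.add(frozenset().union(*combo))
    return opens

def intertwined(p, q, opens):
    """p between q: every open neighbourhood of p meets every open neighbourhood of q."""
    return all(O & O2
               for O in opens if p in O
               for O2 in opens if q in O2)

P = {0, 1, 2}
opens = close_under_unions([{0}, {2}], P)   # opens: {}, {0}, {2}, {0,2}, {0,1,2}

assert intertwined(0, 1, opens) and intertwined(1, 2, opens)
assert not intertwined(0, 2, opens)         # {0} and {2} are disjoint opens
print("0 and 1, and 1 and 2, are intertwined; 0 and 2 are not: the relation is not transitive")
```

The enumeration of all unions is exponential in the number of generators, which is harmless for toy examples like these but is of course not a practical algorithm for large spaces.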
#### 3.6.2 Pointwise characterisation of transitive sets

**Lemma 3.6.4**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(T\subseteq\mathsf{P}\). Then the following are equivalent:

1. \(T\) is transitive.
2. \(\{p,p^{\prime}\}\) is transitive for every \(p,p^{\prime}\in T\).
3. \(p\between p^{\prime}\) for every \(p,p^{\prime}\in T\).

Proof.: The equivalence of parts 2 and 3 above just restates Definition 3.6.1. We now prove equivalence of parts 1 and 2.

* _Suppose \(T\) is transitive._ By Lemma 3.4.2(1), \(\{p,p^{\prime}\}\) is transitive for every \(p,p^{\prime}\in T\).
* _Suppose \(\{p,p^{\prime}\}\) is transitive for every \(p,p^{\prime}\in T\)._ Consider open sets \(O\) and \(O^{\prime}\) such that \(O\between T\between O^{\prime}\). Choose \(p\in O\cap T\) and \(p^{\prime}\in O^{\prime}\cap T\). By construction \(\{p,p^{\prime}\}\subseteq T\) so this is transitive. It follows that \(O\between O^{\prime}\) as required.

**Theorem 3.6.5**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(T\subseteq\mathsf{P}\). Then the following are equivalent:

1. \(T\) is topen.
2. \(T\in\mathsf{Open}_{\neq\varnothing}\) and \(\forall p,p^{\prime}\in T.p\between p^{\prime}\).

In words we can say: A topen is a nonempty open set of intertwined points.

**Proof:** By Definition 3.2.2(2), \(T\) is topen when it is nonempty, open, and transitive. By Lemma 3.6.4 this last condition is equivalent to \(p\between p^{\prime}\) for every \(p,p^{\prime}\in T\). \(\sqcap\)\(\sqcup\)

A value assignment is constant on a pair of intertwined points, where it is continuous:

**Corollary 3.6.6**.: Suppose \(\mathsf{Val}\) is a semitopology of values and \(f:\mathsf{P}\to\mathsf{Val}\) is a value assignment (Definition 2.1.6) and \(p,p^{\prime}\in\mathsf{P}\) and \(p\between p^{\prime}\). Then if \(f\) is continuous at \(p\) and \(p^{\prime}\) then \(f(p)=f(p^{\prime})\).

**Proof:** \(\{p,p^{\prime}\}\) is transitive by Definition 3.6.1; we use Theorem 3.2.3. \(\sqcap\)\(\sqcup\)

**Remark 3.6.7**.: **(Intertwined as 'non-Hausdorff', and the significance of this to consensus)** Recall that we call a topological space \((\mathsf{P},\mathsf{Open})\) **Hausdorff** (or \(\mathbf{T_{2}}\)) when any two distinct points can be separated by pairwise disjoint open sets. Using the \(\between\) symbol from Notation 3.1.1, we rephrase the Hausdorff condition as \[\forall p,p^{\prime}.p\neq p^{\prime}\Longrightarrow\exists O,O^{\prime}.(p\in O\wedge p^{\prime}\in O^{\prime}\wedge\neg(O\between O^{\prime})),\] and we can then simplify to this: \[\neg\exists p,p^{\prime}.p\neq p^{\prime}\wedge p\between p^{\prime}.\] Now note that the Hausdorff condition can be compactly written just as \[\forall p.p_{\between}=\{p\}. \tag{1}\]

Note how distinct \(p\) and \(p^{\prime}\) being intertwined is the _opposite_ of being Hausdorff: \(p\between p^{\prime}\) when \(p^{\prime}\in p_{\between}\), and they _cannot_ be separated by pairwise disjoint open sets.
Thus the assertion \(p\between p^{\prime}\) in Theorem 3.6.5 is a negation of the Hausdorff property: \[\exists p.p_{\between}\neq\{p\}.\] This is useful because for semitopologies as applied to consensus,

* being Hausdorff means that the space is separated (which is probably a bad thing, if we are looking for a system with lots of points in consensus), whereas
* lots of _'non-Hausdorff'_ intertwined points means by Theorems 3.6.5 and 3.5.4 that there are a relatively small number of relatively large topens -- ideally, everything is intertwined and there is just one topen -- such that by Theorem 3.2.3 the system will (where it reaches consensus) reach consensus on a single constant value assignment (which is a good thing). In the literature this might be called _avoiding forking_.

## 4 Open interiors, communities, and regular points

### Community of a (regular) point

Definition 4.1.1 is standard:

**Definition 4.1.1**.: **(Open interior)** Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(R\subseteq\mathsf{P}\). Define \(\mathit{interior}(R)\) the **interior** of \(R\) by \[\mathit{interior}(R)=\bigcup\{O\in\mathsf{Open}\mid O\subseteq R\}.\]

**Lemma 4.1.2**.: Continuing the notation of Definition 4.1.1, \(\mathit{interior}(R)\) is the greatest open set contained in \(R\).

**Proof:** Routine by the construction in Definition 4.1.1 and closure of open sets under unions (Definition 2.1.2(2)).

**Definition 4.1.3**.: **(Community of a point, and regularity)** Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then:

1. Define \(K(p)\) the **community** of \(p\) by \[K(p)=\mathit{interior}(p_{\between}).\] The community of \(p\) is always an open set by Lemma 4.1.2.13

Footnote 13: But note that \(K(p)\) might be empty (see any \(x\in\mathbb{R}\) in Example 4.4.1(1)).

2. Extend \(K\) to subsets \(P\subseteq\mathsf{P}\) by taking a pointwise union: \[K(P)=\bigcup\{K(p)\mid p\in P\}.\]
3. Call \(p\) **regular** when its community is a topen neighbourhood of \(p\). In symbols: \[p\text{ is regular}\quad\text{when}\quad p\in K(p)\in\mathsf{Topen}.\]
4. Call \(p\) **weakly regular** when its community is an open (but not necessarily topen) neighbourhood of \(p\). In symbols: \[p\text{ is weakly regular}\quad\text{when}\quad p\in K(p)\in\mathsf{Open}.\]
5. Call \(p\) **quasiregular** when its community is nonempty. In symbols: \[p\text{ is quasiregular}\quad\text{when}\quad\varnothing\neq K(p).\]
6. If \(p\) is not regular then we may call it **irregular** or just **not regular**.
7. If \(P\subseteq\mathsf{P}\) and every \(p\in P\) is regular/weakly regular/quasiregular/irregular then we may call \(P\) **regular/weakly regular/quasiregular/irregular** respectively (see also Definition 5.5.1(3)).

**Remark 4.1.4**.: Our development will mostly be concerned with regular and weakly regular points. The property of being quasiregular is also interesting and will also turn up, though less often. Lemma 4.1.5 gives an initial overview of the relationships between these properties. A more detailed treatment follows, which repeats these main points and expands on them and puts them in a detailed context.

**Lemma 4.1.5**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then:

1. If \(p\) is regular, then \(p\) is weakly regular.
2. If \(p\) is weakly regular, then \(p\) is quasiregular.
3. The converse implications need not hold (we sharpen this result in Theorem 5.5.4).
4.
Furthermore, it is possible for a point \(p\) to not be quasiregular.

Proof.: We consider each part in turn:

1. If \(p\) is regular then by Definition 4.1.3(3) \(p\in K(p)\in\mathsf{Topen}\), so certainly \(p\in K(p)\) and by Definition 4.1.3(4) \(p\) is weakly regular.
2. If \(p\) is weakly regular then by Definition 4.1.3(4) \(p\in K(p)\in\mathsf{Open}\), so certainly \(K(p)\neq\varnothing\) and by Definition 4.1.3(5) \(p\) is quasiregular.
3. To see that the converse implications need not hold, note that:
   * Point \(1\) in Example 4.1.6(2) (illustrated in Figure 3, top-left diagram) is weakly regular (\(K(1)=\{0,1,2\}\)) but not regular (\(K(1)\) is open but not topen).
   * Point \(*\) in Example 4.1.6(3) (illustrated in Figure 3, lower-right diagram) is quasiregular but not weakly regular (\(K(*)=\{1\}\) is nonempty but does not contain \(*\)).
4. To see that \(p\) may not even be quasiregular, take \(\mathsf{P}=\mathbb{R}\) (real numbers), with its usual topology (which is also a semitopology). Then \(x_{\between}=\{x\}\) and \(K(x)=\varnothing\) for every \(x\in\mathbb{R}\). More on this in Example 4.4.1(1) and the surrounding discussion.

**Example 4.1.6**.:

1. In Figure 3 (bottom diagram), \(0\), \(1\), and \(2\) are three intertwined points and the entire space \(\{0,1,2\}\) consists of a single topen set. It follows that \(0\), \(1\), and \(2\) are all regular and their community is \(\{0,1,2\}\).
2. In Figure 3 (top-left diagram), \(0\) and \(2\) are regular and \(1\) is weakly regular but not regular (\(1\in K(1)=\{0,1,2\}\) but \(\{0,1,2\}\) is not topen).
3. In Figure 3 (lower-right diagram), \(0\), \(1\), and \(2\) are regular and \(*\) is quasiregular (\(K(*)=\{1\}\)) but not weakly regular (\(*\not\in K(*)\)).
4. In Figure 3 (top-right diagram), \(0\) and \(2\) are regular and \(1\) is neither regular, weakly regular, nor quasiregular (\(K(1)=\varnothing\)).
5. In a semitopology of values \((\mathsf{Val},\mathit{pow}(\mathsf{Val}))\) (Definition 2.1.6) every value \(v\in\mathsf{Val}\) is regular, weakly regular, and unconflicted.
6. In \(\mathbb{R}\) with its usual topology every point is unconflicted, because the topology is Hausdorff and by Equation 1 in Remark 3.6.7 this means precisely that \(p_{\between}=\{p\}\), so that every point is intertwined just with itself; and every point is not weakly regular because \(K(p)=\mathit{interior}(p_{\between})=\varnothing\).

**Example 4.1.7**.: When we started looking at semitopologies we gave some examples in Example 2.1.7. These may seem quite elementary now, but we run through them commenting on which spaces are regular, weakly regular, or quasiregular:

1. The initial semitopology is regular: it has no topen neighbourhoods, but also no points. The final semitopology is regular: it has one topen neighbourhood, containing one point.
2. \(\mathbb{B}\) with the discrete semitopology, is regular. It has two topen neighbourhoods: \(\{\bot\}\) and \(\{\top\}\).
3. The trivial topology is regular; it has a single topen neighbourhood that is \(\mathsf{P}\) itself.
4. The supermajority semitopology is regular. It has one topen neighbourhood containing all of \(\mathsf{P}\).
5. The many semitopology is regular if \(\mathsf{P}\) is finite (because it is equal to the trivial semitopology), and not even quasiregular if \(\mathsf{P}\) is infinite, because in that case \(p_{\between}=\{p\}\) and so \(K(p)=\varnothing\) for every point.
For example, if \(\mathsf{P}=\mathbb{N}\) and \(p\) is even and \(p^{\prime}\) is odd, then \(\mathit{evens}=\{2*n\mid n\in\mathbb{N}\}\) and \(\mathit{odds}=\{2*n+1\mid n\in\mathbb{N}\}\) are disjoint open neighbourhoods of \(p\) and \(p^{\prime}\) respectively.

6. The all-but-one semitopology is regular for \(\mathsf{P}\) having cardinality of \(3\) or more, since all points are intertwined so there is a single topen neighbourhood which is the whole space. If \(\mathsf{P}\) has cardinality \(2\) or \(1\) then we have a discrete semitopology (on two points or one point) and these too are regular, with two or one topen neighbourhoods.
7. The more-than-one semitopology is not even quasiregular for \(\mathsf{P}\) having cardinality of \(4\) or more. If \(\mathsf{P}\) has cardinality \(3\) then we get the bottom topology in Figure 3, which is regular. If \(\mathsf{P}\) has cardinality \(2\) then we get the trivial semitopology, which is regular.
8. Take \(\mathsf{P}=\mathbb{R}\) (the set of real numbers) and let \(\mathsf{Open}\) be generated by the half-open intervals \([0,r)\) and \((-r,0]\) for every strictly positive real number \(r>0\). The reader can check that this semitopology is regular.
9. For the automaton example we cannot comment, because it depends on the automaton.

**Remark 4.1.8**.: Definition 4.1.3 is a key definition and we pause to discuss it:

1. We can ask:
   * Is it always the case that the community of \(p\) exists? _(Yes)_
   * Is the community of \(p\) always open? _(Yes)_
   * Is it always topen? _(No)_
   * Is it always an open (or a topen) neighbourhood for \(p\)? _(No)_
   * Is it always nonempty? _(No)_
   A wealth of behaviour is possible and is explored below, including in Lemma 4.2.3 and in the examples in Subsection 4.4.
2. Why is it interesting to consider \(p\) such that \(p\in K(p)\)? Clearly calling \(p\) 'regular' suggests that non-regular behaviour is 'bad', and regular behaviour is 'good'. But what is this good behaviour that regularity implies? The immediate answer comes from Theorem 3.2.3 (continuous value assignments are constant on topens). This tells us that a regular \(p\) is surrounded by a topen neighbourhood of points \(K(p)=\mathit{interior}(p_{\between})\) that must be in consensus with it under continuous value assignments. Using our terminology _community_ and _regular_, Theorem 3.2.3 can then be read as asserting that _the community of a regular \(p\) shares its values_ -- if we are interested in consensus, this is clearly a useful observation.
3. We can sum up the above intuitively as follows:
   1. We care about transitivity because it implies agreement.
   2. We care about being open, because it implies a quorum, and so local consensus, and so (once local consensus has been reached) also local progress of the system.
   3. Thus, a regular point is interesting because it is a participant in a maximal topen neighbourhood and therefore can _i)_ come to agreement and _ii)_ make progress.
4. A mathematical question then arises: how can the community of \(p\) be (semi)topologically characterised? We will explore this theme, notably in Theorem 4.2.6, Proposition 5.2.6, and Theorem 5.3.2; see also Remark 5.3.1.

### Further exploration of (quasi-/weak) regularity and topen sets

**Remark 4.2.1**.: Recall three common separation axioms from topology:

1. \(T_{0}\): if \(p_{1}\neq p_{2}\) then there exists some \(O\in\mathsf{Open}\) such that \((p_{1}\in O)\) xor \((p_{2}\in O)\), where xor denotes _exclusive or_.
2.
\(T_{1}\): if \(p_{1}\neq p_{2}\) then there exist \(O_{1},O_{2}\in\mathsf{Open}\) such that \(p_{i}\in O_{j}\Longleftrightarrow i=j\) for \(i,j\in\{1,2\}\).

3. \(T_{2}\), or the _Hausdorff condition_: if \(p_{1}\neq p_{2}\) then there exist \(O_{1},O_{2}\in\mathsf{Open}\) such that \(p_{i}\in O_{j}\Longleftrightarrow i=j\) for \(i,j\in\{1,2\}\), and \(O_{1}\cap O_{2}=\varnothing\).

Cf. the discussion in Remark 3.6.7. Even the weakest of the well-behavedness properties for semitopologies that we consider in Definition 4.1.3 -- quasiregularity -- is in some sense strongly opposed to the space being Hausdorff/\(T_{2}\) (though not to being \(T_{1}\)), as Lemma 4.2.2 makes formal.

**Lemma 4.2.2**.:

1. Every quasiregular Hausdorff semitopology is discrete. In more detail: if \((\mathsf{P},\mathsf{Open})\) is a semitopology that is quasiregular (Definition 4.1.3(5)) and Hausdorff (equation 1 in Remark 3.6.7), then \(\mathsf{Open}=\mathit{pow}(\mathsf{P})\).
2. There exists a (quasi)regular \(T_{1}\) semitopology that is not discrete.

**Proof:** We consider each part in turn:

1. By the Hausdorff property, \(p_{\between}=\{p\}\). By the quasiregularity property, \(K(p)\neq\varnothing\). It follows that \(K(p)=\{p\}\). But by construction in Definition 4.1.3(1), \(K(p)\) is an open interior. Thus \(\{p\}\in\mathsf{Open}\). The result follows.
2. It suffices to provide an example. We use the bottom semitopology in Figure 3. Thus \(\mathsf{P}=\{0,1,2\}\) and \(\mathsf{Open}\) is generated by \(\{0,1\}\), \(\{1,2\}\), and \(\{2,0\}\). The reader can check that this is regular (since all three points are intertwined) and \(T_{1}\). \(\sqcap\)\(\sqcup\)

Lemma 4.2.3 confirms in a different way that regularity (Definition 4.1.3(3)) is non-trivially distinct from weak regularity and quasiregularity:

Lemma 4.2.3: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then:

1. \(K(p)\) is always open, since it is an open interior by construction in Definition 4.1.3(1).
2. However, \(K(p)\) is not necessarily topen; equivalently, \(K(p)\) is not necessarily transitive. (More on this later in Subsection 4.4.)

**Proof:** It suffices to provide a counterexample. We consider the semitopology from Example 3.3.3(1) (illustrated in Figure 3, top-left diagram). We calculate that \(K(1)=\{0,1,2\}\), so that \(K(1)\) is an open neighbourhood of \(1\) -- but it is not transitive, and thus not topen, since \(\{0\}\cap\{2\}=\varnothing\). Further checking reveals that \(\{0\}\) and \(\{2\}\) are two maximal topens within \(K(1)\). \(\sqcap\)\(\sqcup\)

So what is \(K(p)\)? We start by characterising \(K(p)\) as the _greatest_ topen neighbourhood of \(p\), if this exists:

Lemma 4.2.4: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and recall from Definition 4.1.3(3) that \(p\) is regular when \(K(p)\) is a topen neighbourhood of \(p\).

1. If \(K(p)\) is a topen neighbourhood of \(p\) (i.e. if \(p\) is regular) then \(K(p)\) is a maximal topen.
2. If \(p\in T\in\mathsf{Topen}\) is a maximal topen neighbourhood of \(p\) then \(T=K(p)\).

**Proof:**

1. Since \(p\) is regular, by definition, \(K(p)\) is topen and is a neighbourhood of \(p\). It remains to show that \(K(p)\) is a maximal topen. Suppose \(T\) is a topen neighbourhood of \(p\); we wish to prove \(T\subseteq K(p)=\mathit{interior}(p_{\between})\). Since \(T\) is open it would suffice to show that \(T\subseteq p_{\between}\).
By Theorem 3.6.5 \(p\between p^{\prime}\) for every \(p^{\prime}\in T\), and it follows immediately that \(T\subseteq p_{\between}\).

2. Suppose \(T\) is a maximal topen neighbourhood of \(p\). First, note that \(T\) is open, and by Theorem 3.6.5 \(T\subseteq p_{\between}\), so \(T\subseteq K(p)\). Now consider any open \(O\subseteq p_{\between}\). Note that \(T\cup O\) is an open subset of \(p_{\between}\) containing \(p\), and it is transitive: every point of \(T\cup O\) is intertwined with \(p\), so any open set that intersects \(T\cup O\) also intersects the open neighbourhood \(T\ni p\), and transitivity of \(T\) then applies. Thus \(T\cup O\) is topen, and by maximality \(T\cup O\subseteq T\) and thus \(O\subseteq T\). It follows that \(K(p)\subseteq T\). \(\sqcap\)\(\sqcup\)

**Remark 4.2.5**.: We can use Lemma 4.2.4 to characterise regularity in five equivalent ways: see Theorem 4.2.6 and Corollary 4.2.8. Other characterisations will follow but will require additional machinery to state (the notion of _closed neighbourhood_; see Definition 5.2.1). See Corollary 5.2.9 and Theorem 5.3.2.

**Theorem 4.2.6**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then the following are equivalent:

1. \(p\) is regular, or in full: \(p\in K(p)\in\mathsf{Topen}\).
2. \(K(p)\) is a greatest topen neighbourhood of \(p\).
3. \(K(p)\) is a maximal topen neighbourhood of \(p\).
4. \(p\) has a maximal topen neighbourhood.
5. \(p\) has some topen neighbourhood.

**Proof:** We prove a cycle of implications:

1. If \(K(p)\) is a topen neighbourhood of \(p\) then it is maximal by Lemma 4.2.4(1). Furthermore this maximal topen neighbourhood of \(p\) is necessarily greatest, since if we have two maximal topen neighbourhoods of \(p\) then their union is a larger topen neighbourhood of \(p\) by Lemma 3.5.2(1) (union of intersecting topens is topen).
2. If \(K(p)\) is a greatest topen neighbourhood of \(p\), then certainly it is a maximal topen neighbourhood of \(p\).
3. If \(K(p)\) is a maximal topen neighbourhood of \(p\), then certainly \(p\) has a maximal topen neighbourhood.
4. If \(p\) has a maximal topen neighbourhood then certainly \(p\) has a topen neighbourhood.
5. Suppose \(p\) has a topen neighbourhood \(T\). By Corollary 3.5.3 we may assume without loss of generality that \(T\) is a maximal topen. We use Lemma 4.2.4(2). \(\sqcap\)\(\sqcup\)

Theorem 4.2.6 has numerous corollaries:

**Corollary 4.2.7**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\) and \(\{p\}\in\mathsf{Open}\). Then \(p\) is regular.

Proof.: We noted in Example 3.3.2(1) that a singleton \(\{p\}\) is always transitive, so if \(\{p\}\) is also open, then it is topen, so that \(p\) has a topen neighbourhood and by Theorem 4.2.6(5) \(p\) is regular.14

Footnote 14: It does not follow from \(p\in\{p\}\in\mathsf{Topen}\) that \(K(p)=\{p\}\): consider \(\mathsf{P}=\{0,1\}\) and \(\mathsf{Open}=\{\varnothing,\{0\},\{0,1\}\}\) and \(p=0\); then \(\{p\}\in\mathsf{Topen}\) yet \(K(p)=\{0,1\}\).

**Corollary 4.2.8**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then the following are equivalent:

1. \(p\) is regular.
2. \(p\) is weakly regular and \(K(p)=K(p^{\prime})\) for every \(p^{\prime}\in K(p)\).

It might be useful to look at Example 3.3.3(1) and Figure 3 (top-left diagram). In that example the point \(1\) is _not_ regular, and its community \(\{0,1,2\}\) is not a community for \(0\) or \(2\).

Proof.: We prove two implications, using Theorem 4.2.6:

* Suppose \(p\) is regular. By Lemma 4.1.5(1) \(p\) is weakly regular. Now consider \(p^{\prime}\in K(p)\).
By Theorem 4.2.6 \(K(p)\) is topen, so it is a topen neighbourhood of \(p^{\prime}\). By Theorem 4.2.6 \(K(p^{\prime})\) is a greatest topen neighbourhood of \(p^{\prime}\). But by Theorem 4.2.6 \(K(p)\) is also a greatest topen neighbourhood of \(p\), and \(K(p)\between K(p^{\prime})\) since they both contain \(p^{\prime}\). By Lemma 3.5.2(1) and maximality, they are equal.

* Suppose \(p\) is weakly regular and suppose \(K(p)=K(p^{\prime})\) for every \(p^{\prime}\in K(p)\), and consider \(p^{\prime},p^{\prime\prime}\in K(p)\). Then \(p^{\prime}\between p^{\prime\prime}\) holds, since \(p^{\prime\prime}\in K(p^{\prime})=K(p)\). By Theorem 3.6.5 \(K(p)\) is topen, and by weak regularity \(p\in K(p)\), so by Theorem 4.2.6 \(p\) is regular as required.

**Corollary 4.2.9**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p,p^{\prime}\in\mathsf{P}\). Then if \(p\) is regular and \(p^{\prime}\in K(p)\) then \(p^{\prime}\) is regular and has the same community.

Proof.: Suppose \(p\) is regular -- so by Definition 4.1.3(3) \(p\in K(p)\in\mathsf{Topen}\) -- and suppose \(p^{\prime}\in K(p)\). Then by Corollary 4.2.8 \(K(p)=K(p^{\prime})\), so \(p^{\prime}\in K(p^{\prime})\in\mathsf{Topen}\) and by Theorem 4.2.6 \(p^{\prime}\) is regular.

**Corollary 4.2.10**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then the following are equivalent for \(T\subseteq\mathsf{P}\):

* \(T\) is a maximal topen.
* \(T\neq\varnothing\) and \(T=K(p)\) for every \(p\in T\).

Proof.: If \(T\) is a maximal topen and \(p\in T\) then by Theorem 4.2.6(3) \(T=K(p)\). If \(T\neq\varnothing\) and \(T=K(p)\) for every \(p\in T\), then \(K(p)=K(p^{\prime})\) for every \(p^{\prime}\in K(p)\) and by Corollary 4.2.8 \(p\) is regular, so that by Definition 4.1.3(3) \(T=K(p)\in\mathsf{Topen}\) as required.

### Intersection and partition properties of regular spaces

Proposition 4.3.1 is useful for consensus. Suppose we are a regular point \(q\) and we have reached consensus with some topen neighbourhood \(O\ni q\). Suppose further that our topen neighbourhood \(O\) intersects with the maximal topen neighbourhood \(K(p)\) of some other regular point \(p\). Then Proposition 4.3.1 tells us that we were inside \(K(p)\) all along:

**Proposition 4.3.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\) is regular. Suppose \(O\in\mathsf{Topen}\) is a topen. Then \[O\between K(p)\quad\text{if and only if}\quad O\subseteq K(p).\]

**Proof:** The right-to-left implication is immediate from Notation 3.1.1(1), given that topens are nonempty by Definition 3.2.2(2). For the left-to-right implication, suppose \(O\between K(p)\). By Theorem 4.2.6 \(K(p)\) is a maximal topen, and by Lemma 3.5.2(1) \(O\cup K(p)\) is topen. Then \(O\subseteq K(p)\) follows by maximality.

**Proposition 4.3.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and suppose \(p,p^{\prime}\in\mathsf{P}\) are regular. Then \[K(p)\between K(p^{\prime})\quad\Longleftrightarrow\quad K(p)=K(p^{\prime}).\]

**Proof:** We prove two implications.

* Suppose there exists \(p^{\prime\prime}\in K(p)\cap K(p^{\prime})\). By Corollary 4.2.9 (\(p^{\prime\prime}\) is regular and) \(K(p)=K(p^{\prime\prime})=K(p^{\prime})\).
* Suppose \(K(p)=K(p^{\prime})\). By assumption \(p\in K(p)\), so \(p\in K(p^{\prime})\). Thus \(p\in K(p)\cap K(p^{\prime})\).

We obtain a simple characterisation of regular semitopological spaces:

**Corollary 4.3.3**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology.
Then the following are equivalent:

1. \((\mathsf{P},\mathsf{Open})\) is regular.
2. \(\mathsf{P}\) partitions into topen sets: there exists some set of topen sets \(\mathcal{T}\) such that \(\neg(T\between T^{\prime})\) for every two distinct \(T,T^{\prime}\in\mathcal{T}\), and \(\mathsf{P}=\bigcup\mathcal{T}\).
3. Every \(X\subseteq\mathsf{P}\) has a cover of topen sets: there exists some set of topen sets \(\mathcal{T}\) such that \(X\subseteq\bigcup\mathcal{T}\).

**Proof:** We prove equivalence of parts 1 and 2:

1. Suppose \(\mathsf{P}\) is regular. By Theorem 3.5.4, \(\mathsf{P}\) partitions into a disjoint collection of maximal topens together with a set of points contained in no topen; by regularity every \(p\in\mathsf{P}\) satisfies \(p\in K(p)\in\mathsf{Topen}\), so this second set is empty and the maximal topens partition \(\mathsf{P}\).
2. Suppose \(\mathcal{T}\) is a topen partition of \(\mathsf{P}\). By definition for every point \(p\) there exists \(T\in\mathcal{T}\) such that \(p\in T\) and so \(p\) has a topen neighbourhood. By Theorem 4.2.6(5&1) \(p\) is regular.

We prove equivalence of parts 2 and 3:

1. Suppose \(\mathcal{T}\) is a topen partition of \(\mathsf{P}\), and suppose \(X\subseteq\mathsf{P}\). Then trivially \(X\subseteq\bigcup\mathcal{T}\).
2. Suppose every \(X\subseteq\mathsf{P}\) has a cover of topen sets. Then \(\mathsf{P}\) has a cover of topen sets; write it \(\mathcal{T}\). By Lemma 3.5.2(1) we may assume without loss of generality that \(\mathcal{T}\) is a partition, and we are done.

**Notation 4.3.4**.: Call a semitopology \((\mathsf{P},\mathsf{Open})\) **singular** when it contains a single maximal topen subset.

**Remark 4.3.5**.: The moral we take from the results and examples above (and those to follow) is that the world we are entering has rather different well-behavedness criteria than those familiar from the study of typical Hausdorff topologies like \(\mathbb{R}\). Put crudely:

1. 'Bad' spaces are spaces that are not regular. \(\mathbb{R}\) with its usual topology (which is also a semitopology) is an example of a 'bad' semitopology; it is not even quasiregular.
2. 'Good' spaces are spaces that are regular. The supermajority and all-but-one semitopologies from Example 2.1.7(5&7) are typical examples of 'good' semitopologies; both are singular regular spaces.
3. Corollary 4.3.3 shows that the 'good' spaces are just the (disjoint, possibly infinite) unions of singular regular spaces.

So to sum this up: modulo disjoint unions, the study of consensus behaviour is the study of semitopological spaces that consist of a single topen set of points that are all intertwined with one another.

### Examples of communities and (ir)regular points

By Definition 4.1.3 a point \(p\) is regular when its community is a topen neighbourhood. Then a point is _not_ regular when its community is _not_ a topen neighbourhood of \(p\). We saw one example of this in Lemma 4.2.3. In this subsection we take a moment to investigate the possible behaviour in more detail.

**Example 4.4.1**.:

1. Take \(\mathsf{P}\) to be \(\mathbb{R}\) the real numbers, with its usual topology (which is also a semitopology). Then \(x_{\between}=\{x\}\) and \(K(x)=\varnothing\) for every \(x\in\mathbb{R}\). In particular, no \(x\in\mathbb{R}\) is regular.
2.
We continue the semitopology from Example 3.3.3(1) (illustrated in Figure 3, top-left diagram), as used in Lemma 4.2.3:

* \(\mathsf{P}=\{0,1,2\}\).
* \(\mathsf{Open}\) is generated by \(\{0\}\) and \(\{2\}\).

Then:

* \(0_{\between}=\{0,1\}\) and \(K(0)=\mathit{interior}(0_{\between})=\{0\}\).
* \(2_{\between}=\{1,2\}\) and \(K(2)=\mathit{interior}(2_{\between})=\{2\}\).
* \(1_{\between}=\{0,1,2\}\) and \(K(1)=\{0,1,2\}\).

3. We take, as illustrated in Figure 4 (left-hand diagram):

* \(\mathsf{P}=\{0,1,2,3,4\}\).
* \(\mathsf{Open}\) is generated by \(\{1,2\}\), \(\{0,1,3\}\), \(\{0,2,4\}\), \(\{3\}\), and \(\{4\}\).

Then:

* \(x_{\between}=\{0,1,2\}\) and \(K(x)=\mathit{interior}(x_{\between})=\{1,2\}\) for \(x\in\{0,1,2\}\).
* \(x_{\between}=\{x\}=K(x)\) for \(x\in\{3,4\}\).

4. We take, as illustrated in Figure 4 (right-hand diagram):

* \(\mathsf{P}=\{0,1,2,3,4\}\).
* \(\mathsf{Open}\) is generated by \(\{1\}\), \(\{2\}\), \(\{3\}\), \(\{4\}\), \(\{0,1,2,3\}\), and \(\{0,1,2,4\}\).

Then:

* \(0_{\between}=\{0,1,2\}\) and \(K(0)=\{1,2\}\).
* \(K(0)\) is not transitive and consists of two distinct topens \(\{1\}\) and \(\{2\}\).
* \(0\not\in K(0)\).

See Remark 5.3.3 for further discussion of this example.

**Lemma 4.4.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then precisely one of the following possibilities must hold, and each is possible:

1. \(p\) is regular: \(p\in K(p)\) and \(K(p)\) is topen (nonempty, open, and transitive).
2. \(K(p)\) is topen, but \(p\not\in K(p)\).
3. \(K(p)=\varnothing\).
4. \(K(p)\) is open but not transitive. (Both \(p\in K(p)\) and \(p\not\in K(p)\) are possible.)

Proof.:

1. To see that \(p\) can be regular, consider \(\mathsf{P}=\{0\}\) with the discrete topology. Then \(p\in K(p)=\{0\}\).
2. To see that it is possible for \(K(p)\) to be topen but \(p\) is not in it, consider Example 4.4.1(3). There, \(\mathsf{P}=\{0,1,2,3,4\}\) and \(0_{\between}=\{0,1,2\}\) and \(K(0)=\{1,2\}\). Then \(K(0)\) is topen, but \(0\not\in K(0)\).
3. To see that \(K(p)=\varnothing\) is possible, consider Example 4.4.1(1) (the real numbers \(\mathbb{R}\) with its usual topology). Then by Remark 3.6.7 \(r_{\between}=\{r\}\) and so \(K(r)=\mathit{interior}(\{r\})=\varnothing\). (See also Example 5.3.6(2) for a more elaborate example.)
4. To see that it is possible for \(K(p)\) to be an open neighbourhood of \(p\) but not transitive, see Example 4.4.1(2). There, \(\mathsf{P}=\{0,1,2\}\) and \(1\in 1_{\between}=\{0,1,2\}=K(1)\), but \(\{0,1,2\}\) is not transitive (it contains two disjoint topens: \(\{0\}\) and \(\{2\}\)). To see that it is possible for \(K(p)\) to be open and nonempty yet not contain \(p\) and not be transitive, see Example 4.4.1(4) for \(p=0\), and see also Remark 5.3.3 for a discussion of the connection with minimal closed neighbourhoods.

The possibilities above are clearly mutually exclusive and exhaustive.

## 5 Closed sets

### Closed sets

In Subsection 5.1 we check that some familiar properties of closures carry over from topologies to semitopologies. There are no technical surprises, but this in itself is a mathematical result that needs to be checked. Then, we will use this to study the relation between closures and sets of intertwined points.

**Definition 5.1.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and suppose \(p\in\mathsf{P}\) and \(P\subseteq\mathsf{P}\). Then:

1. Define \(|P|\subseteq\mathsf{P}\) the **closure** of \(P\) to be the set of points \(p\) such that every open neighbourhood of \(p\) intersects \(P\).
In symbols using Notation 3.1.1: \[|P|=\{p^{\prime}\in\mathsf{P}\mid\forall O\in\mathsf{Open}.p^{\prime}\in O\Longrightarrow P\between O\}.\]

2. As is standard, we may write \(|p|\) for \(|\{p\}|\). Unpacking definitions for reference: \[|p|=\{p^{\prime}\in\mathsf{P}\mid\forall O\in\mathsf{Open}.p^{\prime}\in O\Longrightarrow p\in O\}.\]

Lemma 5.1.2: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and suppose \(P,P^{\prime}\subseteq\mathsf{P}\). Then taking the closure of a set is:

1. _Monotone:_ If \(P\subseteq P^{\prime}\) then \(|P|\subseteq|P^{\prime}|\).
2. _Increasing:_ \(P\subseteq|P|\).
3. _Idempotent:_ \(|P|=||P||\).

Proof: By routine calculations from Definition 5.1.1.

Lemma 5.1.3: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(P\subseteq\mathsf{P}\) and \(O\in\mathsf{Open}\). Then \[P\between O\quad\text{if and only if}\quad|P|\between O.\]

Proof: Suppose \(P\between O\). Then \(|P|\between O\) using Lemma 5.1.2(2). Suppose \(|P|\between O\). Pick \(p\in|P|\cap O\). By construction of \(|P|\) in Definition 5.1.1, \(p\in O\Longrightarrow P\between O\). It follows that \(P\between O\) as required.

Definition 5.1.4: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and suppose \(C\subseteq\mathsf{P}\).

1. Call \(C\) **closed** when \(C=|C|\).
2. Call \(C\) **clopen** when \(C\) is closed and open.
3. Write \(\mathsf{Closed}\) for the **set of closed sets** (as we wrote \(\mathsf{Open}\) for the open sets; the ambient semitopology will always be clear or understood).

Lemma 5.1.5: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and suppose \(S\subseteq\mathsf{P}\). Then \(|S|\) is closed and contains \(S\). In symbols: \[S\subseteq|S|\in\mathsf{Closed}.\]

Proof: From Definition 5.1.4(1) and Lemma 5.1.2(2 & 3).

Example 5.1.6:

1. Take \(\mathsf{P}=\{0,1\}\) and \(\mathsf{Open}=\{\varnothing,\{0\},\{0,1\}\}\). Then the reader can verify that:
   * \(\{0\}\) is open.
   * The closure of \(\{1\}\) is \(\{1\}\) and \(\{1\}\) is closed.
   * The closure of \(\{0\}\) is \(\{0,1\}\).
   * \(\varnothing\) and \(\{0,1\}\) are the only clopen sets.
2. Now take \(\mathsf{P}=\{0,1\}\) and \(\mathsf{Open}=\{\varnothing,\{0\},\{1\},\{0,1\}\}\).15 Then the reader can verify that:
   * Every set is clopen.
   * The closure of every set is itself.

Footnote 15: Following Definition 2.1.6 and Example 2.1.7(3), this is just \(\{0,1\}\) with the _discrete semitopology_.

**Remark 5.1.7**.: There are two standard definitions for when a set is closed: when it is equal to its closure (as per Definition 5.1.4(1)), and when it is the complement of an open set. In topology these are equivalent. We do need to check that the same holds in semitopology, but as it turns out the proof is routine:

**Lemma 5.1.8**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then:

1. Suppose \(C\in\mathsf{Closed}\) is closed (by Definition 5.1.4: \(C=|C|\)). Then \(\mathsf{P}\setminus C\) is open.
2. Suppose \(O\in\mathsf{Open}\) is open. Then \(\mathsf{P}\setminus O\) is closed (by Definition 5.1.4: \(|\mathsf{P}\setminus O|=\mathsf{P}\setminus O\)).

**Proof:**

1. Suppose \(p\in\mathsf{P}\setminus C\). Since \(C=|C|\), we have \(p\in\mathsf{P}\setminus|C|\). Unpacking Definition 5.1.1, this means precisely that there exists \(O_{p}\in\mathsf{Open}\) with \(p\in O_{p}\) and \(\neg(O_{p}\between C)\). We use Lemma 2.3.2.
2. Suppose \(O\in\mathsf{Open}\).
Combining Lemma 2.3.2 with Definition 5.1.1 it follows that \(\neg(O\between|\mathsf{P}\setminus O|)\), so that \(|\mathsf{P}\setminus O|\subseteq\mathsf{P}\setminus O\). Furthermore, by Lemma 5.1.2(2) \(\mathsf{P}\setminus O\subseteq|\mathsf{P}\setminus O|\). \(\sqcap\)\(\sqcup\)

**Corollary 5.1.9**.: If \(C\in\mathsf{Closed}\) then \(\mathsf{P}\setminus C=\bigcup\{O\in\mathsf{Open}\mid\neg(O\between C)\}\).

**Proof:** By Lemma 5.1.8(1) \(\mathsf{P}\setminus C\subseteq\bigcup\{O\in\mathsf{Open}\mid\neg(O\between C)\}\). Conversely, if \(\neg(O\between C)\) then \(O\subseteq\mathsf{P}\setminus C\). \(\sqcap\)\(\sqcup\)

**Corollary 5.1.10**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(R\subseteq\mathsf{P}\) and \(\mathcal{C}\subseteq\mathit{pow}(\mathsf{P})\). Then:

1. \(\varnothing\) and \(\mathsf{P}\) are closed.
2. If every \(R\in\mathcal{C}\) is closed, then \(\bigcap\mathcal{C}\) is closed. Or succinctly in symbols: \[\mathcal{C}\subseteq\mathsf{Closed}\Longrightarrow\bigcap\mathcal{C}\in\mathsf{Closed}.\]
3. \(|R|\) is equal to the intersection of all the closed sets that contain it. In symbols: \[|R|=\bigcap\{C\in\mathsf{Closed}\mid R\subseteq C\}.\]

**Proof:**

1. Immediate from Lemma 5.1.8(2).
2. From Lemma 5.1.8 and Definition 2.1.2(1&2).
3. By Lemma 5.1.5 \(\bigcap\{C\in\mathsf{Closed}\mid R\subseteq C\}\subseteq|R|\). By construction \(R\subseteq\bigcap\{C\in\mathsf{Closed}\mid R\subseteq C\}\), and using Lemma 5.1.2(1) and part 2 of this result we have \[|R|\stackrel{\text{L5.1.2(1)}}{\subseteq}\Big|\bigcap\{C\in\mathsf{Closed}\mid R\subseteq C\}\Big|\stackrel{\text{pt.2}}{=}\bigcap\{C\in\mathsf{Closed}\mid R\subseteq C\}.\]

The usual duality between forming closures and interiors, remains valid in semitopologies:

**Lemma 5.1.11**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then:

1. If \(O\in\mathsf{Open}\) is open then \(O\subseteq\mathit{interior}(|O|)\). The inclusion may be strict.
2. If \(C\in\mathsf{Closed}\) is closed then \(|\mathit{interior}(C)|\subseteq C\). The inclusion may be strict.

Proof.: By routine calculations, as for topologies. To see examples of the strict inclusion, consider \(\mathbb{R}\) with the usual topology and:

1. \(O=(0,1)\cup(1,2)\) is open and \(O\subsetneq\mathit{interior}(|O|)=(0,2)\).
2. \(C=\{0\}\) is closed and \(|\mathit{interior}(C)|=\varnothing\subsetneq C\).

### Closed neighbourhoods and intertwined points

**Definition 5.2.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology.

1. Call \(C\subseteq\mathsf{P}\) a **closed neighbourhood of \(p\in\mathsf{P}\)** when \(C\) is closed and \(p\in\mathit{interior}(C)\).
2. Call \(C\subseteq\mathsf{P}\) a **closed neighbourhood** when \(C\) is closed and \(\mathit{interior}(C)\neq\varnothing\). In words: a closed neighbourhood is a closed set with a nonempty open interior.

**Remark 5.2.2**.:

1. If \(C\) is a closed neighbourhood of \(p\) in the sense of Definition 5.2.1(1) then \(C\) is a closed neighbourhood in the sense of Definition 5.2.1(2), just because if \(p\in\mathit{interior}(C)\) then \(\mathit{interior}(C)\neq\varnothing\).
2. For \(C\) to be a closed neighbourhood of \(p\) it is not enough for \(p\in C\). We require \(p\in\mathit{interior}(C)\), which is a stronger condition. For instance take the semitopology \(\mathsf{P}=\{0,1,2\}\) and \(\mathsf{Open}=\{\varnothing,\mathsf{P},\{0\},\{2\}\}\) from Figure 3 (top-left diagram), and consider \(p=1\) and \(C=\{0,1\}\).
Then \(p\in C\) but \(p\not\in\mathit{interior}(C)=\{0\}\), so that \(C\) is not a closed neighbourhood of \(p\).

Recall from Definition 3.6.1 the notions of \(p\between p^{\prime}\) and \(p_{\between}\). Proposition 5.2.3 packages up our material for convenient use in later results.

**Proposition 5.2.3**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p,p^{\prime}\in\mathsf{P}\). Then:

1. We can characterise when \(p^{\prime}\) is intertwined with \(p\) as follows: \[p\between p^{\prime}\quad\text{if and only if}\quad\forall O\in\mathsf{Open}.p\in O\Longrightarrow p^{\prime}\in|O|.\]
2. As a corollary, \[p_{\between}=\bigcap\{|O|\mid p\in O\in\mathsf{Open}\}.\]
3. Equivalently: \[p_{\between}=\bigcap\{C\in\mathsf{Closed}\mid p\in\mathit{interior}(C)\}=\bigcap\{C\in\mathsf{Closed}\mid C\text{ a closed neighbourhood of }p\}\quad\text{(Definition 5.2.1)}.\] Thus in particular, if \(C\) is a closed neighbourhood of \(p\) then \(p_{\between}\subseteq C\).
4. \(p_{\between}\) is closed and (by Lemma 5.1.8(1)) \(\mathsf{P}\setminus p_{\between}\) is open.

**Proof:**

1. We just rearrange Definition 3.6.1. So16 \[\forall O,O^{\prime}\in\mathsf{Open}.((p\in O\wedge p^{\prime}\in O^{\prime})\Longrightarrow O\between O^{\prime})\] rearranges to \[\forall O\in\mathsf{Open}.(p\in O\Longrightarrow\forall O^{\prime}\in\mathsf{Open}.(p^{\prime}\in O^{\prime}\Longrightarrow O\between O^{\prime})).\] We now observe from Definition 5.1.1 that this is precisely \[\forall O\in\mathsf{Open}.(p\in O\Longrightarrow p^{\prime}\in|O|).\]

Footnote 16: The proof relies on pushing around bracketed scopes, so we bracket everywhere for extra clarity.

2. We just rephrase part 1 of this result.
3. Using part 2 of this result it would suffice to prove \[\bigcap\{|O|\mid p\in O\in\mathsf{Open}\}=\bigcap\{C\in\mathsf{Closed}\mid p\in\mathit{interior}(C)\}.\] We will do this by proving that for each \(O\)-component on the left there is a \(C\) on the right with \(C\subseteq|O|\); and for each \(C\)-component on the right there is an \(O\) on the left with \(|O|\subseteq C\):
   * Consider some \(O\in\mathsf{Open}\) with \(p\in O\). We set \(C=|O|\), so that trivially \(C\subseteq|O|\). By Lemma 5.1.11(1) \(O\subseteq\mathit{interior}(|O|)\), so \(p\in\mathit{interior}(C)\).
   * Consider some \(C\in\mathsf{Closed}\) such that \(p\in\mathit{interior}(C)\). We set \(O=\mathit{interior}(C)\). Then \(p\in O\), and by Lemma 5.1.11(2) \(|O|\subseteq C\).
4. We combine part 2 of this result with Corollary 5.1.10(2).

**Remark 5.2.4**.: We can relate Proposition 5.2.3 to concepts from topology. Write \(nbhd(p)=\{O\in\mathsf{Open}\mid p\in O\}\) and call this the _neighbourhood semifilter_ of \(p\in\mathsf{P}\) (cf. Example 7.1.4). Write \(nbhd^{c}(p)=\{C\in\mathsf{Closed}\mid p\in\mathit{interior}(C)\}\) and call this the _closed neighbourhood system_ of \(p\in\mathsf{P}\). Then:

* Proposition 5.2.3(2) identifies \(p_{\between}\) as the set of **cluster points** of \(nbhd(p)\); see [1, Definition 2, page 69] or [1, page 52].
* Proposition 5.2.3(3) identifies \(p_{\between}\) as the set of **convergence points** of \(nbhd^{c}(p)\).

**Remark 5.2.5**.: Recall that Theorem 4.2.6 characterised regularity in multiple ways, including as the existence of a greatest topen neighbourhood. Proposition 5.2.6 below does something similar, for weak regularity and the existence of closed neighbourhoods (Definition 5.2.1). We make a further connection in Theorem 5.3.2.
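Proposition 5.2.3(2) gives a closure-based recipe for computing \(p_{\between}\). As a sanity check we can verify the recipe on the finite space of Figure 3 (top-left); this is our own sketch, reusing the hypothetical finite encoding from the earlier code block, and it is an illustration rather than part of the paper's development.

```python
from itertools import combinations

def close_under_unions(generators, points):
    opens = {frozenset(), frozenset(points)}
    gens = [frozenset(g) for g in generators]
    for r in range(1, len(gens) + 1):
        for c in combinations(gens, r):
            opens.add(frozenset().union(*c))
    return opens

def closure(S, opens, points):
    """|S|: the points all of whose open neighbourhoods meet S (Definition 5.1.1)."""
    return frozenset(q for q in points
                     if all(S & O for O in opens if q in O))

def intertwined_set(p, opens, points):
    """The set of points intertwined with p, directly from Definition 3.6.1."""
    return frozenset(q for q in points
                     if all(O & O2 for O in opens if p in O
                                   for O2 in opens if q in O2))

P = {0, 1, 2}
opens = close_under_unions([{0}, {2}], P)

for p in P:
    closures = [closure(O, opens, P) for O in opens if p in O]
    via_closures = frozenset(P).intersection(*closures)
    assert intertwined_set(p, opens, P) == via_closures
print("verified: p_between equals the intersection of |O| over open neighbourhoods O of p")
```

For instance, for \(p=0\) the open neighbourhoods are \(\{0\}\), \(\{0,2\}\), and \(\{0,1,2\}\), with closures \(\{0,1\}\), \(\{0,1,2\}\), and \(\{0,1,2\}\); their intersection is \(\{0,1\}=0_{\between}\), as in Example 4.4.1(2).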
**Proposition 5.2.6**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then the following are equivalent:

1. \(p\) is weakly regular, or in full: \(p\in K(p)\) (Definition 4.1.3(4)).
2. \(p_{\between}\) is a closed neighbourhood of \(p\) (Definition 5.2.1(1)).
3. The poset of closed neighbourhoods of \(p\), ordered by subset inclusion, has a least element.
4. \(p_{\between}\) is least in the poset of closed neighbourhoods of \(p\) ordered by subset inclusion.

Proof.: We prove a cycle of implications:

* Suppose \(p\in\mathit{interior}(p_{\between})\). By Proposition 5.2.3(4) \(p_{\between}\) is closed, so this makes it a closed neighbourhood of \(p\) as per Definition 5.2.1.
* Suppose \(p_{\between}\) is a closed neighbourhood of \(p\). By Proposition 5.2.3(3) \(p_{\between}\) is the intersection of _all_ closed neighbourhoods of \(p\), and it follows that this poset has \(p_{\between}\) as a least element.
* Assume the poset of closed neighbourhoods of \(p\) has a least element; write it \(C\). So \(C=\bigcap\{C^{\prime}\in\mathsf{Closed}\mid C^{\prime}\text{ is a closed neighbourhood of }p\}\) and thus by Proposition 5.2.3(3) \(C=p_{\between}\).
* If \(p_{\between}\) is least in the poset of closed neighbourhoods of \(p\) ordered by subset inclusion, then in particular \(p_{\between}\) is a closed neighbourhood of \(p\) and it follows from Definition 5.2.1 that \(p\in\mathit{interior}(p_{\between})\).

Recall from Definition 4.1.3 that \(K(p)=\mathit{interior}(p_{\between})\):

**Lemma 5.2.7**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then \(|K(p)|\subseteq p_{\between}\).

**Proof:** By Proposition 5.2.3(4) \(p_{\between}\) is closed; we use Lemma 5.1.11(2).

**Theorem 5.2.8**: _Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then:_

1. _If \(p\) is weakly regular then \(|K(p)|=p_{\between}\). In symbols:_ \[p\in K(p)\quad\text{implies}\quad|K(p)|=p_{\between}.\]
2. _As an immediate corollary, if \(p\) is regular then \(|K(p)|=p_{\between}\)._

**Proof:** We consider each part in turn:

1. If \(p\in K(p)=\mathit{interior}(p_{\between})\) then \(|K(p)|\) is a closed neighbourhood of \(p\), so by Proposition 5.2.3(3) \(p_{\between}\subseteq|K(p)|\). By Lemma 5.2.7 \(|K(p)|\subseteq p_{\between}\).
2. By Lemma 4.1.5(1) if \(p\) is regular then it is weakly regular. We use part 1 of this result.

We can combine Theorem 5.2.8 with Corollary 4.2.8:

**Corollary 5.2.9**: _Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then the following are equivalent:_

1. _\(p\) is regular._
2. _\(p\) is weakly regular and \(p_{\between}=p^{\prime}_{\between}\) for every \(p^{\prime}\in K(p)\)._

**Proof:** Suppose \(p\) is regular and \(p^{\prime}\in K(p)\). Then \(p\) is weakly regular by Lemma 4.1.5(1), and \(K(p)=K(p^{\prime})\) by Corollary 4.2.8, and \(p_{\between}=p^{\prime}_{\between}\) by Theorem 5.2.8. Suppose \(p\) is weakly regular and \(p_{\between}=p^{\prime}_{\between}\) for every \(p^{\prime}\in K(p)\). By Definition 4.1.3(1) also \(K(p)=\mathit{interior}(p_{\between})=\mathit{interior}(p^{\prime}_{\between})=K(p^{\prime})\) for every \(p^{\prime}\in K(p)\), and by Corollary 4.2.8 \(p\) is regular.
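The regular/weakly-regular/quasiregular trichotomy can also be checked mechanically. The sketch below -- again ours, with hypothetical helper names, under the same finite encoding as the earlier blocks -- classifies each point of the Figure 3 top-left space and reproduces Example 4.1.6(2): \(0\) and \(2\) are regular, \(1\) is weakly regular but not regular.

```python
from itertools import combinations

def close_under_unions(generators, points):
    opens = {frozenset(), frozenset(points)}
    gens = [frozenset(g) for g in generators]
    for r in range(1, len(gens) + 1):
        for c in combinations(gens, r):
            opens.add(frozenset().union(*c))
    return opens

def intertwined(p, q, opens):
    return all(O & O2 for O in opens if p in O for O2 in opens if q in O2)

def interior(S, opens):
    """Greatest open subset of S (Definition 4.1.1): union of all opens inside S."""
    return frozenset().union(*[O for O in opens if O <= S])

def is_topen(T, opens):
    """Nonempty, open, and all points pairwise intertwined (Theorem 3.6.5)."""
    return bool(T) and T in opens and all(intertwined(p, q, opens)
                                          for p in T for q in T)

P = {0, 1, 2}
opens = close_under_unions([{0}, {2}], P)

for p in sorted(P):
    p_btwn = frozenset(q for q in P if intertwined(p, q, opens))
    K = interior(p_btwn, opens)   # the community K(p) of Definition 4.1.3
    if p in K and is_topen(K, opens):
        status = "regular"
    elif p in K:
        status = "weakly regular"
    elif K:
        status = "quasiregular"
    else:
        status = "not even quasiregular"
    print(p, "K(p) =", set(K), "->", status)
```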
**Remark 5.2.10**: Note a subtlety to Corollary 5.2.9: it is possible for \(p\) to be regular, yet it is not the case that \(p_{\between}=p^{\prime}_{\between}\) for every \(p^{\prime}\in p_{\between}\) (rather than for every \(p^{\prime}\in K(p)\)). For an example consider the top-left semitopology in Figure 3, taking \(p=0\) and \(p^{\prime}=1\); then \(1\in 0_{\between}\) but \(0_{\between}=\{0,1\}\) and \(1_{\between}=\{0,1,2\}\).

To understand why this happens the interested reader can look ahead to Subsection 5.5: in the terminology of that Subsection, \(p^{\prime}\) needs to be _unconflicted_ in Corollaries 4.2.8 and 5.2.9.

### Regularity, maximal topens, and minimal closed neighbourhoods

**Remark 5.3.1**.: Recall we have seen an arc of results which

* started with Theorem 4.2.6 and Corollary 4.2.8 -- characterisations of regularity \(p\in K(p)\in\mathsf{Topen}\) in terms of maximal topens -- and
* passed through Proposition 5.2.6 -- characterisation of weak regularity \(p\in K(p)\in\mathsf{Open}\) in terms of minimal closed neighbourhoods.

We are now ready to complete this arc by stating and proving Theorem 5.3.2. This establishes a pleasing -- and not-at-all-obvious -- duality between 'has a maximal topen neighbourhood' and 'has a minimal closed neighbourhood'.

**Theorem 5.3.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then the following are equivalent:

1. \(p\) is regular.
2. \(K(p)\) is a maximal/greatest topen neighbourhood of \(p\).
3. \(p\) is weakly regular and \(p_{\between}\) is a minimal closed neighbourhood (Definition 5.2.1).17

Footnote 17: We really do mean "\(p_{\between}\) is minimal amongst closed neighbourhoods" and _not_ the weaker condition "\(p_{\between}\) is minimal amongst closed neighbourhoods of \(p\)"! That weaker condition is treated in Proposition 5.2.6. See Remark 5.3.4.

**Proof:** Equivalence of parts 1 and 2 is just Theorem 4.2.6(2). For equivalence of parts 1 and 3 we prove two implications:

* Suppose \(p\) is regular. By Lemma 4.1.5(1) \(p\) is weakly regular. Now consider a closed neighbourhood \(C^{\prime}\subseteq p_{\between}\). Note that \(C^{\prime}\) has a nonempty interior by Definition 5.2.1(2), so pick any \(p^{\prime}\) such that \[p^{\prime}\in\mathit{interior}(C^{\prime})\subseteq C^{\prime}\subseteq p_{\between}.\] It follows that \(p^{\prime}\in K(p)=\mathit{interior}(p_{\between})\), and \(p\) is regular, so by Corollary 5.2.9 \(p^{\prime}_{\between}=p_{\between}\), and then by Proposition 5.2.3(3) (since \(C^{\prime}\) is a closed neighbourhood of \(p^{\prime}\)) \(p^{\prime}_{\between}\subseteq C^{\prime}\). Putting this all together we have \[p_{\between}=p^{\prime}_{\between}\subseteq C^{\prime}\subseteq p_{\between},\] so that \(C^{\prime}=p_{\between}\) as required.
* Suppose \(p\) is weakly regular and suppose \(p_{\between}\) is minimal in the poset of closed neighbourhoods ordered by subset inclusion. Consider some \(p^{\prime}\in K(p)\). By Proposition 5.2.3(3) \(p^{\prime}_{\between}\subseteq p_{\between}\), and by minimality it follows that \(p^{\prime}_{\between}=p_{\between}\). Thus also \(K(p^{\prime})=K(p)\). Now \(p^{\prime}\in K(p)\) was arbitrary, so by Corollary 4.2.8 \(p\) is regular as required.

**Remark 5.3.3**.: Recall Example 4.4.1(4), as illustrated in Figure 4 (right-hand diagram). This has a point \(0\) whose community \(K(0)=\{1,2\}\) is not a single topen (it contains two topens: \(\{1\}\) and \(\{2\}\)).
A corollary of Theorem 5.3.2 is that \(0_{\between}=\{0,1,2\}\) cannot be a minimal closed neighbourhood, because if it were then \(0\) would be regular and \(K(0)\) would be a maximal topen neighbourhood of \(0\). We check, and see that indeed, \(0_{\between}\) contains _two_ distinct minimal closed neighbourhoods: \(\{0,1\}\) and \(\{0,2\}\).

**Remark 5.3.4**.: Theorem 5.3.2(3) looks like Proposition 5.2.6(4), but

* Proposition 5.2.6(4) regards the _poset of closed neighbourhoods of \(p\)_ (closed sets with a nonempty open interior that contains \(p\)), whereas
* Theorem 5.3.2(3) regards the _poset of all closed neighbourhoods_ (closed sets with a nonempty open interior, not necessarily including \(p\)).

So the condition used in Theorem 5.3.2(3) is strictly stronger than the condition used in Proposition 5.2.6(4). Correspondingly, the regularity condition in Theorem 5.3.2(1) can be written as \(p\in K(p)\in\mathsf{Topen}\), and (as noted in Lemma 4.1.5 and Example 4.1.6(2)) this is strictly stronger than the condition \(p\in K(p)\) used in Proposition 5.2.6(1).

Corollary 5.3.5 makes Remark 3.6.7 (intertwined is the opposite of Hausdorff) a little more precise:

**Corollary 5.3.5**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a Hausdorff semitopology (so every two distinct points have a pair of disjoint neighbourhoods). Then, if \(p\in\mathsf{P}\) is regular, \(\{p\}\) is clopen.

**Proof:** Suppose \(\mathsf{P}\) is Hausdorff and consider \(p\in\mathsf{P}\). By Remark 3.6.7 \(p_{\between}=\{p\}\). From Theorem 5.3.2(3), \(\{p\}\) is closed and has a nonempty open interior, which must therefore also be equal to \(\{p\}\). By Corollary 4.2.7 (or from Theorem 5.3.2(2)) this interior is transitive. □

We conclude with one more example:

**Example 5.3.6**.:

(Figure 5: An unconflicted, irregular space (Proposition 5.5.3) in which every point is intertwined only with itself (Example 5.5.2).)

1. \(\mathbb{Q}^{2}\) with open sets generated by any covering collection of pairwise non-parallel **rational lines** -- meaning a set of solutions to a linear equation \(a.x+b.y=c\) for \(a\), \(b\), and \(c\) integers -- is a semitopology. This consists of a single (maximal) topen: lines are pairwise non-parallel, so any two lines intersect and (looking to Theorem 3.6.5) all points are intertwined. There is only one closed set with a nonempty open interior, which is the whole space.
2. \(\mathbb{Q}^{2}\) with open sets generated by all (possibly parallel) rational lines is a semitopology. It has no topen sets and (looking to Theorem 3.6.5) no two distinct points are intertwined. For any line \(l\), its complement \(\mathbb{Q}^{2}\setminus l\) is a closed set, given by the union of all the lines parallel to \(l\). Thus every closed set is also an open set, and vice versa, and every line \(l\) is an example of a minimal closed neighbourhood (itself), whose interior is not a topen.

### Relation between \(p_{\between}\) and \(|p|\)

**Remark 5.4.1**.: Recall the definitions of \(p_{\between}\) and \(|p|\):

* The set \(|p|\) is the _closure_ of \(p\). By Definition 5.1.1 this is the set of \(p^{\prime}\) such that every open neighbourhood \(O^{\prime}\ni p^{\prime}\) intersects with \(\{p\}\). By Definition 5.1.4 \(|p|\) is closed.
* The set \(p_{\between}\) is the set of points _intertwined_ with \(p\). By Definition 3.6.1(2) this is the set of \(p^{\prime}\) such that every open neighbourhood \(O^{\prime}\ni p^{\prime}\) intersects with every open neighbourhood \(O\ni p\). By Proposition 5.2.3(4) \(p_{\between}\) is closed.
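These two closure-like operators are easy to compare computationally. Here is a minimal Python sketch on the Sierpinski space of Example 5.4.3 below (\(\mathsf{P}=\{0,1\}\), \(\mathsf{Open}=\{\varnothing,\{1\},\{0,1\}\}\)); the function names are ours.

```python
# A minimal sketch of Remark 5.4.1 on the Sierpinski space of
# Example 5.4.3 below: P = {0, 1}, Open = {∅, {1}, {0, 1}}.
P = frozenset({0, 1})
OPENS = [frozenset(s) for s in [(), (1,), (0, 1)]]

def closure(p):
    # |p| = points q such that every open neighbourhood of q meets {p}.
    return frozenset(q for q in P
                     if all(p in O for O in OPENS if q in O))

def p_between(p):
    # p_between = points q such that every open neighbourhood of q
    # intersects every open neighbourhood of p.
    return frozenset(q for q in P
                     if all(O & O2
                            for O in OPENS if p in O
                            for O2 in OPENS if q in O2))

for p in sorted(P):
    assert closure(p) <= p_between(p)   # |p| ⊆ p_between, always
    print(p, closure(p), p_between(p))  # the inclusion is strict for p = 0
```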
We take a moment to study how \(p_{\between}\) and \(|p|\) are related. Lemma 5.4.2 rephrases Remark 5.4.1 more precisely by looking at it through set complements. We will use it in Lemma 10.4.4(2):

**Lemma 5.4.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then:

1. \(\mathsf{P}\setminus|p|=\bigcup\{O\in\mathsf{Open}\mid p\not\in O\}\in\mathsf{Open}\). In words: \(\mathsf{P}\setminus|p|\) is the union of the **open** sets that avoid \(p\).
2. \(\mathsf{P}\setminus p_{\between}=\bigcup\{C\in\mathsf{Closed}\mid p\not\in C\}\in\mathsf{Open}\). In words: \(\mathsf{P}\setminus p_{\between}\) is the union of the **closed** sets that avoid \(p\).
3. \(\mathsf{P}\setminus p_{\between}=\bigcup\{O^{\prime}\in\mathsf{Open}\mid\exists O\in\mathsf{Open}.p\in O\wedge O^{\prime}\not\between O\}\in\mathsf{Open}\). In words: \(\mathsf{P}\setminus p_{\between}\) is the union of the open sets that avoid some neighbourhood of \(p\).

**Proof:**

1. Immediate from Definitions 3.6.1 and 5.1.1. Openness is from Definition 2.1.2(2).
2. We reason as follows using Proposition 5.2.3(3):
\[\mathsf{P}\setminus p_{\between}=\bigcup\{\mathsf{P}\setminus|O|\mid p\in O\}=\bigcup\{C\in\mathsf{Closed}\mid p\not\in C\}.\]
Openness is Proposition 5.2.3(4).
3. From part 2 of this result using Definition 5.1.1, or by a routine argument direct from Definition 3.6.1. Openness is from Definition 2.1.2(2). □

**Example 5.4.3**.: The reader can easily prove from Lemma 5.4.2(1&3) that \(|p|\subseteq p_{\between}\). We take a moment to note that the subset inclusion may be strict. Take \(\mathsf{P}=\{0,1\}\) and \(\mathsf{Open}=\{\varnothing,\{1\},\{0,1\}\}\) (this space has a name: the _Sierpinski topology_). Then:

* \(|0|=\{0\}\) (because \(\{1\}\) is open), but
* \(0_{\between}=\{0,1\}\) (because every open neighbourhood of \(0\) intersects with every open neighbourhood of \(1\)).

Thus we see that \(|0|=\{0\}\subsetneq\{0,1\}=0_{\between}\), and \(0\) is regular since \(0\in\mathit{interior}(0_{\between})=\{0,1\}\in\mathsf{Topen}\).

### (Un)conflicted points: transitivity of \(\between\)

In Lemma 3.6.3 we asked whether the 'is intertwined with' relation \(\between\) from Definition 3.6.1(1) is transitive -- answer: not necessarily. Transitivity of \(\between\) is a natural condition. We now have enough machinery to study it in more detail, and this will help us gain a deeper understanding of the properties of not-necessarily-regular points.

**Definition 5.5.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology.

1. Call a point \(p\) **conflicted** when there exist \(p^{\prime}\) and \(p^{\prime\prime}\) such that \(p^{\prime}\between p\) and \(p\between p^{\prime\prime}\) yet \(\neg(p^{\prime}\between p^{\prime\prime})\).
2. If \(p^{\prime}\between p\between p^{\prime\prime}\) implies \(p^{\prime}\between p^{\prime\prime}\) always, then call \(p\) **unconflicted**.
3. Continuing Definition 4.1.3(7), if \(P\subseteq\mathsf{P}\) and every \(p\in P\) is conflicted/unconflicted, then we may call \(P\) **conflicted/unconflicted** respectively.

**Example 5.5.2**.: We consider some examples:

1. In the Figure 3 top-left diagram, \(0\) and \(2\) are unconflicted and intertwined with themselves, and \(1\) is conflicted (being intertwined with \(0\), \(1\), and \(2\)). If the reader wants to know what a conflicted point looks like: it looks like \(1\).
2. In the Figure 3 top-right diagram, \(0\) and \(2\) are unconflicted and intertwined with themselves, and \(1\) is conflicted (being intertwined with \(0\), \(1\), and \(2\)).
3. In the Figure 3 lower-left diagram, \(0\) and \(1\) are unconflicted and intertwined with themselves, and \(3\) and \(4\) are unconflicted and intertwined with themselves, and \(2\) is conflicted (being intertwined with \(0\), \(1\), \(2\), \(3\), and \(4\)).
4. In the Figure 3 lower-right diagram, all points are unconflicted, and \(0\) and \(2\) are intertwined just with themselves, and \(1\) and \(*\) are intertwined with one another.
5. In Figure 5, all points are unconflicted and intertwined only with themselves.

**Proposition 5.5.3**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then:

1. If \(p\) is regular then it is unconflicted. Equivalently, by the contrapositive: if \(p\) is conflicted then it is not regular.
2. \(p\) may be unconflicted and neither quasiregular, weakly regular, nor regular.
3. There exists a semitopological space such that
   * every point is unconflicted (so \(\between\) is a transitive relation), yet
   * every point is irregular, not weakly regular, and not quasiregular (so there are no regular points, no weakly regular points, and no quasiregular points).

**Proof:** We consider each part in turn:

1. Consider \(q\between p\between q^{\prime}\). We must show that \(q\between q^{\prime}\), so consider open neighbourhoods \(Q\ni q\) and \(Q^{\prime}\ni q^{\prime}\). By assumption \(p\) is regular, so unpacking Definition 4.1.3(3), \(K(p)\) is a topen (transitive and open) neighbourhood of \(p\). By assumption \(Q\between K(p)\between Q^{\prime}\), and by transitivity of \(K(p)\) (Definition 3.2.2(1)) we have \(Q\between Q^{\prime}\) as required.
2. Consider the semitopology illustrated in Figure 5. Note that the point \(0\) is not conflicted (because it is not intertwined with any other point), but it is also neither quasiregular, weakly regular, nor regular, because its community is the empty set.
3. As for the previous part, noting that the same holds of points \(1\), \(2\), and \(3\) in Figure 5. □

We can combine Proposition 5.5.3 with a previous result, Lemma 4.1.5, to get a precise and attractive relation between being

* regular (Definition 4.1.3(3)),
* weakly regular (Definition 4.1.3(4)), and
* unconflicted (Definition 5.5.1),

as follows:

**Theorem 5.5.4**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then the following are equivalent:

* \(p\) is regular.
* \(p\) is weakly regular and unconflicted.

More succinctly we can write: _regular = weakly regular + unconflicted_.

**Proof:** We prove two implications:

* If \(p\) is regular then it is weakly regular by Lemma 4.1.5 and unconflicted by Proposition 5.5.3(1).
* Suppose \(p\) is weakly regular and unconflicted. By Definition 4.1.3(4) \(p\in K(p)\), and by Lemma 3.6.4(3) it would suffice to show that \(q\between q^{\prime}\) for any \(q,q^{\prime}\in K(p)\). So consider \(q,q^{\prime}\in K(p)\). Now by Definition 4.1.3(1) \(K(p)=\mathit{interior}(p_{\between})\), so in particular \(q,q^{\prime}\in p_{\between}\). Thus \(q\between p\between q^{\prime}\), and since \(p\) is unconflicted, \(q\between q^{\prime}\) as required. □
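Theorem 5.5.4 is straightforward to test by brute force on finite spaces. Below is a minimal Python sketch on a small hypothetical space chosen so that it behaves as Example 5.5.2(1) describes (we do not claim it reproduces the paper's Figure 3 verbatim): the point \(1\) comes out weakly regular but conflicted, hence not regular.

```python
# A minimal sketch of Definition 5.5.1 and Theorem 5.5.4 on a small
# hypothetical space built to behave as Example 5.5.2(1) describes
# (not claimed to be the paper's Figure 3 verbatim).
P = frozenset({0, 1, 2})
OPENS = [frozenset(s) for s in [(), (0,), (2,), (0, 2), (0, 1, 2)]]

def intertwined(p, q):
    return all(O & O2
               for O in OPENS if p in O
               for O2 in OPENS if q in O2)

def conflicted(p):
    # Definition 5.5.1(1): some q, q' intertwined with p are not
    # themselves intertwined.
    tw = [q for q in P if intertwined(p, q)]
    return any(not intertwined(q, q2) for q in tw for q2 in tw)

def interior(S):
    return frozenset().union(*[O for O in OPENS if O <= S])

def weakly_regular(p):
    return p in interior(frozenset(q for q in P if intertwined(p, q)))

for p in sorted(P):
    print(p, 'weakly regular:', weakly_regular(p),
          'conflicted:', conflicted(p))
# By Theorem 5.5.4, point 1 (weakly regular but conflicted) is not regular.
```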
## 6 Semiframes: compatible complete semilattices

### Complete join-semilattices, and morphisms between them

We recall some (mostly standard) definitions and facts:

**Definition 6.1.1**.:

1. A **poset** \((\mathsf{X},\leq)\) is a set \(\mathsf{X}\) of **elements** and a relation \(\leq\subseteq\mathsf{X}\times\mathsf{X}\) that is transitive, reflexive, and antisymmetric.
2. A poset \((\mathsf{X},\leq)\) is a **complete join-semilattice** when every \(X\subseteq\mathsf{X}\) (\(X\) may be empty or equal to all of \(\mathsf{X}\)) has a least upper bound -- or **join** -- \(\bigvee X\in\mathsf{X}\). All semilattices in this paper will be join (rather than meet) semilattices, so we may omit the word 'join' and just call this a _complete semilattice_ henceforth.
3. If \((\mathsf{X},\leq)\) is a complete semilattice then we may write
\[\top_{\mathsf{X}}=\bigvee\mathsf{X}\quad\text{and}\quad\bot_{\mathsf{X}}=\bigvee\varnothing\]
for its greatest and least elements respectively.

**Definition 6.1.2**.: Suppose \((\mathsf{X},\leq)\) and \((\mathsf{X}^{\prime},\leq^{\prime})\) are complete semilattices. Call a function \(g:\mathsf{X}^{\prime}\to\mathsf{X}\) a **(complete semilattice) morphism** when:

1. \(g\) commutes with joins: \(g(\bigvee X^{\prime})=\bigvee\{g(x^{\prime})\mid x^{\prime}\in X^{\prime}\}\) for every \(X^{\prime}\subseteq\mathsf{X}^{\prime}\).
2. \(g(\top_{\mathsf{X}^{\prime}})=\top_{\mathsf{X}}\).
**Remark 6.1.3**.: In Definition 6.1.2(2) we insist that \(g(\top_{\mathsf{X}^{\prime}})=\top_{\mathsf{X}}\); i.e. we want our notion of morphism to preserve the top element. This does not follow from Definition 6.1.2(1), because \(g\) need not be surjective onto \(\mathsf{X}\), so we need to add it as a separate condition. Contrast with \(g(\bot_{\mathsf{X}^{\prime}})=\bot_{\mathsf{X}}\), which does follow from Definition 6.1.2(1), because \(\bot_{\mathsf{X}^{\prime}}\) is the least upper bound of \(\varnothing\).

We want \(g(\top_{\mathsf{X}^{\prime}})=\top_{\mathsf{X}}\) because our intended model is that \((\mathsf{X},\leq)=(\mathsf{Open},\subseteq)\) is the semilattice of open sets of a semitopology \((\mathsf{P},\mathsf{Open})\), and similarly for \((\mathsf{X}^{\prime},\leq^{\prime})\), and \(g\) is equal to \(f^{-1}\) where \(f:(\mathsf{P},\mathsf{Open})\to(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) is a continuous function.

We recall a standard result:

**Lemma 6.1.4**.: Suppose \((\mathsf{X},\leq)\) is a complete join-semilattice. Then:

1. If \(x_{1},x_{2}\in\mathsf{X}\) then \(x_{1}\leq x_{2}\) if and only if \(x_{1}\vee x_{2}=x_{2}\).
2. If \(f:(\mathsf{X},\leq)\to(\mathsf{X}^{\prime},\leq^{\prime})\) is a semilattice morphism (Definition 6.1.2) then \(f\) is **monotone**: if \(x_{1}\leq x_{2}\) then \(f(x_{1})\leq f(x_{2})\), for every \(x_{1},x_{2}\in\mathsf{X}\).

**Proof:** We consider each part in turn:

1. Suppose \(x_{1}\leq x_{2}\). By the definition of a least upper bound, this means precisely that \(x_{2}\) is a least upper bound for \(\{x_{1},x_{2}\}\). It follows that \(x_{1}\vee x_{2}=x_{2}\). The converse implication follows just by reversing this reasoning.
2. Suppose \(x_{1}\leq x_{2}\). By part 1 of this result \(x_{1}\vee x_{2}=x_{2}\), so \(f(x_{1}\vee x_{2})=f(x_{2})\). By Definition 6.1.2 \(f(x_{1})\vee f(x_{2})=f(x_{2})\). By part 1 of this result \(f(x_{1})\leq f(x_{2})\). □

### The compatibility relation

Definition 6.2.1 is a simple idea, but so far as we are aware it is novel:

**Definition 6.2.1**.: Suppose \((\mathsf{X},\leq)\) is a complete semilattice. A **compatibility relation** \(*\subseteq\mathsf{X}\times\mathsf{X}\) is a relation on \(\mathsf{X}\) such that:

1. \(*\) is _commutative_, so if \(x,x^{\prime}\in\mathsf{X}\) then
\[x*x^{\prime}\quad\text{if and only if}\quad x^{\prime}*x.\]
2. \(*\) is **properly reflexive**, by which we mean
\[\forall x\in\mathsf{X}\setminus\{\bot_{\mathsf{X}}\}.\,x*x.\]
Note that it will follow from the axioms of a compatibility relation that \(x*x\Longleftrightarrow x\neq\bot_{\mathsf{X}}\); see Lemma 6.3.7(2).
3. \(*\) satisfies a **distributive law**: if \(x\in\mathsf{X}\) and \(X^{\prime}\subseteq\mathsf{X}\) then
\[x*\bigvee X^{\prime}\Longleftrightarrow\exists x^{\prime}\in X^{\prime}.\,x*x^{\prime}.\]

Thus we can say: a compatibility relation \(*\subseteq\mathsf{X}\times\mathsf{X}\) is a commutative, properly reflexive, completely distributive relation on \(\mathsf{X}\). When \(x*x^{\prime}\) holds, we may call \(x\) and \(x^{\prime}\) **compatible**.

**Remark 6.2.2**.: The compatibility relation \(*\) is what it is and we will study it in this paper. But we take a moment to discuss some intuitions, and to put it in the context of some natural generalisations:

1. We can think of \(*\) as an _abstract intersection_. It lets us observe whether \(x\) and \(x^{\prime}\) intersect -- but without having to explicitly represent this intersection as a meet \(x\wedge x^{\prime}\) in the semilattice itself. We call \(*\) a _compatibility relation_ following an intuition of \(x,x^{\prime}\in\mathsf{X}\) as observations, where \(x*x^{\prime}\) holds when there is some possible world at which it is possible to observe \(x\) and \(x^{\prime}\) together. More on this in Example 6.3.3.
2. We can think of \(*\) as a _generalised intersection_; so a semiframe is an instance of a frame with a _generalised_ meet. In this paper we concentrate on the case where \(x*x^{\prime}\) measures whether \(x\) and \(x^{\prime}\) intersect, but there are plenty of other possibilities. Here are some natural ways to proceed:
   1. \((\mathsf{X},\leq)\) is a complete join-semilattice and \(*:(\mathsf{X}\times\mathsf{X})\to\mathsf{X}\) is any commutative distributive map. In this paper, we can set \(x*x^{\prime}\in\{\bot_{\mathsf{X}},\top_{\mathsf{X}}\}\subseteq\mathsf{X}\).
   2. \((\mathsf{X},\leq)\) is a complete join-semilattice and \(*:(\mathsf{X}\times\mathsf{X})\to\mathbb{N}\) is any commutative distributive map. We think of \(x*x^{\prime}\) as returning the _size_ of the intersection of \(x\) and \(x^{\prime}\).
   3. Any complete join-semilattice \((\mathsf{X},\leq)\) is of course a (generalised) semiframe by taking \(x*x^{\prime}=\bigvee\{x^{\prime\prime}\mid x^{\prime\prime}\leq x,\ x^{\prime\prime}\leq x^{\prime}\}\).
   4. We can generalise further, in more than one direction. We would take \((\mathsf{X},\leq)\) and \((\mathsf{X}^{\prime},\leq^{\prime})\) to be complete join-semilattices and \(*:(\mathsf{X}\times\mathsf{X})\to\mathsf{X}^{\prime}\) to be any commutative distributive map (which generalises the above). We could also take \(\mathsf{X}\) to be a cocomplete symmetric monoidal category [13, Section VII]: a category with all colimits and with a (symmetric) monoid action \(*\) that distributes over (commutes with) colimits. See also Remark 12.1.6.
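Before putting compatibility relations to work, it may help to see the axioms verified mechanically for the motivating instance. Below is a minimal Python sketch, assuming \(\mathsf{X}=\mathit{pow}(\mathsf{P})\) ordered by \(\subseteq\), with \(x*x^{\prime}\) defined as nonempty intersection; the helper names are ours.

```python
# A minimal sketch: check Definition 6.2.1 for X = pow(P) ordered by
# subset inclusion, with x * x' defined as nonempty intersection.
from itertools import chain, combinations
import random

P = (0, 1, 2)
X = [frozenset(s) for s in chain.from_iterable(
        combinations(P, r) for r in range(len(P) + 1))]
star = lambda x, y: bool(x & y)
BOT = frozenset()

# Commutativity and proper reflexivity (Definition 6.2.1(1&2)).
assert all(star(x, y) == star(y, x) for x in X for y in X)
assert all(star(x, x) for x in X if x != BOT) and not star(BOT, BOT)

# Distributive law (Definition 6.2.1(3)): joins here are unions.
for _ in range(200):
    x = random.choice(X)
    Xs = random.sample(X, k=random.randint(0, 4))
    join = frozenset().union(*Xs) if Xs else BOT
    assert star(x, join) == any(star(x, y) for y in Xs)
print('pow(P) with nonempty intersection is a compatibility relation')
```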
**Lemma 6.2.3**.: Suppose \((\mathsf{X},\leq)\) is a complete semilattice and suppose \(*\subseteq\mathsf{X}\times\mathsf{X}\) is a compatibility relation on \(\mathsf{X}\). Then:

1. \(*\) is monotone in both arguments. That is: if \(x_{1}*x_{2}\) and \(x_{1}\leq x_{1}^{\prime}\) and \(x_{2}\leq x_{2}^{\prime}\), then \(x_{1}^{\prime}*x_{2}^{\prime}\).
2. If \(x_{1},x_{2}\in\mathsf{X}\) have a non-\(\bot\) lower bound \(\bot_{\mathsf{X}}\lneq x\leq x_{1},x_{2}\), then \(x_{1}*x_{2}\). In words we can write: \(*\) reflects non-\(\bot\) lower bounds.
3. The converse implication to part 2 need not hold: it may be that \(x_{1}*x_{2}\) (\(x_{1}\) and \(x_{2}\) are compatible) but the greatest lower bound of \(\{x_{1},x_{2}\}\) is \(\bot\).

**Proof:** We consider each part in turn:

1. We argue much as for Lemma 6.1.4(1). Suppose \(x_{1}*x_{2}\) and \(x_{1}\leq x_{1}^{\prime}\) and \(x_{2}\leq x_{2}^{\prime}\). By Lemma 6.1.4 \(x_{1}\vee x_{1}^{\prime}=x_{1}^{\prime}\) and \(x_{2}\vee x_{2}^{\prime}=x_{2}^{\prime}\). It follows using distributivity and commutativity (Definition 6.2.1(3&1)) that \(x_{1}*x_{2}\) implies \((x_{1}\vee x_{1}^{\prime})*(x_{2}\vee x_{2}^{\prime})\), and thus that \(x_{1}^{\prime}*x_{2}^{\prime}\) as required.
2. Suppose \(\bot_{\mathsf{X}}\lneq x\leq x_{1},x_{2}\), so \(x\) is a non-\(\bot_{\mathsf{X}}\) lower bound. By assumption \(*\) is properly reflexive (Definition 6.2.1(2)), so (since \(x\neq\bot_{\mathsf{X}}\)) \(x*x\). By part 1 of this result it follows that \(x_{1}*x_{2}\) as required.
3. It suffices to provide a counterexample. Define \((\mathsf{X},\leq,*)\) by:
   * \(\mathsf{X}=\{\bot,0,1,\top\}\).
   * \(\bot\leq 0,1\leq\top\) and \(\neg(0\leq 1)\) and \(\neg(1\leq 0)\).
   * \(x*x^{\prime}\) for every \(x,x^{\prime}\in\mathsf{X}\) with \(x,x^{\prime}\neq\bot\).
   We note that \(0*1\), but the greatest lower bound of \(\{0,1\}\) is \(\bot\). □

### The definition of a semiframe

**Definition 6.3.1**.: A **semiframe** is a tuple \((\mathsf{X},\leq,*)\) such that

1. \((\mathsf{X},\leq)\) is a complete semilattice (Definition 6.1.1), and
2. \(*\) is a compatibility relation on it (Definition 6.2.1).

Slightly abusing terminology, we can say that a semiframe is a _compatible complete semilattice_.

Semiframes are new, so far as we know, but they are a simple and natural idea. We consider some elementary ways to generate examples, starting with arguably the simplest possible instance:

**Example 6.3.2**.: **(The empty semiframe)** Suppose \((\mathsf{X},\leq,*)\) is a semiframe.

1. If \(\mathsf{X}\) is a singleton set, so that \(\mathsf{X}=\{\bullet\}\) for some element \(\bullet\), then we call \((\mathsf{X},\leq,*)\) the **empty** or **singleton** semiframe. The reader can check that then necessarily \(\bullet=\bot_{\mathsf{X}}=\top_{\mathsf{X}}\) and \(\bullet\leq\bullet\) and \(\neg(\bullet*\bullet)\).
2. If \(\mathsf{X}\) has more than one element then we call \((\mathsf{X},\leq,*)\) **nonempty**. The reader can check that then necessarily \(\bot_{\mathsf{X}}\neq\top_{\mathsf{X}}\).
Thus, \((\mathsf{X},\leq,*)\) is nonempty if and only if \(\bot_{\mathsf{X}}\neq\top_{\mathsf{X}}\).

We call a singleton semiframe _empty_ because it corresponds to the semiframe of open sets of the empty topology, which has no points and one open set, \(\varnothing\).

Example 6.3.3 continues Remark 6.2.2:

**Example 6.3.3**.:

1. Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then the reader can check that the _semiframe of open sets_ \((\mathsf{Open},\subseteq,\between)\) is a semiframe. We will study this example in detail; see Definition 6.3.4 and Lemma 6.3.5.
2. Suppose \((\mathsf{X},\leq,\bot,\top)\) is a bounded lattice with finite meets and all joins. Then \((\mathsf{X},\leq,*)\) is a semiframe, where \(x*x^{\prime}\) when \(x\wedge x^{\prime}\neq\bot\).
3. Suppose \((\mathsf{X},\leq)\) is a poset with all joins. Then \((\mathsf{X},\leq,*)\) is a semiframe, where \(x*x^{\prime}\) when \(\bigvee\{x^{\prime\prime}\mid x^{\prime\prime}\leq x,\ x^{\prime\prime}\leq x^{\prime}\}\neq\bot\).
4. Take \(\mathsf{X}=\{\bot,0,1,\top\}\) with \(\bot\leq 0\leq\top\) and \(\bot\leq 1\leq\top\) (so \(0\) and \(1\) are incomparable). There are two possible semiframe structures on this, characterised by choosing \(0*1\) or \(\neg(0*1)\).
5. See also the semiframes used in Lemma 7.2.7.

Definition 6.3.4 is just an example of semiframes for now, though we will see much more of it later:

**Definition 6.3.4**.: **(Semitopology \(\rightarrow\) semiframe)** Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Define \(\mathsf{Fr}(\mathsf{P},\mathsf{Open})\), the **semiframe of open sets** (cf. Example 6.3.3(1)), by:

* \(\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) has as elements the open sets \(O\in\mathsf{Open}\).
* \(\leq\) is subset inclusion.
* \(*\) is \(\between\) (sets intersection).

We may write
\[(\mathsf{Open},\subseteq,\between)\quad\text{as a synonym for}\quad\mathsf{Fr}(\mathsf{P},\mathsf{Open}).\]

**Lemma 6.3.5**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then \((\mathsf{Open},\subseteq,\between)\) is indeed a semiframe.

**Proof:** As per Definition 6.3.1 we must show that \(\mathsf{Open}\) is a complete semilattice (Definition 6.1.1) and that \(\between\) is a compatibility relation (Definition 6.2.1) -- commutative, properly reflexive, and satisfying the distributive law that if \(O\between\bigcup\mathcal{O}^{\prime}\) then \(O\between O^{\prime}\) for some \(O^{\prime}\in\mathcal{O}^{\prime}\). These are all facts of sets. □

**Remark 6.3.6**.: Definition 6.3.4 and Lemma 6.3.5 are the start of our development. Once we have built more machinery, we will have a pair of translations:

* Definition 6.3.4 and Lemma 6.3.5 go from semitopologies to semiframes.
* Definition 7.4.3 and Lemma 7.4.4 go from semiframes to semitopologies.

These translations are part of a dual pair of functors between categories of semitopologies and semiframes, as described in Definitions 9.1.1 and 9.2.1 and Proposition 9.3.7. In semitopologies, we have real points, and open sets that are sets of real points; everything is concrete. Semiframes are more abstract: we have a join-complete semilattice, and a compatibility relation. The duality will show how these two worlds interact and reflect each other.
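The translation of Definition 6.3.4 is directly executable for finite spaces. Here is a minimal Python sketch representing \(\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) concretely, using the Sierpinski space of Example 5.4.3 as input; the helper names are ours.

```python
# A minimal sketch of Definition 6.3.4 on a finite space: represent
# Fr(P, Open) concretely as (opens, subset inclusion, nonempty intersection).
P = frozenset({0, 1})
OPENS = [frozenset(s) for s in [(), (1,), (0, 1)]]   # the Sierpinski space

leq  = lambda x, y: x <= y            # order: subset inclusion
star = lambda x, y: bool(x & y)       # compatibility: the sets intersect
join = lambda xs: frozenset().union(*xs) if xs else frozenset()  # unions

# Lemma 6.3.5 amounts to facts of sets; e.g. the binary distributive law:
assert all(star(x, join([y, z])) == (star(x, y) or star(x, z))
           for x in OPENS for y in OPENS for z in OPENS)
print('Fr(P, Open):', sorted(map(sorted, OPENS)))
```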
We conclude with a simple technical lemma:

**Lemma 6.3.7**: Suppose \((\mathsf{X},\leq,*)\) is a semiframe (a complete semilattice with a compatibility relation) and \(x\in\mathsf{X}\). Then:

1. \(\neg(x*\bot_{\mathsf{X}})\), and in particular \(\neg(\bot_{\mathsf{X}}*\bot_{\mathsf{X}})\).
2. \(x*x\) if and only if \(x\neq\bot_{\mathsf{X}}\).
3. \(x*\top_{\mathsf{X}}\) if and only if \(x\neq\bot_{\mathsf{X}}\).
4. \(\top_{\mathsf{X}}*\top_{\mathsf{X}}\) holds precisely if \(\mathsf{X}\) is nonempty (Example 6.3.2).

**Proof:** We consider each part in turn:

1. Recall from Definition 6.1.1(3) that \(\bot_{\mathsf{X}}=\bigvee\varnothing\). By distributivity (Definition 6.2.1(3))
\[x*\bot_{\mathsf{X}}\Longleftrightarrow\exists x^{\prime}\in\varnothing.\,x*x^{\prime}\Longleftrightarrow\bot.\]
2. We just combine part 1 of this result with Definition 6.2.1(2).
3. Suppose \(x\neq\bot_{\mathsf{X}}\). Then \(\bot_{\mathsf{X}}\lneq x\leq x\leq\top_{\mathsf{X}}\), and by Lemma 6.2.3(2) \(x*\top_{\mathsf{X}}\). Suppose \(x=\bot_{\mathsf{X}}\). Then \(\neg(x*\top_{\mathsf{X}})\), by combining commutativity of \(*\) (Definition 6.2.1(1)) with part 1 of this result.
4. If \(\mathsf{X}\) is nonempty then by Example 6.3.2 \(\bot_{\mathsf{X}}\neq\top_{\mathsf{X}}\), and so \(\top_{\mathsf{X}}*\top_{\mathsf{X}}\) holds by part 2 of this result. However, in the degenerate case that \(\mathsf{X}\) has one element, \(\top_{\mathsf{X}}=\bot_{\mathsf{X}}\) and \(\top_{\mathsf{X}}*\top_{\mathsf{X}}\) does not hold. □

**Remark 6.3.8**.: We note in passing that semiframes naturally support a notion of complementation, by defining \(x^{c}=\bigvee\{y\mid\neg(y*x)\}\). More on this in Definition 10.4.2 and Lemma 10.4.4.

## 7 Semifilters and abstract points

### The basic definition, and discussion

**Definition 7.1.1**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and suppose \(F\subseteq\mathsf{X}\). Then:

1. Call \(F\) **prime** when for every \(x,x^{\prime}\in\mathsf{X}\),
\[x\vee x^{\prime}\in F\quad\text{implies}\quad x\in F\lor x^{\prime}\in F.\]
2. Call \(F\) **completely prime** when for every (possibly empty) \(X\subseteq\mathsf{X}\),
\[\bigvee X\in F\quad\text{implies}\quad\exists x\in X.\,x\in F.\]
(This condition is used in Lemma 7.3.2, which is needed for Lemma 7.4.2.)
3. Call \(F\) **up-closed** when \(x\in F\) and \(x\leq x^{\prime}\) implies \(x^{\prime}\in F\).
4. Call \(F\) **compatible** when its elements are **pairwise compatible**, by which we mean that \(x*x^{\prime}\) for every \(x,x^{\prime}\in F\).
5. A **semifilter** is a nonempty, up-closed, compatible subset \(F\subseteq\mathsf{X}\).
6. Call a semifilter \(F\subseteq\mathsf{X}\) **maximal** when it is contained in no strictly greater semifilter.
7. An **abstract point** is a completely prime semifilter.
8. Write
\[\mathsf{Points}(\mathsf{X},\leq,*)\]
for the set of abstract points of \((\mathsf{X},\leq,*)\).

**Lemma 7.1.2**: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and suppose \(F\subseteq\mathsf{X}\) is compatible. Then \(\bot_{\mathsf{X}}\not\in F\).

**Proof:** By compatibility, \(x*x\) for every \(x\in F\). We use Lemma 6.3.7(1). □

**Notation 7.1.3**.: We will generally write \(F\subseteq\mathsf{X}\) for a subset of \(\mathsf{X}\) that is intended to be a semifilter, or for which in most cases of interest \(F\) is a semifilter. We will generally write \(P\subseteq\mathsf{X}\) when the subset is intended to be an abstract point, or when in most cases of interest \(P\) is an abstract point.

**Example 7.1.4**: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. We recall some (standard) facts about abstract points, which carry over from topologies and frames:

1. Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and set \((\mathsf{X},\leq,*)=(\mathsf{Open},\subseteq,\between)\). The reader can check that \((\mathsf{X},\leq,*)\) is indeed a semiframe (see Definition 6.3.4 and Lemma 6.3.5). If \(p\in\mathsf{P}\) then the _neighbourhood semifilter_
\[nbhd(p)=\{O\in\mathsf{Open}\mid p\in O\}\]
is an abstract point -- more on this in Definition 8.2.1 and Proposition 8.2.3. Intuitively, \(nbhd(p)\) abstractly represents \(p\) as the set of all of its open approximations in \(\mathsf{Open}\).
2. Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then \((\mathsf{Open},\subseteq,\between)\) could contain an abstract point that is not the neighbourhood semifilter \(nbhd(p)\) of any point \(p\in\mathsf{P}\).
For an example, set \((\mathsf{X},\leq,*)=(\mathsf{Open}_{\mathbb{Q}},\subseteq,\between)\), where \((\mathbb{Q},\mathsf{Open}_{\mathbb{Q}})\) is the rational numbers with the usual open set topology. Set \(P_{\pi}\) to be the set of all open sets \(O\in\mathsf{Open}_{\mathbb{Q}}\) such that there exist \(q_{1},q_{2}\in\mathbb{Q}\) with \(q_{1}<\pi<q_{2}\) and \((q_{1},q_{2})\subseteq O\). Note that \(\pi\not\in\mathbb{Q}\), but \(P_{\pi}\) is the set of open sets 'approximating' \(\pi\).

3. We mention one more (standard) example. Consider \(\mathbb{N}\) with the **final segment** semitopology, whose opens are either \(\varnothing\) or sets \(n_{\geq}=\{n^{\prime}\in\mathbb{N}\mid n^{\prime}\geq n\}\). Then \(\{n_{\geq}\mid n\in\mathbb{N}\}\) is an abstract point. Intuitively, this approximates a point at infinity, which we can understand as \(\omega\).

**Remark 7.1.5**.: _Note on design:_ The notion of semifilter from Definition 7.1.1 is, obviously, based on the standard notion of filter. We just replace the directedness condition 'if \(x,x^{\prime}\in F\) then \(x\wedge x^{\prime}\in F\)' with a weaker compatibility condition 'if \(x,x^{\prime}\in F\) then \(x*x^{\prime}\)'. This is in keeping with our move from frames to semiframes, which weakens from meets \(\wedge\) to the compatibility relation \(*\).

Note that a semifilter or abstract point need not be directed; consider \(nbhd(0)\) in the (semiframes of open sets of the) semitopologies in the left-hand and middle examples in Figure 6. In both cases, \(\{0,1\},\{0,2\}\in nbhd(0)\) but \(\{0\}\not\in nbhd(0)\), because \(\{0\}\) is not an open set. Thus in particular, the standard result in frames that a finite filter has a non-\(\bot\) least element (obtained as the meet of all the elements in the filter) does not hold for semifilters in semiframes. A counterexample is given in Remark 7.1.6 below, or see Proposition 7.2.8(1).

(Figure 6: Examples of open neighbourhoods (Remarks 7.1.5 and 8.2.2).)

**Remark 7.1.6**.: We continue Remark 7.1.5. As the reader may know, a semiframe still has greatest lower bounds, because we can build them as \(x\wedge x^{\prime}=\bigvee\{x^{\prime\prime}\mid x^{\prime\prime}\leq x,\ x^{\prime\prime}\leq x^{\prime}\}\). It is just that this greatest lower bound may be unhelpful. To see why, consider again the examples in Figure 6. In the left-hand and middle examples in Figure 6, the greatest lower bound of \(\{0,1\}\) and \(\{0,2\}\) exists in the semiframe of open sets: but it is \(\varnothing\) the empty set in the left-hand and middle example, not \(\{0\}\). In the right-hand example, the greatest lower bound of \(\{0,*,1\}\) and \(\{0,*,2\}\) is \(\{0\}\), not \(\{0,*\}\).

So the reader could ask whether perhaps we should add the following weakened directedness condition to the definition of semifilters (and thus to abstract points):

_If \(x,x^{\prime}\in F\) and \(x\wedge x^{\prime}\neq\bot\) then \(x\wedge x^{\prime}\in F\)._

Intuitively, this insists that semifilters are closed under _non-\(\bot\)_ greatest lower bounds. However, there are two problems with this:

* It would break our categorical duality proof in the construction of \(g^{\circ}\) in Lemma 9.3.3; see the discussion in Remark 9.3.4. This technical difficulty may be superable, but...
* ...the condition is probably not what we want anyway.
It would mean that the set of open neighbourhoods of \(*\) in the right-hand example of Figure 6 would not be a semifilter, because it contains \(\{0,*,1\}\) and \(\{0,*,2\}\) but not its (non-\(\varnothing\)) greatest lower bound \(\{0\}\).

### Properties of semifilters

#### 7.2.1 Things that are familiar from filters

**Lemma 7.2.1**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) is a semifilter. Then:

1. \(\top_{\mathsf{X}}\in F\).
2. \(\bot_{\mathsf{X}}\not\in F\).

**Proof:** We consider each part in turn:

1. By nonemptiness (Definition 7.1.1(5)) \(F\) is nonempty, so there exists some \(x\in F\). By definition \(x\leq\top_{\mathsf{X}}\). By up-closure (Definition 7.1.1(3)) \(\top_{\mathsf{X}}\in F\) follows.
2. By assumption in Definition 7.1.1(4), elements in \(F\) are pairwise compatible (so \(x*x\) for every \(x\in F\)). We use Lemma 7.1.2. □

**Lemma 7.2.2**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. It is possible for a semifilter \(F\subseteq\mathsf{X}\) to be completely prime but not maximal.

**Proof:** We give a standard example (which also works for frames and filters). Take \(\mathsf{P}=\{0,1\}\) and \(\mathsf{Open}=\{\varnothing,\{0\},\{0,1\}\}\). Then \(P^{\prime}=\{\{0,1\}\}\) is an abstract point, but it is not a maximal semifilter (it is not even a maximal abstract point), since \(P^{\prime}\) is contained in the strictly larger semifilter \(\{\{0\},\{0,1\}\}\) (which is itself also a strictly larger abstract point). □

**Lemma 7.2.3**.: If \((\mathsf{X},\leq,*)\) is a finite semiframe (meaning that \(\mathsf{X}\) is finite) then the properties of

* being a prime semifilter (Definition 7.1.1(1)) and
* being a completely prime semifilter (Definition 7.1.1(2))

coincide.

**Proof:** This is almost trivial, except that if \(X=\varnothing\) in the condition for being completely prime then we get that \(\bot_{\mathsf{X}}\not\in P\) -- but we know that anyway from Lemma 7.2.1(2), from the compatibility condition on semifilters. □

**Lemma 7.2.4**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Then:

1. The union of a countably ascending chain of semifilters in \(\mathsf{X}\) is a semifilter in \(\mathsf{X}\).
2. As a corollary, every semifilter \(F\subseteq\mathsf{X}\) is contained in some maximal semifilter \(F^{\prime}\subseteq\mathsf{X}\) (assuming Zorn's lemma).

**Proof:** We consider each part in turn:

1. By a straightforward verification of the conditions of being a semifilter from Definition 7.1.1(5).
2. Direct application of Zorn's lemma. □

**Remark 7.2.5**.:

1. Lemma 7.2.1(2) has a small twist to it. In the theory of _filters_, it does not follow from the property of being nonempty, up-closed, and closed under all joins and finite meets that \(\bot_{\mathsf{X}}\not\in F\); this must be added as a distinct condition if required. In contrast, we see in the proof of Lemma 7.2.1(2) that for semifilters, \(\bot_{\mathsf{X}}\not\in F\) follows from the compatibility condition.
2. Lemma 7.2.3 matters in particular to us here, because we are especially interested in abstracting the behaviour of finite semitopologies: our original motivation for looking at both of these structures comes from looking at real networks, which are finite.21

Footnote 21: This is carefully worded. We care about abstracting properties of finite semitopologies, but from this it does _not_ follow that we only care about semitopologies and semiframes that are actually finite. There is a difference! See Remark 12.1.5.
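The conditions of Definition 7.1.1 can also be checked by brute force on small semiframes, which is a convenient way to experiment with the lemmas above. Below is a minimal Python sketch, assuming a finite semiframe presented as a list with `leq`/`star`/`join` functions (these helper names are ours); run on the open-set semiframe of the Sierpinski space, it prints exactly the two abstract points \(nbhd(0)\) and \(nbhd(1)\).

```python
# A minimal sketch: brute-force semifilter and abstract-point checks
# for a finite semiframe (X, leq, star, join).
from itertools import chain, combinations

def is_semifilter(F, X, leq, star):
    return (bool(F)
            and all(y in F for x in F for y in X if leq(x, y))  # up-closed
            and all(star(x, y) for x in F for y in F))          # compatible

def is_completely_prime(F, X, join):
    subsets = chain.from_iterable(
        combinations(X, r) for r in range(len(X) + 1))
    return all(any(x in F for x in S) for S in subsets if join(S) in F)

# Example input: the open-set semiframe of the Sierpinski space.
OPENS = [frozenset(s) for s in [(), (1,), (0, 1)]]
leq  = lambda x, y: x <= y
star = lambda x, y: bool(x & y)
join = lambda xs: frozenset().union(*xs) if xs else frozenset()
for F in (set(s) for s in chain.from_iterable(
        combinations(OPENS, r) for r in range(len(OPENS) + 1))):
    if is_semifilter(F, OPENS, leq, star) and is_completely_prime(F, OPENS, join):
        print(sorted(map(sorted, F)))   # the two neighbourhood semifilters
```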
#### 7.2.2 Things that are different from filters

**Remark 7.2.6**.: Obviously, by definition semifilters are necessarily compatible but not necessarily closed under meets. But aside from this fact, we have so far seen semiframes and semifilters behave more-or-less like frames and filters, modulo small details like that mentioned in Remark 7.2.5(1). But there are also differences, as we will now briefly explore. In the theory of (finite) frames, the following facts hold:

1. _Every filter \(F\) has a greatest lower bound \(x\), and \(F=x^{\leq}=\{x^{\prime}\mid x\leq x^{\prime}\}\)._ Just take \(x=\bigwedge F\), the meet of all of its (finitely many) elements. This is not \(\bot\), by the filter's finite intersection property.
2. _Every filter can be extended to a maximal filter._ Just extend the filter to a maximal filter using Zorn's lemma (as in Lemma 7.2.4).
3. _Every maximal filter is completely prime._ It is a fact of finite frames that a maximal filter is prime,22 and since we assume the frame is finite, it is also completely prime.

Footnote 22: A succinct proof is in Wikipedia (permalink).

4. _Every non-\(\bot\) element \(x\neq\bot_{\mathsf{X}}\) in a finite frame is contained in some abstract point._ Just form \(\{x^{\prime}\mid x\leq x^{\prime}\}\), observe it is a filter, form a maximal filter above it, and we get an abstract point.
5. _As a corollary, if the frame is nonempty (so \(\bot\neq\top\); see Example 6.3.2) then it has at least one abstract point._

In Lemma 7.2.7 and Proposition 7.2.8 we consider some corresponding _non-properties_ of (finite) semiframes.

**Lemma 7.2.7**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. It is possible for \(\mathsf{Points}(\mathsf{X},\leq,*)\) to be empty, even if \((\mathsf{X},\leq,*)\) is nonempty (Example 6.3.2(2)). This is possible even if \(\mathsf{X}\) is finite, and even if \(\mathsf{X}\) is infinite.

**Proof:** It suffices to provide an example. We define a semiframe as below, and as illustrated in Figure 7:

* \(\mathsf{X}=\{\bot,0,1,2,3,\top\}\).
* Let \(x\leq x^{\prime}\) when \(x=x^{\prime}\) or \(x=\bot\) or \(x^{\prime}=\top\).
* Let \(x*x^{\prime}\) when \(x\wedge x^{\prime}\neq\bot\).23

Footnote 23: Unpacking what that means, we obtain this: \(x\neq\bot\wedge x=x^{\prime}\), or \(x\neq\bot\wedge x^{\prime}=\top\), or \(x^{\prime}\neq\bot\wedge x=\top\).

Footnote 24: See also a discussion of the design of the notion of semifilter, in Remarks 7.1.6 and 9.3.4.

Then \((\mathsf{X},\leq,*)\) has no abstract points. For suppose \(P\) is one such. By Lemma 7.2.1 \(\top\in P\). Note that \(\top=0\vee 1=2\vee 3\). Since by assumption \(P\) is completely prime, we know that \(0\in P\lor 1\in P\), and also \(2\in P\lor 3\in P\). But this is impossible, because no two distinct elements of \(\{0,1,2,3\}\) are compatible.

For the infinite case, we just increase the width of the semiframe by taking \(\mathsf{X}=\{\bot\}\cup\mathbb{N}\cup\{\top\}\). □

(Figure 7: A semiframe with no abstract points (Lemma 7.2.7).)

**Proposition 7.2.8**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) is a semifilter. Then:

1. It is not necessarily the case that \(F\) has a non-\(\bot\) greatest lower bound (even if \(\mathsf{X}\) is finite).
2. Every semifilter can be extended to a maximal semifilter, but...
3. ...this maximal semifilter is not necessarily prime (even if \(\mathsf{X}\) is finite).
4. There may exist a non-\(\bot\) element \(x\neq\bot_{\mathsf{X}}\) that is contained in no abstract point.

**Proof:** We consider each part in turn:

1. Consider \((\mathit{pow}(\{0,1,2\}),\subseteq,\between)\) and take
\[F=\{\{0,1\},\ \{1,2\},\ \{0,2\},\ \{0,1,2\}\}.\]
The greatest lower bound of \(F\) is \(\varnothing\).
2. This is Lemma 7.2.4.
3. \(F\) from part 1 of this result is maximal, and it cannot be extended to a point \(P\supseteq F\). For suppose such a \(P\) exists; so \(P\) is a semifilter that contains \(F\) and is (completely) prime. Since \(\{0,1\}\in P\) we must by primeness have \(\{0\}\in P\) or \(\{1\}\in P\). Suppose \(\{0\}\in P\). Since \(\{1,2\}\in P\) we must by primeness have \(\{1\}\in P\) or \(\{2\}\in P\). In either case we lose compatibility. (Figure 7 gives another counterexample, and in a rather interesting way: that semiframe has four maximal semifilters \(\{i,\top\}\) for \(i\in\{0,1,2,3\}\), but by Lemma 7.2.7 it has no prime semifilters at all.)
4. We just take \(x=0\in\mathsf{X}\) from the example in Lemma 7.2.7 (see Figure 7). Since this semiframe has no abstract points at all, there is no abstract point that contains \(x\). □

**Remark 7.2.9**.: For now, we will just read Proposition 7.2.8 as a caution not to assume that semiframes and semifilters behave like frames and filters. Sometimes they do, and sometimes they do not; we need to check. We now proceed to build our categorical duality, culminating with Theorem 9.4.1. Once that machinery is constructed, we will continue our study of the fine structure of semifilters in Section 10.

### Sets of abstract points

**Definition 7.3.1**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and recall \(\mathsf{Points}(\mathsf{X},\leq,*)\) from Definition 7.1.1(7). Define a map \(\mathit{Op}:\mathsf{X}\to\mathit{pow}(\mathsf{Points}(\mathsf{X},\leq,*))\) by
\[\mathit{Op}(x)=\{P\in\mathsf{Points}(\mathsf{X},\leq,*)\mid x\in P\}.\]

**Lemma 7.3.2**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(X\subseteq\mathsf{X}\). Then
\[\mathit{Op}(\bigvee X)=\bigcup_{x\in X}\mathit{Op}(x).\]
In words: we can say that \(\mathit{Op}\) commutes with joins, or that \(\mathit{Op}\) commutes with taking least upper bounds.

**Proof:** Suppose \(P\in\mathsf{Points}(\mathsf{X},\leq,*)\). There are two sub-cases:

* _Suppose \(X\neq\varnothing\)._ We reason as follows:
\[\begin{split}P\in\mathit{Op}(\bigvee X)&\Longleftrightarrow\bigvee X\in P&&\text{Definition 7.3.1}\\ &\Longleftrightarrow\exists x\in X.\,x\in P&&\text{Definition 7.1.1(2\&3)}\\ &\Longleftrightarrow P\in\bigcup_{x\in X}\mathit{Op}(x)&&\text{Definition 7.3.1.}\end{split}\]
* _Suppose \(X=\varnothing\)._ By the least upper bound property we have \(\bigvee X=\bot_{\mathsf{X}}\), and we need to show that \(P\in\mathit{Op}(\bot_{\mathsf{X}})\) if and only if \(\exists x\in\varnothing.\,x\in P\); i.e. we need to show that \(\mathit{Op}(\bot_{\mathsf{X}})=\varnothing\). This follows by Lemma 7.2.1(2), which proves that \(\bot_{\mathsf{X}}\in P\) is impossible. □
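The map \(\mathit{Op}\) is mechanically computable in the finite case. Below is a minimal Python sketch, assuming the Figure 7 semiframe of Lemma 7.2.7 presented explicitly as we reconstructed it above; it confirms that there are no abstract points at all, so \(\mathit{Op}(x)=\varnothing\) for every \(x\) -- which is also the counterexample used in Lemma 7.3.5 below.

```python
# A minimal sketch of Definition 7.3.1 on the Figure 7 semiframe of
# Lemma 7.2.7: the flat order on {bot, 0, 1, 2, 3, top}, where distinct
# middle elements are incompatible.
from itertools import chain, combinations

BOT, TOP = 'bot', 'top'
X = [BOT, 0, 1, 2, 3, TOP]
leq  = lambda x, y: x == y or x == BOT or y == TOP
star = lambda x, y: ((x != BOT and (x == y or y == TOP))
                     or (y != BOT and x == TOP))

def join(S):
    # Least upper bound, found by brute force over the upper bounds.
    ubs = [u for u in X if all(leq(s, u) for s in S)]
    return next(u for u in ubs if all(leq(u, v) for v in ubs))

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_abstract_point(F):
    return (bool(F)
            and all(y in F for x in F for y in X if leq(x, y))  # up-closed
            and all(star(x, y) for x in F for y in F)           # compatible
            and all(any(s in F for s in S)                      # compl. prime
                    for S in powerset(X) if join(S) in F))

points = [F for F in powerset(X) if is_abstract_point(F)]
print(len(points), 'abstract points')   # 0: every Op(x) is empty
```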
**Proposition 7.3.3**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(x,x^{\prime}\in\mathsf{X}\). Then:

1. If \(x\leq x^{\prime}\) then \(\mathit{Op}(x)\subseteq\mathit{Op}(x^{\prime})\).
2. If \(\mathit{Op}(x)\between\mathit{Op}(x^{\prime})\) then \(x*x^{\prime}\).
3. \(\mathit{Op}(\top_{\mathsf{X}})=\mathsf{Points}(\mathsf{X},\leq,*)\) and \(\mathit{Op}(\bot_{\mathsf{X}})=\varnothing\).
4. \(\mathit{Op}(\bigvee X)=\bigcup_{x\in X}\mathit{Op}(x)\) for \(X\subseteq\mathsf{X}\).

**Proof:** We consider each part in turn:

1. _We prove that \(x\leq x^{\prime}\) implies \(\mathit{Op}(x)\subseteq\mathit{Op}(x^{\prime})\)._ Suppose \(x\leq x^{\prime}\), and consider some abstract point \(P\in\mathit{Op}(x)\). By Definition 7.3.1 \(x\in P\), and by up-closure of \(P\) (Definition 7.1.1(3)) \(x^{\prime}\in P\), so by Definition 7.3.1 \(P\in\mathit{Op}(x^{\prime})\). \(P\) was arbitrary, and it follows that \(\mathit{Op}(x)\subseteq\mathit{Op}(x^{\prime})\).
2. _We prove that \(\mathit{Op}(x)\between\mathit{Op}(x^{\prime})\) implies \(x*x^{\prime}\)._ Suppose there exists an abstract point \(P\in\mathit{Op}(x)\cap\mathit{Op}(x^{\prime})\). By Definition 7.3.1 \(x,x^{\prime}\in P\), and by compatibility of \(P\) (Definition 7.1.1(4)) \(x*x^{\prime}\).
3. Unpacking Definition 7.3.1, it suffices to show that \(\top_{\mathsf{X}}\in P\) and \(\bot_{\mathsf{X}}\not\in P\) for every abstract point \(P\in\mathsf{Points}(\mathsf{X},\leq,*)\). This is from Lemma 7.2.1(1&2).
4. This is just Lemma 7.3.2. □

**Remark 7.3.4**.: Proposition 7.3.3 carries a clear suggestion that \((\{\mathit{Op}(x)\mid x\in\mathsf{X}\},\subseteq,\between)\) is trying, in some sense, to be an isomorphic copy of \((\mathsf{X},\leq,*)\). Lemma 7.3.5 notes that it may not quite manage this, because there may not be enough abstract points (indeed, there may not be any abstract points at all). This will (just as for topologies and frames) lead us to the notion of a _spatial_ semiframe in Definition 8.1.2 and Proposition 8.1.4.

**Lemma 7.3.5**.: The converse implications in Proposition 7.3.3(1&2) need not hold. That is:

1. There exists a semiframe \((\mathsf{X},\leq,*)\) and \(x,x^{\prime}\in\mathsf{X}\) such that \(\mathit{Op}(x)\subseteq\mathit{Op}(x^{\prime})\) yet \(\neg(x\leq x^{\prime})\).
2. There exists a semiframe \((\mathsf{X},\leq,*)\) and \(x,x^{\prime}\in\mathsf{X}\) such that \(x*x^{\prime}\) yet \(\neg(\mathit{Op}(x)\between\mathit{Op}(x^{\prime}))\).

**Proof:** The example from Lemma 7.2.7 (as illustrated in Figure 7) is a counterexample for both cases:

* \(\mathit{Op}(0)\subseteq\mathit{Op}(1)\), because both are equal to the empty set, yet \(\neg(0\leq 1)\); and
* \(\top*\top\) yet \(\neg(\mathit{Op}(\top)\between\mathit{Op}(\top))\). □

### \(\mathsf{St}(\mathsf{X},\leq,*)\): the semitopology of abstract points

Recall from Definition 7.1.1(7) that an abstract point in a semiframe \((\mathsf{X},\leq,*)\) is a nonempty up-closed compatible completely prime subset of \(\mathsf{X}\), and recall from Definition 7.3.1 that
\[\mathit{Op}(x)=\{P\in\mathsf{Points}(\mathsf{X},\leq,*)\mid x\in P\},\]
or in words: \(\mathit{Op}(x)\) is the set of abstract points that contain \(x\).

**Definition 7.4.1**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Then define \(\mathit{Op}(\mathsf{X},\leq,*)\) by
\[\mathit{Op}(\mathsf{X},\leq,*)=\{\mathit{Op}(x)\mid x\in\mathsf{X}\}.\]

**Lemma 7.4.2**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Then:

1. \(\mathit{Op}(\mathsf{X},\leq,*)\) from Definition 7.4.1 is closed under arbitrary sets union.
2. As a corollary, \((\mathit{Op}(\mathsf{X},\leq,*),\subseteq)\) (in words: \(\mathit{Op}(\mathsf{X},\leq,*)\) ordered by subset inclusion) is a complete join-semilattice.

**Proof:** Part 1 is just Lemma 7.3.2. The corollary part 2 is then just a fact, since \(\mathit{Op}(\mathsf{X},\leq,*)\subseteq\mathit{pow}(\mathsf{Points}(\mathsf{X},\leq,*))\), and sets union is the join (least upper bound) in the powerset lattice. □

Recall from Definition 6.3.4 and Lemma 6.3.5 that we showed how to go from a semitopology \((\mathsf{P},\mathsf{Open})\) to a semiframe \((\mathsf{Open},\subseteq,\between)\). We now show how to go in the other direction:

**Definition 7.4.3**.: **(Semiframe \(\rightarrow\) semitopology)** Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Define the **semitopology of abstract points** \(\mathsf{St}(\mathsf{X},\leq,*)\) by
\[\mathsf{St}(\mathsf{X},\leq,*)=\bigl{(}\mathsf{Points}(\mathsf{X},\leq,*),\mathit{Op}(\mathsf{X},\leq,*)\bigr{)}.\]
Unpacking this a little:

1. The set of points of \(\mathsf{St}(\mathsf{X},\leq,*)\) is the set of abstract points \(\mathsf{Points}(\mathsf{X},\leq,*)\) from Definition 7.1.1(7) -- namely, the completely prime nonempty up-closed compatible subsets of \(\mathsf{X}\).25

Footnote 25: There are no guarantees in general about _how many_ abstract points exist; e.g. Lemma 7.2.7 gives an example of a semiframe that has no abstract points at all, and so maps to the empty semitopology. Later on, in Definition 8.1.2, we consider conditions to ensure the existence of abstract points.

2. The open sets \(\mathit{Op}(\mathsf{X},\leq,*)\) are the \(\mathit{Op}(x)\) from Definition 7.3.1:
\[\mathit{Op}(x)=\{P\in\mathsf{Points}(\mathsf{X},\leq,*)\mid x\in P\}.\]

**Lemma 7.4.4**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Then \(\mathsf{St}(\mathsf{X},\leq,*)\) from Definition 7.4.3 is indeed a semitopology.

**Proof:** From conditions 1 and 2 of Definition 2.1.2, we need to check that \(\mathit{Op}(\mathsf{X},\leq,*)\) contains \(\varnothing\) and \(\mathsf{Points}(\mathsf{X},\leq,*)\) and is closed under arbitrary unions. This is from Proposition 7.3.3(3&4). □

Recall from Definitions 7.4.3 and 6.3.4 that \(\mathsf{St}(\mathsf{X},\leq,*)\) is a semitopology, and \(\mathsf{Fr}\,\mathsf{St}(\mathsf{X},\leq,*)\) is a semiframe, each of whose elements is the set of abstract points of \((\mathsf{X},\leq,*)\) that contain some \(x\in\mathsf{X}\):

**Lemma 7.4.5**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Then \(\mathit{Op}:(\mathsf{X},\leq,*)\to\mathsf{Fr}\,\mathsf{St}(\mathsf{X},\leq,*)\) is surjective.

**Proof:** Direct from Definition 7.4.3(2). □
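Definition 7.4.3 can likewise be computed by brute force for small semiframes. Below is a minimal Python sketch, assuming the powerset semiframe of a two-point set as input (the helper names are ours); it finds two abstract points and four distinct open sets, i.e. \(\mathsf{St}\) recovers a copy of the discrete semitopology on two points.

```python
# A minimal sketch of Definition 7.4.3: compute St(X, leq, star) by
# brute force for the powerset semiframe of a two-point set.
from itertools import chain, combinations

PTS = (0, 1)
X = [frozenset(s) for s in chain.from_iterable(
        combinations(PTS, r) for r in range(len(PTS) + 1))]
leq  = lambda x, y: x <= y
star = lambda x, y: bool(x & y)
join = lambda S: frozenset().union(*S) if S else frozenset()

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_abstract_point(F):
    return (bool(F)
            and all(y in F for x in F for y in X if leq(x, y))  # up-closed
            and all(star(x, y) for x in F for y in F)           # compatible
            and all(any(s in F for s in S)                      # compl. prime
                    for S in powerset(X) if join(S) in F))

points = [frozenset(F) for F in powerset(X) if is_abstract_point(F)]
opens  = {x: frozenset(P for P in points if x in P) for x in X}   # Op(x)
print(len(points), 'abstract points;',
      len(set(opens.values())), 'open sets in St(X, leq, star)')
```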
We conclude with Definition 7.4.6 and Proposition 7.4.7, which are standard properties of the construction in Definition 7.4.3.

**Definition 7.4.6**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. We recall some standard terminology:

1. Call \(p,p^{\prime}\in\mathsf{P}\) **topologically indistinguishable** when
\[\forall O\in\mathsf{Open}.\,p\in O\Longleftrightarrow p^{\prime}\in O.\]
Otherwise, call \(p\) and \(p^{\prime}\) **topologically distinguishable**.
2. Call \((\mathsf{P},\mathsf{Open})\) a \(T_{0}\) **space** when if \(p\) and \(p^{\prime}\) are topologically indistinguishable, then they are equal.

Thus, a space is \(T_{0}\) when two points are topologically indistinguishable precisely when they are equal.

**Proposition 7.4.7**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Then \(\mathsf{St}(\mathsf{X},\leq,*)\) (Definition 7.4.3) is a \(T_{0}\) space.

**Proof:** Suppose \(P,P^{\prime}\in\mathsf{Points}(\mathsf{X},\leq,*)\). Unpacking Definition 7.1.1(7), this means that \(P\) and \(P^{\prime}\) are completely prime nonempty up-closed compatible subsets of \(\mathsf{X}\). It is immediate that if \(P=P^{\prime}\) then \(P\) and \(P^{\prime}\) are topologically indistinguishable.

Now suppose \(P\) and \(P^{\prime}\) are topologically indistinguishable in \(\mathsf{St}(\mathsf{X},\leq,*)\); to prove \(P=P^{\prime}\) it would suffice to show that \(x\in P\Longleftrightarrow x\in P^{\prime}\) for arbitrary \(x\in\mathsf{X}\). By Definition 7.4.3(2), every open set in \(\mathsf{St}(\mathsf{X},\leq,*)\) has the form \(\mathit{Op}(x)\) for some \(x\in\mathsf{X}\). We reason as follows:
\[\begin{split}x\in P&\Longleftrightarrow P\in\mathit{Op}(x)&&\text{Definition 7.3.1}\\ &\Longleftrightarrow P^{\prime}\in\mathit{Op}(x)&&P,\,P^{\prime}\text{ topologically indistinguishable}\\ &\Longleftrightarrow x\in P^{\prime}&&\text{Definition 7.3.1.}\end{split}\]
Since \(x\) was arbitrary and \(P,P^{\prime}\subseteq\mathsf{X}\), it follows that \(P=P^{\prime}\) as required. □

## 8 Spatial semiframes, and sober semitopologies

### Definition of spatial semiframes

**Remark 8.1.1**.: We continue Remark 7.3.4. We saw in Example 7.1.4(2&3) that there may be _more_ abstract points than there are concrete points, and in Remark 7.3.4 that there may also be _fewer_. In the theory of frames, the condition of being _spatial_ means that the abstract points and concrete points correspond. We imitate this terminology for a corresponding definition on semiframes:

**Definition 8.1.2**.: **(Spatial semiframe)** Call a semiframe \((\mathsf{X},\leq,*)\) **spatial** when:

1. \(\mathit{Op}(x)\subseteq\mathit{Op}(x^{\prime})\) implies \(x\leq x^{\prime}\), for every \(x,x^{\prime}\in\mathsf{X}\).
2. \(x*x^{\prime}\) implies \(\mathit{Op}(x)\between\mathit{Op}(x^{\prime})\), for every \(x,x^{\prime}\in\mathsf{X}\).

**Remark 8.1.3**.: Not every semiframe is spatial, just as not every frame is spatial. Lemma 7.2.7 gives an example of a semiframe that is not spatial because it has no points at all, as illustrated in Figure 7.

We check that the conditions in Definition 8.1.2 correctly strengthen the implications in Proposition 7.3.3 to become logical equivalences:

**Proposition 8.1.4**.: Suppose \((\mathsf{X},\leq,*)\) is a spatial semiframe and \(x,x^{\prime}\in\mathsf{X}\). Then:

1. \(x\leq x^{\prime}\) if and only if \(\mathit{Op}(x)\subseteq\mathit{Op}(x^{\prime})\).
2. \(x*x^{\prime}\) if and only if \(\mathit{Op}(x)\between\mathit{Op}(x^{\prime})\).
3. \(x=x^{\prime}\) if and only if \(\mathit{Op}(x)=\mathit{Op}(x^{\prime})\).
4. \(\mathit{Op}(\top_{\mathsf{X}})=\mathsf{Points}(\mathsf{X},\leq,*)\) and \(\mathit{Op}(\bot_{\mathsf{X}})=\varnothing\).
5. \(\mathit{Op}(\bigvee X)=\bigcup_{x\in X}\mathit{Op}(x)\) for \(X\subseteq\mathsf{X}\).

**Proof:** We consider each part in turn:

1. _We prove that \(x\leq x^{\prime}\) if and only if \(\mathit{Op}(x)\subseteq\mathit{Op}(x^{\prime})\)._ The right-to-left implication is direct from Definition 8.1.2(1). The left-to-right implication is Proposition 7.3.3(1).
2. _We prove that \(x*x^{\prime}\) if and only if \(\mathit{Op}(x)\between\mathit{Op}(x^{\prime})\)._ The left-to-right implication is direct from Definition 8.1.2(2). The right-to-left implication is Proposition 7.3.3(2).
3. _We prove that \(x=x^{\prime}\) if and only if \(\mathit{Op}(x)=\mathit{Op}(x^{\prime})\)._ If \(x=x^{\prime}\) then \(\mathit{Op}(x)=\mathit{Op}(x^{\prime})\) is immediate. If \(\mathit{Op}(x)=\mathit{Op}(x^{\prime})\) then \(\mathit{Op}(x)\subseteq\mathit{Op}(x^{\prime})\) and \(\mathit{Op}(x^{\prime})\subseteq\mathit{Op}(x)\). By part 1 of this result (or direct from Definition 8.1.2(1)) \(x\leq x^{\prime}\) and \(x^{\prime}\leq x\). By antisymmetry of \(\leq\) it follows that \(x=x^{\prime}\).
4. This is just Proposition 7.3.3(3).
5. This is just Lemma 7.3.2. □
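Spatiality is directly checkable in the finite case. Below is a minimal Python sketch reusing the brute-force point enumeration from the earlier sketches, run on the Figure 7 semiframe of Lemma 7.2.7 (as we reconstructed it); as Remark 8.1.3 says, that semiframe is not spatial.

```python
# A minimal sketch of Definition 8.1.2: a brute-force spatiality check,
# run on the Figure 7 semiframe of Lemma 7.2.7 (no abstract points).
from itertools import chain, combinations

BOT, TOP = 'bot', 'top'
X = [BOT, 0, 1, 2, 3, TOP]
leq  = lambda x, y: x == y or x == BOT or y == TOP
star = lambda x, y: ((x != BOT and (x == y or y == TOP))
                     or (y != BOT and x == TOP))

def join(S):
    ubs = [u for u in X if all(leq(s, u) for s in S)]
    return next(u for u in ubs if all(leq(u, v) for v in ubs))

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

points = [set(F) for F in powerset(X)
          if F and all(y in F for x in F for y in X if leq(x, y))
          and all(star(x, y) for x in F for y in F)
          and all(any(s in F for s in S)
                  for S in powerset(X) if join(S) in F)]

def Op(x):
    # Indices stand in for the abstract points, to keep the sets hashable.
    return frozenset(i for i, P in enumerate(points) if x in P)

spatial = (all(leq(x, y) for x in X for y in X if Op(x) <= Op(y))
           and all(Op(x) & Op(y) for x in X for y in X if star(x, y)))
print('spatial:', spatial)   # False: e.g. Op(0) <= Op(1) but not 0 <= 1
```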
\(x_{1}\leq x_{2}\) if and only if \(g(x_{1})\leq g(x_{2})\). 3. \(x_{1}*x_{2}\) if and only if \(g(x_{1})*g(x_{2})\).

**Lemma 8.1.6**.: Suppose \((\mathsf{X},\leq,*)\) and \((\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\) are semiframes and \(g:\mathsf{X}\to\mathsf{X}^{\prime}\) is an isomorphism between them. Then \(g(\bot_{\mathsf{X}})=\bot_{\mathsf{X}^{\prime}}\) and \(g(\top_{\mathsf{X}})=\top_{\mathsf{X}^{\prime}}\).

**Proof:** By construction \(\bot_{\mathsf{X}}\leq x\) for every \(x\in\mathsf{X}\). It follows from Definition 8.1.5(2) that \(g(\bot_{\mathsf{X}})\leq g(x)\) for every \(x\in\mathsf{X}\); but \(g\) is a bijection, so \(g(\bot_{\mathsf{X}})\leq x^{\prime}\) for every \(x^{\prime}\in\mathsf{X}^{\prime}\). It follows that \(g(\bot_{\mathsf{X}})=\bot_{\mathsf{X}^{\prime}}\). By similar reasoning we conclude that \(g(\top_{\mathsf{X}})=\top_{\mathsf{X}^{\prime}}\). \(\sqcap\)\(\sqcup\)

**Remark 8.1.7**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Just for this Remark, define \[\mathit{Op}(\mathsf{X})=\{\,\mathit{Op}(x)\mid x\in\mathsf{X}\}.\] Then the intuitive content of Proposition 8.1.4 is that a semiframe \((\mathsf{X},\leq,*)\) is spatial when \((\mathsf{X},\leq,*)\) is isomorphic (in the sense made formal by Definition 8.1.5) to \((\mathit{Op}(\mathsf{X}),\subseteq,\between)\). And, because \(\mathit{Op}(\top_{\mathsf{X}})=\mathsf{Points}(\mathsf{X},\leq,*)\), we can write a slogan: _A semiframe is spatial when it is (up to isomorphism) generated by its abstract points._ We will go on to prove in Proposition 8.2.6 that every semitopology generates a spatial semiframe -- and in Theorem 9.4.1 we will tighten and extend the slogan above to a full categorical duality.

### The neighbourhood semifilter \(nbhd(p)\)

#### 8.2.1 The definition and basic lemma

**Definition 8.2.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Define \(nbhd(p)\subseteq\mathsf{Open}\), the **neighbourhood semifilter** of \(p\), (as standard) by \[nbhd(p)=\{O\in\mathsf{Open}\mid p\in O\}.\]

**Remark 8.2.2**.: If \((\mathsf{P},\mathsf{Open})\) is a topology, then \(nbhd(p)\) is a filter (a nonempty up-closed down-directed set) and is often called the _neighbourhood filter_ of \(p\). We are working with semitopologies, so \(\mathsf{Open}\) is not necessarily closed under intersections, and \(nbhd(p)\) is not necessarily a filter (it is still a compatible set, because every \(O\in nbhd(p)\) contains \(p\)). Figure 6 illustrates examples of this: e.g. in the left-hand example \(\{0,1\},\{0,2\}\in nbhd(0)\) but \(\{0\}\not\in nbhd(0)\) (because this is not an open set).

**Proposition 8.2.3**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\) and \(O\in\mathsf{Open}\). Then: 1. \(nbhd(p)\) (Definition 8.2.1) is an abstract point (a completely prime semifilter) in the semiframe \(\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) (Definition 6.3.4). In symbols: \[nbhd(p)\in\mathsf{Points}(\mathsf{Fr}(\mathsf{P},\mathsf{Open})).\] 2.
The following are equivalent: \[nbhd(p)\in\mathit{Op}(O)\quad\Longleftrightarrow\quad O\in nbhd(p)\quad\Longleftrightarrow\quad p\in O.\] 3. We have an equality: \[nbhd^{\text{-}1}(\mathit{Op}(O))=O.\]

**Proof:** We consider each part in turn: 1. From Definition 7.1.1(7), we must check that \(nbhd(p)\) is a nonempty, completely prime, up-closed, and compatible subset of \(\mathsf{Open}\) when considered as a semiframe as per Definition 6.3.4. All properties are by facts of sets; we give brief details: * \(nbhd(p)\) is nonempty because \(p\in\mathsf{P}\in\mathsf{Open}\). * \(nbhd(p)\) is completely prime because it is a fact of sets that if \(P\subseteq\mathsf{Open}\) and \(p\in\bigcup P\) then \(p\in O\) for some \(O\in P\). * \(nbhd(p)\) is up-closed because it is a fact of sets that if \(p\in O\) and \(O\subseteq O^{\prime}\) then \(p\in O^{\prime}\). * \(nbhd(p)\) is compatible because if \(p\in O\) and \(p\in O^{\prime}\) then \(O\between O^{\prime}\). 2. By Definition 7.3.1, \(\mathit{Op}(O)\) is precisely the set of abstract points \(P\) that contain \(O\), and by part 1 of this result \(nbhd(p)\) is one of those points. By Definition 8.2.1, \(nbhd(p)\) is precisely the set of open sets that contain \(p\). The equivalence follows. 3. We reason as follows: \[\begin{aligned} p\in nbhd^{\text{-}1}(\mathit{Op}(O)) &\Longleftrightarrow nbhd(p)\in\mathit{Op}(O) && \text{Fact of function inverse}\\ &\Longleftrightarrow p\in O && \text{Part 2 of this result} \end{aligned}\] \(\sqcap\)\(\sqcup\)

**Corollary 8.2.4**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(O,O^{\prime}\in\mathsf{Open}\). Then: 1. \(\mathit{Op}(O)\subseteq\mathit{Op}(O^{\prime})\) if and only if \(O\subseteq O^{\prime}\). 2. \(\mathit{Op}(O)\between\mathit{Op}(O^{\prime})\) if and only if \(O\between O^{\prime}\). 3. As a corollary, \(nbhd^{\text{-}1}(\varnothing)=\varnothing\) and \(nbhd^{\text{-}1}(\mathsf{Points}(\mathsf{Open},\subseteq,\between))=\mathsf{P}\); i.e. \(nbhd^{\text{-}1}\) maps the bottom/top element to the bottom/top element.

**Proof:** We consider each part in turn: 1. If \(\mathit{Op}(O)\subseteq\mathit{Op}(O^{\prime})\) then \(nbhd^{\text{-}1}(\mathit{Op}(O))\subseteq nbhd^{\text{-}1}(\mathit{Op}(O^{\prime}))\) by facts of inverse images, and \(O\subseteq O^{\prime}\) follows by Proposition 8.2.3(3). If \(O\subseteq O^{\prime}\) then \(\mathit{Op}(O)\subseteq\mathit{Op}(O^{\prime})\) by Proposition 7.3.3(1). 2. If \(O\between O^{\prime}\) then there exists some point \(p\in\mathsf{P}\) with \(p\in O\cap O^{\prime}\). By Proposition 8.2.3(1) \(nbhd(p)\) is an abstract point, and by Proposition 8.2.3(2) \(nbhd(p)\in\mathit{Op}(O)\cap\mathit{Op}(O^{\prime})\); thus \(\mathit{Op}(O)\between\mathit{Op}(O^{\prime})\). If \(\mathit{Op}(O)\between\mathit{Op}(O^{\prime})\) then \(O\between O^{\prime}\) by Proposition 7.3.3(2). 3. Routine from Proposition 7.3.3(3) (or from Lemma 8.1.6). \(\sqcap\)\(\sqcup\)

#### 8.2.2 Application to semiframes of open sets

**Proposition 8.2.5**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then: 1. \(nbhd^{\text{-}1}\) bijects open sets of \(\mathsf{St}(\mathsf{Open},\subseteq,\between)\) (as defined in Definition 7.4.3(2)) with open sets of \((\mathsf{P},\mathsf{Open})\), taking \(\mathit{Op}(O)\) to \(O\). 2. \(nbhd^{\text{-}1}\) is an isomorphism between the semiframe of open sets of \(\mathsf{St}(\mathsf{Open},\subseteq,\between)\), and the semiframe of open sets of \((\mathsf{P},\mathsf{Open})\) (Definition 8.1.5).

**Proof:** We consider each part in turn: 1. Unpacking Definition 7.4.3(2), an open set in \(\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) has the form \(\mathit{Op}(O)\) for some \(O\in\mathsf{Open}\).
By Proposition 8.2.3(3) \(nbhd^{\text{-}1}(\mathit{Op}(O))=O\), and so \(nbhd^{\text{-}1}\) is surjective and injective. 2. Unpacking Definition 8.1.5 it suffices to check that: * \(nbhd^{\text{-}1}\) is a bijection, and maps \(\mathit{Op}(O)\) to \(O\). * \(\mathit{Op}(O)\subseteq\mathit{Op}(O^{\prime})\) if and only if \(O\subseteq O^{\prime}\). * \(\mathit{Op}(O)\between\mathit{Op}(O^{\prime})\) if and only if \(O\between O^{\prime}\). The first condition is part 1 of this result; the second and third are from Corollary 8.2.4. \(\sqcap\)\(\sqcup\)

**Proposition 8.2.6**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then the semiframe \(\mathsf{Fr}(\mathsf{P},\mathsf{Open})=(\mathsf{Open},\subseteq,\between)\) from Definition 6.3.4 is spatial.

Proof.: The properties required by Definition 8.1.2 are that \(\mathit{Op}(O)\subseteq\mathit{Op}(O^{\prime})\) implies \(O\subseteq O^{\prime}\), and \(O\between O^{\prime}\) implies \(\mathit{Op}(O)\between\mathit{Op}(O^{\prime})\). Both of these are immediate from Proposition 8.2.5(2).

#### 8.2.3 Application to characterise \(T_{0}\) spaces

**Lemma 8.2.7**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p,p^{\prime}\in\mathsf{P}\). Then the following are equivalent: 1. \(nbhd(p)=nbhd(p^{\prime})\) (cf. also Lemma 10.7.1) 2. \(\forall O\!\in\!\mathsf{Open}.p\in O\Longleftrightarrow p^{\prime}\in O\) 3. \(p\) and \(p^{\prime}\) are topologically indistinguishable in \((\mathsf{P},\mathsf{Open})\). 4. \(nbhd(p)\) and \(nbhd(p^{\prime})\) are topologically indistinguishable as (by Proposition 8.2.3(1)) abstract points in \(\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\).

Proof.: Equivalence of parts 1 and 2 is direct from Definition 8.2.1. Equivalence of parts 2 and 3 is just Definition 7.4.6(1). Equivalence of parts 1 and 4 is from Proposition 7.4.7.

**Corollary 8.2.8**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then the following are equivalent: 1. \((\mathsf{P},\mathsf{Open})\) is \(T_{0}\) (Definition 7.4.6(2)). 2. \(nbhd:\mathsf{P}\to\mathsf{Points}(\mathsf{Open},\subseteq,\between)\) is injective.

Proof.: Suppose \((\mathsf{P},\mathsf{Open})\) is \(T_{0}\), and suppose \(nbhd(p)=nbhd(p^{\prime})\). By Lemma 8.2.7(1&3) \(p\) and \(p^{\prime}\) are topologically indistinguishable. By Definition 7.4.6(2) \(p=p^{\prime}\). Since \(p\) and \(p^{\prime}\) were arbitrary, \(nbhd\) is injective. Suppose \(nbhd\) is injective. Reversing the reasoning of the previous paragraph, we deduce that \((\mathsf{P},\mathsf{Open})\) is \(T_{0}\).

### Sober semitopologies

Recall from Proposition 8.2.6 that if we go from a semitopology \((\mathsf{P},\mathsf{Open})\) to a semiframe \((\mathsf{Open},\subseteq,\between)\), then the result is not just any old semiframe -- it is a _spatial_ one. We now investigate what happens when we go from a semiframe to a semitopology using Definition 7.4.3.

#### 8.3.1 The definition and a key result

**Definition 8.3.1**.: Call a semitopology \((\mathsf{P},\mathsf{Open})\) **sober** when every abstract point \(P\) of \(\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) -- i.e. every completely prime nonempty up-closed compatible set of open sets -- is equal to the neighbourhood semifilter \(nbhd(p)\) of some unique \(p\in\mathsf{P}\). Equivalently: \((\mathsf{P},\mathsf{Open})\) is sober when \(nbhd:\mathsf{P}\to\mathsf{Points}(\mathsf{Fr}(\mathsf{P},\mathsf{Open}))\) (Definition 7.1.1(7)) is a bijection.

**Remark 8.3.2**.: A bijection is a map that is injective and surjective. We noted in Corollary 8.2.8 that a space is \(T_{0}\) when \(nbhd\) is injective.
So the sobriety condition can be thought of as having two parts: * \(nbhd\) is injective and the space is \(T_{0}\), so it intuitively contains no 'unnecessary' duplicates of points; * \(nbhd\) is surjective, so the space contains 'enough' points that there is (precisely) one concrete point for every abstract point.27 Footnote 27: ‘Unnecessary’ and ‘enough’ are in scare quotes here because these are subjective terms. For example, if points represent computer servers on a network then we might consider it a _feature_ to not be \(T_{0}\) by having multiple points that are topologically indistinguishable — e.g. for backup, or to reduce latency — and likewise, we might consider it a feature to not have one concrete point for every abstract point, if this avoids redundancies. There is no contradiction here: a computer network based on a small non-sober space with multiple backups of what it has may be a more efficient and reliable system than one based on a larger, sober space that does not back up its servers but is full of redundant points. And, this smaller non-sober space may present itself to the user abstractly as the larger, sober space. Users may even forget about the computation that goes on under the hood of this abstraction, as illustrated by the following _true story_: The authors had a paper presenting an efficient algorithm rejected because it ‘lacked motivation’. Why? Because the algorithm was unnecessary: the reviewer claimed, apparently with a straight face, that guessing the answer until you got it right was computationally equivalent.

**Example 8.3.3**.: We give some examples of sober and non-sober semitopologies. 1. \(\mathbb{R}\) with its usual topology (which is also a semitopology) is sober. 2. \(\mathbb{Q}\) with its usual topology (which is also a semitopology) is not sober: the set of open neighbourhoods of \(\pi\) is a completely prime semifilter, but is not the neighbourhood semifilter of a unique point in \(\mathbb{Q}\). 3. Any nonempty set with the discrete semitopology is sober. 4. Take \(\mathsf{P}=\{0,1\}\) and \(\mathsf{Open}=\{\varnothing,\{0,1\}\}\). This has one abstract point \(P=\{\{0,1\}\}\) but two concrete points \(0\) and \(1\). It is therefore not sober. 5. Take \(\mathsf{P}=\mathbb{N}\) with the final topology; so \(O\in\mathsf{Open}\) when \(O=\varnothing\) or \(O=n_{\geq}\) for some \(n\in\mathbb{N}\), where \(n_{\geq}=\{n^{\prime}\in\mathbb{N}\mid n^{\prime}\geq n\}\). Take \(P=\{n_{\geq}\mid n\in\mathbb{N}\}\). The reader can check that this is an abstract point (up-closed, completely prime, compatible); however \(P\) is not the neighbourhood semifilter of any \(n\in\mathbb{N}\). Thus this space is not sober.

**Proposition 8.3.4**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Then \(\mathsf{St}(\mathsf{X},\leq,*)\) from Definition 7.4.3 is a sober semitopology.

Proof.: We know from Lemma 7.4.4 that \(\mathsf{St}(\mathsf{X},\leq,*)\) is a semitopology. The issue is whether it is sober; thus by Definition 8.3.1(1) we wish to exhibit every abstract point \(P\) of \(\mathsf{Fr}\,\mathsf{St}(\mathsf{X},\leq,*)\) as a neighbourhood semifilter \(nbhd(p)\) for some unique abstract point \(p\) of \((\mathsf{X},\leq,*)\). The calculations to do so are routine, but we give details. Fix some abstract point \(P\) of \(\mathsf{Fr}\,\mathsf{St}(\mathsf{X},\leq,*)\).
By Definition 7.1.1(7), \(P\) is a completely prime nonempty up-closed set of pairwise-intersecting open sets in the semitopology \(\mathsf{St}(\mathsf{X},\leq,*)\), and by Definition 7.4.3(2) each open set in \(\mathsf{St}(\mathsf{X},\leq,*)\) has the form \(\mathit{Op}(x)=\{p\in\mathsf{Points}(\mathsf{X},\leq,*)\mid x\in p\}\) for some \(x\in\mathsf{X}\). We define \(p\subseteq\mathsf{X}\) as follows: \[p=\{x\in\mathsf{X}\mid\mathit{Op}(x)\in P\}\subseteq\mathsf{X}.\] By construction we have that \(x\in p\) if and only if \(\mathit{Op}(x)\in P\), and so \[\begin{aligned} nbhd(p) &=\{\mathit{Op}(x)\mid p\in\mathit{Op}(x)\} && \text{Definition 8.2.1}\\ &=\{\mathit{Op}(x)\mid x\in p\} && \text{Definition 7.3.1}\\ &=\{\mathit{Op}(x)\mid\mathit{Op}(x)\in P\} && \text{Construction of }p\\ &=P && \text{Fact.} \end{aligned}\] Now \(P\) is completely prime, nonempty, up-closed, and compatible, and it follows by elementary calculations using Proposition 8.1.4 that \(p\) is also completely prime, nonempty, up-closed, and compatible -- so \(p\) is an abstract point of \((\mathsf{X},\leq,*)\). So we have that \[p\in\mathsf{Points}(\mathsf{X},\leq,*)\quad\text{and}\quad P=nbhd(p).\] To prove uniqueness of \(p\), suppose \(p^{\prime}\) is any other abstract point such that \(P=nbhd(p^{\prime})\). We follow the definitions: \(\mathit{Op}(x)\in nbhd(p^{\prime})\Longleftrightarrow\mathit{Op}(x)\in nbhd(p)\), and thus by Definition 8.2.1 \(p^{\prime}\in\mathit{Op}(x)\Longleftrightarrow p\in\mathit{Op}(x)\), and thus by Definition 7.3.1 \(x\in p^{\prime}\Longleftrightarrow x\in p\), and thus \(p^{\prime}=p\). \(\sqcap\)\(\sqcup\)

#### 8.3.2 Sober topologies contrasted with sober semitopologies

We will need Notation 8.3.5 for Remark 8.3.6:

**Notation 8.3.5**.: Call a closed set **irreducible** when it cannot be written as the union of two proper closed subsets.

**Remark 8.3.6**.: Topology has a wealth of separation axioms. Three of them are: \(T_{0}\) (distinct points have distinct neighbourhood (semi)filters); \(T_{1}\) (distinct points have distinct open neighbourhoods); and \(T_{2}\), also known as the Hausdorff condition (distinct points have disjoint open neighbourhoods) -- see Remark 4.2.1 for formal statements. In the case of topologies, the following is known about sobriety: 1. Every finite \(T_{0}\) (and thus \(T_{1}\)) topological space is sober. 2. Every \(T_{2}\)/Hausdorff space (including infinite ones) is sober [13, page 475, Theorem 3]. 3. A topological space is sober if and only if every nonempty irreducible closed set is the closure of a unique point [13, page 475]. The situation for semitopologies is different, as we explore in the rest of this Subsection.

Figure 8: Two counterexamples for sobriety

Figure 9: Soberification of examples

**Lemma 8.3.7**.: 1. It is not necessarily the case that a finite \(T_{0}\) semitopology (or even a finite \(T_{1}\) semitopology) is sober (Definition 8.3.1(1)). 2. It is not necessarily the case that if every nonempty irreducible closed set is the closure of a unique point, then a semitopology is sober. These non-implications hold even if the semitopology is regular (so \(p\in K(p)\in\mathsf{Topen}\) for every \(p\); see Definition 4.1.3(3)).

Proof.: We provide a semitopology that is a counterexample for both parts. Consider the left-hand semitopology illustrated in Figure 8, so that: * \(\mathsf{P}=\{0,1,2\}\), and * \(\mathsf{Open}=\{\varnothing,\{0,1\},\{1,2\},\{0,2\},\{0,1,2\}\}\). We note that: * \((\mathsf{P},\mathsf{Open})\) is \(T_{0}\) and \(T_{1}\).
* \((\mathsf{P},\mathsf{Open})\) is regular because all points are intertwined, so that \(K(p)=\mathsf{P}\) for every \(p\in\mathsf{P}\). * The nonempty irreducible closed sets are \(\{0\}\) (which is the complement of \(\{1,2\}\)), \(\{1\}\), and \(\{2\}\). Since these are singleton sets, they are certainly the closures of unique points. So \((\mathsf{P},\mathsf{Open})\) is \(T_{0}\), regular, and irreducible closed sets are the closures of unique points. We take as our semifilter \(P=\mathsf{Open}\setminus\{\varnothing\}\). The reader can check that \(P\) is completely prime, nonempty, up-closed, and compatible (\(P\) is also the greatest semifilter); but \(P\) is not the neighbourhood semifilter of \(0\), \(1\), or \(2\) in \(\mathsf{P}\). Thus, \((\mathsf{P},\mathsf{Open})\) is not sober. \(\sqcap\)\(\sqcup\)

**Remark 8.3.8**.: The counterexample used in Lemma 8.3.7 generalises, as follows: the reader can check that the _all-but-one_ semitopology from Example 2.1.7(7) on three or more points (so open sets are generated by \(\mathsf{P}\setminus\{p\}\) for every \(p\in\mathsf{P}\)) has similar behaviour.

In topology, every Hausdorff space is sober. In semitopologies, this implication does not hold, and in a rather strong sense:

**Lemma 8.3.9**.: 1. It is not necessarily the case that if a semitopology is Hausdorff, then it is sober. 2. Every quasiregular Hausdorff semitopology is sober. 3. Every quasiregular Hausdorff semitopology is discrete (the open sets are the full powerset).

Proof.: We consider each part in turn: 1. It suffices to give a counterexample. Consider the right-hand semitopology illustrated in Figure 8 (which we also used, for different purposes, in Figure 5), so that: * \(\mathsf{P}=\{0,1,2,3\}\), and * \(\mathsf{Open}\) is generated by \(X=\{\{3,0\},\{0,1\},\{1,2\},\{2,3\}\}\). This is Hausdorff, but it is not sober: the reader can check that the up-closure \(\{3,0\}^{\leq}\subseteq\mathsf{Open}\) is nonempty, up-closed, compatible, and completely prime, but it is not the neighbourhood semifilter of any \(p\in\mathsf{P}\). 2. By Lemma 4.2.2, a quasiregular Hausdorff semitopology is discrete. The reader can easily check that a discrete semitopology is sober. 3. As part 2 of this result. \(\sqcap\)\(\sqcup\)

**Remark 8.3.10**.: A bit more discussion of Lemma 8.3.9. 1. The space used in the counterexample for part 1 is Hausdorff, \(T_{1}\), and unconflicted (Definition 5.5.1(2)). It is not quasiregular (Definition 4.1.3(5)) because the community of every point is empty; see Proposition 5.5.3. 2. The implication holds if we add quasiregularity as a condition: every quasiregular Hausdorff space is sober. But, this holds for very bad reasons, because by Lemma 4.2.2 every quasiregular Hausdorff space is discrete. 3. Thus, the non-implication discussed in Lemma 8.3.9 is informative and tells us something interesting about semitopological sobriety. Semitopological sobriety is not just a weak form of topological sobriety. Indeed, if anything it is a rather strong condition, and it has its own distinct personality -- in particular, it does not like the \(T_{2}\)/Hausdorff separation axiom and refuses to coexist with it outside of the (trivial) discrete semitopology. So the examples above suggest that, in contrast to the situation in topologies where separation axioms tend to induce sobriety, in a semitopological context separation axioms (and especially Hausdorff separation) seem to be quite antithetical to sobriety.
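The finite counterexamples above lend themselves to mechanical checking. The following Python sketch is ours and is not part of the formal development (the helper names `close_under_unions`, `nbhd`, and `is_semifilter` are illustrative); it verifies that the witnessing sets used in Lemmas 8.3.7 and 8.3.9 are nonempty, up-closed, and compatible, that they differ from every neighbourhood semifilter, and that the four-point space is Hausdorff; complete primeness (Definition 7.1.1(7)) is argued in the proofs above and is not re-checked here.

```python
from itertools import chain, combinations

def families(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def close_under_unions(gen):
    # A finite semitopology: the empty set plus all unions of generating opens.
    return {frozenset(chain.from_iterable(fam)) for fam in families(gen)}

def nbhd(p, opens):
    # Neighbourhood semifilter of p (Definition 8.2.1).
    return frozenset(O for O in opens if p in O)

def is_semifilter(F, opens):
    # Nonempty, up-closed, compatible; complete primeness is argued in the text.
    return (bool(F)
            and all(O2 in F for O1 in F for O2 in opens if O1 <= O2)
            and all(O1 & O2 for O1 in F for O2 in F))

# Left-hand space of Figure 8 (Lemma 8.3.7).
opens3 = close_under_unions([frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})])
P = frozenset(O for O in opens3 if O)                      # Open \ {empty set}
assert is_semifilter(P, opens3)
assert all(P != nbhd(p, opens3) for p in {0, 1, 2})        # so the space is not sober

# Right-hand space of Figure 8 (Lemma 8.3.9): the four-point 'cycle'.
gen = [frozenset({3, 0}), frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})]
opens4 = close_under_unions(gen)
Q = frozenset(O for O in opens4 if frozenset({3, 0}) <= O)  # the up-closure of {3,0}
assert is_semifilter(Q, opens4)
assert all(Q != nbhd(p, opens4) for p in {0, 1, 2, 3})
# Hausdorff: distinct points have disjoint open neighbourhoods.
assert all(any(p in O1 and q in O2 and not (O1 & O2) for O1 in opens4 for O2 in opens4)
           for p in {0, 1, 2, 3} for q in {0, 1, 2, 3} if p != q)
```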
**Remark 8.3.11**.: We can inject the examples illustrated in Figure 8 (used in Lemmas 8.3.7 and 8.3.9) into _soberified_ versions of the spaces that are sober and have an isomorphic lattice of open sets. 1. The left-hand semitopology has abstract points (completely prime semifilters) generated as the \(\subseteq\)-up-closures of the following sets: \(\{A\}\), \(\{B\}\), \(\{C\}\), \(\{A,B\}\), \(\{B,C\}\), \(\{C,A\}\), and \(\{A,B,C\}\). Of these, \(\{A,B\}^{\subseteq}=nbhd(0)\), \(\{B,C\}^{\subseteq}=nbhd(1)\), and \(\{C,A\}^{\subseteq}=nbhd(2)\). The other completely prime semifilters are not generated as the neighbourhood semifilters of any point in the original space, so we add points as illustrated using \(\bullet\) in the left-hand diagram in Figure 9. This semitopology is sober, and has the same semiframe of open sets. 2. For the right-hand example, we again add a \(\bullet\) point for every abstract point in the original space that is not already the neighbourhood semifilter of a point in the original space. These abstract points are generated as the \(\subseteq\)-up-closures of \(\{A\}\), \(\{B\}\), \(\{C\}\), and \(\{D\}\). There is no need to add a \(\bullet\) for the abstract point generated as the \(\subseteq\)-up-closure of \(\{A,B\}\), because \(\{A,B\}^{\subseteq}=nbhd(0)\). Similarly \(\{B,C\}^{\subseteq}=nbhd(1)\), \(\{C,D\}^{\subseteq}=nbhd(2)\), and \(\{D,A\}^{\subseteq}=nbhd(3)\). Note that \(\{A,B,C\}\) does not generate an abstract point because it is not compatible: \(A\not\between C\). Similarly for \(\{B,C,D\}\), \(\{C,D,A\}\), \(\{D,A,B\}\), and \(\{A,B,C,D\}\). These soberified spaces are instances of a general construction described in Theorem 9.1.4. And, continuing the observation made in Remark 8.3.10, note that neither of these spaces, with their extra points, is Hausdorff.

## 9 Four categories, and functors between them

### The categories \(\mathsf{SemiTop}/\mathsf{Sober}\) of semitopologies/sober semitopologies

**Definition 9.1.1**.: 1. Suppose \((\mathsf{P},\mathsf{Open})\) and \((\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) are semitopologies and \(f:\mathsf{P}\to\mathsf{P}^{\prime}\) is any function. Then call \(f\) a **morphism** of semitopologies when \(f\) is continuous, by which we mean (as standard) that \[O^{\prime}\in\mathsf{Open}^{\prime}\quad\text{implies}\quad f^{\text{-}1}(O^{\prime})\in\mathsf{Open}.\] 2. We define \(\mathsf{SemiTop}\) the **category of semitopologies** such that: * objects are semitopologies, and * arrows are morphisms of semitopologies (continuous maps on points).28 Footnote 28: A discussion of possible alternatives, for future work, is in Remark 12.1.1. See also Remarks 7.1.5 and 7.1.6. 3. Write \(\mathsf{Sober}\) for the **category of sober semitopologies** and continuous functions between them. By construction, \(\mathsf{Sober}\) is the full subcategory of \(\mathsf{SemiTop}\) on its sober semitopologies.

**Remark 9.1.2**.: For convenience reading Theorem 9.1.4 we recall some facts: 1. The _semiframe_ \[\mathsf{Fr}(\mathsf{P},\mathsf{Open})=(\mathsf{Open},\subseteq,\between)\] from Definition 6.3.4 has as elements the open sets \(O\in\mathsf{Open}\), ordered by subset inclusion and with compatibility relation given by sets intersection. It is spatial, by Proposition 8.2.6. 2. An abstract point \(P\) in \(\mathsf{Points}(\mathsf{Fr}(\mathsf{P},\mathsf{Open}))\) is a completely prime nonempty up-closed compatible subset of \(\mathsf{Open}\). 3.
\(\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) is by Definition 7.4.3 a semitopology whose set of points is \(\mathsf{Points}(\mathsf{Fr}(\mathsf{P},\mathsf{Open}))\), which is the set of abstract points in \(\mathsf{Fr}(\mathsf{P},\mathsf{Open})=(\mathsf{Open},\subseteq,\between)\), and whose open sets are given by \(Op(O)\) for \(O\in\mathsf{Open}\). It is sober, by Proposition 8.3.4. **Notation 9.1.3**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then define \[\mathsf{Soberify}(\mathsf{P},\mathsf{Open})=\mathsf{St}\,\mathsf{Fr}(\mathsf{ P},\mathsf{Open}).\] We may use \(\mathsf{Soberify}(\mathsf{P},\mathsf{Open})\) and \(\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) interchangeably, depending on whether we want to emphasise "this is a sober semitopology obtained from \((\mathsf{P},\mathsf{Open})\)" or "this is \(\mathsf{St}\) acting on \(\mathsf{Fr}(\mathsf{P},\mathsf{Open})=(\mathsf{Open},\subseteq,\between)\)". **Theorem 9.1.4**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then 1. \(nbhd:\mathsf{P}\to\mathsf{Points}(\mathsf{Fr}(\mathsf{P},\mathsf{Open}))\) is a morphism of semitopologies from \((\mathsf{P},\mathsf{Open})\) to \(\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})=\mathsf{Soberify}(\mathsf{ P},\mathsf{Open})\) 2. taking the arbitrary semitopology \((\mathsf{P},\mathsf{Open})\) to a sober semitopology \(\mathsf{Soberify}(\mathsf{P},\mathsf{Open})\), such that 3. \(nbhd^{\text{-}1}\) induces a bijection on open sets by mapping \(Op(O)\) to \(O\), and furthermore this is an isomorphism of the semiframes of open sets, in the sense of Definition 8.1.5. Proof.: We consider each part in turn: 1. Following Definition 9.1.1 we must show that \(nbhd\) is continuous (inverse images of open sets are open) from \((\mathsf{P},\mathsf{Open})\) to \(\mathsf{Soberify}(\mathsf{P},\mathsf{Open})\). So following Definition 7.4.3(2), consider \(Op(O)\in\mathsf{Open}(\mathsf{Soberify}(\mathsf{P},\mathsf{Open}))\). By Proposition 8.2.3(3) \[nbhd^{\text{-}1}(\,Op(O))=O\in\mathsf{Open}.\] Continuity follows. 2. \(\mathsf{Soberify}(\mathsf{P},\mathsf{Open})\) is sober by Proposition 8.3.4. 3. This is Proposition 8.2.5. **Remark 9.1.5**.: We can summarise Theorem 9.1.4 as follows: 1. \(nbhd\) is nearly injective, modulo only topological equivalence -- its kernel is topological indistinguishability. 2. We can think of \(\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) as being obtained from \((\mathsf{P},\mathsf{Open})\) by 1. quotienting topologically equivalent points to obtain a \(T_{0}\) space, and then 2. adding extra points to make it sober. See also the discussion in Remark 8.3.2 about what it means to have 'enough' points. 3. This is done without affecting the semiframe of open sets (up to isomorphism), with the semiframe bijection given by \(nbhd^{\text{-}1}\). In this sense, we can view \(\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) as a **soberification** of \((\mathsf{P},\mathsf{Open})\). ### The categories \(\mathsf{SemiFrame/Spatial}\) of semiframes/spatial semiframes **Definition 9.2.1**.: 1. Suppose \((\mathsf{X},\leq,*)\) and \((\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\) are semiframes (Definition 8.1.2) and \(g:\mathsf{X}\to\mathsf{X}^{\prime}\) is any function. Then call \(g\) a **morphism** of semiframes when: 1. \(g\) is a morphism of complete semilattices (Definition 6.1.2). 2. 
\(g\) is **compatible**, by which we mean that \(g(x^{\prime})*g(x^{\prime\prime})\) implies \(x^{\prime}*x^{\prime\prime}\), for every \(x^{\prime},x^{\prime\prime}\in\mathsf{X}\). 2. We define \(\mathsf{SemiFrame}\) the **category of semiframes** such that: * objects are semiframes, and * arrows are morphisms of semiframes. 3. Write \(\mathsf{Spatial}\) for the **category of spatial semiframes** and semiframe morphisms between them. By construction, \(\mathsf{Spatial}\) is the full subcategory of \(\mathsf{SemiFrame}\) on its spatial semiframes.

**Lemma 9.2.2**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Then \(\mathit{Op}:(\mathsf{X},\leq,*)\to\mathsf{Fr}\,\mathsf{St}(\mathsf{X},\leq,*)\) is a morphism of semiframes and is surjective on underlying sets.

Proof.: Following Definition 9.2.1(1) we must show that * \(\mathit{Op}\) is a semilattice morphism (Definition 6.1.2): it commutes with joins and maps \(\top_{\mathsf{X}}\) to \(\mathsf{Points}(\mathsf{X},\leq,*)\); * \(\mathit{Op}\) is compatible with the compatibility relation \(*\); * and \(\mathit{Op}\) is surjective. We consider each property in turn: * _We show that \(\mathit{Op}\) is a semilattice morphism._ \(\mathit{Op}(\bigvee X)=\bigvee_{x\in X}\mathit{Op}(x)\) by Lemma 7.3.2, and \(\mathit{Op}(\top_{\mathsf{X}})=\mathsf{Points}(\mathsf{X},\leq,*)\) by Proposition 7.3.3(3). * _We show that \(\mathit{Op}\) is compatible with \(*\)._ Unpacking Definition 9.2.1(1b), we must show that \(\mathit{Op}(x)\between\mathit{Op}(x^{\prime})\) implies \(x*x^{\prime}\). We use Proposition 7.3.3(2). * Surjectivity is Lemma 7.4.5. \(\sqcap\)\(\sqcup\)

### Functoriality of the maps

**Definition 9.3.1**.: Suppose \(g:(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\to(\mathsf{X},\leq,*)\) is an arrow in \(\mathsf{SemiFrame}\). Define a mapping \(g^{\circ}:\mathsf{St}(\mathsf{X},\leq,*)\to\mathsf{St}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\) by \[g^{\circ}:\mathsf{Points}(\mathsf{X},\leq,*) \longrightarrow \mathsf{Points}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\] \[P \longmapsto P^{\prime}=\{x^{\prime}\in\mathsf{X}^{\prime}\mid g(x^{\prime})\in P\}.\]

**Remark 9.3.2**.: We will show that \(g^{\circ}\) from Definition 9.3.1 is an arrow in \(\mathsf{SemiTop}\). We will need to prove the following: * If \(P\in\mathsf{Points}(\mathsf{X},\leq,*)\) then \(g^{\circ}(P)\in\mathsf{Points}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\). * \(g^{\circ}\) is a morphism of semitopologies. We do this in Lemmas 9.3.3 and 9.3.6 respectively.

**Lemma 9.3.3**.: **(\(g^{\circ}\) well-defined)** Suppose \(g:(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\to(\mathsf{X},\leq,*)\) is an arrow in \(\mathsf{SemiFrame}\) and suppose \(P\in\mathsf{Points}(\mathsf{X},\leq,*)\). Then \(g^{\circ}(P)\) from Definition 9.3.1 is indeed in \(\mathsf{Points}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\) -- and thus \(g^{\circ}\) is a well-defined function from \(\mathsf{Points}(\mathsf{X},\leq,*)\) to \(\mathsf{Points}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\).

Proof.: For brevity write \[P^{\prime}=\{x^{\prime}\in\mathsf{X}^{\prime}\mid g(x^{\prime})\in P\}.\] We must check that \(P^{\prime}\) is a completely prime nonempty up-closed compatible subset of \(\mathsf{X}^{\prime}\). We consider each property in turn: 1. \(P^{\prime}\) _is completely prime._ Consider some \(X^{\prime}\subseteq\mathsf{X}^{\prime}\) and suppose \(\bigvee X^{\prime}\in P^{\prime}\), i.e. \(g(\bigvee X^{\prime})\in P\).
By Definition 9.2.1(1a) \(g\) is a semilattice homomorphism, so by Definition 6.1.1(2) \(g(\bigvee X^{\prime})=\bigvee_{x^{\prime}\in X^{\prime}}g(x^{\prime})\). Thus \(\bigvee_{x^{\prime}\in X^{\prime}}g(x^{\prime})\in P\). By assumption \(P\) is completely prime, so \(g(x^{\prime})\in P\) for some \(x^{\prime}\in X^{\prime}\). Thus \(x^{\prime}\in P^{\prime}\) for that \(x^{\prime}\). Since \(X^{\prime}\) was arbitrary, it follows that \(P^{\prime}\) is completely prime. 2. \(P^{\prime}\) _is nonempty._ By assumption \(g\) is an arrow in \(\mathsf{SemiFrame}\) (i.e. a semiframe morphism) and unpacking Definition 9.2.1(1a) it follows that it is a semilattice homomorphism. In particular by Definition 6.1.1(2) \(g(\top_{\mathsf{X}^{\prime}})=\top_{\mathsf{X}}\), and by Lemma 7.2.1(1) \(\top_{\mathsf{X}}\in P\). Thus \(\top_{\mathsf{X}^{\prime}}\in P^{\prime}\), so \(P^{\prime}\) is nonempty. 3. \(P^{\prime}\) _is up-closed._ Suppose \(x^{\prime}\in P^{\prime}\) and \(x^{\prime}\leq x^{\prime\prime}\). By construction \(g(x^{\prime})\in P\). By Lemma 6.1.4 (because \(g\) is a semilattice morphism by Definition 9.2.1(1a)) \(g\) is monotone, so \(g(x^{\prime})\leq g(x^{\prime\prime})\). By assumption in Definition 7.1.1(3) \(P\) is up-closed, so that \(g(x^{\prime\prime})\in P\) and thus \(x^{\prime\prime}\in P^{\prime}\) as required. 4. \(P^{\prime}\) _is compatible._ Suppose \(x^{\prime},x^{\prime\prime}\in P^{\prime}\). Thus \(g(x^{\prime}),g(x^{\prime\prime})\in P\). By assumption in Definition 7.1.1(4) \(P\) is compatible, so \(g(x^{\prime})*g(x^{\prime\prime})\). By compatibility of \(g\) (Definition 9.2.1(1b)) it follows that \(x^{\prime}*x^{\prime\prime}\). Thus \(P^{\prime}\) is compatible. \(\sqcap\)\(\sqcup\)

**Remark 9.3.4**.: _Note on design:_ If we want to impose further conditions on being an abstract point (such as those discussed in Remark 7.1.6) then Lemma 9.3.3 would need to be extended to show that these further conditions are preserved by the \(g^{\circ}\) operation, so that for \(P\in\mathsf{Points}(\mathsf{X},\leq,*)\) an abstract point in \((\mathsf{X},\leq,*)\), \(g^{\circ}(P)=\{x^{\prime}\in\mathsf{X}^{\prime}\mid g(x^{\prime})\in P\}\) is an abstract point in \((\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\). For example: consider what would happen if we add the extra condition on semifilters from Remark 7.1.6. Then the \(P^{\prime}\) defined in the proof of Lemma 9.3.3 above might not be closed under this additional condition (it will be if \(g\) is surjective). This could be mended by closing \(P^{\prime}\) under greatest lower bounds that are not \(\bot\), but that in turn might compromise the property of being completely prime. These comments are not a proof that the problems would be insuperable; but they suggest that complexity would be added. For this initial paper, we prefer to keep things simple!

**Lemma 9.3.5**.: Suppose \(g:(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\to(\mathsf{X},\leq,*)\) is an arrow in \(\mathsf{SemiFrame}\), and suppose \(x^{\prime}\in\mathsf{X}^{\prime}\). Then \[(g^{\circ})^{\text{-}1}(\mathit{Op}(x^{\prime}))=\mathit{Op}(g(x^{\prime})).\]

**Proof:** Consider an abstract point \(P\in\mathsf{Points}(\mathsf{X},\leq,*)\).
We just chase definitions: \[\begin{aligned} P\in(g^{\circ})^{\text{-}1}(\mathit{Op}(x^{\prime})) &\Longleftrightarrow g^{\circ}(P)\in\mathit{Op}(x^{\prime}) && \text{Fact of inverse image}\\ &\Longleftrightarrow x^{\prime}\in g^{\circ}(P) && \text{Definition 7.3.1}\\ &\Longleftrightarrow g(x^{\prime})\in P && \text{Definition 9.3.1}\\ &\Longleftrightarrow P\in\mathit{Op}(g(x^{\prime})) && \text{Definition 7.3.1} \end{aligned}\] The choice of \(P\) was arbitrary, so \((g^{\circ})^{\text{-}1}(\mathit{Op}(x^{\prime}))=\mathit{Op}(g(x^{\prime}))\) as required. \(\sqcap\)\(\sqcup\)

**Lemma 9.3.6**.: **(\(g^{\circ}\) continuous)** Suppose \(g:(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\to(\mathsf{X},\leq,*)\) is an arrow in \(\mathsf{SemiFrame}\). Then \(g^{\circ}:\mathsf{St}(\mathsf{X},\leq,*)\to\mathsf{St}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\) is continuous: \[(g^{\circ})^{\text{-}1}(\mathcal{O}^{\prime})\in\mathsf{Open}(\mathsf{St}(\mathsf{X},\leq,*))\] for every \(\mathcal{O}^{\prime}\in\mathsf{Open}(\mathsf{St}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime}))\).

**Proof:** By Definition 7.4.3(2), \(\mathcal{O}^{\prime}=\mathit{Op}(x^{\prime})\) for some \(x^{\prime}\in\mathsf{X}^{\prime}\). By Lemma 9.3.5, \((g^{\circ})^{\text{-}1}(\mathit{Op}(x^{\prime}))=\mathit{Op}(g(x^{\prime}))\). By Definition 7.4.3(2), \(\mathit{Op}(g(x^{\prime}))\in\mathsf{Open}(\mathsf{St}(\mathsf{X},\leq,*))\). \(\sqcap\)\(\sqcup\)

**Proposition 9.3.7**.: **(Functoriality)** 1. Suppose \(f:(\mathsf{P},\mathsf{Open})\to(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) is an arrow in \(\mathsf{SemiTop}\) (thus: a continuous map on underlying points). Then \(f^{\text{-}1}\) is an arrow \(\mathsf{Fr}(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\to\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) in \(\mathsf{SemiFrame}\). 2. Suppose \(g:(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\to(\mathsf{X},\leq,*)\) is an arrow in \(\mathsf{SemiFrame}\). Then \(g^{\circ}\) from Definition 9.3.1 is an arrow \(\mathsf{St}(\mathsf{X},\leq,*)\to\mathsf{St}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\) in \(\mathsf{SemiTop}\). 3. The assignments \(f\mapsto f^{\text{-}1}\) and \(g\mapsto g^{\circ}\) are **functorial** -- they map identity maps to identity maps, and commute with function composition.

**Proof:** We consider each part in turn: 1. Following Definition 9.2.1, we must check that \(f^{\text{-}1}\) is a morphism of semiframes. We just unpack what this means and see that the required properties are just facts of taking inverse images: * \(f^{\text{-}1}\) _commutes with joins, i.e. with_ \(\bigcup\). This is a fact of inverse images. * \(f^{\text{-}1}\) _maps_ \(\top_{\mathsf{Fr}(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})}=\mathsf{P}^{\prime}\) _to_ \(\top_{\mathsf{Fr}(\mathsf{P},\mathsf{Open})}=\mathsf{P}\). This is a fact of inverse images. * \(f^{\text{-}1}\) _is compatible, meaning that_ \(f^{\text{-}1}(O^{\prime})\between f^{\text{-}1}(O^{\prime\prime})\) _implies_ \(O^{\prime}\between O^{\prime\prime}\). This is a fact of inverse images. 2. We must check that \(g^{\circ}\) is continuous. This is Lemma 9.3.6. 3. Checking functoriality is entirely routine, but we sketch the reasoning anyway: * Consider the identity function \(id\) on some semitopology \((\mathsf{P},\mathsf{Open})\). Then \(id^{\text{-}1}\) should be the identity function on \((\mathsf{Open},\subseteq,\between)\). It is.
* Consider maps \(f:(\mathsf{P},\mathsf{Open})\to(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) and \(f^{\prime}:(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\to(\mathsf{P}^{\prime\prime},\mathsf{Open}^{\prime\prime})\). Then \((f^{\prime}\circ f)^{\text{-}1}\) should be equal to \(f^{\text{-}1}\circ(f^{\prime})^{\text{-}1}\). It is. * Consider the identity function \(id\) on \((\mathsf{X},\leq,*)\). Then \(id^{\circ}\) should be the identity function on \(\mathsf{Points}(\mathsf{X},\leq,*)\). We look at Definition 9.3.1 and see that this amounts to checking that \(P=\{x\in\mathsf{X}\mid id(x)\in P\}\). It is. * Consider maps \(g:(\mathsf{X},\leq,*)\to(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\) and \(g^{\prime}:(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\to(\mathsf{X}^{\prime\prime},\leq^{\prime\prime},*^{\prime\prime})\) and consider some \(P^{\prime\prime}\in\mathsf{Points}(\mathsf{X}^{\prime\prime},\leq^{\prime\prime},*^{\prime\prime})\). Then \((g^{\prime}\circ g)^{\circ}(P^{\prime\prime})\) should be equal to \((g^{\circ}\circ(g^{\prime})^{\circ})(P^{\prime\prime})\). We look at Definition 9.3.1 and see that this amounts to checking that \(\{x\in\mathsf{X}\mid g^{\prime}(g(x))\in P^{\prime\prime}\}=\{x\in\mathsf{X}\mid g(x)\in P^{\prime}\}\) where \(P^{\prime}=\{x^{\prime}\in\mathsf{X}^{\prime}\mid g^{\prime}(x^{\prime})\in P^{\prime\prime}\}\). Unpacking these definitions, we see that the equality does indeed hold. \(\sqcap\)\(\sqcup\)

### Sober semitopologies are categorically dual to spatial semiframes

We can now state the duality result between \(\mathsf{Sober}\) and \(\mathsf{Spatial}\):

**Theorem 9.4.1**.: The maps \(\mathsf{St}\) (Definition 7.4.3) and \(\mathsf{Fr}\) (Definition 6.3.4), with actions on arrows as described in Proposition 9.3.7, form a categorical duality between: * the category \(\mathsf{Sober}\) of sober semitopologies (Definition 8.3.1) and continuous functions between them; and * the category \(\mathsf{Spatial}\) of spatial semiframes and morphisms between them (Definition 9.2.1(3)).

**Proof:** There are various things to check: * Proposition 8.3.4 shows that \(\mathsf{St}\) maps spatial semiframes to sober semitopologies. * Proposition 8.2.6 shows that \(\mathsf{Fr}\) maps sober semitopologies to spatial semiframes. * By Proposition 9.3.7 the maps \(f\mapsto f^{\text{-}1}\) (inverse image) and \(g\mapsto g^{\circ}\) (Definition 9.3.1) are functorial. * The equivalence morphisms are given by the bijections \(\mathit{Op}\) and \(nbhd\): * \(\mathit{Op}\) is from Definition 7.3.1. By Lemma 9.2.2, \(\mathit{Op}\) is a morphism \((\mathsf{X},\leq,*)\to\mathsf{Fr}\,\mathsf{St}(\mathsf{X},\leq,*)\) in \(\mathsf{Spatial}\) that is surjective on underlying sets. Injectivity is from Proposition 8.1.4(3). * \(nbhd\) is from Definition 8.2.1. By Theorem 9.1.4, \(nbhd\) is a morphism \((\mathsf{P},\mathsf{Open})\to\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) in \(\mathsf{Sober}\). It is a bijection on underlying sets by the sobriety condition in Definition 8.3.1.
Finally, we must check naturality of \(\mathit{Op}\) and \(nbhd\), which means (as standard) checking commutativity of the following diagrams: \[\begin{CD} (\mathsf{P},\mathsf{Open}) @>{nbhd}>> \mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\\ @V{f}VV @VV{(f^{\text{-}1})^{\circ}}V\\ (\mathsf{P}^{\prime},\mathsf{Open}^{\prime}) @>{nbhd}>> \mathsf{St}\,\mathsf{Fr}(\mathsf{P}^{\prime},\mathsf{Open}^{\prime}) \end{CD} \qquad \begin{CD} (\mathsf{X},\leq,*) @>{\mathit{Op}}>> \mathsf{Fr}\,\mathsf{St}(\mathsf{X},\leq,*)\\ @A{g}AA @AA{(g^{\circ})^{\text{-}1}}A\\ (\mathsf{X}^{\prime},\leq^{\prime},*^{\prime}) @>{\mathit{Op}}>> \mathsf{Fr}\,\mathsf{St}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime}) \end{CD}\] We proceed as follows: * Suppose \(g:(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\to(\mathsf{X},\leq,*)\) in \(\mathsf{Spatial}\), so that \(g^{\circ}:\mathsf{St}(\mathsf{X},\leq,*)\to\mathsf{St}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\) in \(\mathsf{Sober}\) and \((g^{\circ})^{\text{-}1}:\mathsf{Fr}\,\mathsf{St}(\mathsf{X}^{\prime},\leq^{\prime},*^{\prime})\to\mathsf{Fr}\,\mathsf{St}(\mathsf{X},\leq,*)\) in \(\mathsf{Spatial}\). To prove naturality we must check that \[(g^{\circ})^{\text{-}1}(\mathit{Op}(x^{\prime}))=\mathit{Op}(g(x^{\prime}))\] for every \(x^{\prime}\in\mathsf{X}^{\prime}\). This is just Lemma 9.3.5. * Suppose \(f:(\mathsf{P},\mathsf{Open})\to(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) in \(\mathsf{SemiTop}\), so that \(f^{\text{-}1}:\mathsf{Fr}(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\to\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) in \(\mathsf{Spatial}\) and \((f^{\text{-}1})^{\circ}:\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\to\mathsf{St}\,\mathsf{Fr}(\mathsf{P}^{\prime},\mathsf{Open}^{\prime})\) in \(\mathsf{SemiTop}\). To prove naturality we must check that \[(f^{\text{-}1})^{\circ}(nbhd(p))=nbhd(f(p)).\] We just chase definitions, for an open set \(O^{\prime}\in\mathsf{Open}^{\prime}\): \[\begin{aligned} O^{\prime}\in(f^{\text{-}1})^{\circ}(nbhd(p)) &\Longleftrightarrow f^{\text{-}1}(O^{\prime})\in nbhd(p) && \text{Definition 9.3.1}\\ &\Longleftrightarrow p\in f^{\text{-}1}(O^{\prime}) && \text{Definition 8.2.1}\\ &\Longleftrightarrow f(p)\in O^{\prime} && \text{Fact of inverse image}\\ &\Longleftrightarrow O^{\prime}\in nbhd(f(p)) && \text{Definition 8.2.1} \end{aligned}\] Naturality follows. \(\sqcap\)\(\sqcup\)
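For finite spaces the naturality checks above are finite-state and can be tested directly. The following Python sketch is ours, not part of the paper (the helpers `preimage` and `g_circ` are illustrative names): it takes the three-point space of Figure 8 with a continuous rotation map \(f\), and confirms the naturality equation \((f^{\text{-}1})^{\circ}(nbhd(p))=nbhd(f(p))\) at every point.

```python
from itertools import chain, combinations

def close_under_unions(gen):
    gen = list(gen)
    fams = (combinations(gen, r) for r in range(len(gen) + 1))
    return {frozenset(chain.from_iterable(f)) for f in chain.from_iterable(fams)}

def nbhd(p, opens):
    # Neighbourhood semifilter of p (Definition 8.2.1).
    return frozenset(O for O in opens if p in O)

def preimage(f, O):
    # f^{-1}(O), for f given as a dict on points.
    return frozenset(p for p in f if f[p] in O)

def g_circ(g, P, source_elems):
    # Definition 9.3.1: g°(P) = {x' | g(x') in P}; here g = f^{-1} acting on opens.
    return frozenset(x for x in source_elems if g(x) in P)

# The three-point 'cycle' space of Figure 8, with the continuous rotation 0->1->2->0.
opens = close_under_unions([frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})])
f = {0: 1, 1: 2, 2: 0}
assert all(preimage(f, O) in opens for O in opens)   # f is continuous

# Naturality of nbhd (Theorem 9.4.1): (f^{-1})°(nbhd(p)) = nbhd(f(p)).
for p in f:
    lhs = g_circ(lambda O: preimage(f, O), nbhd(p, opens), opens)
    assert lhs == nbhd(f[p], opens)
```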
**Remark 9.4.2**.: For the reader's convenience, we summarise the main definitions and results leading up to Theorem 9.4.1: 1. A semitopology \((\mathsf{P},\mathsf{Open})\) is a set of points \(\mathsf{P}\) with a collection of open sets \(\mathsf{Open}\) that contains \(\varnothing\) and \(\mathsf{P}\) and is closed under arbitrary unions -- like a topology, but without requiring closure under intersections (Definition 2.1.2). 2. A morphism between semitopologies is a continuous function, just as for topologies (Definition 9.1.1(1)). 3. A semiframe \((\mathsf{X},\leq,*)\) is a complete join-semilattice \((\mathsf{X},\leq)\) with a properly reflexive distributive _compatibility relation_ \(*\) (Definition 6.3.1). 4. A morphism between semiframes is a morphism of complete join-semilattices with \(\top\) that is compatible with the compatibility relation (Definition 9.2.1(1)). 5. An _abstract point_ of a semitopology \((\mathsf{P},\mathsf{Open})\) is a completely prime nonempty up-closed compatible subset \(P\subseteq\mathsf{Open}\) (Definition 7.1.1(7)). 6. A semitopology is _sober_ when the neighbourhood semifilter map \(p\in\mathsf{P}\mapsto nbhd(p)=\{O\in\mathsf{Open}\mid p\in O\}\) is injective and surjective between the points of \(\mathsf{P}\) and the abstract points of \((\mathsf{P},\mathsf{Open})\) (Definition 8.3.1). 7. By Theorem 9.1.4, and as discussed in Remark 9.1.5, every (possibly non-sober) semitopology \((\mathsf{P},\mathsf{Open})\) maps into its _soberification_ \(\mathsf{St}\,\mathsf{Fr}(\mathsf{P},\mathsf{Open})\), which has an isomorphic semiframe of open sets. So even if our semitopology \((\mathsf{P},\mathsf{Open})\) is not sober, there is a standard recipe to make it so. 8. A semiframe is _spatial_ when \(x\in\mathsf{X}\mapsto\mathit{Op}(x)=\{P\in\mathsf{Points}(\mathsf{X},\leq,*)\mid x\in P\}\) respects \(\leq\) and \(*\) in senses made formal in Definition 8.1.2 and Proposition 8.1.4. 9. Sober semitopologies and continuous functions between them, and spatial semiframes and semiframe morphisms between them, are categorically dual (Theorem 9.4.1).

**Remark 9.4.3**.: A _categorical duality_ between two categories \(\mathbb{C}\) and \(\mathbb{D}\) is an equivalence between \(\mathbb{C}\) and \(\mathbb{D}^{op}\); this is an adjoint pair of functors whose unit and counit are natural isomorphisms. See [12, IV.4].29 Footnote 29: The Wikipedia page (permalink) is also exceptionally clear. There are many duality results in the literature. The duality between topologies and frames is described (for example) in [10, page 479, Corollary 4]. A duality between distributive lattices and coherent spaces is in [11, page 66]. There is the classic duality by Stone between Boolean algebras and compact Hausdorff spaces with a basis of clopen sets [10, 11]. An encyclopaedic treatment is in [13], with a rather good overview in Example 2.9 on page 17. Theorem 9.4.1 appends another item to this extensive canon. It also constructively moves us forward in studying semitopologies, because it gives us an algebraic treatment of semitopologies, and a formal framework for studying morphisms between semitopologies. For instance: taking morphisms to be continuous functions is sensible not just because this is also how things work for topologies, but also because this is what is categorically dual to the \(\leq/*\)-homomorphisms between semiframes (Definition 9.2.1).
And of course, if we become interested in different notions of semitopology morphism (a flavour of these is given in Remark 12.1.1) then the algebraic framework gives us a distinct mathematical light with which to inspect and evaluate them. Note what Theorem 9.4.1 does _not_ do: it does not give a duality between all semitopologies and all semiframes; it gives a duality between sober semitopologies and spatial semiframes. This in itself is nothing new -- the topological duality is just the same -- but what is interesting is that our motivation in this paper for studying semitopologies comes from practical network systems. These tend to be (finite) non-sober semitopologies -- non-sober, because a guarantee of sobriety cannot be enforced, and anyway it is precisely the point of the exercise to achieve coordination _without_ explicitly representing every possible constellation of cooperating agents with its own point. After all, this is part of what it means to be a _permissionless_ and _distributed_ system. It is true that by Theorem 9.1.4 every non-sober semitopology can be embedded into a sober one without affecting the semiframe of open sets, but this makes the system to which it corresponds larger, by adding points. So, the duality that Theorem 9.4.1 packages up is a mathematical statement, but not necessarily a directly practical one -- and this is as expected, because we knew from the start that this is an abstract result. \(nbhd\) maps a point to a set of (open) sets; and \(\mathit{Op}\) maps an (open) set of points to a set of sets of (open) sets. Of course these need not be computationally optimal. We have constructed an algebraic representation of semitopologies -- but this is not the last word on representing semitopologies. Other methodologies are also illuminating, and because our motivation comes from distributed systems, which are networks, we are particularly interested in representations based on ideas from graphs. We will investigate these in Section 11.

## 10 Semifilters and their well-behavedness conditions, dually

We want to understand semifilters better, and in particular we want to understand how properties of semifilters and abstract points correspond to the well-behavedness properties which we found useful in studying semitopologies -- for example _topens_, _regularity_, and being _unconflicted_ (Definitions 3.2.2, 4.1.3 and 5.5.1).

### (Maximal) semifilters and transitive elements

**Remark 10.1.1**.: **(Semifilters are not filters)** We know that semifilters do not necessarily behave like filters. For instance: 1. It is possible for a finite semifilter to have more than one minimal element, because the down-directedness condition of filters is replaced by a weaker compatibility condition (see also Remarks 7.1.5 and 7.1.6). 2. There are more semifilters than filters -- even if the underlying space is a topology. For example, the discrete semitopology on \(\{0,1,2\}\) (whose open sets are all subsets of the space) is a topology. Every filter in this space is a semifilter, but the space also has a semifilter generated by \(\{\{0,1\},\{1,2\},\{2,0\}\}\) (its \(\subseteq\)-up-closure), which is not a filter. More on this in Subsection 7.2.2. In summary: semifilters are different and we cannot necessarily take their behaviour for granted without checking it.
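The example in Remark 10.1.1(2) can be checked concretely. The Python sketch below is ours and illustrative only (the helper `subsets` is an assumed name): it confirms that the \(\subseteq\)-up-closure of \(\{\{0,1\},\{1,2\},\{2,0\}\}\) in the discrete semitopology on \(\{0,1,2\}\) is a semifilter (nonempty, up-closed, compatible) but is not down-directed, and hence is not a filter.

```python
from itertools import combinations

def subsets(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

opens = set(subsets({0, 1, 2}))                    # the discrete semitopology
gen = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 0})]
F = frozenset(O for O in opens if any(g <= O for g in gen))   # up-closure of the generators

# F is a semifilter: nonempty, up-closed, compatible.
assert F
assert all(O2 in F for O1 in F for O2 in opens if O1 <= O2)
assert all(O1 & O2 for O1 in F for O2 in F)

# ...but not a filter: {0,1} and {1,2} have no lower bound in F (down-directedness fails).
a, b = frozenset({0, 1}), frozenset({1, 2})
assert not any(O <= a and O <= b for O in F)
```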
We now examine them in more detail. We start with some easy definitions and results:

**Lemma 10.1.2**.: **(Characterisation of maximal semifilters)** Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) is a semifilter. Then the following conditions are equivalent: 1. \(F\) is maximal. 2. For every \(x\in\mathsf{X}\), \(x*F\) if and only if \(x\in F\).

**Proof:** We prove two implications: * _Suppose \(F\) is a maximal semifilter._ Suppose \(x\in F\). Then \(x*F\) is immediate from Notation 10.2.3(1) and semifilter compatibility (Definition 7.1.1(4)). Suppose \(x*F\); thus by Notation 10.2.3(1) \(x\) is compatible with (every element of) \(F\). We note that the \(\leq\)-up-closure of \(\{x\}\cup F\) is a semifilter (nonempty, up-closed, compatible). By maximality, \(x\in F\). * _Suppose \(x*F\) if and only if \(x\in F\), for every \(x\in\mathsf{X}\)._ Suppose \(F^{\prime}\) is a semifilter and \(F\subseteq F^{\prime}\). Consider \(x^{\prime}\in F^{\prime}\). Then \(x^{\prime}*F\) by compatibility of \(F^{\prime}\) (since \(F\subseteq F^{\prime}\)), and so \(x^{\prime}\in F\). Thus, \(F^{\prime}\subseteq F\). \(\sqcap\)\(\sqcup\)

**Definition 10.1.3**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(x\in\mathsf{X}\). Call \(x\) **transitive** when: 1. \(x\neq\bot_{\mathsf{X}}\). 2. \(x^{\prime}*x*x^{\prime\prime}\) implies \(x^{\prime}*x^{\prime\prime}\), for every \(x^{\prime},x^{\prime\prime}\in\mathsf{X}\).

'Being topen' in semitopologies (Definition 3.2.2(2)) corresponds to 'being transitive' in semiframes (Definition 10.1.3):

**Lemma 10.1.4**.: **(Characterisation of topen sets)** Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(O\in\mathsf{Open}\). Then the following are equivalent: 1. \(O\) is topen in \((\mathsf{P},\mathsf{Open})\) in the sense of Definition 3.2.2(2). 2. \(O\) is transitive in \((\mathsf{Open},\subseteq,\between)\) in the sense of Definition 10.1.3.30 Footnote 30: _Confusing terminology alert:_ Definition 3.2.2(1) also has a notion of _transitive set_. The notion of transitive set is well-defined for a set that may not be open. In the world of semiframes, we just have elements of the semiframe (which correspond, intuitively, to open sets). Thus _transitive_ semiframe elements correspond to (nonempty) transitive open sets of a semitopology, which are called _topens_.

**Proof:** We unpack the definitions and note that the condition for being topen -- being a nonempty open set that is transitive for \(\between\) -- is identical to the condition for being transitive in \((\mathsf{Open},\subseteq,\between)\) -- being a non-\(\bot_{\mathsf{Open}}\) element that is transitive for \(*=\between\). \(\sqcap\)\(\sqcup\)

### The compatibility system \(x^{*}\)

**Definition 10.2.1**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(x\in\mathsf{X}\). Then define \(x^{*}\) the **compatibility system** of \(x\) by \[x^{*}=\{x^{\prime}\mid x^{\prime}*x\}.\]

**Lemma 10.2.2**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(X\subseteq\mathsf{X}\). Then \((\bigvee X)^{*}=\bigcup_{x\in X}x^{*}\).

Proof.: We just follow the definitions: \[\begin{aligned} y\in(\bigvee X)^{*} &\Longleftrightarrow y*\bigvee X && \text{Definition 10.2.1}\\ &\Longleftrightarrow\exists x{\in}X.y*x && \text{Definition 6.2.1(3)}\\ &\Longleftrightarrow\exists x{\in}X.y\in x^{*} && \text{Definition 10.2.1}\\ &\Longleftrightarrow y\in\bigcup_{x\in X}x^{*} && \text{Fact of sets} \end{aligned}\]

**Notation 10.2.3**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(X,Y\subseteq\mathsf{X}\) and \(x\in\mathsf{X}\). Then we generalise \(x*y\) to \(x*Y\), \(X*y\), and \(X*Y\) as follows: 1. Write \(x*Y\) for \(\forall y{\in}Y.x*y\). 2. Write \(X*y\) for \(\forall x{\in}X.x*y\). 3. Write \(X*Y\) for \(\forall x{\in}X.\forall y{\in}Y.x*y\).
We read \(x*Y\) as '\(x\) is **compatible** with \(Y\)', and similarly for \(X*y\) and \(X*Y\).

**Remark 10.2.4**.: We will see later on in Lemma 10.7.1 that \(X*X^{\prime}\) generalises \(p\between p^{\prime}\), in the sense that if \(X=nbhd(p)\) and \(X^{\prime}=nbhd(p^{\prime})\), then \(p\between p^{\prime}\) if and only if \(nbhd(p)*nbhd(p^{\prime})\).

**Lemma 10.2.5**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(x\in\mathsf{X}\) is transitive. Then the following are equivalent for every \(y\in\mathsf{X}\): \[y*x\quad\Longleftrightarrow\quad y\in x^{*}\quad\Longleftrightarrow\quad y*x^{*}.\]

Proof.: We prove a cycle of implications: * Suppose \(y*x\). Then \(y\in x^{*}\) is direct from Definition 10.2.1. * Suppose \(y\in x^{*}\). Then \(y*x^{*}\) -- meaning by Notation 10.2.3(1) that \(y*x^{\prime}\) for every \(x^{\prime}\in x^{*}\) -- follows by transitivity of \(x\). * Suppose \(y*x^{*}\). By proper reflexivity of \(*\) (Definition 6.2.1(2); since \(x\neq\bot_{\mathsf{X}}\)) \(x\in x^{*}\), and \(y*x\) follows. \(\sqcap\)\(\sqcup\)

**Proposition 10.2.6**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and suppose \(\bot_{\mathsf{X}}\neq x\in\mathsf{X}\). Then the following are equivalent: 1. \(x\) is transitive. 2. \(x^{*}\) is a completely prime semifilter (i.e. an abstract point). 3. \(x^{*}\) is a semifilter. 4. \(x^{*}\) is compatible. 5. \(x^{*}\) is a maximal semifilter.

Proof.: We first prove a cycle of implications amongst parts 1 to 4: 1. Suppose \(x\) is transitive. We need to check that \(x^{*}\) is nonempty, up-closed, compatible, and completely prime. We consider each property in turn: * \(x*x\) by proper reflexivity of \(*\) (Definition 6.2.1(2); since \(x\neq\bot_{\mathsf{X}}\)), so \(x\in x^{*}\). * It follows from monotonicity of \(*\) (Lemma 6.2.3(1)) that if \(x^{\prime}\leq x^{\prime\prime}\) and \(x*x^{\prime}\) then \(x*x^{\prime\prime}\). * Suppose \(x^{\prime}*x*x^{\prime\prime}\). By transitivity of \(x\) (Definition 10.1.3), \(x^{\prime}*x^{\prime\prime}\). * Suppose \(x*\bigvee X^{\prime}\); then by distributivity of \(*\) (Definition 6.2.1(3)) \(x*x^{\prime}\) for some \(x^{\prime}\in X^{\prime}\). 2. If \(x^{*}\) is a completely prime semifilter, then it is certainly a semifilter. 3. If \(x^{*}\) is a semifilter, then it is compatible (Definition 7.1.1(5&4)). 4. Suppose \(x^{*}\) is compatible (Definition 7.1.1(4)) and suppose \(x^{\prime}*x*x^{\prime\prime}\). By Lemma 10.2.5 \(x^{\prime},x^{\prime\prime}\in x^{*}\), and by compatibility of \(x^{*}\) we have \(x^{\prime}*x^{\prime\prime}\). Thus, \(x\) is transitive. To conclude, we prove two implications between parts 3 and 5: * Suppose \(x^{*}\) is a semifilter. By equivalence of parts 3 and 1 of this result, \(x\) is transitive, and so using Lemma 10.2.5 \(x^{\prime}*x^{*}\) if and only if \(x^{\prime}\in x^{*}\). By Lemma 10.1.2, \(x^{*}\) is maximal. * Clearly, if \(x^{*}\) is a maximal semifilter then it is a semifilter. \(\sqcap\)\(\sqcup\)

### The compatibility system \(F^{*}\)

#### 10.3.1 Basic definitions and results

**Definition 10.3.1**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and suppose \(F\subseteq\mathsf{X}\) (\(F\) may be a semifilter, but the definition does not depend on this).
### The compatibility system \(F^{*}\)

#### 10.3.1 Basic definitions and results

**Definition 10.3.1**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and suppose \(F\subseteq\mathsf{X}\) (\(F\) may be a semifilter, but the definition does not depend on this). Define \(F^{*}\) the **compatibility system** of \(F\) by \[F^{*}=\{x\in\mathsf{X}\mid x*F\}.\]

Unpacking Notation 10.2.3(1), and combining with Definition 10.2.1, we can write: \[F^{*}=\{x\in\mathsf{X}\mid x*F\}=\{x^{\prime}\in\mathsf{X}\mid\forall x{\in}F.x^{\prime}*x\}=\bigcap\{x^{*}\mid x\in F\}.\]

Lemma 10.3.2 presents one easy and useful example of Definition 10.3.1:

**Lemma 10.3.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and suppose \(p\in\mathsf{P}\) and \(O^{\prime}\in\mathsf{Open}\). Then: \[O^{\prime}\in\mathit{nbhd}(p)^{*}\Longleftrightarrow\forall O{\in}\mathsf{Open}.(p\in O\Longrightarrow O^{\prime}\between O)\] \[O^{\prime}\not\in\mathit{nbhd}(p)^{*}\Longleftrightarrow\exists O{\in}\mathsf{Open}.(p\in O\wedge O^{\prime}\not\between O).\]

Proof.: We just unpack Definitions 8.2.1 and 10.3.1.

**Lemma 10.3.3**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\). Then \(F^{*}\) is up-closed.

Proof.: This is just from Definition 10.3.1 and monotonicity of \(*\) (Lemma 6.2.3(1)).

**Lemma 10.3.4**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) is a semifilter. Then:

1. If \(x\in F\) then \(F\subseteq x^{*}\).
2. As a corollary, \(F\subseteq F^{*}\).

Proof.: Suppose \(x\in F\). By compatibility of \(F\) (Definition 7.1.1(4)), \(x^{\prime}*x\) for every \(x^{\prime}\in F\). It follows from Definition 10.2.1 that \(F\subseteq x^{*}\). The corollary is immediate from Definition 10.3.1.

We can use Lemma 10.3.4 and Definition 10.3.1 to give a more succinct rendering of Lemma 10.1.2:

**Corollary 10.3.5**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) is a semifilter. Then the following are equivalent:

1. \(F\) is maximal.
2. \(F^{*}=F\).
3. \(F^{*}\subseteq F\).

Proof.: Equivalence of parts 1 and 2 just repeats Lemma 10.1.2 using Definition 10.3.1. To prove equivalence of parts 2 and 3 we use Lemma 10.3.4(2).

#### 10.3.2 Strong compatibility: when \(F^{*}\) is a semifilter

Proposition 10.2.6 relates good properties of \(x\) (transitivity) to good properties of its compatibility system \(x^{*}\) (e.g. being compatible). It will be helpful to ask similar questions of \(F^{*}\). What good properties are of interest for \(F^{*}\), and what conditions can we impose on \(F\) to guarantee them?

**Definition 10.3.6**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe. Then:

1. Call \(F\subseteq\mathsf{X}\) **strongly compatible** when \(F^{*}\) is nonempty and compatible.
2. Call \((\mathsf{X},\leq,*)\) **strongly compatible** when every abstract point (completely prime semifilter) \(P\subseteq\mathsf{X}\) is strongly compatible.

**Remark 10.3.7**.: For the reader's convenience we unpack Definition 10.3.6.

1. By Definition 7.1.1(4), \(F^{*}\) is compatible when \(x*x^{\prime}\) for every \(x,x^{\prime}\in F^{*}\). Combining this with Definition 10.3.1 and Notation 10.2.3, \(F^{*}\) is compatible when \(x*F*x^{\prime}\) implies \(x*x^{\prime}\), for every \(x,x^{\prime}\in\mathsf{X}\). Thus, \(F\) is strongly compatible when \[\forall x,x^{\prime}\in\mathsf{X}.\ x*F*x^{\prime}\Longrightarrow x*x^{\prime}.\]
2. \((\mathsf{X},\leq,*)\) is strongly compatible when every abstract point \(P\in\mathsf{Point}(\mathsf{X},\leq,*)\) is strongly compatible in the sense just given above.
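As a quick sanity check of these definitions, here is a small Python sketch (ours) computing \(F^{*}\), testing the maximality criterion \(F^{*}=F\) of Corollary 10.3.5, and testing the unpacked strong compatibility condition of Remark 10.3.7(1); the function names are our own labels.

```python
# F^* (Definition 10.3.1), the maximality test F^* = F (Corollary 10.3.5),
# and strong compatibility (Remark 10.3.7(1)), on a small open-set semiframe.
OPEN = [frozenset(s) for s in [(), (0,), (0, 1), (0, 1, 2)]]

def between(x, y):
    return bool(x & y)

def F_star(F):               # F^* = { x | x * F }
    return {x for x in OPEN if all(between(x, y) for y in F)}

def strongly_compatible(F):  # x * F * x'  implies  x * x', with F^* nonempty
    Fs = F_star(F)
    return bool(Fs) and all(between(x, y) for x in Fs for y in Fs)

F = {x for x in OPEN if 0 in x}   # nbhd(0): a semifilter in this semitopology
print(F_star(F) == F)             # True: F is maximal, by Corollary 10.3.5
print(strongly_compatible(F))     # True: all opens containing 0 pairwise intersect
```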
**Remark 10.3.8**.: The reader may note that the strong compatibility condition on \(F\) from Definition 10.3.6 closely resembles the condition of \(x\) being transitive from Definition 10.1.3. Why do we not call \(F\) 'transitive' instead of 'strongly compatible'? Because this might be confusing. Specifically, it is possible -- indeed, it is very natural -- for \(x\) to be transitive but for \(x^{*}\) to not be strongly compatible. Take \(\mathbb{N}\) with the discrete semitopology: then \(0\) is transitive, but \(0^{*}=\{N\subseteq\mathbb{N}\ |\ 0\in N\}\) is not strongly compatible. So yes, there is a design similarity between Definitions 10.1.3 and 10.3.6, but we distinguish them for clarity.

**Lemma 10.3.9**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and suppose \(F\subseteq\mathsf{X}\) is nonempty. Then the following are equivalent:

1. \(F^{*}\) is a semifilter.
2. \(F^{*}\) is compatible.
3. \(F\) is strongly compatible.

**Proof:** Equivalence of parts 2 and 3 is just Definition 10.3.6. For equivalence of parts 1 and 2 we prove two implications:

* Suppose \(F^{*}\) is a semifilter. Then it is compatible by assumption in Definition 7.1.1(5).
* Suppose \(F^{*}\) is compatible. It is up-closed by Lemma 10.3.3, and nonempty by Lemma 10.3.4(2) (since \(F\) is nonempty). Thus, by Definition 7.1.1(5) \(F^{*}\) is a semifilter.

**Lemma 10.3.10**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and suppose \(F\subseteq\mathsf{X}\). Then it is not necessarily the case that \(F^{*}\) is a semifilter. This non-implication holds even in strong well-behavedness conditions: that \((\mathsf{X},\leq,*)\) is spatial and \(F\) is an abstract point (a completely prime semifilter).

**Proof:** It suffices to provide a counterexample. Let \((\mathsf{P},\mathsf{Open})=(\{0,1,2\},\{\varnothing,\{0\},\{2\},\mathsf{P}\})\), as illustrated in the top-left semitopology in Figure 3. Take \((\mathsf{X},\leq,*)=(\mathsf{Open},\subseteq,\between)\) (which is spatial by Proposition 8.2.6) and set \(F=nbhd(1)=\{\{0,1,2\}\}\). Then \(nbhd(1)^{*}=\{\{0\},\{2\},\{0,1,2\}\}\), and this is not compatible because \(\{0\}\not\between\{2\}\). Thus \(nbhd(1)^{*}\) is not a semifilter. \(\sqcap\)\(\sqcup\)
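The counterexample above is small enough to verify mechanically. The following Python sketch (ours) recomputes \(nbhd(1)^{*}\) for \(\mathsf{Open}=\{\varnothing,\{0\},\{2\},\{0,1,2\}\}\) and confirms that it is not compatible; names are our own.

```python
# Replay the counterexample in the proof of Lemma 10.3.10.
OPEN = [frozenset(s) for s in [(), (0,), (2,), (0, 1, 2)]]

def between(x, y):
    return bool(x & y)

def nbhd(p):                 # neighbourhood semifilter of a point
    return {O for O in OPEN if p in O}

def F_star(F):               # F^*  (Definition 10.3.1)
    return {x for x in OPEN if all(between(x, y) for y in F)}

Fs = F_star(nbhd(1))
print(sorted(tuple(sorted(s)) for s in Fs))      # [(0,), (0, 1, 2), (2,)]
print(between(frozenset({0}), frozenset({2})))   # False: F^* is not compatible
```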
### Semiframe characterisation of community

**Remark 10.4.1**.: We saw the notion of \(K(p)\) the _community_ of a point in Definition 4.1.3(1). In this Subsection we construct an analogue to it in semiframes. We will give two characterisations: one in Definition 10.4.5, and another in Proposition 10.4.7.

We will mostly be interested in Definition 10.4.2 when \(F\) is a semifilter, but the definition does not require this:

**Definition 10.4.2**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) and \(x\in\mathsf{X}\). Then define \(F^{c}\in\mathsf{X}\), \(F^{*c}\in\mathsf{X}\), and \(x^{*c}\in\mathsf{X}\) by \[F^{c}=\bigvee\{y\in\mathsf{X}\mid y\not\in F\}\qquad F^{*c}=(F^{*})^{c}\qquad x^{*c}=(x^{*})^{c}.\]

**Remark 10.4.3**.: We unpack the definitions of \(F^{*c}\) and \(x^{*c}\):

1. By Definitions 10.4.2 and 10.3.1, \[F^{*c}=(F^{*})^{c}=\bigvee\{y\in\mathsf{X}\mid y\not\in F^{*}\}=\bigvee\{y\in\mathsf{X}\mid\neg(y*F)\}.\]
2. Similarly, \[x^{*c}=(x^{*})^{c}=\bigvee\{y\in\mathsf{X}\mid y\not\in x^{*}\}=\bigvee\{y\in\mathsf{X}\mid\neg(y*x)\}.\]
3. By Definitions 10.4.2 and 10.3.1 we have, for an open set \(O\) in the semiframe \((\mathsf{Open},\subseteq,\between)\), \[O^{*c}=(O^{*})^{c}=\bigcup\{O^{\prime}{\in}\mathsf{Open}\mid O^{\prime}\not\in O^{*}\}=\bigcup\{O^{\prime}{\in}\mathsf{Open}\mid O^{\prime}\not\between O\}.\]
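In the open-set semiframe \((\mathsf{Open},\subseteq,\between)\), joins are unions, so the operations of Definition 10.4.2 can be computed directly. The following Python sketch (ours) does this for the running three-point example; function names are our own labels.

```python
# F^c and F^{*c} (Definition 10.4.2) in the open-set semiframe, where joins are unions.
OPEN = [frozenset(s) for s in [(), (0,), (2,), (0, 1, 2)]]

def between(x, y):
    return bool(x & y)

def F_star(F):               # F^*  (Definition 10.3.1)
    return {x for x in OPEN if all(between(x, y) for y in F)}

def complement(F):           # F^c = union of all opens not in F
    out = frozenset()
    for O in OPEN:
        if O not in F:
            out |= O
    return out

nbhd1 = {O for O in OPEN if 1 in O}     # nbhd(1) = {{0,1,2}}
print(sorted(complement(nbhd1)))        # [0, 2]: nbhd(1)^c
print(sorted(complement(F_star(nbhd1))))  # []: nbhd(1)^{*c}, since only the
                                          # empty set fails to meet nbhd(1)^*
```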
### Semiframe characterisation of regularity

We now have enough to generalise the notions of quasiregularity, weak regularity, and regularity from semitopologies (Definition 4.1.3 parts 5, 4, and 3) to semiframes:

**Definition 10.5.1**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) is a semifilter.

1. Call \(F\) **quasiregular** when \(k(F)\neq\mathbf{\perp}_{\mathsf{X}}\).
2. Call \(F\) **weakly regular** when \(k(F)\in F\).
3. Call \(F\) **regular** when \(k(F)\in F\) and \(k(F)\) is transitive.

**Lemma 10.5.2**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) is a semifilter. Then:

1. If \(F\) is regular then \(F\) is weakly regular.
2. If \(F\) is weakly regular then \(F\) is quasiregular.

Proof.: Part 1 is immediate from Definition 10.5.1. For part 2, if \(k(F)\in F\) then \(k(F)\neq\mathbf{\perp}_{\mathsf{X}}\), since \(F\) is compatible (Definition 7.1.1(4)) and \(\mathbf{\perp}_{\mathsf{X}}*\mathbf{\perp}_{\mathsf{X}}\) fails by proper reflexivity of \(*\) (Definition 6.2.1(2)).
**Theorem 10.5.4**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) is a semifilter. Then \(F\) is regular if and only if \(F\) is weakly regular and strongly compatible. We can write this succinctly as follows: Regular = weakly regular + strongly compatible.32

Footnote 32: Compare this slogan with the version for semitopologies in Theorem 5.5.4.

**Proof:** Suppose \(F\) is weakly regular and strongly compatible. By Lemma 10.5.3(3) \(k(F)\) is transitive, and by Definition 10.5.1(3) \(F\) is regular. For the converse implication we just reverse the reasoning above.

### Semiframe characterisation of (quasi/weak)regularity

The direct translation in Definition 10.5.1 of parts 5, 4, and 3 of Definition 4.1.3, along with the machinery we have now built, makes Lemma 10.6.1 easy to prove:

**Lemma 10.6.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Recall from Definition 8.2.1 and Proposition 8.2.3(1) that \(nbhd(p)=\{O\in\mathsf{Open}\mid p\in O\}\) is a (completely prime) semifilter. Then:

1. \(p\) is quasiregular in the sense of Definition 4.1.3(5) if and only if \(nbhd(p)\) is quasiregular in the sense of Definition 10.5.1(1).
2. \(p\) is weakly regular in the sense of Definition 4.1.3(4) if and only if \(nbhd(p)\) is weakly regular in the sense of Definition 10.5.1(2).
3. \(p\) is regular in the sense of Definition 4.1.3(3) if and only if \(nbhd(p)\) is regular in the sense of Definition 10.5.1(3).

**Proof:** We consider each part in turn:

1. Suppose \(p\) is quasiregular. By Definition 4.1.3(5) \(K(p)\neq\varnothing\).
By Proposition 10.4.6 \(k(nbhd(p))\neq\varnothing=\mathbf{\perp}_{\mathsf{Open}}\), which is quasiregularity in the sense of Definition 10.5.1(1).
2. & 3. The cases of weak regularity and regularity are analogous, using Proposition 10.4.6 to relate \(K(p)\) and \(k(nbhd(p))\), Proposition 8.2.3(2) to relate \(p\in K(p)\) and \(K(p)\in nbhd(p)\), and (for regularity) Lemma 10.1.4 to relate topen sets and transitive elements; compare the proof of Proposition 10.6.2 below. In each case the reverse implication follows by reversing the reasoning. \(\sqcap\)\(\sqcup\)
**Proposition 10.6.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then

* \(p\) is quasiregular / weakly regular / regular in \((\mathsf{P},\mathsf{Open})\) in the sense of Definition 4.1.3 if and only if
* \(nbhd(p)\) is quasiregular / weakly regular / regular in \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\) in the sense of Definition 10.5.1.

Proof.: We consider just the case of regularity; quasiregularity and weak regularity are no different. Suppose \(p\) is regular. By Definition 4.1.3(3) \(p\in K(p)\in\mathsf{Topen}\). It follows from Lemma 10.1.4 that \(K(p)\) is transitive in \((\mathsf{Open},\subseteq,\between)\), and from Proposition 8.2.3(2) that \(K(p)\in nbhd(p)\). It follows from Proposition 10.4.6 that \(nbhd(p)\) is regular in the sense of Definition 10.5.1(3).

**Corollary 10.6.3**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(F\subseteq\mathsf{X}\) is a semifilter. Then the converse implications in Lemma 10.5.2 need not hold: \(F\) may be quasiregular but not regular, and it may be weakly regular but not regular, and it may not even be quasiregular.

Proof.: It suffices to provide counterexamples. We easily obtain these by using Proposition 10.6.2 to consider \(nbhd(p)\) for \(p\in\mathsf{P}\) as used in Lemma 4.1.5(3&4).

### Characterisation of being intertwined

This Subsection continues Remark 10.2.4. The notion of points being intertwined from Definition 3.6.1(1) generalises in semiframes to the notion of semifilters being compatible:

**Lemma 10.7.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p,p^{\prime}\in\mathsf{P}\). Then \[p\between p^{\prime}\quad\Longleftrightarrow\quad nbhd(p)*nbhd(p^{\prime})\quad\Longleftrightarrow\quad nbhd(p)\between nbhd(p^{\prime}).\]

For clarity and precision we unpack this. The following are equivalent:

1. \(p\between p^{\prime}\) in the semitopology \((\mathsf{P},\mathsf{Open})\) (Definition 3.6.1(1)). In words: the point \(p\) is intertwined with the point \(p^{\prime}\).
2. \(nbhd(p)*nbhd(p^{\prime})\) in the semiframe \((\mathsf{Open},\subseteq,\between)\) (Notation 10.2.3(3)). In words: the abstract point \(nbhd(p)\) is compatible with the abstract point \(nbhd(p^{\prime})\).
3. \(nbhd(p)\between nbhd(p^{\prime})\) in the semitopology \(\mathsf{St}(\mathsf{Open},\subseteq,\between)\) (Definition 3.6.1(1)). In words: the point \(nbhd(p)\) is intertwined with the point \(nbhd(p^{\prime})\).

Proof.: We unpack definitions:

* By Definition 3.6.1(1) \(p\between p^{\prime}\) when for every pair of open neighbourhoods \(p\in O\) and \(p^{\prime}\in O^{\prime}\) we have \(O\between O^{\prime}\).
* By Notation 10.2.3(3) \(nbhd(p)*nbhd(p^{\prime})\) when for every \(O\in nbhd(p)\) and \(O^{\prime}\in nbhd(p^{\prime})\) we have \(O*O^{\prime}\). By Proposition 8.2.3(2) we can simplify this to: \(p\in O\) and \(p^{\prime}\in O^{\prime}\) implies \(O*O^{\prime}\).
* By Definition 3.6.1(1) and Theorem 9.1.4, \(nbhd(p)\between nbhd(p^{\prime})\) when: for every pair of open neighbourhoods \(nbhd(p)\in\mathit{Op}(O)\) and \(nbhd(p^{\prime})\in\mathit{Op}(O^{\prime})\) we have \(\mathit{Op}(O)\between\mathit{Op}(O^{\prime})\). By Proposition 8.2.3(2) we can simplify this to: \(p\in O\) and \(p^{\prime}\in O^{\prime}\) implies \(\mathit{Op}(O)\between\mathit{Op}(O^{\prime})\). By Proposition 7.3.3(2) we can simplify this further to: \(p\in O\) and \(p^{\prime}\in O^{\prime}\) implies \(O*O^{\prime}\).

But by definition, the compatibility relation \(*\) of \((\mathsf{Open},\subseteq,\between)\) is \(\between\), so \(O*O^{\prime}\) and \(O\between O^{\prime}\) are the same assertion. The equivalences follow. \(\sqcap\)\(\sqcup\)

The property of being intertwined is preserved and reflected when we use \(nbhd\) to map to the soberified space:

**Corollary 10.7.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p,p^{\prime}\in\mathsf{P}\). Then \(p\between p^{\prime}\) in \((\mathsf{P},\mathsf{Open})\) if and only if \(nbhd(p)\between nbhd(p^{\prime})\) in \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\).

Proof.: This just reiterates the equivalence of parts 1 and 3 in Lemma 10.7.1.
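Lemma 10.7.1 is a definitional unpacking, so on a finite semitopology both sides are computed by the same loop. The following Python sketch (ours) evaluates the intertwined relation on the three-point example with \(\mathsf{Open}=\{\varnothing,\{0\},\{2\},\{0,1,2\}\}\); names are our own.

```python
# Compute p between p' (equivalently nbhd(p) * nbhd(p'), Lemma 10.7.1)
# on a finite semitopology.  Names are ours.
P = {0, 1, 2}
OPEN = [frozenset(s) for s in [(), (0,), (2,), (0, 1, 2)]]

def between(x, y):
    return bool(x & y)

def nbhd(p):
    return [O for O in OPEN if p in O]

def intertwined(p, q):       # every neighbourhood of p meets every one of q
    return all(between(O, Oq) for O in nbhd(p) for Oq in nbhd(q))

print(sorted((p, q) for p in P for q in P if p < q and intertwined(p, q)))
# [(0, 1), (1, 2)]: 1 is intertwined with 0 and with 2, but 0 is not
# intertwined with 2, so 1 is a conflicted point.
```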
**Proposition 10.7.3**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then:

1. It may be that \((\mathsf{P},\mathsf{Open})\) is unconflicted (meaning that it contains no conflicted points), but the semitopology \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\) contains a conflicted point.
2. It may further be that \((\mathsf{P},\mathsf{Open})\) is unconflicted and \(p\in\mathsf{P}\) is such that \(nbhd(p)\) is conflicted in the semitopology \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\).

We can summarise the two assertions above as follows:

1. Soberifying a space might introduce a conflicted point, even if none was originally present.
2. Soberifying a space can make a point that was unconflicted, into a point that is conflicted.33

Footnote 33: If we stretch the English language, we might say that soberifying a space can conflictify one of its points.

Proof.: It suffices to provide counterexamples.

1. Consider the right-hand semitopology in Figure 8; this is unconflicted because every point is intertwined only with itself. The soberification of this space is illustrated in the right-hand semitopology in Figure 9. Each of the extra points is intertwined with the two numbered points next to it; e.g. the extra point in the open set \(A\) -- write it \(\bullet_{A}\) (in-between \(3\) and \(0\)) -- is intertwined with \(0\) and \(3\); so \(3\between\bullet_{A}\between 0\). However, the reader can check that \(3\not\between 0\). Thus, \(\bullet_{A}\) is conflicted.
2. We define \((\mathsf{P},\mathsf{Open})\) by:
   * \(\mathsf{P}=(-1,1)\) (real numbers strictly between \(-1\) and \(1\)).
   * \(\mathsf{Open}\) is generated by:
     * All open intervals that do not contain \(0\); so this is open intervals \((r_{1},r_{2})\) where \(-1\leq r_{1}<r_{2}\leq 0\) or \(0\leq r_{1}<r_{2}\leq 1\).
     * All of the open intervals \((-1/n,1/n)\), for \(n\geq 2\).

   The reader can check that:
   * Points in this semitopology are intertwined only with themselves.
   * The soberification includes four additional points, corresponding to completely prime semifilters \(-1/0\) generated by \(\{(-1/n,0)\mid n\geq 2\}\) and \(+1/0\) generated by \(\{(0,1/n)\mid n\geq 2\}\), and to the endpoints \(-1\) and \(+1\).
   * \(-1/0\) and \(+1/0\) are intertwined with \(0\), but are not intertwined with one another.

   Thus, \(0\) is conflicted in \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\) but not in \((\mathsf{P},\mathsf{Open})\).

**Remark 10.7.4**.: Proposition 10.7.3 may seem surprising in view of Corollary 10.7.2, but the key observation is that the soberified space may add points to the original space. These points can add conflicting behaviour that is 'hidden' in the completely prime semifilters of the original space. Thus, Proposition 10.7.3 shows that the property of 'being unconflicted' _cannot_ be characterised purely in terms of the semiframe of open sets -- if it could be, then soberification would make no difference, by Theorem 9.1.4(3). There is nothing wrong with that, except that this is a paper about semiframes. We can now look for some other condition -- but one having to do purely with open sets -- that might play a similar role in the theory of (weak/quasi)regularity of semiframes, as being unconflicted does in the theory of (weak/quasi)regularity of semitopologies. We already saw a candidate for this in Theorem 10.5.4: _strong compatibility_. We examine this next.

### Strong compatibility in semitopologies

**Remark 10.8.1**.: Note that:

1. Theorem 10.5.4 characterises 'regular' for semiframes as 'weakly regular + strongly compatible'.
2. Theorem 5.5.4 characterises 'regular' for semitopologies as 'weakly regular + unconflicted'.

We know from results like Lemma 10.6.1 and Corollary 10.7.2 that there are accurate correspondences between notions of regularity in semiframes and semitopologies. This is by design, e.g. in Definition 10.5.1; we designed the semiframe definitions so that semiframe regularity and semitopological regularity would match up closely. Yet there are differences too, since Theorem 10.5.4 uses strong compatibility, and Theorem 5.5.4 uses being unconflicted. What is the difference here, and why does it arise? One answer is given by Proposition 10.7.3, which illustrates that the condition of 'unconflicted' (which comes from semitopologies) does not sit comfortably with the 'pointless' semiframe definitions. This raises the question of how strong compatibility (which comes from semiframes) translates into the context of semitopologies, and how this relates to being (un)conflicted. We look into this now; see Remark 10.8.11 for a summary.

We can translate the notion of _strongly compatible filter_ (Definition 10.3.6(1)) to semitopologies in the natural way, just applying it to the neighbourhood semifilter \(nbhd(p)\) of a point (Definition 8.2.1):

**Definition 10.8.2**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then call \(p\in\mathsf{P}\) **strongly compatible** when the (by Example 7.1.4(1)) abstract point \(nbhd(p)\) is strongly compatible (Definition 10.3.6) as a semifilter in \((\mathsf{Open},\subseteq,\between)\).
We unpack what Definition 10.8.2 means concretely:

**Lemma 10.8.3**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then the following are equivalent:

1. \(p\) is strongly compatible (Definition 10.8.2).
2. \(nbhd(p)\) is strongly compatible.
3. \(nbhd(p)^{*}\) is compatible.
4. For every \(O^{\prime},O^{\prime\prime}\in\mathsf{Open}\), if \(O^{\prime}*nbhd(p)*O^{\prime\prime}\) then \(O^{\prime}\between O^{\prime\prime}\).

(Above, \(O^{\prime}*nbhd(p)\) follows Notation 10.2.3(1) and means that \(O^{\prime}\between O\) for every \(p\in O\in\mathsf{Open}\), and similarly for \(nbhd(p)*O^{\prime\prime}\).)

**Proof:** Equivalence of parts 1 and 2 is Definition 10.8.2. Equivalence of parts 2 and 3 is Definition 10.3.6(1). For the equivalence of parts 3 and 4, we just unpack what it means for \(nbhd(p)^{*}\) to be compatible (see Remark 10.3.7). \(\sqcap\)\(\sqcup\)

'\(p\in\mathsf{P}\) is strongly compatible' is a strictly stronger condition than '\(p\in\mathsf{P}\) is unconflicted':

**Lemma 10.8.4**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then:

1. If \(p\) is strongly compatible then it is unconflicted.
2. If \(p\) is conflicted then: \(p\) is not strongly compatible, \(nbhd(p)\) is not strongly compatible, and \(nbhd(p)^{*}\) is not compatible.
3. The reverse implication need not hold, even if \((\mathsf{P},\mathsf{Open})\) is sober:34 it is possible for \(p\) to be unconflicted but not strongly compatible.

Footnote 34: \(\dots\) meaning that every abstract point in \((\mathsf{Open},\subseteq,\between)\) is the neighbourhood semifilter of a unique concrete point in \(\mathsf{P}\).

**Proof:** We consider each part in turn:

1. Suppose \(p\) is strongly compatible and suppose \(p^{\prime}\between p\between p^{\prime\prime}\); we must show that \(p^{\prime}\between p^{\prime\prime}\). Consider open neighbourhoods \(p^{\prime}\in O^{\prime}\) and \(p^{\prime\prime}\in O^{\prime\prime}\). By assumption \(p^{\prime}\between p\) and so by Lemma 10.7.1(1&2) \(nbhd(p^{\prime})*nbhd(p)\). Since \(O^{\prime}\in nbhd(p^{\prime})\), it follows that \(O^{\prime}*nbhd(p)\), and similarly it follows that \(nbhd(p)*O^{\prime\prime}\). Then by strong compatibility, \(O^{\prime}\between O^{\prime\prime}\) as required.
2. We take the contrapositive of part 1 of this result, and use Lemma 10.8.3.
3. It suffices to provide a counterexample. Consider the bottom right semitopology in Figure 3, and take \(p=*\) and \(O^{\prime}=\{1\}\) and \(O^{\prime\prime}=\{0,2\}\). Note that:
   * \(*\) is unconflicted, since it is intertwined only with itself and \(1\).
   * \(O^{\prime}\) and \(O^{\prime\prime}\) intersect every open neighbourhood of \(*\), but \(O^{\prime}\not\between O^{\prime\prime}\), so \(*\) is not strongly compatible.

   This space is sober: the only completely prime filters are the neighbourhood semifilters of \(*\), \(0\), \(1\), and \(2\).

**Example 10.8.5**.: Continuing Lemma 10.8.4(3), it is possible for a point to be strongly compatible (Definition 10.8.2) but not regular, or even quasiregular (Definition 4.1.3(3, 5)). Consider the right-hand semitopology illustrated in Figure 8 and take \(p=0\). The reader can check that \(p\) is strongly compatible, but it is not quasiregular (i.e. \(K(p)=\varnothing\)) and thus also not regular.
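For a finite semitopology, both being conflicted and being strongly compatible are directly computable, which makes Lemma 10.8.4(2) easy to observe (the part-3 counterexample depends on Figure 3, which we do not reproduce here). The following Python sketch (ours) checks the three-point example used earlier; names are our own.

```python
# Lemma 10.8.4(2) on Open = {∅, {0}, {2}, {0,1,2}}: the conflicted point 1
# is not strongly compatible.  Names are ours.
P = {0, 1, 2}
OPEN = [frozenset(s) for s in [(), (0,), (2,), (0, 1, 2)]]

def between(x, y):
    return bool(x & y)

def nbhd(p):
    return [O for O in OPEN if p in O]

def strongly_compatible(p):  # Lemma 10.8.3(4): nbhd(p)^* is pairwise compatible
    star = [O for O in OPEN if all(between(O, N) for N in nbhd(p))]
    return all(between(A, B) for A in star for B in star)

def intertwined(p, q):
    return all(between(O, Oq) for O in nbhd(p) for Oq in nbhd(q))

def conflicted(p):           # q ⋒ p ⋒ r with q not intertwined with r
    return any(intertwined(q, p) and intertwined(p, r) and not intertwined(q, r)
               for q in P for r in P)

print(conflicted(1), strongly_compatible(1))   # True False
print(conflicted(0), strongly_compatible(0))   # False True
```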
Lemma 10.8.6 shows that the situation outlined in Proposition 10.7.3(2) cannot arise if we work with a strongly compatible point instead of an unconflicted one...

**Lemma 10.8.6**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then \(p\) is strongly compatible in \((\mathsf{P},\mathsf{Open})\) if and only if \(nbhd(p)\) is strongly compatible in \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\) (Notation 9.1.3).

Proof.: This result is plausible by looking at Definition 10.8.2 and noting that strong compatibility is defined in terms of \(nbhd(p)\) as an abstract point in \((\mathsf{Open},\subseteq,\between)\), but we check the details.

* From Lemma 10.8.3, \(p\) is strongly compatible in \((\mathsf{P},\mathsf{Open})\) when for every \(O^{\prime},O^{\prime\prime}\in\mathsf{Open}\), \[(\forall O{\in}\mathsf{Open}.p\in O\Longrightarrow O^{\prime}\between O\between O^{\prime\prime})\quad\text{implies}\quad O^{\prime}\between O^{\prime\prime}.\]
* From Definition 7.3(2) and Lemma 10.8.3, \(nbhd(p)\) is strongly compatible in \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\) when for every \(\mathit{Op}(O^{\prime}),\mathit{Op}(O^{\prime\prime})\in\mathsf{Opens}(\mathit{Soberify}(\mathsf{P},\mathsf{Open}))\), \[(\forall O{\in}\mathsf{Open}.nbhd(p)\in\mathit{Op}(O)\Longrightarrow\mathit{Op}(O^{\prime})\between\mathit{Op}(O)\between\mathit{Op}(O^{\prime\prime}))\quad\text{implies}\quad\mathit{Op}(O^{\prime})\between\mathit{Op}(O^{\prime\prime}).\]

Now by Proposition 8.2.3(2), \(nbhd(p)\in\mathit{Op}(O)\) if and only if \(p\in O\), and by Corollary 8.2.4 \(\mathit{Op}(O^{\prime})\between\mathit{Op}(O)\) if and only if \(O^{\prime}\between O\), and \(\mathit{Op}(O)\between\mathit{Op}(O^{\prime\prime})\) if and only if \(O\between O^{\prime\prime}\). The result follows.

...but, the situation outlined in Proposition 10.7.3(1) _can_ arise, indeed we use the same counterexample:

**Lemma 10.8.7**.: It may be that every point in \((\mathsf{P},\mathsf{Open})\) is strongly compatible, yet \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\) contains a point that is not strongly compatible.

Proof.: The same counterexample as used in Proposition 10.7.3(1) illustrates a space \((\mathsf{P},\mathsf{Open})\) such that every point in \((\mathsf{P},\mathsf{Open})\) is strongly compatible, but \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\) contains a point that is not strongly compatible. We note that \(\bullet_{A}\) (the extra point in-between \(3\) and \(0\)) is not strongly compatible, because both \(B\) and \(D\) intersect with every open neighbourhood of \(\bullet_{A}\), but \(B\) does not intersect with \(D\).

The development above suggests that we define:

**Definition 10.8.8**.: Call a semitopology \((\mathsf{P},\mathsf{Open})\) **strongly compatible** when \((\mathsf{Open},\subseteq,\between)\) is strongly compatible in the sense of Definition 10.3.6(2).

The proof of Proposition 10.8.9 is then very easy:

**Proposition 10.8.9**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Then the following are equivalent:

1. \((\mathsf{P},\mathsf{Open})\) is strongly compatible in the sense of Definition 10.8.8.
2. \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\) is strongly compatible in the sense of Definition 10.3.6(2).

Proof.: We unpack Definition 10.8.8 and note that strong compatibility of \((\mathsf{Open},\subseteq,\between)\) is expressed purely as a property of the semiframe of open sets of \((\mathsf{P},\mathsf{Open})\). By Theorem 9.1.4 \(\mathit{Soberify}(\mathsf{P},\mathsf{Open})\) is semiframe isomorphic to \((\mathsf{P},\mathsf{Open})\), via \(nbhd^{-1}\). The result follows.
We can now prove an analogue of Theorems 5.5.4 and 10.5.4:

**Corollary 10.8.10**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(p\in\mathsf{P}\). Then:

1. \(p\) is regular if and only if \(p\) is weakly regular and strongly compatible.
2. \((\mathsf{P},\mathsf{Open})\) is regular if and only if \((\mathsf{P},\mathsf{Open})\) is weakly regular and strongly compatible.

Recall from Definition 4.1.3(7) that \((\mathsf{P},\mathsf{Open})\) being (weakly) regular means that every point in \((\mathsf{P},\mathsf{Open})\) is (weakly) regular. Recall from Definition 10.8.8 that \((\mathsf{P},\mathsf{Open})\) being strongly compatible means that \((\mathsf{Open},\subseteq,\between)=\mathsf{Fr}(\mathsf{P},\mathsf{Open})\) is strongly compatible in the sense of Definition 10.3.6(2).

Proof.: We consider each part in turn:

1. Suppose \(p\) is regular. By Theorem 10.5.4 \(nbhd(p)\) is weakly regular and strongly compatible. By Proposition 10.6.2 \(p\) is weakly regular and by Definition 10.8.2 \(p\) is strongly compatible. The reverse implication follows by just reversing the reasoning above.
2. Suppose \((\mathsf{P},\mathsf{Open})\) is regular, meaning that every \(p\in\mathsf{P}\) is regular. By part 1 of this result every \(p\in\mathsf{P}\) is weakly regular and strongly compatible. The definition of weak regularity for a space in Definition 4.1.3(7) is pointwise, so it follows immediately that \((\mathsf{P},\mathsf{Open})\) is weakly regular. However, the definition of strong compatibility for a space in Definition 10.8.8 is not pointwise; it is on its semiframe of open sets. It therefore does not follow immediately that \((\mathsf{P},\mathsf{Open})\) is strongly compatible (Lemma 10.8.7 contains a counterexample). We can still show that \((\mathsf{P},\mathsf{Open})\) is strongly compatible, but we need to do a bit more work. Unpacking Definition 10.8.8, we must show that \((\mathsf{Open},\subseteq,\between)\) is strongly compatible. Unpacking Definition 10.3.6(2), we must show that every abstract point in \((\mathsf{Open},\subseteq,\between)\) is strongly compatible. So consider an abstract point \(P\subseteq\mathsf{Open}\). By Corollary 4.3.3 \(\mathsf{P}\) has a topen partition \(\mathcal{T}\), which means that: every \(T\in\mathcal{T}\) is topen; the elements of \(\mathcal{T}\) are disjoint; and \(\bigcup\mathcal{T}=\mathsf{P}\). Now \(\bigcup\mathcal{T}=\mathsf{P}\in P\) by Definition 7.1.1(7) and Lemma 7.2.1(1), so by Definition 7.1.1(2) there exists at least one (and in fact precisely one) \(T\in\mathcal{T}\) such that \(T\in P\). Now \(T\) is a transitive element in \(\mathsf{Open}\), so by Proposition 10.3.12 \(P\subseteq\mathsf{Open}\) is strongly compatible as required.

**Remark 10.8.11**.: We summarise what we have seen:

1. The notions of (quasi/weak)regularity match up particularly nicely between a semitopology and its soberification as a semiframe (Proposition 10.6.2).
2. We saw in Proposition 10.7.3 that the notions of (un)conflicted point and unconflicted space from Definition 5.5.1(2) are not robust under forming soberification (Notation 9.1.3). From the point of view of a pointless methodology in semitopologies -- in which we seek to understand a semitopology \((\mathsf{P},\mathsf{Open})\) starting from its semiframe structure \((\mathsf{Open},\subseteq,\between)\) -- this is a defect.
3. A pointwise notion of strong compatibility is possible (Definition 10.8.2), and this is preserved pointwise by soberification (Lemma 10.8.6).
But soberification can still introduce _extra_ points, and it turns out that the property of a space being pointwise strongly compatible is still not robust under soberification because the extra points need not necessarily be strongly compatible; see Lemma 10.8.7.
4. This motivates Definition 10.8.8, and then Proposition 10.8.9 becomes easy. Our larger point (no pun intended) is that the Definition and its corresponding Proposition are natural, _and also_ that the other design decisions are _less_ natural, as noted above. Perhaps somewhat unexpectedly, 'regular = weakly regular + strongly compatible' then works pointwise _and_ for the entire space; see Corollary 10.8.10.

Thus Definition 10.8.8 has good properties and is natural from a pointless/semiframe/open sets perspective.

### 11 Graph representation of semitopologies

A substantial body of literature exists studying social networks as connected graphs. A semitopology has the flavour of a social network, in the sense that it models voting and consensus on a distributed system. It is therefore interesting to consider representations of semitopologies as graphs. We will consider two ways to do this:

1. We can map a semitopology to the intersection graph of its open sets. We discuss this in Subsection 11.1. This works well but loses information (Remark 11.1.16).
2. We can use a _straddling_ relation between sets. We discuss this in Subsection 11.2.

### From a semitopology to its intersection graph

We start with a very simple representation of \((\mathsf{P},\mathsf{Open})\) obtained just as the _intersection graph_ of \(\mathsf{Open}\) (Definition 11.1.1). This is not necessarily the most detailed representation (we spell out why, with examples, in Remark 11.1.16), but it is still simple, direct, and nontrivial:

#### 11.1.1 The basic definition

**Definition 11.1.1**.: Suppose \((\mathsf{P},\mathsf{Open})\) is a semitopology. Define its **intersection graph** \(\mathsf{IntGr}(\mathsf{P},\mathsf{Open})\) by:

* The nodes of \(\mathsf{IntGr}(\mathsf{P},\mathsf{Open})\) are nonempty open sets \(O\in\mathsf{Open}_{\neq\varnothing}\).
* There is an edge \(O\leftrightarrow O^{\prime}\) between \(O\) and \(O^{\prime}\) when \(O\between O^{\prime}\).

**Remark 11.1.2**.:

1. The notion of the _intersection graph_ of a set of sets is standard.35 The notion used in Definition 11.1.1 is slightly different, in that we exclude the empty set. This technical tweak is mildly useful, to give us Lemma 11.1.4.

Footnote 35: See e.g. the Wikipedia page on intersection graphs.

2. If \((\mathsf{P},\mathsf{Open})\) is a semitopology and \(\mathsf{IntGr}(\mathsf{P},\mathsf{Open})\) is its intersection graph in the sense of Definition 11.1.1, then \(O\leftrightarrow O^{\prime}\) is a synonym for \(O\between O^{\prime}\). However, writing \(O\leftrightarrow O^{\prime}\) suggests that we view \(O\) and \(O^{\prime}\) as nodes.

**Notation 11.1.3**.: For the rest of this Section we assume a fixed but arbitrary \[G=\mathsf{IntGr}(\mathsf{P},\mathsf{Open})\] that is the open intersection graph of a semitopology \((\mathsf{P},\mathsf{Open})\).

We start with an easy lemma:

**Lemma 11.1.4**.: \(O\leftrightarrow O\) always (the graph \(G\) is reflexive).

Proof.: From Definition 11.1.1, noting that nodes are _nonempty_ open sets \(O\in\mathsf{Open}\), and it is a fact of sets that \(O\between O\) when \(O\) is nonempty.
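On finite data, Definition 11.1.1 is a few lines of code. The following Python sketch (ours) builds the intersection graph of the semitopology generated by \(\{0\}\), \(\{0,1\}\), \(\{1,2\}\) (the example of Figure 11); names are our own.

```python
# Build IntGr(P, Open) (Definition 11.1.1): nodes are nonempty opens,
# edges are nonempty intersections.  Names are ours.
OPEN = [frozenset(s) for s in [(), (0,), (0, 1), (1, 2), (0, 1, 2)]]
NODES = [O for O in OPEN if O]                    # exclude the empty set

EDGES = {(A, B) for A in NODES for B in NODES if A & B}

for A in NODES:                                   # reflexive, as in Lemma 11.1.4
    print(sorted(A), "<->", [sorted(B) for B in NODES if (A, B) in EDGES])
```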
#### 11.1.2 The preorder \(\leq\)

**Definition 11.1.5**.: Write \(O\leq O^{\prime}\) when for every \(O^{\prime\prime}\), if \(O\leftrightarrow O^{\prime\prime}\) then \(O^{\prime}\leftrightarrow O^{\prime\prime}\). In symbols: \[O\leq O^{\prime}\quad\text{when}\quad\forall O^{\prime\prime}.O\leftrightarrow O^{\prime\prime}\Longrightarrow O^{\prime}\leftrightarrow O^{\prime\prime}.\]

\(\leq\) from Definition 11.1.5 is a preorder (reflexive and transitive relation):

**Lemma 11.1.6**.:

1. \(\leq\) is reflexive: \(O\leq O\).
2. \(\leq\) is transitive: if \(O\leq O^{\prime}\leq O^{\prime\prime}\) then \(O\leq O^{\prime\prime}\).

Proof.: By routine calculations.

**Lemma 11.1.7**.:

1. If \(O\leq O^{\prime}\) then \(O\leftrightarrow O^{\prime}\).
2. It is not in general the case that \(O\leftrightarrow O^{\prime}\) implies \(O\leq O^{\prime}\) (but see Proposition 11.1.12(1&2)).

In symbols we can write: \(\leq\;\subseteq\;\leftrightarrow\) and \(\leftrightarrow\;\not\subseteq\;\leq\) in general.36

Footnote 36: It gets better: see Lemma 11.1.9.

Proof.: We consider each part in turn:

1. Suppose \(O\leq O^{\prime}\). By Lemma 11.1.4 \(O\leftrightarrow O\), and it follows (since \(O\leq O^{\prime}\)) that \(O^{\prime}\leftrightarrow O\) as required.
2. It suffices to give a counterexample. Consider the semitopology \((\mathsf{P},\mathsf{Open})\) where \(\mathsf{P}=\{0,1,2\}\) and \(\mathsf{Open}\) is generated by \(O=\{0,1\}\), \(O^{\prime}=\{1,2\}\), and \(O^{\prime\prime}=\{0\}\), as illustrated in Figure 11. Then \(O\leftrightarrow O^{\prime}\) but \(O\not\leq O^{\prime}\) since \(O\leftrightarrow\{0\}\) but \(O^{\prime}\not\leftrightarrow\{0\}\).

**Remark 11.1.8**.: Suppose \(O\leq O^{\prime}\), so that by Lemma 11.1.7 also \(O\leftrightarrow O^{\prime}\). We can illustrate Definition 11.1.5 in the style of a categorical diagram, expressing that \(O\leq O^{\prime}\) holds when every arrow out of \(O\) factorises through \(O^{\prime}\).

**Lemma 11.1.9**.: **(\(\leq\) generalises \(\subseteq\))** We have:

1. If \(O\subseteq O^{\prime}\) then \(O\leq O^{\prime}\).
2. The converse implication need not hold: \(O\leq O^{\prime}\) does not necessarily imply \(O\subseteq O^{\prime}\).37

Footnote 37: So to sum up this and Lemma 11.1.7: \(\subseteq\;\subseteq\;\leq\;\subseteq\;\leftrightarrow\), and the inclusions may be strict.

**Proof:**

1. A fact of sets.
2. It suffices to give a counterexample. Set \(\mathsf{P}=\{0,1\}\) and \(\mathsf{Open}=\{\varnothing,\{0\},\{0,1\}\}\). This generates a very simple graph \(G\) as follows: \[\{0\}\longleftrightarrow\{0,1\}\] The reader can check that \(\{0,1\}\leq\{0\}\), but \(\{0,1\}\not\subseteq\{0\}\). \(\sqcap\)\(\sqcup\)
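The counterexamples in Lemmas 11.1.7(2) and 11.1.9(2) can be replayed mechanically. The following Python sketch (ours) does so; the helper `leq` implements Definition 11.1.5, and all names are our own.

```python
# The preorder <= of Definition 11.1.5, checked against the counterexamples
# of Lemmas 11.1.7(2) and 11.1.9(2).  Names are ours.
def between(x, y):
    return bool(x & y)

def leq(O, Op, nodes):       # O <= O': every node meeting O also meets O'
    return all(between(Op, Os) for Os in nodes if between(O, Os))

# Lemma 11.1.7(2): O <-> O' yet O not<= O'.
NODES = [frozenset(s) for s in [(0,), (0, 1), (1, 2), (0, 1, 2)]]
O, Op = frozenset({0, 1}), frozenset({1, 2})
print(between(O, Op), leq(O, Op, NODES))   # True False: {0} meets O but not O'

# Lemma 11.1.9(2): {0,1} <= {0} although {0,1} is not a subset of {0}.
NODES2 = [frozenset(s) for s in [(0,), (0, 1)]]
print(leq(frozenset({0, 1}), frozenset({0}), NODES2))   # True
```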
#### 11.1.3 Transitive elements

**Definition 11.1.10**.: Call \(T\in G\) **transitive** when for every \(O,O^{\prime}\in G\) we have that \[O\leftrightarrow T\leftrightarrow O^{\prime}\quad\text{implies}\quad O\leftrightarrow O^{\prime}.\] In pictures: the diagram for transitivity closes the span \(O\leftrightarrow T\leftrightarrow O^{\prime}\) with a third edge \(O\leftrightarrow O^{\prime}\).
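Transitivity of a node is again directly computable. The following Python sketch (ours) classifies the nodes of the Figure 11 example according to Definition 11.1.10; names are our own.

```python
# Definition 11.1.10: a node T is transitive when O <-> T <-> O' forces O <-> O'.
NODES = [frozenset(s) for s in [(0,), (0, 1), (1, 2), (0, 1, 2)]]

def between(x, y):
    return bool(x & y)

def transitive_node(T):
    star = [O for O in NODES if between(O, T)]   # the neighbours of T
    return all(between(A, B) for A in star for B in star)

for T in NODES:
    print(sorted(T), transitive_node(T))
# [0] True; [0, 1] False; [1, 2] True; [0, 1, 2] False
```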
*)*{xyxyxyxy(*){*{xyxyxyxyxyxyxy(*})*{\cdot{xyxyxyxyxy(*})*{\cdot{xyxyxyxyxy(* *})*{\cdot{xyxyxyxyxy(*})*{\cdotxyxyxy(*){*(xyxyxy,*{xyxyxyxyxy(*)*{*}})*{xyxy( *{xyxyxyxy(*){xyxyxyxy(*)*{xyxyxyxy(*{*})*{\cdotxyxyxy(*})*{xyxyxy(*{xyxyxy(*)*{xyxyxyxy(* {*})*{\cdotxyxyxy(*{0})*{\cdotxyxyxy(*{0})*{xyxy(*{xyxyxyxy,*{*(*)xyxyxyxy(* *{0})*{\cdotxyxy(*{*})*{\cdotxyxyxy(*{*})*{xyxyxy(*{0,*{xyxyxyxy(*)*{*{xyxyxyxyxyxyxy( *})*{\cdotxyxyxy(*{*{}^{*})*{\cdotxyxyxyxy(*{*}){\cdotxyxyxyxy(* {*{}^{*}^{*})*{xyxyxy(*{xyxyxyxyxy(*})*{xyxyxy(*{0)*{xyxyxyxy(*{*})*{\cdotxyxyxy(* *{*})*{xyxy(*{xyxyxyxy)*{*{\cdotcdotxyxyxyxy(*{*}})*{xyxy(*{*{xyxyxyxyxy(*})*{xyxyxy(* *{*{xyxyxyxyxy(*)*{*{}^{*}^{*{*}^{*}^{*}^{*})*{xyxyxy(*{xyxyxyxyxy(*})*{xyxyxyxy(* *{*}){xyxyxy(*{xyxyxy)*{*{\cdotxyxyxy(*})*{xyxy 2. \(\forall O.(T\leftrightarrow O\Longrightarrow T\leq O)\). 3. \(\forall O.(T\leftrightarrow O\Longleftrightarrow T\leq O)\). **Proof:** The lower equivalence just follows from Lemma 11.1.7. For the top equivalence, we prove two implications: * _The top-down implication._ Suppose \(T\) is transitive and suppose \(T\leftrightarrow O\). To prove \(T\leq O^{\prime}\) it suffices to consider any \(O^{\prime}\) and show that \(T\leftrightarrow O^{\prime}\) implies \(O\leftrightarrow O^{\prime}\). But this is just from transitivity and the fact that \(\leftrightarrow\) is symmetric: \(O\leftrightarrow T\leftrightarrow O^{\prime}\) implies \(O\leftrightarrow O^{\prime}\). * _The bottom-up implication._ Suppose for every \(O\), if \(T\leftrightarrow O\) then \(T\leq O\), and suppose \(O\leftrightarrow T\leftrightarrow O^{\prime}\). Because \(T\leq O\) and \(T\leftrightarrow O^{\prime}\), we have \(O\leftrightarrow O^{\prime}\) as required. \(\sqcap\)\(\sqcup\) **Corollary 11.1.13.** 1. If \(T\) is transitive then \(T\) is \(\leq\)-least. That is: \[O\leq T\quad\text{implies}\quad T\leq O.\] 2. The converse implication does not hold: it is possible for \(T\) to be \(\leq\)-least and not transitive. **Proof:** 1. Suppose \(T\) is transitive and \(O\leq T\). By Lemma 11.1.9(3) \(O\leftrightarrow T\) and by Proposition 11.1.12(2) \(T\leq O\). 2. It suffices to provide a counterexample. Consider the semitopology illustrated in Figure 5. It is a fact that \(A\) is not transitive, yet \(A\) is \(\leq\)-least: \(A\not\leq B\) (because \(B\leftrightarrow D\) yet \(A\not\leftrightarrow D\)) and similarly \(A\not\leq C\), and \(A\not\leq D\) (because \(A\not\leftrightarrow D\)). \(\sqcap\)\(\sqcup\) **Definition 11.1.14.** Suppose \(O,O^{\prime}\in G\). Define \(O\approx O^{\prime}\) when \(O\leq O^{\prime}\wedge O^{\prime}\leq O\), and in this case call \(O\) and \(O^{\prime}\)**extensionally equivalent**. It is easy to see from Definition 11.1.5 that \[O\approx O^{\prime}\Longleftrightarrow\forall O^{\prime\prime}.(O\leftrightarrow O ^{\prime\prime}\Longleftrightarrow O^{\prime}\leftrightarrow O^{\prime\prime}).\] **Corollary 11.1.15.** 1. If \(T\) and \(T^{\prime}\) are transitive (Definition 11.1.10) then the following are equivalent: \[T\leq T^{\prime}\quad\Longleftrightarrow\quad T^{\prime}\leq T\quad \Longleftrightarrow\quad T\leftrightarrow T^{\prime}\quad\Longleftrightarrow \quad T\approx T^{\prime}.\] 2. As a corollary, if \(T\) and \(T^{\prime}\) are transitive then \(T\leftrightarrow T^{\prime}\) if and only if \(T\approx T^{\prime}\). **Proof:** The left-hand equivalence is from Corollary 11.1.13 (since \(T\) and \(T^{\prime}\) are transitive). The middle equivalence is from Proposition 11.1.12. 
The right-hand equivalence follows from the left-hand and middle equivalences using Definitions 11.1.5 and 11.1.14. The corollary just repeats the right-hand equivalence.

**Remark 11.1.16**.: **(Intersection graph loses information)** The proof of Corollary 11.1.15(2) is not hard but it tells us something useful: the intersection graph identifies intersecting topens, and thus identifies a topen with the (by Corollary 3.5.3) unique maximal topen that contains it. Consider a semitopology \((\mathsf{P},\mathsf{Open})\) and its corresponding intersection graph \(\mathsf{IntGr}(\mathsf{P},\mathsf{Open})\), and consider some regular point \(p\in K(p)\). Recall from Theorem 4.2.6 that \(K(p)\) is the greatest topen (transitive open) neighbourhood of \(p\). Putting Corollary 11.1.15(2), Theorem 4.2.6, and Lemma 3.4.3 together, we have that \(K(p)\) -- when considered as a node in the intersection graph of open sets -- is extensionally equivalent to each of its topen subsets, and also (equivalently) to any topen set that it intersects with. So, if we were to build a functor from intersection graphs back to semitopologies, by forming a notion of abstract point and mapping a node to the set of abstract points that contain it, then Corollary 11.1.15 tells us that this will map all connected transitive nodes down to a single point. Thus, our intersection graph representation from Definition 11.1.1 _loses information_.

It is easy to generate examples of this kind of information loss. The following clearly different semitopologies give rise to isomorphic intersection graphs, namely: the full graph on three points, representing three pairwise intersecting nonempty open sets.

1. \(\mathsf{P}=\{0,1,2\}\) and \(\mathsf{Open}=\{\varnothing,\ \{0\},\ \{0,1\},\ \{0,1,2\}\}\).
2. \(\mathsf{P}^{\prime}=\{0,1,2\}\) and \(\mathsf{Open}^{\prime}=\{\varnothing,\ \{0,1\},\ \{1,2\},\ \{0,1,2\}\}\).

See Figure 12; the intersection graph isomorphism is illustrated on the right (where we equate \(\{0\}\) with \(\{1,2\}\)). The left-hand and middle examples in the figure are of intersecting topens, consistent with Corollary 11.1.15(2). Whether this behaviour is a feature or a bug depends on what we want to accomplish -- but for our purposes of modelling networks, we prefer a representation that retains more information. In the next Subsection we consider a slightly more elaborate graph representation, which is more discriminating.

Figure 12: Semitopologies with isomorphic intersection graphs (Remark 11.1.16)

### From a semiframe to its straddling graph

**Remark 11.2.1**.: In Remark 11.1.16 we gave a natural representation of a semitopology as its intersection graph. We noted in Corollary 11.1.15 that this identifies open sets up to a notion of extensional equivalence \(\approx\) given in Definition 11.1.14, and because topen sets are extensionally equivalent if and only if they intersect by Corollary 11.1.15, the intersection graph representation of semitopologies identifies two topens precisely when they intersect. This is not wrong -- intersecting topen sets _are_ extensionally equivalent, after all -- but suppose we want to retain a bit more detail. How can we proceed?

#### 11.2.1 The straddling relation \(\ltimes\)

**Remark 11.2.2**.: Notice that the notion of semiframe \((\mathsf{X},\leq,*)\) from Definition 6.3.1 is based on _two_ structures on \(\mathsf{X}\): a semilattice relation \(\leq\), and a compatibility relation \(*\).
Correspondingly, our notion of semitopology observes _two_ properties of open sets: whether \(O\) is a subset of \(O^{\prime}\), and whether \(O\) intersects \(O^{\prime}\). Can these two notions be obtained from a single relation? Yes (if we are also allowed to observe equality): we can combine \(\leq\) and \(*\) into a single relation and so obtain a graph structure, without the loss of information we noted of intersection graphs. The definition is as follows:

**Definition 11.2.3**.:
1. Suppose \(\mathsf{P}\) is a set and \(X\subseteq\mathsf{P}\).39 Define \(X^{c}\) the **complement** of \(X\) by \(X^{c}=\mathsf{P}\setminus X\).
2. Suppose \(\mathsf{P}\) is a set and \(X,Y\subseteq\mathsf{P}\). Define a relation \(X\ltimes Y\), and say that \(X\) **straddles** \(Y\), by \[X\ltimes Y\quad\text{when}\quad X\between Y\wedge X^{c}\between Y.\]
3. Suppose \((\mathsf{X},\leq,*)\) is a semiframe and \(x,y\in\mathsf{X}\). Define a relation \(x\ltimes y\), and say that \(x\) **straddles** \(y\), by \[x\ltimes y\quad\text{when}\quad x*y\wedge y\not\leq x.\]

Footnote 39: We will be most interested in the case that \(P\) is the set of points of a semitopology, but the definition does not depend on this.

**Example 11.2.4**.: Set \(P=\{0,1,2\}\). Then:

1. \(\{0\}\) straddles \(\{0,1\}\), because \(\{0\}\between\{0,1\}\) and \(\{0,1,2\}\setminus\{0\}=\{1,2\}\between\{0,1\}\). Similarly, \(\{1\}\) straddles \(\{0,1\}\).
2. \(\{2\}\) does not straddle \(\{0,1\}\), because \(\{2\}\not\between\{0,1\}\).
3. \(\{0,1\}\) does not straddle \(\{0\}\) or \(\{1\}\), because \(\{2\}\not\between\{0\}\) and \(\{2\}\not\between\{1\}\).

**Remark 11.2.5**.: **(One property, and three non-properties)** It is easy to show that \(X\ltimes Y\) is positive (covariant) in its second argument: if \(X\ltimes Y\) and \(Y\subseteq Y^{\prime}\) then \(X\ltimes Y^{\prime}\). However, \(X\ltimes Y\) is neither positive nor negative in its first argument, and it does not commute with intersection in its right argument. Take \(X,Y\subseteq\{0,1,2,3\}\). Then:

1. It is not the case that \(X\ltimes Y\wedge X\subseteq X^{\prime}\) implies \(X^{\prime}\ltimes Y\). Take \(X=\{0\}\) and \(Y=\{0,1\}\) and \(X^{\prime}=\{0,1\}\).
2. It is not the case that \(X\ltimes Y\wedge X^{\prime}\subseteq X\) implies \(X^{\prime}\ltimes Y\). Take \(X=\{0,1\}\) and \(Y=\{1,2\}\) and \(X^{\prime}=\{0\}\).
3. It is not the case that \(X\ltimes Y\wedge X\ltimes Y^{\prime}\) implies \(X\ltimes(Y\cap Y^{\prime})\). Take \(X=\{0,1\}\) and \(Y=\{1,2\}\) and \(Y^{\prime}=\{1,3\}\).

**Lemma 11.2.6**.: Suppose \(\mathsf{P}\) is a set and \(X,Y\subseteq\mathsf{P}\). Then the following are equivalent:

1. \(X\ltimes Y\) in the sense of Definition 11.2.3(2).
2. \(X\between Y\wedge Y\not\subseteq X\).

In other words: \(X\ltimes Y\) for \(X\) and \(Y\) considered as sets as per Definition 11.2.3(2), precisely when \(X\ltimes Y\) for \(X\) and \(Y\) considered as elements in the semiframe \((\mathit{pow}(\mathsf{P}),\subseteq,\between)\) as per Definition 11.2.3(3).

Proof.: Routine, using the fact of sets that \(Y\between X^{c}\) if and only if \(Y\not\subseteq X\).

**Corollary 11.2.7**.: Suppose \((\mathsf{X},\leq,*)\) is a spatial semiframe and \(x,y\in\mathsf{X}\). Then

\[x\ltimes y\quad\text{if and only if}\quad\mathit{Op}(x)\ltimes\mathit{Op}(y).\]

Proof.: Suppose \((\mathsf{X},\leq,*)\) is a spatial semiframe.
By Proposition 8.1.4(2&1) \(x*y\) if and only if \(\mathit{Op}(x)\between\mathit{Op}(y)\), and \(x\leq y\) if and only if \(\mathit{Op}(x)\subseteq\mathit{Op}(y)\). We use Lemma 11.2.6.

#### 11.2.2 Recovering \(\leq\) and \(*\) from \(\ltimes\)

**Remark 11.2.8**.: We can recover \(\subseteq\) and \(\between\) from \(\ltimes\). We can also recover \(\leq\) and \(*\). We consider the construction for semiframes and \(\leq\) and \(*\), because it is the more general setting; the proofs for the concrete instance of \(\subseteq\) and \(\between\) are identical:

**Proposition 11.2.9**.: Suppose \((\mathsf{X},\leq,*)\) is a semiframe and suppose \(x,y\neq\bot_{\mathsf{X}}\). Then:

1. \(x*y\) if and only if \(x=y\lor x\ltimes y\lor y\ltimes x\).
2. \(x\leq y\) if and only if \(x=y\lor(x\ltimes y\wedge\neg(y\ltimes x))\).

Proof.: We consider each implication in turn:

* _We show that \(x*y\) implies \(x=y\lor x\ltimes y\lor y\ltimes x\)._ Suppose \(x*y\). By antisymmetry of \(\leq\), either \(x=y\) or \(y\not\leq x\) or \(x\not\leq y\). The result follows.
* _We show that \(x=y\lor x\ltimes y\lor y\ltimes x\) implies \(x*y\)._ By reversing the reasoning of the previous case.
* _We show that \(x=y\vee(x\ltimes y\wedge\neg(y\ltimes x))\) implies \(x\leq y\)._ Suppose \(x=y\vee(x\ltimes y\wedge\neg(y\ltimes x))\). If \(x=y\) then \(x\leq y\) and we are done. If \(x\neq y\) then we unpack Definition 11.2.3(3) and simplify as follows: \[x\ltimes y\wedge\neg(y\ltimes x)\Longleftrightarrow x*y\wedge y\not\leq x\wedge(\neg(x*y)\lor x\leq y)\Longleftrightarrow x*y\wedge y\not\leq x\wedge x\leq y\Longrightarrow x\leq y\]
* _We show that \(x\leq y\) implies \(x=y\vee(x\ltimes y\wedge\neg(y\ltimes x))\)._ Suppose \(x\leq y\).
By assumption \(x\neq\bot_{\mathsf{X}}\), and since \(x\leq y\) it follows that \(x*y\). If \(x=y\) then we are done. If \(x\neq y\) then by antisymmetry of \(\leq\) we have \(y\not\leq x\), so \(x\ltimes y\); and \(x\leq y\) gives \(\neg(y\ltimes x)\). The result follows. \(\sqcap\)\(\sqcup\)
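Proposition 11.2.9 (together with Lemma 11.2.6) admits a quick exhaustive check on the concrete semiframe \((\mathit{pow}(\mathsf{P}),\subseteq,\between)\). The following Python sketch is our illustration, not part of the formal development; the function names are ours, and nothing beyond the definitions above is assumed.

```python
# Exhaustive check (our illustration) of Definition 11.2.3, Lemma 11.2.6 and
# Proposition 11.2.9 on the concrete semiframe (pow(P), subset, intersects).
from itertools import combinations

P = frozenset({0, 1, 2, 3})

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def straddles_sets(x, y):
    # Definition 11.2.3(2): x meets y, and the complement of x also meets y.
    return bool(x & y) and bool((P - x) & y)

def straddles_semiframe(x, y):
    # Definition 11.2.3(3), instantiated with * = "intersects" and <= = subset.
    return bool(x & y) and not y <= x

for x in powerset(P):
    for y in powerset(P):
        # Lemma 11.2.6: the two readings of straddling agree.
        assert straddles_sets(x, y) == straddles_semiframe(x, y)
        if x and y:  # Proposition 11.2.9 assumes x, y are not bottom (empty).
            # 11.2.9(1): compatibility is recovered from straddling and equality.
            assert bool(x & y) == (x == y or straddles_sets(x, y) or straddles_sets(y, x))
            # 11.2.9(2): inclusion is recovered from straddling and equality.
            assert (x <= y) == (x == y or (straddles_sets(x, y) and not straddles_sets(y, x)))
print("Lemma 11.2.6 and Proposition 11.2.9 verified on pow({0,1,2,3}).")
```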
See also related discussions in Remarks 7.1.5 and 7.1.6, and in Remark 11.2.11. **Remark 12.1.2**.: **(Exponential spaces)** It remains to check whether \(\mathsf{SemiFrame}\) the category of semiframes is closed [10, page 180, Section VII.7], or cartesian.41 We have checked that the category of semitopologies is cartesian (it is), but it remains to check whether it is closed. It remains to look into the _Vietoris_ (also called the _exponential_) semitopologies [10, Exercise 2.7.20, page 120]. There are also indications that an exponential semitopology based on a _three-valued_ logic \(\mathbf{3}\) might be useful: see the discussion in Remark 12.1.3. **Remark 12.1.3.** (Representations): The **Sierpinski space**\(\mathit{Sk}=(\mathsf{P},\mathsf{Open})\) sets \(\mathsf{P}=\{0,1\}\) and \(\mathsf{Open}=\{\varnothing,\{1\},\{0,1\}\}\). This is both a topology and a semitopology, and it is a _classifying space_ for open sets, in the sense that \(\mathit{Hom}(\mbox{-},\mathit{Sk}):\mathsf{SemiTop}\to\mathsf{Set}\) is naturally isomorphic to \(\mathit{Opns}:\mathsf{SemiTop}\to\mathsf{Set}\) which maps \((\mathsf{P},\mathsf{Open})\) to \(\mathsf{Open}\). However, this is not enough for semitopologies, because semiframes suggest that we should view the set of open sets \(\mathsf{Open}\) of a semitopology as a semiframe structure, having a subset inclusion (of course) and _also_ a _generalised intersection_\(\backslash\). A likely classifying space for this is the the top-left example in Figure 3, such that * \(\mathsf{P}=\{0,1,2\}\) and * \(\mathsf{Open}=\{\varnothing,\{0\},\{2\},\{0,2\},\{0,1,2\}\}\). With \(\mathit{Sk}\) in mind, this space looks like two copies of \(\mathit{Sk}\) glued end-to-end -- i.e. like two open sets -- where the \(1\in\mathsf{P}\) represents where they might intersect. Call this space \(\mathbf{3}\). It remains to check what \(\mathit{Hom}(\mbox{-},\mathbf{3})\) represents. \(\mathbf{3}\) further suggests that a logic for semiframes might be naturally _three-valued_, with values \(\mathbf{t}\), \(\mathbf{f}\), and also \(\mathbf{b}\) for 'is in the intersection', and that the spaces \(\mathit{Hom}(\mbox{-},\mathbf{3})\) of continuous mappings to \(\mathbf{3}\) might play a useful role in the theory of semitopologies and semiframes. If so, then this might also have some impact on suitable notions of exponential space (see Remark 12.1.2). More generally, it remains to consider functors of the from \(\mathit{Hom}(\mbox{-},B)\) and \(\mathit{Hom}(A,\mbox{-})\), for different values of \(A\) and \(B\). **Remark 12.1.4.** (Computational/logical behaviour): Semiframes stand as objects of mathematical interest in their own right (just as frames do) but the original motivation for them comes from semitopologies, and semitopologies are motivated by distributed systems. It might therefore be useful to think about 'computable' semiframes. What this would mean is not entirely clear at the moment, but of course this is what would make it research. One possibility is to develop a theory of logic within semiframes. On this topic, we can recall the discussion so far, and note that semiframes support a complementation operation \(x^{c}=\bigvee\{x^{\prime}\mid\neg(x^{\prime}*x)\}\), so it is clearly possible to interpret propositional logic in a semiframe (implication would be \(x\to y=x^{c}\mathbf{\mathsf{V}}y\)). **Remark 12.1.5.** (Finiteness and compactness): The relation of semitopologies to finiteness is interesting. 
On the one hand, our motivating examples -- distributed networks -- are finite because they exist in the real world. On the other hand, in distributed networks, precisely because they are distributed, participants may not be able to depend on an exhaustive search of the full network being practical (or even permitted -- this could be interpreted as a waste of resources or even as hostile or dangerous). This requires mathematical models and algorithms that _make sense_ on at least countably infinitely many points.42

Footnote 42: This is no different than a programming language including a datatype of arbitrary precision integers: the program must eventually terminate, but because we do not know when, we need the _idea_ of an infinity in the language.

In fact, arguably even 'countably large' is not quite right. The natural cardinality for semitopologies may be _uncountable_, since network latency means that we cannot even enumerate the network: no matter how carefully we count, we could always in principle discover new participants who have joined in the past (but we just had not heard of them yet). This motivates future work in which we consider algebraic conditions on a semiframe \((\mathsf{X},\leq,*)\) that mimic some of the properties of open sets of finite semitopologies (without necessarily insisting on finiteness itself). For instance:

1. We could insist that a \(\leq\)-descending chain of non-\(\bot_{\mathsf{X}}\) elements in \(\mathsf{X}\) have a non-\(\bot_{\mathsf{X}}\) greatest lower bound in \(\mathsf{X}\).
2. We could insist that a \(\leq\)-descending chain of elements strictly \(\leq\)-greater than some \(x\in\mathsf{X}\) have a greatest lower bound that is strictly \(\leq\)-greater than \(x\).
3. We could insist that if \((x_{i}\mid i\geq 0)\) and \((y_{i}\mid i\geq 0)\) are two \(\leq\)-descending chains of elements, and \(x_{i}*y_{i}\) for every \(i\geq 0\) -- in words: \(x_{i}\) is compatible with \(y_{i}\) -- then the greatest lower bounds of the two chains are compatible.

The reader may notice how these conditions are reminiscent of compactness conditions from topology: e.g. a metric space is compact if and only if every descending chain of nonempty closed sets has a nonempty intersection. This is no coincidence, since one of the uses of compactness in topology is precisely to recover some of the characteristics of finite topologies. Considering semiframes (and indeed semitopologies) with compactness/finiteness flavoured conditions is future work.

**Remark 12.1.6**.: **(Generalising \(*\))** In Remark 6.2.2 we mentioned that we can think of semitopologies not as _'topologies without intersections'_ so much as _'topologies with a generalised intersection'_. In this paper we have studied a relation called \(\between\) (for point-set semitopologies) and \(*\) (for semiframes), which intuitively measure whether two elements intersect. But really, this is just a notion of generalised join. We would take \((\mathsf{X},\leq)\) and \((\mathsf{X}^{\prime},\leq^{\prime})\) to be complete join-semilattices and the generalised join \(*:(\mathsf{X}\times\mathsf{X})\to\mathsf{X}^{\prime}\) is any commutative distributive map. Or, we could generalise in a different direction and consider (for example) cocomplete symmetric monoidal categories: \(*\) becomes the (symmetric) monoid action. These objects could be studied in their own right, or we could try to translate their structure back to sets, to see what point-set generalisations of semitopologies result.
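Before moving on, the complementation operation of Remark 12.1.4 is easy to make concrete: in the powerset semiframe \((\mathit{pow}(\mathsf{P}),\subseteq,\between)\), the join of all elements incompatible with \(x\) is exactly the set complement of \(x\). The following Python sketch is our own illustration of this (the helper names are ours), checked exhaustively on a four-point set.

```python
# Concrete check (our illustration) of x^c = join of { x' | not (x' * x) }
# from Remark 12.1.4, in the powerset semiframe (pow(P), subset, intersects).
from itertools import combinations

P = frozenset({0, 1, 2, 3})
powerset = [frozenset(c) for r in range(len(P) + 1) for c in combinations(P, r)]

def complement(x):
    # Join (here: union) of every element with empty intersection with x.
    out = frozenset()
    for xp in powerset:
        if not (xp & x):
            out = out | xp
    return out

for x in powerset:
    assert complement(x) == P - x  # agrees with the ordinary set complement
print("x^c agrees with the set complement for every x in pow({0,1,2,3}).")
```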
**Remark 12.1.7**.: **(Homotopy and convergence)** We have not looked in any detail at notions of _path_ and _convergence_ in semitopologies and semiframes. We can give a flavour of why this might be new and different relative to the notions from topologies. Let \((\mathsf{P},\mathsf{Open})\) be the semitopology defined as follows, and illustrated in Figure 13:

* \(\mathsf{P}=\mathbb{Z}\cup\{\top\}\) is thought of intuitively as a circle with \(0\) at the bottom and \(\top\) at the top.
* For each \(x\in\mathbb{Z}\) define \[\mathit{left}(x)=\{\top\}\cup\{y\in\mathbb{Z}\mid y\leq x\}\quad\text{and}\quad\mathit{right}(x)=\{\top\}\cup\{y\in\mathbb{Z}\mid x\leq y\}\] and give \(\mathsf{P}\) the semitopology \(\mathsf{Open}\) generated by the sets \(\mathit{left}(x)\) and \(\mathit{right}(x)\) for all \(x\in\mathbb{Z}\).

Intuitively:

* \(\mathit{left}(x)\) is a circle segment starting at \(x\) (\(x\) may be negative) and headed leftwards towards \(\top\).
* \(\mathit{right}(x)\) is a circle segment starting at \(x\) (\(x\) may be negative) and headed rightwards towards \(\top\).

We can converge on \(\top\) from the left (via the negative numbers), and from the right (via the positive numbers) -- however, the descending sequences of open neighbourhoods intersect only at \(\top\) and do not have a common open intersection. This is not behaviour that would be possible in a topology. This example is really just dressing up one of our earliest observations, from Lemma 2.1.5: in semitopologies a point can have more than one minimal open neighbourhood, and the example illustrates that intuitively each of these minimal open neighbourhoods can be thought of as a distinct direction by which we can converge on the point. Developing this part of the theory is future work.

Figure 13: A point with two paths to it (Remark 12.1.7)

**Remark 12.1.8**.: **(Constructive mathematics)** We have not considered what semiframes would look like in a constructive setting, though we should look into this and we have plans to do so. Much of the interest in frames and locales (versus point-set topologies) comes from working in a constructive setting; e.g. in the topos of sheaves over a base space, locales give a good fibrewise topology of bundles. To what extent similar structures might be built using semiframes, or what other structures might emerge instead, are currently entirely open questions.

### Topology vs. semitopology

We briefly compare and contrast topology/frames and semitopology/semiframes. This list is far from exhaustive but we hope it will help the reader get a feel for these two worlds:

1. _Topology:_ We are typically interested in spaces with separation axioms.43 _Semitopology:_ We are typically also interested in spaces with _anti_-separation properties.44

Footnote 43: The Wikipedia page on separation axioms includes an excellent overview with over a dozen separation axioms. No anti-separation axioms are discussed.

Footnote 44: An extra word on this: Our theory of semitopologies admits spaces whose points partition into distinct communities, as discussed in Theorem 3.5.4 and Remark 3.5.5. Surely it _must be bad_ if not all points need be in consensus in a final state? Not at all: for example, most blockchains have a _mainnet_ and several _testnets_ and it is understood that each should be coherent within itself, but different nets _need not_ be in consensus with one another -- indeed, if the mainnet had to agree with a testnet then this would likely be a bug, not a feature.
So the idea of a single space with multiple partitions of consensus is not a new idea; it is an old idea, which we frame in a new, fruitful, and more general way.

2. _Topology:_ If a minimal open neighbourhood of a point exists then it is least, because we can intersect two minimal neighbourhoods to get a smaller one which by minimality is equal to both. A finite filter has a least element. _Semitopology:_ A point may have multiple minimal open neighbourhoods -- examples are very easy to generate, see e.g. the top-right example in Figure 3. A finite semifilter need not have a least element (see Remark 7.1.6).
3. _Topology:_ Every finite \(T_{0}\) topology is sober. A topology is sober if and only if every nonempty irreducible closed set is the closure of a unique point. _Semitopology:_ Neither property holds. See Lemma 8.3.7.
4. Semitopological questions such as _'is this a topen set'_ or _'are these two points intertwined'_ or _'does this point have a topen neighbourhood'_ -- and many other definitions in this paper, such as our taxonomy of points into _regular_, _weakly regular_, _quasiregular_, _conflicted_, and _strongly compatible_ -- are novel and/or play a larger role in the theory than they typically do in topology.

### Related work

**Dualities.** We discussed duality results in detail in Remark 9.4.3. The reader may know that there are a great many such results, starting with Stone's classic duality between Boolean algebras and compact Hausdorff spaces with a basis of clopen sets [14, 15]. The duality between frames and topologies is described in [16, page 479, Corollary 4]. See also the encyclopaedic treatment in [17], with an overview in Example 2.9 on page 17. Our duality between semiframes and semitopologies fits into this canon.

**Union sets, and minimal structures.** There is a thread of research into _union-closed families_; these are subsets of a finite powerset closed under unions, so that a union-closed family is precisely just a finite semitopology. The motivation is to study the combinatorics of finite subsemilattices of a powerset. Some progress has been made in this [11]; the canonical reference for the relevant combinatorial conjectures is the 'problem session' on page 525 (conjectures 1.9, 1.9', and 1.9") of [10]. See also recent progress in a conjecture about union-closed families. There is no direct connection to semitopologies, and certainly no consideration of duality results. Perhaps the duality in this paper may be of some interest in that community.

A _minimal structure_ on a set \(X\) is a subset of \(\mathit{pow}(X)\) that contains \(\varnothing\) and \(X\). Thus a semitopology is a minimal structure that is also closed under arbitrary unions. There is a thread of research into minimal structures, studying how notions familiar from topology (such as continuity) fare in weak (minimal) settings [20] and how this changes as axioms (such as closure under unions) are added or removed. An accessible discussion is in [13], and see the brief but comprehensive references in Remark 3.7 of that paper. Of course our focus is on properties of semitopologies which are not considered in that literature; but we share an observation with minimal structures that it is useful to study topology-like constructs in the absence of closure under intersections.
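As a small computational aside on the observation above: for a finite family, closure under arbitrary unions reduces to closure under pairwise unions, so checking that a family is a finite semitopology is immediate. The sketch below is our own illustration, verifying the two example semitopologies from Remark 11.1.16.

```python
# A tiny checker (our illustration): for finite families, closure under
# arbitrary unions reduces to closure under pairwise unions.
from itertools import combinations

def is_finite_semitopology(points, opens):
    # Require the empty set and the whole space, then pairwise union-closure.
    if frozenset() not in opens or frozenset(points) not in opens:
        return False
    return all(a | b in opens for a, b in combinations(opens, 2))

P = {0, 1, 2}
opens1 = {frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})}
opens2 = {frozenset(), frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 1, 2})}
assert is_finite_semitopology(P, opens1) and is_finite_semitopology(P, opens2)
# Not union-closed ({0} | {1} = {0,1} is missing), so not a semitopology:
assert not is_finite_semitopology(P, {frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1, 2})})
```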
**Algebraic topology as applied to distributed computing tasks.** The reader may know that solvability results about distributed computing tasks have been obtained from algebraic topology, starting with the impossibility of \(k\)-set consensus and the Asynchronous Computability Theorem [14, 15, 16] in 1993. See [11] for numerous such results. The basic observation is that states of a distributed algorithm form a simplicial complex, called its _protocol complex_, and topological properties of this complex, like connectivity, are constrained by the underlying communication and fault model. These topological properties in turn can determine what tasks are solvable. For example: every algorithm in the wait-free model with atomic read-write registers has a connected protocol complex, and because the consensus task's output complex is disconnected, consensus in this model is not solvable [11, Chapter 4]. This paper is also topological, but in a different way: we use (semi)topologies to study consensus in and of itself, rather than the solvability of consensus or other tasks in particular computation models. Put another way: the papers cited above use topology to study the solvability of distributed tasks, but this paper shows how the very idea of 'distribution' can be viewed as having a semitopological foundation. Of course we can imagine that these might be combined -- that in future work we may find interesting and useful things to say about the topologies of distributed algorithms when viewed as algorithms _on_ and _in_ a semitopology.

**Fail-prone systems and quorum systems.** Given a set of processes \(\mathsf{P}\), a _fail-prone_ system [13] (or _adversary structure_ [12]) is a set of _fail-prone sets_ \(\mathcal{F}=\{F_{1},...,F_{n}\}\) where, for every \(1\leq i\leq n\), \(F_{i}\subseteq\mathsf{P}\). \(\mathcal{F}\) encodes the assumption that the set of processes that will fail (potentially maliciously) is a subset of one of the fail-prone sets. A _dissemination quorum system_ for \(\mathcal{F}\) is a set \(\{Q_{1},...,Q_{m}\}\) of quorums where, for every \(1\leq i\leq m\), \(Q_{i}\subseteq\mathsf{P}\), and such that

* for every two quorums \(Q\) and \(Q^{\prime}\) and for every fail-prone set \(F\), \((Q\cap Q^{\prime})\setminus F\neq\emptyset\) and
* for every fail-prone set \(F\), there exists a quorum disjoint from \(F\).

Several distributed algorithms, such as Bracha Broadcast [10] and PBFT [11], rely on a quorum system for a fail-prone system \(\mathcal{F}\) in order to solve problems such as reliable broadcast and consensus assuming (at least) that the assumptions denoted by \(\mathcal{F}\) are satisfied. Several recent works generalize the fail-prone system model to heterogeneous systems. Under the failure assumptions of a traditional fail-prone system, Bezerra et al. [1] study reliable broadcast when participants each have their own set of quorums. Asymmetric Fail-Prone Systems [13] generalize fail-prone systems to allow participants to make different failure assumptions and have different quorums. In Permissionless Fail-Prone Systems [11], participants not only make assumptions about failures, but also make assumptions about the assumptions of other processes; the resulting structure seems closely related to witness semitopologies, but the exact relationship still needs to be elucidated. Federated Byzantine Agreement Systems [14] are an instance of semitopologies.
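The two conditions defining a dissemination quorum system above are directly executable. The following Python sketch is our own illustration, not taken from the cited works; the example system (a classic 4-process threshold configuration tolerating one faulty process) is ours.

```python
# Checker (our illustration) for the dissemination quorum system conditions above.
def is_dissemination_quorum_system(quorums, fail_prone):
    # Every two quorums intersect outside every fail-prone set...
    consistency = all((q1 & q2) - f
                      for q1 in quorums for q2 in quorums for f in fail_prone)
    # ...and every fail-prone set leaves at least one quorum untouched.
    availability = all(any(not (q & f) for q in quorums) for f in fail_prone)
    return consistency and availability

processes = frozenset({0, 1, 2, 3})
fail_prone = [frozenset({p}) for p in processes]  # any single process may fail
quorums = [processes - {p} for p in processes]    # every set of 3 processes
assert is_dissemination_quorum_system(quorums, fail_prone)
```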
Garcia-Perez and Gotsman [15] rigorously prove the correctness of broadcast abstractions in Stellar's Federated Byzantine Agreement model and investigate the model's relationship to dissemination quorum systems. The Personal Byzantine Quorum System model [16] is an abstraction of Stellar's Federated Byzantine Agreement System model and accounts for the existence of disjoint consensus clusters (in the terminology of the paper) which can each stay in agreement internally but may disagree with each other. Consensus clusters are closely related to the notion of topen in Definition 3.2.2(2). Sheff et al. study heterogeneous consensus in a model called Learner Graphs [17] and propose a consensus algorithm called Heterogeneous Paxos. Cobalt, the Stellar Consensus Protocol, Heterogeneous Paxos, and the Ripple Consensus Algorithm [14, 15, 16] are consensus algorithms that rely on heterogeneous quorums or variants thereof. The Stellar network [18] and the XRP Ledger [17] are two global payment networks that use heterogeneous quorums to achieve consensus among an open set of participants; the Stellar network is an instance of a witness semitopology. The literature on fail-prone systems and quorum systems is most interested in synchronisation algorithms for distributed systems and has been less concerned with their deeper mathematical structure. Some work by the second author and others [16] gets as far as proving an analogue to Lemma 3.5.2 (though we think it is fair to say that the presentation in this paper is simpler and clearer), but it fails to notice the connection with topology and the subsequent results which we present in this paper, and there is no consideration of algebra as used in this paper.

**(Semi)lattices with extra structure.** We are not aware of semiframes having been studied in the literature, but they are in excellent company, in the sense that things have been studied that are structurally similar. We mention two examples to give a flavour of this extensive literature:

1. A **quantale** is a complete lattice \((\mathsf{Q},\bigvee)\) with an associative _multiplication_ operation \(*:(\mathsf{Q}\times\mathsf{Q})\to\mathsf{Q}\) that distributes over \(\bigvee\) in both arguments [18]. A commutative quantale whose multiplication is restricted to map to either the top or bottom element in \(\mathsf{Q}\) is close to being a semiframe.45 For reference, a pleasingly simple representation result for quantales is given in [11].
2. An **overlap algebra** is a complete Heyting algebra \(\mathsf{X}\) with an _overlap relation_ \(\gg\subseteq\mathsf{X}\times\mathsf{X}\) whose intuition is that \(x{\gg}y\) when \(x\wedge y\) is _inhabited_. The motivation for this comes from constructive logic, in which \(\exists p.(p\in x\wedge p\in y)\) is a different and stronger statement than \(\neg\forall p.\neg(p\in x\wedge p\in y)\). Accordingly, overlap algebras are described as 'a constructive look at Boolean algebras' [10]. Overlap algebras are not semiframes, but they share an idea with semiframes in making a structural distinction between 'intersect' and 'have a non-empty join'.

### Final comments

Recall the summary of this paper in a nutshell in Remark 1.1.2. Distributed systems are an old idea; think: telephone exchanges, satellite systems -- and of course the generals in an army, as per the classic paper [13]. However, it is not hyperbole to note that the use and importance of distributed systems has expanded exponentially in the last twenty years.
These modern distributed systems can be _very_ distributed indeed, which has provoked an explosion of new algorithms and new mathematics with which to understand them. This includes looking into generalisations of the notion of consensus and quorum [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], and new systems [14, 15, 16]. This paper combines the research on heterogeneous consensus with an even longer mathematical tradition of studying topologies, algebras, and the dualities between them (references in Remark 9.4.3 and at the start of Subsection 12.3). We do this by applying a classic technique: _topologise_, _then dualise_. And, we think it is fair to say that it works: we get a new and interesting structure of semiframes, a duality result, and we find that the well-behavedness conditions of semitopologies have nice semiframe analogues. Our understanding of semitopologies has been substantively enriched and deepened by looking at them as semiframes. As we note above in Section 12.1, there is no shortage of potential for future research.
2304.02006
Approaches for Retrieving Sulfur Species Abundances from Dual X/Ka Band Radio Occultations of Venus with EnVision and VERITAS
The EnVision and VERITAS missions to Venus will fly with X and Ka band telecommunications channels which can be used to conduct radio occultation studies of Venus' atmosphere. While link attenuation measurements during prior S and X band occultation experiments have been used to determine vertical profiles of H$_2$SO$_4$ vapor abundance, the addition of the Ka band channel introduces greater sensitivity to the abundances of H$_2$SO$_4$ aerosols and SO$_2$ gas, permitting retrieval of their vertical profiles from dual band measurements. Such measurements would be valuable in the assessment of chemical and dynamical processes governing short and long-term variability in Venus' atmosphere. This paper considers the sensitivity of the X/Ka band radio attenuation measurement to these atmospheric constituents, as well as uncertainties and regularization approaches for conducting retrievals of these atmospheric sulfur species from future occultation experiments. We introduce methods for seeding maximum likelihood estimation retrievals using shape models and simple atmospheric transport constraints. From simulated retrievals, we obtain mean errors of the order of 0.5 ppm, 20 ppm, and 10 mg/m$^3$ for H$_2$SO$_4$ vapor, SO$_2$, and H$_2$SO$_4$ aerosol abundances, respectively, for simultaneous retrieval.
Alex B. Akins, Tatiana M. Bocanegra-Bahamón, Kuo-Nung Wang, Panagiotis Vergados, Chi O. Ao, Sami W. Asmar, Robert A. Preston
2023-04-04T17:56:55Z
http://arxiv.org/abs/2304.02006v1
Approaches for Retrieving Sulfur Species Abundances from Dual X/Ka Band Radio Occultations of Venus with EnVision and VERITAS ###### Abstract The EnVision and VERITAS missions to Venus will fly with X and Ka band telecommunications channels which can be used to conduct radio occultation studies of Venus' atmosphere. While link attenuation measurements during prior S and X band occultation experiments have been used to determine vertical profiles of H\({}_{2}\)SO\({}_{4}\) vapor abundance, the addition of the Ka band channel introduces greater sensitivity to the abundances of H\({}_{2}\)SO\({}_{4}\) aerosols and SO\({}_{2}\) gas, permitting retrieval of their vertical profiles from dual band measurements. Such measurements would be valuable in the assessment of chemical and dynamical processes governing short and long-term variability in Venus' atmosphere. This paper considers the sensitivity of the X/Ka band radio attenuation measurement to these atmospheric constituents, as well as uncertainties and regularization approaches for conducting retrievals of these atmospheric sulfur species from future occultation experiments. We introduce methods for seeding maximum likelihood estimation retrievals using shape models and simple atmospheric transport constraints. From simulated retrievals, we obtain mean errors of the order of 0.5 ppm, 20 ppm, and 10 mg/m\({}^{3}\) for H\({}_{2}\)SO\({}_{4}\) vapor, SO\({}_{2}\), and H\({}_{2}\)SO\({}_{4}\) aerosol abundances, respectively, for simultaneous retrieval. + Footnote †: journal: PSJ ## 1 Introduction Spacecraft radio occultations (RO) have been used to accurately measure vertical profiles of the temperature and pressure of Venus' neutral atmosphere since the first use of the technique at Venus with Mariner V (Fjeldbo et al., 1971). Additionally, the observed excess attenuation of the radio link signal as it traverses the lower atmosphere has been used to infer the abundance of H\({}_{2}\)SO\({}_{4}\) vapor (Steffes & Eshleman, 1982; Kolodner & Steffes, 1998). Prior analyses of neutral atmosphere RO soundings have assessed trends in cloud-level temperature and H\({}_{2}\)SO\({}_{4}\) vapor abundances with altitude, latitude, and time (Jenkins and Steffes, 1991; Withers et al., 2020; Jenkins et al., 1994; Patzold et al., 2007; Tellmann et al., 2012; Oschlisniok et al., 2012, 2021; Imamura et al., 2017). As a result, the average atmospheric structure above 40 kilometers (the average penetration depth of prior RO measurements) is now well known over a wide range of latitudes and local times (Ando et al., 2020), and local variations, such as vertically-propagating gravity waves, have been well characterized. The recent analysis of over 800 Venus Express radio occultations by Oschlisniok et al. (2021) has provided thus far the most comprehensive assessment of the distribution of H\({}_{2}\)SO\({}_{4}\) vapor with latitude and time. Oschlisniok et al. (2021) compared these results to a 2D transport model and found that the observed H\({}_{2}\)SO\({}_{4}\) vapor distribution was recreated by driving meridional circulation with Hadley and polar cells. In this model, enhanced abundances of H\({}_{2}\)SO\({}_{4}\) vapor are found at low latitudes and high latitudes, while mid-latitudes are relatively depleted.
While the high latitude enhancement of H\({}_{2}\)SO\({}_{4}\) vapor is driven mostly by sedimentation of cloud aerosols, the low latitude enhancement is highly circulation dependent and relies on supply of H\({}_{2}\)SO\({}_{4}\) from the lower branch of the Hadley cell. By assuming that the vertical distribution of H\({}_{2}\)SO\({}_{4}\) vapor follows the saturation vapor pressure curve and is thus negligible above a certain altitude (\(>\) 50 km), Oschlisniok et al. (2021) also provided estimates of the sub-cloud abundance of SO\({}_{2}\). These estimates suggest that sub-cloud SO\({}_{2}\) abundance was greater in the polar regions than near mid-latitudes over the course of the Venus Express mission. These and other analyses of Venus RO measurements have yielded valuable results that will be useful for future dynamical and chemical modeling of the Venus atmosphere. With the recent selection of several NASA and ESA missions to Venus, it is worthwhile to consider how the design of future RO experiments can provide new insight into the state of Venus' atmosphere. Of particular interest is the possibility of dual X (8.4 GHz, 3.5 cm) and Ka (32 GHz, 0.94 cm) band radio occultations of the neutral atmosphere. Of the recently selected missions, both EnVision and VERITAS will be capable of conducting dual X/Ka band RO experiments, although only EnVision has designated such experiments within its baseline objectives. The use of a Ka band link during RO measurements is of interest due to the increased atmospheric attenuation experienced by a 32 GHz signal, which may permit the retrieval of atmospheric neutral species beyond H\({}_{2}\)SO\({}_{4}\) vapor. As discussed by Akins and Steffes (2020), the 32 GHz opacity of H\({}_{2}\)SO\({}_{4}\) cloud aerosols and SO\({}_{2}\) gas in the cloud-level atmosphere is high enough to noticeably affect radio signal propagation. The prospects of success for retrievals of SO\({}_{2}\) gas or H\({}_{2}\)SO\({}_{4}\) aerosols from dual X/Ka band occultations, however, have yet to be thoroughly considered. If vertical profiling of either of these neutral species could be accomplished with RO measurements, the benefit to our understanding of Venus' atmosphere would be considerable. SO\({}_{2}\) is one of the most abundant trace species in Venus' atmosphere, but the processes that govern its vertical distribution remain unclear. SO\({}_{2}\) is thought to originate in the atmosphere from persistent volcanic outgassing (Bullock and Grinspoon, 2001), and its photolysis in the mesosphere is a key mechanism in the formation of the H\({}_{2}\)SO\({}_{4}\) clouds. Above the clouds, order of magnitude variations in SO\({}_{2}\) abundance have been observed on relatively short timescales (Marcq et al., 2013; Vandaele et al., 2017), which could possibly result from strong, episodic injection from the lower atmosphere driven by volcanism (Glaze, 1999; Airey et al., 2015). Over longer timescales, observations have suggested a persistent steep decrease in SO\({}_{2}\) abundance within Venus' cloud layer between the troposphere and mesosphere. This decrease is difficult to reconcile with the results of atmospheric chemical models and requires either an inhibition of vertical transport, chemical depletion, or dissolution of SO\({}_{2}\) within the clouds via unexpected mechanisms (Bierson and Zhang, 2020; Rimmer et al., 2021).
RO measurements of the vertical distribution of SO\({}_{2}\) within the clouds could perhaps provide insight into these depletion/inhibition processes, and on shorter timescales, they could also be used to identify strong injections of SO\({}_{2}\) from the lower atmosphere. Gaps also exist in our knowledge of Venus' lower cloud structure and how it varies with latitude and time. While Venus Express observations provide strong constraints on the cloud-top altitude as a function of latitude, inferences of cloud-base altitude are far more ambiguous (Barstow et al., 2012; Haus et al., 2013). Beyond in situ results, most notably the Pioneer Venus LCPS measurements (Knollenberg and Hunten, 1980), the latitudinal variation in lower cloud mass-loading is also weakly constrained by observations. Knowledge of the cloud mass and its contribution to cloud opacity is important in consideration of the radiative energy balance of Venus' atmosphere (Limaye et al., 2018) and in circulation modeling (Sanchez-Lavega et al., 2017). In this paper, we investigate the accuracy with which H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) abundances can be retrieved from dual X/Ka band radio occultations with upcoming spacecraft missions. In Section 2, we discuss contributing factors to link attenuation measured during an RO experiment and their associated uncertainties, including uncertainties for models of Venus' atmospheric opacity inferred from laboratory studies. In Section 3, we illustrate the ill-posed nature of dual X/Ka band retrievals of sulfur species and introduce a regularization procedure to simultaneously retrieve H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) profiles with greater vertical resolution than previously possible. We then apply these procedures to conduct simulated retrievals. We discuss the performance of these algorithms and their implications for actual retrievals in Section 4, and we provide concluding remarks in Section 5. Overall, we argue that vertically resolved measurements of SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) aerosols should be sufficiently accurate (20 ppm and 10 mg/m\({}^{3}\) for SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) aerosol abundances, respectively) to both determine the mean atmospheric abundances of both species (as a function of latitude and altitude) and also identify strong perturbations from the mean. Our results are particularly encouraging for the possible detection of volcanic injection of SO\({}_{2}\) into the upper troposphere. ## 2 Radio link attenuation and uncertainties During a one-way spacecraft-to-Earth RO experiment, a spacecraft orbiting Venus transmits a radio carrier wave signal towards receivers on Earth, and the center frequency and amplitude of the received tone are modified via atmospheric refraction and attenuation. We consider here the 2-D spherically symmetric RO geometry for sounding a refractive neutral atmosphere described by Figure 1 of Eshleman (1973), where the coordinate system is defined by the vector between the centers of Earth and Venus and the orthogonal vector within the plane containing the Earth, Venus, and the spacecraft.
In this coordinate system, simple relationships exist between the observed Doppler shift \(f\) of the received RO signal from the center frequency \(f_{0}\) (with electromagnetic wavelength \(\lambda_{0}\)), the spacecraft velocity \(v_{t}\) in the orthogonal direction to the Earth-Venus vector, the complement angle \(\gamma\) of the spacecraft elevation with respect to the Earth-Venus vector, the distance \(R_{s}\) of the spacecraft from the center of Venus and the occultation ray impact parameter \(a\) and bending angle \(\delta\). \[f=(v_{t}/\lambda_{0})\sin\delta \tag{1}\] \[a=R_{s}\cos\left(\gamma-\delta\right) \tag{2}\] The refractive index \(n\) can be determined directly from knowledge of the ray impact parameter and bending angle through an inverse Abel transform. \[\ln n(a)=\frac{1}{\pi}\int_{a}^{\infty}\frac{\delta(a^{\prime})\,da^{\prime}}{\sqrt{a^{\prime 2}-a^{2}}} \tag{3}\] Assuming perfect antenna pointing, the Doppler-shifted RO signal is also attenuated via refractive defocusing and neutral atmosphere gas absorption. The refractive defocusing contribution \(L\) is frequency-independent and defined below by the experiment geometry under the assumption that the Earth is significantly farther from the spacecraft than Venus (Eshleman, 1973; Oschlisniok et al., 2012). \[L=-10\log_{10}\left(\Phi_{1}\Phi_{2}\right) \tag{4}\] \[\Phi_{1}=\left(\sec\delta-\frac{D}{a}\tan\delta\right)^{-1}\] \[\Phi_{2}=\left(1+\left(a\tan\delta-D\sec\delta\right)\frac{d\delta}{da}\right)^{-1}\] \[D=R_{s}\left[\sin\left(\gamma-\delta\right)+\cos\left(\gamma-\delta\right)\tan\frac{\delta}{2}\right]\] Once the contribution to total link attenuation from refractive defocusing is subtracted, the resulting excess attenuation \(\tau\) can be converted to absorptivity \(\alpha\) profiles in dB/km units via an inverse Abel transform, which is written in terms of attenuation, absorptivity, ray impact parameter, and ray periapse altitude \(r\) (Jenkins & Steffes, 1991; Oschlisniok et al., 2012) \[\alpha(a)=-\frac{da}{dr}\frac{1}{\pi a}\frac{dF}{da} \tag{5}\] \[F=\int_{a}^{\infty}\frac{\tau a^{\prime}da^{\prime}}{\sqrt{a^{\prime 2}-a^{2}}}\] The resulting absorptivity profile can then be used with temperature and pressure profiles derived from the measured refractivity to retrieve the abundance of atmospheric absorbers. ### Random uncertainty in absorptivity profiles Knowledge of random uncertainties in the measurement of the radio link Doppler shift and signal strength can be used to determine uncertainties in the resulting absorptivity profiles. In this section, we review the calculation of uncertainties in RO-inferred absorptivity based on the discussions in Lipa & Tyler (1979), Jenkins & Steffes (1991), and Oschlisniok et al. (2012), and state our assumptions regarding Doppler shift and signal strength statistics for X and Ka band RO measurements which are relevant to our simulated retrievals of atmospheric composition. In the reconstruction of the received carrier tone at the ground station, different sources of errors contribute to the uncertainties of the frequency and power estimates: instrumental (onboard the spacecraft and at the receiving system) and propagation random errors (introduced by the presence of interplanetary plasma, Earth's ionosphere and troposphere).
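Before turning to the error budget, the processing chain of Equations 3 and 5 is straightforward to illustrate numerically. The following Python sketch is our own minimal implementation (a midpoint-rule discretization applied to synthetic exponential profiles); it is not the simulator used in this paper, and the profile shapes are illustrative assumptions only.

```python
# Minimal numerical sketch (ours) of Equations 3 and 5: bending angle ->
# refractive index, and excess attenuation -> absorptivity, via inverse
# Abel transforms on an ascending impact-parameter grid.
import numpy as np

def abel_kernel_integral(a, g):
    """Approximate int_{a_i}^inf g(a') / sqrt(a'^2 - a_i^2) da' for each a_i."""
    out = np.zeros_like(a)
    for i in range(len(a) - 1):
        am = 0.5 * (a[i:-1] + a[i + 1:])   # interval midpoints above a_i
        gm = 0.5 * (g[i:-1] + g[i + 1:])
        out[i] = np.sum(gm * np.diff(a[i:]) / np.sqrt(am**2 - a[i]**2))
    return out

# Synthetic (assumed, illustrative) profiles on an impact-parameter grid in km.
a = np.linspace(6092.0, 6192.0, 201)
delta = 0.02 * np.exp(-(a - a[0]) / 10.0)      # bending angle, rad
tau = 20.0 * np.exp(-(a - a[0]) / 5.0)         # excess attenuation, dB

ln_n = abel_kernel_integral(a, delta) / np.pi  # Equation 3
F = abel_kernel_integral(a, tau * a)           # inner integral of Equation 5
r = a / np.exp(ln_n)                           # ray periapse altitude from a = n r
alpha = -np.gradient(a, r) * np.gradient(F, a) / (np.pi * a)  # Equation 5, dB/km
```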
Neglecting errors in the trajectory of the spacecraft, the variance of the frequency time series estimates is given by the summation of the contributions of thermal and phase noise \[\sigma_{f}^{2}=\frac{2BN_{0}/C}{(2\pi\tau)^{2}}+\sigma_{AD}^{2}f_{0}^{2} \tag{6}\] where \(B=1\) Hz is the noise bandwidth, \(N_{0}\) is the noise power density, \(C\) is the signal power and \(\tau\) is the integration time, \(\sigma_{AD}\) is the Allan deviation of the phase noise and \(f_{0}\) is the nominal signal frequency. We assume the phase noise is dominated by the onboard frequency standard (assuming \(\sigma_{AD}\sim 5\times 10^{-13}\) at \(\tau=0.1\) s, Hausler et al. (2006)) and it is constant throughout the occultation event. The \(C/N_{0}\) ratio will decrease during the occultation as the signal probes deeper in the atmosphere, increasing the uncertainty in the frequency measurement through Equation 6. The \(C/N_{0}\) ratio also describes the noise in the received signal power \(p\) in dB-Hz (for a signal with linear amplitude \(s_{a}\)). The relationships between received signal power \(p\), signal amplitude \(s_{a}\) and their corresponding uncertainties \(\sigma_{p}\) and \(\sigma_{a}\) are given in Equation 7. \[p=10\log_{10}s_{a}^{2} \tag{7}\] \[\sigma_{a}=\left(\sqrt{10^{\frac{C/N_{0}}{10}}}\right)^{-1}\] \[\sigma_{p}=\frac{\partial p}{\partial s_{a}}\sigma_{a}\] We assume a top-of-atmosphere \(C/N_{0}\) ratio of 70 dB for X band and 80 dB for Ka band, which is consistent with the notional design for the EnVision radio science experiment (Team, 2021). We also increase the phase noise contribution by adding a constant value (0.2 Hz at X band, 0.8 Hz at Ka band) to the Doppler shift uncertainty, which is consistent with the uncertainties observed for the Magellan orbit 3212 X band occultation. To determine estimates of uncertainty for the simulated RO retrievals discussed in this paper, we employ a forward and inverse RO simulator using Equations 1-5 (similar to Jenkins (1992)). For the model atmospheric compositions discussed in later sections, simulated power and frequency time series are derived for an occultation experiment at X and Ka band by a spacecraft in a circular orbit at 250 km altitude. From the values of absorptivity, refractive index and impact parameter (\(a=nr\)) corresponding to the models, bending angle and signal attenuation are derived using forward Abel transforms \[\delta(a)=-2a\int_{r}^{\infty}\frac{dn}{da(r^{\prime})}\frac{dr^{\prime}}{\sqrt{a(r^{\prime})^{2}-a(r)^{2}}} \tag{8}\] \[\tau(r)=2\int_{r}^{\infty}\frac{\alpha(r^{\prime})a(r^{\prime})dr^{\prime}}{\sqrt{a(r^{\prime})^{2}-a(r)^{2}}} \tag{9}\] The uncertainty in the inferred absorptivity from the RO measurement can be determined via linear propagation of errors (Jenkins and Steffes, 1991), starting from the frequency and power uncertainties \(\sigma_{f}\) and \(\sigma_{p}\). We assume that there is no covariance between the recorded signal power and Doppler shift at different times, which is appropriate for the assumed time and bandwidth integrations (see discussion in Lipa and Tyler (1979)).
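To make the link-budget assumptions concrete, the short Python sketch below (ours) evaluates Equations 6 and 7 at the top-of-atmosphere \(C/N_{0}\) values assumed above. Normalizing the signal amplitude to unity is our simplifying assumption, and the constant Doppler offsets and the factor-of-15 scaling described in the text are applied separately and are not included here.

```python
# Sketch (ours) of the noise model in Equations 6 and 7.
import numpy as np

def doppler_sigma(cn0_dbhz, f0_hz, tau_s=0.1, allan_dev=5e-13, bandwidth_hz=1.0):
    """1-sigma Doppler uncertainty (Hz): thermal term plus oscillator phase noise."""
    cn0 = 10.0 ** (cn0_dbhz / 10.0)                      # C/N0 as a linear ratio
    thermal = 2.0 * bandwidth_hz / cn0 / (2.0 * np.pi * tau_s) ** 2
    return np.sqrt(thermal + (allan_dev * f0_hz) ** 2)

def power_sigma(cn0_dbhz, s_a=1.0):
    """1-sigma received-power uncertainty (dB), for normalized amplitude s_a."""
    sigma_a = 1.0 / np.sqrt(10.0 ** (cn0_dbhz / 10.0))   # amplitude uncertainty
    return (20.0 / (np.log(10.0) * s_a)) * sigma_a       # sigma_p = (dp/ds_a) sigma_a

for band, cn0, f0 in (("X", 70.0, 8.4e9), ("Ka", 80.0, 32.0e9)):
    print(f"{band} band: sigma_f = {doppler_sigma(cn0, f0):.2e} Hz, "
          f"sigma_p = {power_sigma(cn0):.2e} dB")
```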
In the simplified RO geometry, the corresponding covariance matrices for the impact parameter \(C_{a}\) and bending angle \(C_{\delta}\) are diagonal and computed as \(C_{a}^{i=j}=\left(\frac{\partial a}{\partial f}\sigma_{f}\right)^{2}\) and \(C_{\delta}^{i=j}=\left(\frac{\partial\delta}{\partial f}\sigma_{f}\right)^{2}\), where \[\frac{\partial\delta}{\partial f}=\frac{\lambda_{0}/v_{t}}{\sqrt{1-(\lambda_{0}/v_{t})^{2}f^{2}}} \tag{10}\] \[\frac{\partial a}{\partial f}=R_{s}\sin{(\gamma-\delta)}\frac{\partial\delta}{\partial f} \tag{11}\] In this convention, the \(i\) index corresponds to the matrix row and the \(j\) index corresponds to the column. Off-diagonal terms are introduced into the covariance matrices for terms that are the result of an inverse Abel transform, such as the inferred refractive index and absorptivity. In Lipa & Tyler (1979) Appendix A, a procedure is provided for the determination of the refractive index covariance matrix using a midpoint-rule Riemann sum discretization of an alternate form of Equation 3. \[\ln n(a)=\frac{1}{\pi}\int_{\delta(a)}^{\infty}\ln\left[\frac{a^{\prime}}{a}+\sqrt{(a^{\prime}/a)^{2}-1}\right]d\delta(a^{\prime})\approx\frac{1}{\pi}\sum_{j=1}^{n}h_{ij}(\delta_{j+1}-\delta_{j}) \tag{12}\] \[h_{ij}=\log\left[\frac{a_{j+1}+a_{j}}{2a_{i}}+\sqrt{\left(\frac{a_{j+1}+a_{j}}{2a_{i}}\right)^{2}-1}\right]\] The covariance matrix \(C_{n}\) can then be determined as \(C_{n}=T_{na}C_{a}T_{na}^{T}+T_{n\delta}C_{\delta}T_{n\delta}^{T}\), where the \(T_{na}\) and \(T_{n\delta}\) matrices are lower triangular and determined as \[T_{na}^{i,j<i}=\frac{\partial n_{i}}{\partial a_{j}}=\frac{\partial h_{ij}}{\partial a_{j}}(\delta_{j+1}-\delta_{j})+\frac{\partial h_{i,j-1}}{\partial a_{j}}(\delta_{j}-\delta_{j-1}) \tag{13}\] \[\frac{\partial h_{ij}}{\partial a_{j}}=\frac{\partial h_{i,j-1}}{\partial a_{j}}=\left(\sqrt{\left(a_{j}-2a_{i}+a_{j+1}\right)\left(a_{j}+2a_{i}+a_{j+1}\right)}\right)^{-1}\] \[T_{n\delta}^{i,j<i}=\frac{\partial n_{i}}{\partial\delta_{j}}=h_{i,j-1}-h_{ij} \tag{14}\] The uncertainty in the inferred excess signal attenuation \(\tau\) is the sum of the uncertainty in the signal power measurement and the uncertainty in the refractive defocusing estimate as \(C_{\tau}^{i=j}=\sigma_{p}^{2}+\left(\frac{\partial L}{\partial a}\right)^{2}C_{a}+\left(\frac{\partial L}{\partial\delta}\right)^{2}C_{\delta}\), where \[\frac{\partial L}{\partial a}=\frac{10}{\ln 10}\frac{1}{\Phi_{1}\Phi_{2}}\left[\frac{\partial\Phi_{1}}{\partial a}\Phi_{2}+\frac{\partial\Phi_{2}}{\partial a}\Phi_{1}\right] \tag{15}\] \[\frac{\partial L}{\partial\delta}=\frac{10}{\ln 10}\frac{1}{\Phi_{1}\Phi_{2}}\left[\frac{\partial\Phi_{1}}{\partial\delta}\Phi_{2}+\frac{\partial\Phi_{2}}{\partial\delta}\Phi_{1}\right]\] \[\frac{\partial\Phi_{1}}{\partial a}=-\frac{D\tan\delta}{(a\sec\delta-D\tan\delta)^{2}}\] \[\frac{\partial\Phi_{1}}{\partial\delta}=\frac{a\sec\delta(D\sec\delta-a\tan\delta)}{(a\sec\delta-D\tan\delta)^{2}}\] \[\frac{\partial\Phi_{2}}{\partial a}=-\frac{\frac{\partial\delta}{\partial a}\tan\delta}{\left[1+\left(a\tan\delta-D\sec\delta\right)\frac{\partial\delta}{\partial a}\right]^{2}}\] \[\frac{\partial\Phi_{2}}{\partial\delta}=-\frac{\frac{\partial\delta}{\partial a}\left(\sec\delta\left(a\sec\delta-D\tan\delta\right)\right)}{\left[1+\left(a\tan\delta-D\sec\delta\right)\frac{\partial\delta}{\partial a}\right]^{2}}\] Next, the covariance matrix of the intermediate inverse Abel transform term in Equation 5 is determined as
\(C_{F}=T_{Fa}C_{a}T_{Fa}^{T}+T_{F\tau}C_{\tau}T_{F\tau}^{T}\). The lower triangular \(T_{Fa}\) and \(T_{F\tau}\) matrices are computed using a similar discretization to that employed in the calculation of the \(T_{n}\) matrices. \[F_{i}=\sum_{j=1}^{n}g_{ij}\frac{\tau_{j}+\tau_{j+1}}{2} \tag{16}\] \[g_{ij}=\frac{\frac{a_{j}+a_{j+1}}{2}\left(a_{j+1}-a_{j}\right)}{\sqrt{\left(\frac{a_{j}+a_{j+1}}{2}\right)^{2}-a_{i}^{2}}}\] \[T_{F\tau}^{i,j<i}=\frac{\partial F_{i}}{\partial\tau_{j}}=\frac{1}{2}(g_{ij}+g_{i,j-1}) \tag{17}\] \[T_{Fa}^{i,j<i}=\frac{\partial F_{i}}{\partial a_{j}}=\frac{\partial g_{ij}}{\partial a_{j}}(\frac{\tau_{j+1}+\tau_{j}}{2})+\frac{\partial g_{i,j-1}}{\partial a_{j}}(\frac{\tau_{j}+\tau_{j-1}}{2})\] \[\frac{\partial g_{ij}}{\partial a_{j}}=-\frac{\partial g_{i,j-1}}{\partial a_{j}}=-\frac{(a_{j}+a_{j+1})^{3}-8a_{i}^{2}a_{j}}{((a_{j}+a_{j+1})^{2}-4a_{i}^{2})^{3/2}}\] Finally, the absorptivity covariance is computed as \(C_{\alpha}=\left(\frac{\partial\alpha}{\partial F}\right)^{2}C_{F}+\left(\frac{\partial\alpha}{\partial n}\right)^{2}C_{n}+\left(\frac{\partial\alpha}{\partial a}\right)^{2}C_{a}\), where \[\frac{\partial\alpha}{\partial a}=\frac{dF}{dr}\frac{1}{\pi a^{2}},\quad\frac{\partial\alpha}{\partial n}=-\frac{dF}{da}\frac{1}{\pi a},\quad\frac{\partial\alpha}{\partial F}=-\frac{1}{\Delta r\pi a} \tag{18}\] Examples of these covariance matrices computed at uniform 0.5 km intervals for X and Ka band occultations are shown in Figure 1 for the Set 1 model atmosphere in Table 6. In addition to the offsets introduced to the Doppler shift uncertainty, the covariance profiles are also multiplied by a factor of 15. This factor was determined by comparing the results of simulations with \(C/N_{0}=60\) dB to the reported results of Magellan occultations. Jenkins et al. (1994) applied a similar correction factor in their presentation of the Magellan RO results, and this was intended as compensation for small-scale fluctuations in the signal power. For both bands, the diagonal terms are stronger than the off-diagonal terms due to the form of the denominator in the inverse Abel transform expressions. The uncertainties are greater at higher and lower altitudes due to the lower rate of sampling and the proximity to the signal attenuation limit, respectively. The signal attenuation limits (i.e. deepest sounding depth) range between 35-40 km for X band and 45-50 km for Ka band. ### Neutral atmosphere absorptivity The consequential microwave absorbers at X and Ka band in the atmosphere of Venus are the bulk CO\({}_{2}\)/N\({}_{2}\) atmosphere, H\({}_{2}\)SO\({}_{4}\) vapor and aerosols, and SO\({}_{2}\). Continuum and spectral line models of the microwave opacity of these species in the atmosphere of Venus have been derived from laboratory measurements under simulated Venus conditions (Ho et al., 1966; Fahd and Steffes, 1991, 1992; Kolodner and Steffes, 1998; Akins and Steffes, 2020). While H\({}_{2}\)SO\({}_{4}\) vapor and SO\({}_{2}\) opacity are described by spectral line models, single frequency expressions at X and Ka band have been derived as linear functions of gas volume mole fraction, which are reviewed in this section. Additionally, first order uncertainties have been derived for opacity model parameters based on fits to the respective laboratory data sets.
These uncertainties were determined in the Bayesian sense, where for some dataset \(\mathbf{x}\) with uncertainty \(\sigma\) and model parameter \(a\), the probability \(P(a|\mathbf{x})\) can be estimated using Equation 19. The resulting Gaussian-like probability distribution can be used to determine \(2\sigma\) uncertainties for each model parameter. Since the covariances of the model parameters are not considered here, these are conservative \(2\sigma\) estimates. \[P(a|\mathbf{x})\propto\prod_{i=1}^{n}P(x_{i}|a,\sigma_{i}) \tag{19}\] Figure 1: Covariance matrices computed for X and Ka band atmospheric absorptivity from RO simulations for the Set 1 model atmosphere. The corresponding \(1\sigma\) uncertainties are shown in context bracketing the absorptivity profiles. For H\({}_{2}\)SO\({}_{4}\) vapor, Kolodner and Steffes (1998) and Akins and Steffes (2020) determined single frequency S, X, and Ka band models for H\({}_{2}\)SO\({}_{4}\) vapor opacity \(\alpha\) as a function of pressure \(p\) in atmospheres, temperature \(T\) in Kelvins, and volume mole fraction \(q\). The temperature dependence is given by the \(\theta\) term, where \(\theta=553/T\) for the H\({}_{2}\)SO\({}_{4}\) vapor model. The values of these parameters and their derived 2\(\sigma\) uncertainties are shown in Table 1. \[\alpha=a_{1}p^{a_{2}}\theta^{a_{3}}q\quad\text{dB/km} \tag{20}\] For H\({}_{2}\)SO\({}_{4}\) aerosol, Fahd and Steffes (1991) determined parameters for a Cole-Cole model of the complex dielectric constant \(\epsilon_{r}=\epsilon_{r}^{\prime}-j\epsilon_{r}^{\prime\prime}\). This model is expressed in terms of a static dielectric constant \(\epsilon_{rs}\), a high-frequency dielectric constant \(\epsilon_{r\infty}\), and relaxation constants \(\tau\) and \(a\). This model is used to determine absorption at a given wavelength \(\lambda\) for an aerosol mass with bulk density \(M\) (mg of aerosol per m\({}^{3}\) of atmosphere) and a characteristic solution liquid density \(\rho\) (mg of liquid per m\({}^{3}\) of liquid). We assume that the aerosol particle diameter is small enough (tens of microns) that scattering does not need to be considered, which is an acceptable assumption for Venus' atmosphere (Fahd and Steffes, 1991). \[\epsilon_{r}=\epsilon_{r\infty}+\frac{\epsilon_{rs}-\epsilon_{r\infty}}{1+(j\omega\tau)^{1-a}} \tag{21}\] \[\alpha=\frac{246M\epsilon_{r}^{\prime\prime}}{\rho\lambda\left[\left(\epsilon_{r}^{\prime}+2\right)^{2}+\left(\epsilon_{r}^{\prime\prime}\right)^{2}\right]}\quad\text{dB/km} \tag{22}\] Fahd and Steffes (1991) made measurements of 85% and 99% H\({}_{2}\)SO\({}_{4}\) solutions, which resulted in different models. The dielectric model parameters and their 2\(\sigma\) uncertainties are shown in Table 2. These models were fit to Fahd's Ka and W band measurements, and while limiting the fit to only Ka band changes the model parameters, the broadband fit is preferable due to the possible presence of systematic offsets. Uncertainty in cloud weight percent H\({}_{2}\)SO\({}_{4}\), which may range from 75% to 99% in the atmosphere of Venus, also contributes to retrieval uncertainties. Fahd and Steffes (1992) also determined a spectral line model for SO\({}_{2}\) opacity which has been corroborated over a range of frequencies and temperatures (Suleiman et al.
\begin{table} \begin{tabular}{c c c c} \hline Band & \(a_{1}\) & \(a_{2}\) & \(a_{3}\) \\ \hline S (2.26 GHz) & 106.58 \(\pm\) 2.90 & 1.333 \(\pm\) 0.02 & 3.2 \(\pm\) 0.2 \\ X (8.39 GHz) & 451.76 \(\pm\) 3.04 & 1.283 \(\pm\) 0.005 & 3.0 \(\pm\) 0.2 \\ Ka (32 GHz) & 2586.66 \(\pm\) 421.64 & 1.092 \(\pm\) 0.12 & 3.0 \(\pm\) 0.2 \\ \hline \end{tabular} \end{table} Table 1: H\({}_{2}\)SO\({}_{4}\) vapor opacity model parameters and 2\(\sigma\) uncertainties \begin{table} \begin{tabular}{c c c c} \hline H\({}_{2}\)SO\({}_{4}\) Weight Percent & \(\epsilon_{r\infty}\) & \(\tau\) & \(a\) \\ \hline 85 \% & 3.393 \(\pm\) 0.290 & (1.78 \(\pm\) 0.02) \(\times 10^{-11}\) & 0.113 \(\pm\) 0.0046 \\ 99 \% & 2.319 \(\pm\) 0.065 & (2.576 \(\pm\) 0.04) \(\times 10^{-10}\) & 0.390 \(\pm\) 0.0020 \\ \hline \end{tabular} \end{table} Table 2: H\({}_{2}\)SO\({}_{4}\) aerosol dielectric model parameters and 2\(\sigma\) uncertainties 1996; Bellotti and Steffes, 2015; Steffes et al., 2015). The absorption of gaseous SO\({}_{2}\) can be expressed as the product of the line center absorption and a Van Vleck-Weisskopf lineshape function. \[\alpha=A_{max}F_{VVW}(\nu,\Delta\nu)\quad\text{dB/km} \tag{23}\] \[\Delta\nu=\gamma p\left(\frac{T_{o}}{T}\right)^{n}\quad\text{MHz} \tag{24}\] The free parameters for the model are the linewidth parameters \(\gamma\) and their temperature dependence \(n\) for SO\({}_{2}\)-SO\({}_{2}\) and SO\({}_{2}\)-CO\({}_{2}\) broadening, which are shown in Table 3 with their \(2\sigma\) uncertainties. Single frequency expressions at S, X, and Ka band and their uncertainties have also been derived by fitting a model with the form of Equation 20 (and \(\theta=300/T\)) to the spectral line model predictions. Due to the nonlinear relationship between total mixture pressure in the atmosphere and SO\({}_{2}\) opacity, separate expressions are given that are applicable below and above 1.5 atmospheres of mixture pressure, respectively. Note that this is not necessary for H\({}_{2}\)SO\({}_{4}\) vapor, which is largely depleted above the 1-atmosphere pressure level. The resulting parameters are shown in Table 4. Ho et al. (1966) made measurements of CO\({}_{2}\) opacity at 9 GHz and determined a model as a function of frequency \(\nu\) in GHz, temperature \(T\) in Kelvins, and pressure \(p\) in atmospheres that was confirmed by Steffes et al. (2015). The values of these parameters and their derived \(2\sigma\) uncertainties from the data of Steffes et al.
(2015) are shown in Table 5. \[\alpha=a_{1}p^{a_{2}}T^{a_{3}}\nu^{a_{4}}q\quad\text{dB/km} \tag{25}\] \begin{table} \begin{tabular}{c c c c c} \hline Band & \(p\) & \(a_{1}\) & \(a_{2}\) & \(a_{3}\) \\ \hline S (2.26 GHz) & \(<\) 1.5 atm & 1.36 \(\pm\) 0.03 & 1.19 \(\pm\) 0.02 & 2.65 \(\pm\) 0.01 \\ & \(\geq\) 1.5 atm & 1.36 \(\pm\) 0.02 & 1.18 \(\pm\) 0.02 & 2.74 \(\pm\) 0.03 \\ X (8.39 GHz) & \(<\) 1.5 atm & 21.68 \(\pm\) 0.20 & 0.89 \(\pm\) 0.04 & 2.48 \(\pm\) 0.02 \\ & \(\geq\) 1.5 atm & 19.10 \(\pm\) 0.20 & 1.15 \(\pm\) 0.03 & 2.72 \(\pm\) 0.04 \\ Ka (32 GHz) & \(<\) 1.5 atm & 309.10 \(\pm\) 4.50 & 1.079 \(\pm\) 0.003 & 2.66 \(\pm\) 0.02 \\ & \(\geq\) 1.5 atm & 288.94 \(\pm\) 3.60 & 1.15 \(\pm\) 0.01 & 2.75 \(\pm\) 0.03 \\ \hline \end{tabular} \end{table} Table 4: SO\({}_{2}\) continuum model parameters and \(2\sigma\) uncertainties \begin{table} \begin{tabular}{c c c} \hline Broadening Gas & \(\gamma\) (MHz/torr) & \(n\) \\ \hline SO\({}_{2}\) & 16 \(\pm\) 1.58 & 0.85 \(\pm\) 0.11 \\ CO\({}_{2}\) & 7 \(\pm\) 0.91 & 0.85 \(\pm\) 0.07 \\ \hline \end{tabular} \end{table} Table 3: SO\({}_{2}\) spectral line model parameters and \(2\sigma\) uncertainties These uncertainties can then be converted to uncertainties in retrieved gas abundances from RO measurements via standard propagation of errors methods (Oschlisniok et al., 2012). Figure 2 shows an example of X and Ka band atmospheric absorptivity and 1\(\sigma\) uncertainties (1\(\sigma\) has been the convention for Venus RO absorptivity measurements) for a model atmosphere (see Section 3.2) of these constituents. The largest sources of uncertainty at Ka band are associated with H\({}_{2}\)SO\({}_{4}\) vapor and aerosol, both of which exhibit a 1\(\sigma\) uncertainty near 13%. While the H\({}_{2}\)SO\({}_{4}\) vapor uncertainty is a result of the laboratory measurement uncertainties, the aerosol uncertainty is almost entirely due to uncertainty in the weight percent H\({}_{2}\)SO\({}_{4}\) of the aerosols themselves. Uncertainties are shown assuming a range of H\({}_{2}\)SO\({}_{4}\) aerosol weight percents between 85% and 99%. This uncertainty can be somewhat reduced if a reasonable vertical profile of H\({}_{2}\)SO\({}_{4}\) weight percent can be assumed, such as that of Krasnopolsky (2015). Also included in Figure 2 is the contribution from other gases whose volume mole fraction exceeds 1 ppm in this altitude range, specifically CO, OCS, and H\({}_{2}\)O. At its highest, the contribution of these additional trace gases to atmospheric opacity is near 1.5%, and uncertainties in their abundances contribute less than 1% to the overall uncertainty in Ka band RO measurements. From Figure 2, the relatively increased contribution at Ka band of SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) aerosol to the total absorptivity profile measured during an RO experiment is apparent, hence the interest in dual X/Ka band RO for constraining the vertical distribution of these species. Figure 2: X (left) and Ka (right) band atmospheric opacity of neutral atmosphere microwave absorbers for an equatorial model atmosphere. The bulk CO\({}_{2}\)/N\({}_{2}\) atmosphere and sulfur species dictate the microwave opacity of the atmosphere, and the contribution of other trace gases is minimal. Shaded regions show 1\(\sigma\) uncertainties associated with opacity models.
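As a point of reference, the sketch below evaluates single-frequency expressions of the form of Equation 20 using the best-fit parameters of Tables 1 and 4 (parameter uncertainties omitted). The function names and sample conditions are illustrative assumptions, not values taken from this work.

```python
import numpy as np

# Best-fit parameters (a1, a2, a3) from Table 1 (H2SO4 vapor, theta = 553/T)
# and Table 4 (SO2 continuum, theta = 300/T), for alpha = a1 * p^a2 * theta^a3 * q in dB/km.
H2SO4_VAPOR = {"X": (451.76, 1.283, 3.0), "Ka": (2586.66, 1.092, 3.0)}
SO2_LOW_P = {"X": (21.68, 0.89, 2.48), "Ka": (309.10, 1.079, 2.66)}    # p < 1.5 atm
SO2_HIGH_P = {"X": (19.10, 1.15, 2.72), "Ka": (288.94, 1.15, 2.75)}    # p >= 1.5 atm

def h2so4_vapor_alpha(band, p_atm, T_K, q):
    a1, a2, a3 = H2SO4_VAPOR[band]
    return a1 * p_atm**a2 * (553.0 / T_K)**a3 * q

def so2_alpha(band, p_atm, T_K, q):
    a1, a2, a3 = (SO2_LOW_P if p_atm < 1.5 else SO2_HIGH_P)[band]
    return a1 * p_atm**a2 * (300.0 / T_K)**a3 * q

# Illustrative conditions: 10 ppm H2SO4 vapor and 100 ppm SO2 at 1 atm, 350 K
print(h2so4_vapor_alpha("Ka", 1.0, 350.0, 10e-6), so2_alpha("Ka", 1.0, 350.0, 100e-6))
```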
\begin{table} \begin{tabular}{c c c c} \hline \(a_{1}\) & \(a_{2}\) & \(a_{3}\) & \(a_{4}\) \\ \hline (1.15 \(\pm\) 0.06) \(\times 10^{8}\) & 2 \(\pm\) 0.01 & -5 \(\pm\) 0.05 & 2 \(\pm\) 0.05 \\ \hline \end{tabular} \end{table} Table 5: CO\({}_{2}\) opacity model parameters and 2\(\sigma\) uncertainties ## 3 Simulated retrievals With the relevant uncertainties established, we can now consider approaches to retrieve abundance profiles of H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) from dual band RO measurements. These approaches are tested via simulated retrievals where several models of Venus' atmosphere are used as ground truth. We assume that raw measurements of X and Ka band amplitude and tone frequency have been converted to atmospheric refractivity and absorptivity profiles using geometric optics methods and that the associated uncertainties have been established following the discussion in the previous section. Since we are interested in the intrinsic retrieval accuracy, we consider only random uncertainties in these simulated retrievals and not the systematic uncertainties in opacity models discussed in the previous section. Our simulated retrievals span a range from 70 km altitude, where Ka band absorptivity will first be measurable, to the Ka band signal attenuation limit for each model atmosphere. ### The Ill-Posed Retrieval Problem Since H\({}_{2}\)SO\({}_{4}\) vapor, aerosol and SO\({}_{2}\) contribute non-negligibly to X and Ka band link attenuation and the frequency dependences of H\({}_{2}\)SO\({}_{4}\) aerosol and SO\({}_{2}\) are similar, the retrieval of their respective abundances is an under-determined ill-posed problem. Since H\({}_{2}\)SO\({}_{4}\) vapor is the strongest absorber, its abundance can be determined with accuracy comparable to that of dual S/X band RO retrievals (Jenkins et al., 1994). The ill-posed nature of retrieving sulfur species abundances beyond H\({}_{2}\)SO\({}_{4}\) vapor can be illustrated by assessing the uniqueness of retrievals at a single altitude. It is assumed that the atmospheric temperature and pressure are known (\(T=350\) K, \(P=1\) bar), as well as the abundance of H\({}_{2}\)SO\({}_{4}\) vapor (10 ppm). The relationship between atmospheric abundances of trace species \(\mathbf{x}=\left[q_{H_{2}SO_{4}(g)},M_{H_{2}SO_{4}(l)},q_{SO_{2}}\right]\) and measured absorptivity \(\mathbf{y}=[y_{X},y_{Ka}]\) is established via a forward model \(\mathbf{y}=\mathbf{K}\mathbf{x}\). The values of the \(\mathbf{K}\) matrix are the derivatives of the linear opacity expressions for each absorber (see Section 2.2) with respect to abundance (volume mole fraction \(q\) or bulk density \(M\)). A 10% uncertainty (consistent with the expected uncertainty range, see Figure 1) in the measurement of \(\mathbf{y}\) is included and represented as the diagonal matrix \(\mathbf{S_{y}}\). The probability of a particular atmospheric composition from a given measurement \(P(x|y)\) can then be determined following Rodgers (2000). \[-2\ln P(x|y)\propto(\mathbf{y}-\mathbf{K}\mathbf{x})^{\mathbf{T}}\mathbf{S_{y}^{-1}}(\mathbf{y}-\mathbf{K}\mathbf{x}) \tag{26}\] Figure 3 shows the resulting probability distribution for a given abundance of H\({}_{2}\)SO\({}_{4}\) aerosol and SO\({}_{2}\) (green dot) under these conditions. Also shown is a line representing the range of possible solutions for an error-free measurement of X and Ka band absorptivity.
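A minimal numerical sketch of this single-altitude calculation follows. The sensitivity values in \(\mathbf{K}\) are placeholders chosen only to mimic the near-proportional X/Ka frequency dependence of the two absorbers; they are not the derivatives used to produce Figure 3.

```python
import numpy as np

# Placeholder sensitivities d(alpha)/d(abundance): rows are [X, Ka] bands, columns
# are [H2SO4 aerosol bulk density M, SO2 mole fraction q]. The two columns are
# nearly proportional (both absorbers scale roughly as nu^2 between X and Ka band),
# which is the source of the ill-posedness.
K = np.array([[1.00e-4, 2.0e2],
              [1.45e-3, 2.9e3]])
x_true = np.array([50.0, 100e-6])        # illustrative "true" aerosol and SO2 abundances
y = K @ x_true                           # error-free forward model, y = K x
S_y_inv = np.diag(1.0 / (0.1 * y) ** 2)  # 10% measurement uncertainty

# Evaluate -2 ln P(x|y) from Equation 26 on a grid of candidate compositions
M, q = np.meshgrid(np.linspace(0.0, 150.0, 301), np.linspace(0.0, 300e-6, 301))
r0 = y[0] - (K[0, 0] * M + K[0, 1] * q)
r1 = y[1] - (K[1, 0] * M + K[1, 1] * q)
neg2logP = S_y_inv[0, 0] * r0**2 + S_y_inv[1, 1] * r1**2
# The minimum of neg2logP traces a line of equally probable solutions rather
# than a single point, as in Figure 3.
```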
For each possible SO\({}_{2}\) abundance, there is a corresponding cloud bulk density that can match the absorptivity measurement with equal probability, i.e. the set of possible solutions is infinite (but bounded). Although the SO\({}_{2}\) opacity model is linearized, we find that calculations of the probability distribution shown in Figure 3 using the spectral line model for SO\({}_{2}\) exhibit negligible differences. It is therefore necessary to incorporate additional information, such as vertical structure assumptions, to arrive at a plausible simultaneous solution for H\({}_{2}\)SO\({}_{4}\) aerosol and SO\({}_{2}\) retrievals. Figure 3: Probability of SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) aerosol abundance combinations consistent with simulated radio link absorption at X and Ka band at an altitude of 50 km with an H\({}_{2}\)SO\({}_{4}\) volume mole fraction of 10 ppm. The true abundances used for the simulations are shown as a green dot, and a line is shown representing the range of solutions possible for an error-free measurement. ### Model Atmospheres For our simulated retrievals, we consider different sets of model atmospheres defined in Table 6. Sets 1-4 include a range of different atmospheric profiles from sources which are not necessarily physically consistent. For atmospheric temperature and pressure profiles, we use latitude dependent model profiles derived from analysis of Venus Express and Akatsuki RO data (Ando et al., 2020). Profiles of H\({}_{2}\)SO\({}_{4}\) vapor derived from Venus Express radio occultations at several latitudes are used as provided by the VeRa team (Oschlisniok, personal communication). Since prior radio occultations and microwave/infrared imaging results have only suggested values for uniform sub-cloud abundance (Oschlisniok et al., 2021; Jenkins et al., 2002; Arney et al., 2014), our only sources of information on the vertical distribution of SO\({}_{2}\) are in situ measurements and chemical model predictions. Figure 4 shows a collection of SO\({}_{2}\) profiles adjusted to a common base abundance of 100 ppm derived from contemporary chemical models (Krasnopolsky, 2012; Zhang et al., 2012; Bierson and Zhang, 2020; Rimmer et al., 2021) and from the in situ results of the Vega descent probe ISAV spectrometers (Bertaux et al., 1996). Specifically, we use the nominal profile of Krasnopolsky (2012) representing conventional chemical model predictions, the cloud-layer inhibited transport model of Bierson and Zhang (2020), and the cloud droplet depletion model of Rimmer et al. (2021). The ISAV measurements deviate significantly from equilibrium chemical models, which indicate a uniform sub-cloud SO\({}_{2}\) abundance and limited gradients within the clouds themselves. Vertical structure information for the clouds is lacking in a similar sense and must also be considered in the context of modeling results and in situ data. Figure 4 also shows a collection of cloud bulk density profiles at varying latitudes from the 2D transport models of Imamura and Hashimoto (1998) and Oschlisniok et al. (2021), as well as the Pioneer Venus LCPS measurements (Knollenberg and Hunten, 1980). The model atmospheres in Sets 5-8 are based on the 2D transport model results of Oschlisniok (2020); Oschlisniok et al. (2021) at 0, 40, and 80 degrees latitude for temperature and H\({}_{2}\)SO\({}_{4}\) abundances.
Since SO\({}_{2}\) abundances are not solved by this model, the SO\({}_{2}\) profiles from Sets 1-4 are also used in Sets 5-8. For each of the model atmospheres, vertical absorptivity profiles are determined at 8 and 32 GHz using the opacity models discussed in the previous section. Random covariant noise is added to these profiles using the statistical uncertainty matrices derived for each profile (e.g. Figure 1, see Section 2) at a resolution of 0.5 km. ### Profile Retrieval Approaches Least-squares minimization of Equation 26 results in a maximum likelihood estimation of the abundance profiles. Since we are now considering profile retrievals, the definitions of the \(\mathbf{x}\), \(\mathbf{y}\), \(\mathbf{K}\), and \(\mathbf{S_{y}}\) matrices introduced in Section 3.1 are expanded to include the full profiles. For a uniform altitude grid of length \(n\), the length of \(\mathbf{x}\) becomes \(3n\), and the length of \(\mathbf{y}\) becomes \(2n\). To minimize Equation 26, we use the scipy.optimize package implementation of Powell's method (Virtanen et al., 2020). An estimate for the covariance of the retrieved profiles \(\mathbf{S_{x}}\) without regularization conditioning can be found from the pseudoinverse of the transformed measurement error covariance. \[\mathbf{\hat{S}_{x}}=(\mathbf{K^{T}S_{y}^{-1}K})^{+} \tag{27}\] \begin{table} \begin{tabular}{c c c c} Set & H\({}_{2}\)SO\({}_{4}\) vapor & H\({}_{2}\)SO\({}_{4}\) aerosol & SO\({}_{2}\) \\ \hline 1 & VEX VeRa, 18\({}^{\circ}\) & Imamura and Hashimoto (1998), 0\({}^{\circ}\) & Krasnopolsky (2012) \\ 2 & VEX VeRa, 45\({}^{\circ}\) & Imamura and Hashimoto (1998), 30\({}^{\circ}\) & Bierson and Zhang (2020) \\ 3 & VEX VeRa, 80\({}^{\circ}\) & Oschlisniok et al. (2021), 60\({}^{\circ}\) & Rimmer et al. (2021) \\ 4 & VEX VeRa, 85\({}^{\circ}\) & Oschlisniok et al. (2021), 90\({}^{\circ}\) & Bertaux et al. (1996), ISAV-1 \\ \hline 5 & Oschlisniok et al. (2021), 0\({}^{\circ}\) & Oschlisniok et al. (2021), 0\({}^{\circ}\) & Krasnopolsky (2012) \\ 6 & Oschlisniok et al. (2021), 40\({}^{\circ}\) & Oschlisniok et al. (2021), 40\({}^{\circ}\) & Bierson and Zhang (2020) \\ 7 & Oschlisniok et al. (2021), 80\({}^{\circ}\) & Oschlisniok et al. (2021), 80\({}^{\circ}\) & Rimmer et al. (2021) \\ 8 & Oschlisniok et al. (2021), 40\({}^{\circ}\) & Oschlisniok et al. (2021), 40\({}^{\circ}\) & Bertaux et al. (1996), ISAV-1 \\ \hline \end{tabular} \end{table} Table 6: Latitude-dependent ground-truth atmospheric profiles for simulated retrievals. This inversion, however, is highly ill-conditioned for joint estimates of neutral species abundances. Figure 5 shows the uncertainties associated with this pseudoinverse for retrievals of H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) abundances under the limiting assumption that, for each of the absorbing species, the abundances of the other absorbers are known exactly. This sets a lower limit on the achievable uncertainties at 0.15 ppm for H\({}_{2}\)SO\({}_{4}\) vapor, 8 ppm for SO\({}_{2}\), and 3.5 mg/m\({}^{3}\) for H\({}_{2}\)SO\({}_{4}\) aerosol. These uncertainty estimates for X and Ka band retrievals determined in this way are significantly lower than those for S and X band retrievals; this illustrates why previous attempts at joint retrieval of H\({}_{2}\)SO\({}_{4}\) vapor and SO\({}_{2}\) from S and X band RO measurements directly (instead of using the saturation depletion assumption) have yielded unlikely results (Jenkins et al., 1994).
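The lower-limit estimate of Equation 27 can be computed directly with the Moore-Penrose pseudoinverse, as in the sketch below; the assembly of the stacked Jacobian \(\mathbf{K}\) from the per-altitude opacity derivatives is omitted, and the names are our own.

```python
import numpy as np

def min_retrieval_sigma(K, sigma_y):
    """Lower-limit 1-sigma retrieval uncertainties from Equation 27,
    S_x = (K^T S_y^-1 K)^+, for a stacked (2n x 3n) Jacobian K and
    per-altitude absorptivity uncertainties sigma_y (length 2n)."""
    S_y_inv = np.diag(1.0 / np.asarray(sigma_y) ** 2)
    S_x = np.linalg.pinv(K.T @ S_y_inv @ K)   # pseudoinverse of the ill-conditioned normal matrix
    return np.sqrt(np.diag(S_x))
```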
Figure 4: (Left) SO\({}_{2}\) mole fraction profiles from the chemical models of Krasnopolsky (2012); Zhang et al. (2012); Bierson and Zhang (2020); Rimmer et al. (2021) and Vega ISAV spectrometer measurements (Bertaux et al., 1996). (Right) Cloud bulk density profiles from the 2D transport models of Imamura and Hashimoto (1998); Oschlisniok et al. (2021) and Pioneer Venus LCPS measurements (Knollenberg and Hunten, 1980). Figure 5: Minimum uncertainty estimates (see text) for X/Ka band RO retrievals of H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) abundances at 0.5 km resolution compared to S/X band retrievals assuming other absorber abundances are known exactly. Inflection points occur at altitudes where the statistical uncertainty exceeds the abundance necessary to match the Set 2 model atmosphere absorptivities. Since the simultaneous retrieval problem is under-constrained and ill-posed, minimization of Equation 26 is strongly dependent on the starting guesses (or seed profiles) provided to the solver and the regularization (i.e. conditioning; the terms are used interchangeably) strategy. We propose a multi-step approach to seed and regularize the problem, as discussed in the following sections. The inputs for the retrieval are the temperature and pressure profiles derived from the RO Doppler shift measurements, X and Ka band absorptivities, and estimates of the absorptivity per-band covariance. The outputs are abundance profiles for H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) species at 0.5 km resolution from the Ka band attenuation limit (45-50 km) to 70 km and estimates of the retrieval uncertainties. #### 3.3.1 Atmospheric transport model First, an initial estimate of the H\({}_{2}\)SO\({}_{4}\) vapor abundance is determined from the X band profile to its attenuation limit via direct inversion of the opacity equation (i.e. assuming all opacity is due to H\({}_{2}\)SO\({}_{4}\) vapor). The initial determination of the H\({}_{2}\)SO\({}_{4}\) vapor profile is equivalent in precision to profiles derived from prior single-band radio occultation measurements and will somewhat overestimate the vapor abundance. This initial H\({}_{2}\)SO\({}_{4}\) vapor estimate and the retrieved temperature are used as inputs to a 1D transport model of the H\({}_{2}\)SO\({}_{4}\) aerosol system to develop estimates for cloud bulk density. We use a simplified 1D advection-diffusion transport model based on the previously published 1D cloud microphysics (James et al., 1997; Imamura and Hashimoto, 2001; McGouldrick and Toon, 2007) and 2D transport (Imamura and Hashimoto, 1998; Oschlisniok et al., 2021) models of Venus' H\({}_{2}\)SO\({}_{4}\) aerosol system. The active physical processes in this model are eddy diffusion, sedimentation of cloud aerosols, mean vertical winds, and cloud condensation/vaporization. We adopt the nominal aerosol sedimentation velocity profiles of Oschlisniok et al. (2021), and we use H\({}_{2}\)SO\({}_{4}\) vapor pressure laws suggested by Krasnopolsky (2015) assuming a constant cloud weight percent profile (Hashimoto and Abe, 2001). Our model ignores the impact of cloud microphysics; it is assumed that the distribution of H\({}_{2}\)SO\({}_{4}\) between vapor and liquid phases is governed solely by the saturation vapor pressure.
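To make the transport machinery concrete before the physical simplifications are discussed, the sketch below implements a single Crank-Nicolson step for the eddy diffusion term alone, with zero-gradient (Neumann) boundaries and a constant diffusion coefficient. This is a simplified stand-in under our own assumptions; the full model described next also treats sedimentation, mean vertical winds and condensation, and uses a Semi-Lagrangian scheme.

```python
import numpy as np
from scipy.linalg import solve_banded

def diffusion_step(c, kzz, dz, dt):
    """One Crank-Nicolson step of dc/dt = Kzz * d2c/dz2 with zero-gradient
    (Neumann) boundaries, assuming a constant eddy diffusion coefficient Kzz."""
    n = c.size
    r = kzz * dt / (2.0 * dz ** 2)
    ab = np.zeros((3, n))                 # banded form of the implicit matrix (I - r*L)
    ab[0, 1:] = -r                        # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r              # main diagonal
    ab[2, :-1] = -r                       # subdiagonal
    ab[1, 0] = ab[1, -1] = 1.0 + r        # mirrored ghost cells enforce zero gradient
    rhs = np.empty(n)                     # explicit half-step, (I + r*L) c
    rhs[1:-1] = c[1:-1] + r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    rhs[0] = c[0] + r * (c[1] - c[0])
    rhs[-1] = c[-1] + r * (c[-2] - c[-1])
    return solve_banded((1, 1), ab, rhs)
```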
Microphysical processes are omitted because other models have shown that H\({}_{2}\)SO\({}_{4}\) vapor abundance in Venus' atmosphere follows the saturation vapor pressure curve for nominal abundances of cloud condensation nuclei (James et al., 1997; McGouldrick and Toon, 2007). Additionally, X and Ka band link attenuation by the clouds is only sensitive to the total cloud mass since the nominal diameters of Venus cloud aerosols are well within one wavelength (i.e. they are negligible scatterers). We also do not consider the effects of H\({}_{2}\)SO\({}_{4}\) photochemical production or H\({}_{2}\)SO\({}_{4}\) thermal dissociation. We solve the advection-diffusion equations using a Semi-Lagrangian Crank-Nicolson finite difference scheme (Spiegelman and Katz, 2006) with Neumann boundary conditions (\(\frac{\partial n}{\partial x}=0\)). The initial-guess H\({}_{2}\)SO\({}_{4}\) vapor profile is held constant throughout the simulation. The model is run until convergence (over several Venus years), which yields the initial estimate for the cloud profile. Uncertainties in the cloud bulk density are estimated by varying the eddy diffusion coefficients and mean vertical winds for several model runs. We use a range of eddy diffusion coefficients implemented in previously published models, as shown in Figure 6. A resulting simulation is shown compared to the Set 5 model atmosphere in Figure 7. Figure 6: Eddy diffusion profiles used in the 1D atmospheric transport model. Figure 7: Results of cloud aerosol mass simulations for the Set 5 model atmosphere. The ensemble of simulations is used to derive a mean profile and associated uncertainties, as shown compared to the true profile. #### 3.3.2 MCMC Shape Model Fitting After determining the mean cloud profile estimate from the transport model, the initial guess for H\({}_{2}\)SO\({}_{4}\) vapor is refined by fitting to both X and Ka band absorptivity profiles, which improves the estimate over the final retrieval altitude range. Next, a Markov Chain Monte Carlo (MCMC) approach is used to perform a parametric fit to the X and Ka band absorptivity using the output of the cloud model and a shape model for SO\({}_{2}\). Three parameters are used to define the SO\({}_{2}\) vertical abundance profile: the base abundance \(q_{0}\), the depletion altitude \(h_{0}\), and the depletion scale height \(s\). The cloud model output is additionally scaled, leading to a total of 4 free parameters. \[q_{SO_{2}}(h)=\begin{cases}q_{0}&h\leq h_{0}\\ q_{0}e^{-(h-h_{0})/s}&h>h_{0}\end{cases} \tag{28}\] The MCMC approach provides an initial estimate for the profile shape by estimating the likelihood distribution \(P(x|y)\) for each model parameter (see Foreman-Mackey et al. (2013) for a discussion of MCMC estimation). This fitting method is useful because in addition to providing a seed profile for \(\mathbf{x}\), the collection of sampled profiles in the converged distribution also provides a preliminary estimate of variance in the retrieved quantities. Each MCMC fit executes 10000 iterations plus 500 burn-in steps for sets of 100 walkers to arrive at the final likelihood distribution. An example of the fit results using this procedure is shown in Figure 8 for the Set 2 model atmosphere. #### 3.3.3 Conditioned retrieval Both prior steps provide initial estimates for the H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) abundances based on assumptions as to the general shape of these profiles.
These profile estimates are then used as seed profiles \(\mathbf{x_{a}}\) for a regularized minimization of Equation 26. Figure 8: Draws from the posterior distribution and the median results for H\({}_{2}\)SO\({}_{4}\) aerosol and SO\({}_{2}\) profiles for the Set 2 model atmosphere. We use the following form for the regularization term \(\mathcal{R}\) which is added to Equation 26. \[\mathcal{R}=b_{1}(\mathbf{x}-\mathbf{x_{a}})^{T}\mathbf{\hat{S}_{a}^{-1}}(\mathbf{x}-\mathbf{x_{a}})+b_{2}\mathbf{x}^{T}\mathbf{\Gamma}\mathbf{\Gamma}^{T}\mathbf{x} \tag{29}\] While the regularization applies only to the \(\mathrm{SO}_{2}\) and \(\mathrm{H}_{2}\mathrm{SO}_{4}\) aerosol profiles, we retain the matrix notation for convenience. The first term of Equation 29 penalizes deviation from the seed profiles. We note that while this resembles the form of a retrieval incorporating a priori information, the \(\mathbf{x_{a}}\) seed profiles are not true a priori information, since they were determined from fits to the data under strict assumptions (Rodgers, 2000). In this expression, the diagonal \(\mathbf{\hat{S}_{a}^{-1}}\) matrix is the inverse of the variance from the MCMC step. The \(\mathbf{\Gamma}\) matrix in the second term represents the application of a high-pass filter to the retrieved \(\mathrm{SO}_{2}\) and \(\mathrm{H}_{2}\mathrm{SO}_{4}\) aerosol abundance profiles. Specifically, \(\mathbf{\Gamma}\) represents a finite impulse response (FIR) filter matrix. The constant terms \(b\) are used to weight the regularization and are determined empirically. The addition of the regularization terms to Equation 26 also modifies the corresponding estimate of uncertainty originally stated in Equation 27. These regularization terms condition the matrix inverse, and the resulting inversion provides useful estimates of retrieval uncertainties. \[\mathbf{\hat{S}_{x}}=(\mathbf{K^{T}S_{y}^{-1}K}+b_{1}\mathbf{\hat{S}_{a}^{-1}}+b_{2}\mathbf{\Gamma}\mathbf{\Gamma}^{T})^{+} \tag{30}\] ### Simulation Results To test the efficacy of this approach for retrieving \(\mathrm{SO}_{2}\) and \(\mathrm{H}_{2}\mathrm{SO}_{4}\) aerosol abundances from dual X/Ka band RO measurements, we conducted simulated retrievals for the atmosphere models enumerated in Table 6. Figure 9 illustrates the simulated retrieval inputs and outputs for the Set 1 model atmosphere. The vertical profiles of atmospheric neutrals shown in black in the top row were used to compute X and Ka band absorptivities, and noise was added by drawing samples from a multivariate normal distribution (the absorptivities at each altitude point represent random variables in the ensemble) with statistics specified by the corresponding simulation covariance matrix (e.g. Figure 1). The corrupted absorptivities, shown in blue on the bottom row, are then used to retrieve the atmospheric profiles. The transport model provides initial estimates for cloud bulk density over a range of different advection and diffusion conditions. The mean cloud profile scale and parameters for the \(\mathrm{SO}_{2}\) shape model are then adjusted using the MCMC fitting procedure, and the outputs are used as seed profiles for the final retrieval at full resolution. The diagonal variance of the MCMC samples is used as the \(\mathbf{\hat{S}_{a}}\) matrix since the full covariance output can be poorly conditioned and difficult to invert.
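A schematic of the conditioned objective assembled from Equations 26 and 29, minimized with Powell's method as in the text, is shown below. The construction of \(\mathbf{K}\), \(\mathbf{\Gamma}\) and the covariance matrices from the per-altitude opacity derivatives and FIR filter coefficients is omitted, and all names are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def conditioned_cost(x, y, K, S_y_inv, x_a, S_a_inv, Gamma, b1, b2):
    """-2 ln P(x|y) from Equation 26 plus the regularization R of Equation 29."""
    r = y - K @ x
    misfit = r @ S_y_inv @ r                        # data misfit term
    seed = b1 * (x - x_a) @ S_a_inv @ (x - x_a)     # penalty on deviation from seed profiles
    rough = b2 * x @ (Gamma @ (Gamma.T @ x))        # high-pass (roughness) penalty
    return misfit + seed + rough

# Usage sketch, with all arrays assembled beforehand:
# res = minimize(conditioned_cost, x0=x_seed, method="Powell",
#                args=(y, K, S_y_inv, x_seed, S_a_inv, Gamma, b1, b2))
```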
Rather than using the output cloud profile from the MCMC step, the seed cloud profile was set by taking the mean of the scaled cloud profile and the original mean model output, which generally improved the resulting fits. We experimented with using an eigendecomposition of the resulting cloud model profiles to increase the number of free parameters for the cloud model in the MCMC fit but found that no significant improvement was observed over simply scaling the mean profile (which is similar in shape to the principal eigenvector). For the \(b_{1}\) constant, we found a value of 0.005 to be a good empirical weight. Weights of this order of magnitude somewhat deprioritize agreement with the seed profile in the final result while being superior to a zero weight. The \(b_{2}\) weights were set separately for the SO\({}_{2}\) and cloud profiles as the inverse of the maximum value of their seed profiles. The FIR filter highpass cutoff wavenumbers were determined empirically as 0.15 km\({}^{-1}\) and 0.25 km\({}^{-1}\) for the SO\({}_{2}\) and cloud profiles, respectively. Figure 10 compares the retrieved abundances of SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) aerosols with the true profile and seed profiles for the Set 1-8 model atmospheres. For the final retrieved profiles, the average profile mean (and maximum) errors below 55 km are 0.4 (0.7) ppm for H\({}_{2}\)SO\({}_{4}\) vapor, 20 (47) ppm for SO\({}_{2}\), and 9 (24) mg/m\({}^{3}\) for H\({}_{2}\)SO\({}_{4}\) aerosol. The corresponding average uncertainties estimated using Equation 30 are 0.7 ppm for H\({}_{2}\)SO\({}_{4}\) vapor, 25 ppm for SO\({}_{2}\), and 18 mg/m\({}^{3}\) for H\({}_{2}\)SO\({}_{4}\) aerosol. In addition to retrieving profiles of SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) aerosols, we also compared the retrieved column abundance of each species with its true value in Figure 11. The relationship between the retrieved and true column-integrated quantities can suggest potential biases in the retrieval, although no conclusions are drawn here due to the limited number of simulated retrievals. The dashed line indicates the region where the retrieval matches the true value (\(y=x\)), and a solid line indicates a best fit slope to the data assuming a zero intercept. Regions of 10% and 25% difference from the linear relationship are also shown. While a slight (\(\sim\)10%) positive bias is apparent in the H\({}_{2}\)SO\({}_{4}\) aerosol result, minimal bias is observed for SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) vapor. ## 4 Discussion In Section 2, we gave an overview of methods for computing uncertainties in radio occultation profiles. We used simple radio occultation simulations and radio link characteristics expected for EnVision measurements to compute the full absorptivity covariance matrices. These absorptivity uncertainties were used for all simulated retrievals presented here, and we consider them to be sufficiently characteristic of uncertainties that will likely be encountered during actual occultations. Of course, variations in transmitter signal strength, knowledge of antenna gain/pointing, occultation geometry, and uncertainty in spacecraft trajectory will all impact the resulting uncertainties; we refer the reader to Jenkins et al. (1994) and Oschlisniok et al. (2012, 2021) for a sense of the variability of these uncertainties at X band. Future work should incorporate realistic occultation geometries for retrieval simulations.
We have also derived uncertainties in the opacity models for gases and aerosols in Venus' atmosphere based on laboratory measurements. We used the raw data tables supplied with the papers describing these laboratory measurements, which included estimates of random and systematic uncertainties. Of these neutral species, the measurements of H\({}_{2}\)SO\({}_{4}\) vapor have the greatest uncertainty due to the considerable difficulty in making accurate laboratory measurements under simulated Venus conditions (see Kolodner and Steffes (1998) and Akins and Steffes (2019, 2020) for details of those experiments). While we exclude these uncertainties from our simulated retrievals out of a desire to isolate the uncertainties most closely associated with the retrieval approach, they will need to be taken into account in the analysis of future dual X/Ka band RO measurements. Figure 9: Abundances of H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) (top) retrieved from simulated dual X/Ka band RO absorptivities (bottom) for the Set 1 model atmosphere, with uncertainties determined using Equation 30. Seed profiles were provided to the final optimization stage from the outputs of the atmospheric transport and MCMC steps (see text). Figure 10: Retrievals of Set 1-8 model atmosphere abundances of SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) aerosol with uncertainties. In Section 3, we have illustrated the prospects for the simultaneous retrieval of H\({}_{2}\)SO\({}_{4}\) vapor, H\({}_{2}\)SO\({}_{4}\) aerosol, and SO\({}_{2}\) abundance profiles in Venus' atmosphere using X and Ka band absorptivity profiles. As with earlier RO experiments, the retrieval of H\({}_{2}\)SO\({}_{4}\) vapor from such measurements is on firm ground. Simultaneous retrieval at X and Ka band should achieve accuracies similar to prior dual S/X band occultations (Jenkins et al., 1994). Retrievals of SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) aerosol are more uncertain and require careful regularization. Previous attempts to retrieve SO\({}_{2}\) abundances from Venus RO measurements have taken two approaches. The first is to retrieve both abundances simultaneously from S and X band measurements. Jenkins et al. (1994) used Magellan S and X band absorptivity profiles for orbit 3212 to solve for H\({}_{2}\)SO\({}_{4}\) vapor and SO\({}_{2}\) simultaneously, finding SO\({}_{2}\) uncertainties ranging from 50 ppm in H\({}_{2}\)SO\({}_{4}\) vapor-free regions to 200 ppm in regions where vapor was present. The perturbative approach used by Jenkins et al. (1994) to determine SO\({}_{2}\) uncertainties is similar to the approach we employed to determine our Figure 5 using the Set 2 model atmosphere, and the uncertainty estimates are consistent when converted to equivalent vertical resolutions (1 km for Jenkins et al. (1994) vs 500 m resolution in Figure 5). The second approach is to assume that the H\({}_{2}\)SO\({}_{4}\) vapor profile above the cloud base agrees well with the saturation vapor pressure, i.e. H\({}_{2}\)SO\({}_{4}\) vapor at higher altitudes is depleted efficiently via condensation. We note that a similar justification is used to ignore the effects of cloud microphysics and condensation nuclei availability in our atmospheric transport model. Following this assumption, Oschlisniok et al. (2021) determined an SO\({}_{2}\) abundance from residual X band absorptivity above 51 km, with the assumption that the SO\({}_{2}\) abundance is constant within this range.
Neither approach has thus far been used to place estimates on H\({}_{2}\)SO\({}_{4}\) aerosol abundances. Figure 11: Comparison of retrieved and true column abundances of H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\). A linear model (black line) is fit to the data assuming a zero intercept and compared to a one-to-one relationship (dashed black line). Regions of 10% (shaded orange) and 25% (shaded blue) difference are also shown. As we suggest in Figure 5, dual X/Ka band radio occultations with future missions can likely provide improved accuracy over dual S/X band occultations due to the increased opacity of both H\({}_{2}\)SO\({}_{4}\) aerosol and SO\({}_{2}\) gas. As with the S and X band measurements, this retrieval is both under-determined (since two measurements are used to determine three quantities) and ill-posed, as both absorbers have a roughly \(\nu^{2}\) dependence in opacity between X and Ka band. It is therefore necessary to condition the retrieval using prior information. In the case of both species of absorbers, we are privy to few vertically resolved measurements, specifically the Pioneer Venus LCPS measurement of cloud aerosol mass (Knollenberg and Hunten, 1980) and the Vega descent probe SO\({}_{2}\) measurements (Bertaux et al., 1996). Of these, only the measured cloud aerosol mass is consistent with attempts to model the H\({}_{2}\)SO\({}_{4}\) aerosol system, and the depletion mechanism of SO\({}_{2}\) within the clouds remains an area of active research (e.g. Rimmer et al. (2021)). The significant variations in SO\({}_{2}\) observed by the Vega landers have not been recreated in previous chemical or transport models, although ground-based observations suggest latitudinal variability that may be consistent with non-negligible sub-cloud abundance gradients (Arney et al., 2014; Marcq et al., 2021). If either the SO\({}_{2}\) or H\({}_{2}\)SO\({}_{4}\) aerosol profile can be assumed known from the results of proximal in-situ measurements, uncertainties in the joint retrieval of the other with H\({}_{2}\)SO\({}_{4}\) vapor will likely fall somewhere in between the cases illustrated in Figures 5 and 10. If the dual band measurements could sound below the cloud base, it would be possible to increase the accuracy of these retrievals by assigning the cloud-base SO\({}_{2}\) abundance. Unfortunately, the Ka band signal appears likely to become attenuation-limited near this altitude range (Akins and Steffes, 2020), and most measurements are unlikely to resolve the cloud base. The determination of cloud profiles from our 1D transport model is an extension of the information from the retrieved H\({}_{2}\)SO\({}_{4}\) vapor profile. The variability of derived cloud profiles from the simulation ensemble is representative of the uncertainty in mean cloud structure. In the prior modeling efforts which inform this study, simulations are generally adjusted to achieve agreement with some observation, whether it be the LCPS measurement of aerosol mass or RO measurements of H\({}_{2}\)SO\({}_{4}\) vapor. Our problem is the inverse, in that we are using such models to predict the abundances of cloud aerosols present in the measurement. We covered a range of possible simulation parameters in an attempt to mitigate the importance of model implementation details, such as model transport dimension or inclusion of more detailed cloud microphysics.
It may be possible for a more realistic model of cloud aerosols to be brought to bear on this problem, but such an attempt would need to be based on high accuracy observations of a kind which do not exist at present. Generally, our simulated retrievals suggest that the profile variances assigned by the MCMC seeding method are useful if the underlying shape assumptions are valid. The autocorrelation time \(\tau\) of the MCMC walkers (a metric of distribution convergence; Foreman-Mackey et al., 2013) in our simulations is relatively long due to the multi-modal distribution of the resulting shape parameter fits. Our MCMC iteration number, however, is consistent with the suggested value of 50\(\tau\), and the variances derived this way are representative of the distribution of possible profiles. Overall, the recovery of H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) profiles simultaneously from dual X/Ka band RO measurements of Venus is an exceptional challenge, and as our simulations demonstrate, useful accuracy requires the incorporation of accurate prior information. Our proposed approach appears capable of conditioning the problem appropriately based on currently available information. If our assumptions are inaccurate, however, uncertainties in the retrieval of SO\({}_{2}\) and H\({}_{2}\)SO\({}_{4}\) aerosol abundances could be significantly worse than those determined in our simulations. While the variance estimates provided using the proposed approach seem generally reliable, there are cases in which the true profile is not captured within the 1\(\sigma\) estimate. Estimates of column abundances from the few retrieved profiles are encouraging, although not conclusive, with respect to the retrieval bias. The column accuracy of SO\({}_{2}\) in particular suggests that detection of time-variable enhancement associated with volcanism is likely achievable with this approach. While this is a challenging measurement, our simulations suggest that useful information can be obtained regarding the distributions of H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) in the clouds of Venus from future X and Ka band radio occultations. ## 5 Conclusion We have considered in detail the prospects for retrieving vertical profiles of H\({}_{2}\)SO\({}_{4}\) (vapor and aerosol) and SO\({}_{2}\) abundance from dual X and Ka band radio occultation measurements of Venus which will be conducted by spacecraft missions in the near future. We first discussed the basis for measurement uncertainties that were used in this study, reviewed relevant models of atmospheric opacity derived from laboratory measurements, and derived formal uncertainty estimates for models of H\({}_{2}\)SO\({}_{4}\) and SO\({}_{2}\) opacity. We then illustrated the ill-posedness of the retrieval problem and introduced a novel approach for seeding and regularizing maximum likelihood estimations of profile abundances. For the resulting retrievals we estimate uncertainties on the order of 0.5 ppm for retrievals of H\({}_{2}\)SO\({}_{4}\) vapor, 20 ppm for retrievals of SO\({}_{2}\), and 10 mg/m\({}^{3}\) for cloud aerosol bulk density. These uncertainty estimates are determined when the underlying assumptions informing the regularization are accurate, and we additionally discussed the implications of deviations from these assumptions on the retrieved abundances. From the retrieved column abundances, we surmise that the retrieval of SO\({}_{2}\) is more accurate than that of the cloud aerosol mass.
Ground-truth estimates from in situ measurements and more advanced atmospheric models can be used to further improve these results. We conclude that dual X/Ka band RO profiling of Venus' atmospheric sulfur species can be accomplished with sufficient accuracy (\(<\) 50% uncertainty in abundant regions) to provide useful insights into chemical and dynamical processes in the cloud-level atmosphere. This work was funded by the JPL Research and Technology Development Fund. We would like to thank Janusz Oschlisniok and the VeRa team for providing processed X band radio occultation data used in the model atmospheres. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.
2301.09955
Linearization and Hölder continuity of generalized ODEs with application to measure differential equations
In this paper, we study the topological conjugacy between the linear generalized ODEs (for short, GODEs) \[ \frac{dx}{d\tau}=D[A(t)x] \] and their nonlinear perturbation \[ \frac{dx}{d\tau}=D[A(t)x+F(x,t)] \] on Banach space $\mathscr{X}$, where $A:\mathbb{R}\to\mathscr{B}(\mathscr{X})$ is a bounded linear operator on $\mathscr{X}$ and $F:\mathscr{X}\times \mathbb{R}\to \mathscr{X}$ is Kurzweil integrable. GODEs are completely different from the classical ODEs. Note that the GODEs in Banach space are defined via their solutions; $\frac{dx}{d\tau}$ is only a notation and does not indicate that the solution has a derivative. The solutions of GODEs can be discontinuous, and the number of discontinuity points can even be countably infinite, so that many classical theorems and tools are no longer applicable to GODEs. For instance, the chain rule and the product rule of derivatives, the differential mean value theorem, and the integral mean value theorem are not valid for GODEs. In this paper, we study the linearization and its H\"{o}lder continuity for GODEs. Firstly, we construct the formula for bounded solutions of the nonlinear GODEs in the Kurzweil integral sense. Afterwards, we establish a Hartman-Grobman type linearization theorem which is a bridge connecting the linear GODEs with their nonlinear perturbations. Further, we show that the conjugacies are both H\"{o}lder continuous by using the Gronwall-type inequality (in the Perron-Stieltjes integral sense) and other nontrivial estimate techniques. Finally, we apply our results to measure differential equations and impulsive differential equations.
Weijie Lu, Yonghui Xia
2023-01-24T12:33:34Z
http://arxiv.org/abs/2301.09955v3
Linearization and Hölder continuity of generalized ODEs with application to measure differential equations ###### Abstract The generalized ordinary differential equations (for short, GODEs) in Banach space are defined via their solutions; they include measure differential equations, impulsive differential equations, functional differential equations and the classical ordinary differential equations as special cases. It should be mentioned that even the symbol \(\frac{dx}{d\tau}\) does not indicate that the solution has a derivative. In this paper, we study the linearization and its Hölder continuity for a class of GODEs. Firstly, we construct the formulas for bounded solutions of the nonlinear GODEs in the Kurzweil integral sense under the assumption that the linear GODEs admit an exponential dichotomy. Afterwards, we establish a topological conjugacy between the linear and nonlinear GODEs. Further, we show that the conjugacies are both Hölder continuous by using the Gronwall-type inequality (in the Perron-Stieltjes integral sense) and other nontrivial estimate techniques. Finally, we apply our results to measure differential equations and impulsive differential equations. **Keywords**: generalized ordinary differential equations; Perron integrals; measure differential equations; topological equivalence; impulsive differential equations **MSC2020**: 26A39; 34A36; 45N05; 37C15 ## 1 Introduction ### History of the generalized ordinary differential equations The generalized ordinary differential equations (for short, GODEs) in Banach space have received widespread attention recently; they have the form \[\frac{dx}{d\tau}=DK(x,t), \tag{1.1}\] where \(K:\mathscr{X}\times\mathbb{R}\to\mathscr{X}\) is a given map on a Banach space \(\mathscr{X}\). In addition, as mentioned by Schwabik ([1], Remark 3.2), the letter \(D\) in (1.1) means that (1.1) is a generalized differential equation, this concept being defined via its solution, and even the symbol \(\frac{dx}{d\tau}\) does not indicate that the solution has a derivative. By a simple example (see [1], p. 100), he pointed out that a GODE is a formal equation-like object for which one has defined its solutions. The GODEs include various other types of differential equations as special cases, such as the classical ODEs, measure differential equations (MDEs), impulsive differential equations (IDEs) and functional differential equations (FDEs), as well as dynamic equations on time scales. In particular, a notable application of GODEs is to MDEs, which have been well studied (see, e.g., Federson and Mesquita [2], Federson, Mesquita and Slavík [3], Piccoli [4], Piccoli and Rossi [5], Meng [6], Meng and Zhang [7], Zhang [8], Wen [9], Wen and Zhang [10]); another application of GODEs is to IDEs (see Federson and Schwabik [11], Afonso, Bonotto, Federson and Schwabik [12]). The concept of GODEs was initiated by Kurzweil [13, 14], who introduced it in 1957 to generalize the classical results on the continuous dependence of the solutions of ODEs with respect to parameters. Later, the fundamental theory in the framework of GODEs was established; one can consult Schwabik [1, 15, 16]. More recently, the stability theory and qualitative properties of GODEs have been developed by many scholars. For example, the variation of constants formula for GODEs was proposed by Collegari et al. [17], results on the boundedness of solutions of GODEs were derived by Afonso et al. [18] and Federson et al.
[19], the concepts of exponential dichotomy and its robustness were given by Bonotto et al. [20, 21], results on topological properties of flows for nonnegative time in the framework of GODEs were also presented by Bonotto et al. [22], the existence of periodic solutions for autonomous GODEs can be seen in Federson et al. [23], the converse Lyapunov theorem for GODEs was described by Andrade da Silva et al. [24], and boundary value problems for GODEs were studied by Bonotto et al. [25]. Other concepts on GODEs were considered in the monograph of Bonotto et al. [26]. ### History of linearization Linearization, an important topic in dynamical systems, describes nonlinear perturbed systems through the dynamical behavior of their linear parts. A fundamental contribution to this subject for autonomous systems is the Hartman-Grobman theorem (see [27, 28]), which states that a \(C^{1}\) hyperbolic diffeomorphism \(G:\mathbb{R}^{d}\to\mathbb{R}^{d}\) can be \(C^{0}\) linearized near its fixed point. Later, Palis [29] and Pugh [30] generalized the local and global Hartman-Grobman theorem to Banach space, respectively. In addition, infinite-dimensional versions of this result were presented by Lu [31] (scalar reaction-diffusion equation), Bates and Lu [32] (Cahn-Hilliard equation and phase field equations), Hein and Prüss [33] (semilinear hyperbolic evolution equations), and Farkas [34] (retarded functional equations). Beyond the \(C^{0}\) linearization of differential equations, Sternberg [35, 36] initiated the investigation of \(C^{r}\) linearization for \(C^{k}\) diffeomorphisms. Sell [37] extended the theorem of Sternberg. Many mathematicians paid particular attention to \(C^{1}\) linearization. Belitskii [38], ElBialy [39], and Rodrigues and Sola-Morales [40] independently studied the \(C^{1}\) linearization of hyperbolic diffeomorphisms on Banach space. Recently, Zhang et al. [41, 42, 43, 44] showed the \(C^{1,\beta}\)-linearization for \(C^{1,\alpha}\) or \(C^{1,1}\) hyperbolic diffeomorphisms (where \(0<\beta<\alpha\leq 1\)), and they proved that the regularity of the transformations is sharp. More recently, many scholars have paid attention to the linearization of nonautonomous systems, which originated with Palmer [45]. Palmer established a global topological conjugacy between a nonautonomous nonlinear system and its linear part by constructing conjugating maps; this approach relies on the exponential dichotomy theory (see Coppel [46]) for nonautonomous linear systems. After that, by weakening the assumption of exponential dichotomy, Jiang [47] obtained a version of the Hartman-Grobman theorem for ordinary dichotomies. Backes and Dragicevic [48] proved a linearization result for generalized exponential dichotomies, which extended the result of Bernardes and Messaoudi [49] from autonomous to nonautonomous systems. Barreira and Valls [50, 51] presented linearization results under nonuniform exponential dichotomy. Reinfelds and Steinberga [52] first studied the \(C^{0}\) linearization for non-hyperbolic systems, and Backes et al. [53] extended it to non-hyperbolic coupled systems. On the other hand, by relaxing the boundedness condition on the nonlinear perturbations, Xia et al. [54] reported a Hartman-Grobman theorem under locally integrable conditions. Castaneda and Robledo [55] and Huerta [56] considered the linearization of unbounded nonlinear systems under nonuniform exponential contraction.
Backes and Dragicevic [57] gave a version of multiscale linearization. Qadir [58] presented a geometric linearization of second order semilinear ordinary differential equations. Furthermore, \(C^{0}\)-linearization theory was investigated for various dynamical systems, for instance, functional differential equations (see Farkas [34]), classical impulsive differential equations (see Reinfelds and Sermone [59], Reinfelds [60, 61], Sermone [62, 63], Fenner and Pinto [64] and Xia et al. [65]), dynamic equations on time scales (see Pötzsche [66] and Xia et al. [67]), and differential equations with piecewise constant argument (see Papaschinopoulos [68], Pinto and Robledo [69]). Many scholars have also focused on smooth linearization for nonautonomous systems. Castaneda and Robledo [70] first formulated sufficient conditions for smooth linearization of nonautonomous differential equations whose linear part admits a uniform exponential contraction. Cuong et al. [71] proved a Sternberg-type theorem for nonautonomous systems when the linear part has a uniform exponential dichotomy. Dragicevic et al. [72, 73] showed discrete and continuous smooth linearization results with nonuniform exponential dichotomy, respectively. ### Motivation and contributions of the present paper The theory of linear GODEs has been established; see the monographs of Schwabik [1] and Bonotto et al. [26]. However, the theory of nonlinear GODEs is not yet mature. The Hartman-Grobman theorem is exactly the bridge between the two. Up to now, there have been no papers considering the topological conjugacy between the linear systems and their nonlinear perturbations for GODEs. In this paper, we establish the first Hartman-Grobman-type theorem in the framework of GODEs. Consider the linear GODEs \[\frac{dx}{d\tau}=D[A(t)x] \tag{1.2}\] and their nonlinear perturbation \[\frac{dx}{d\tau}=D[A(t)x+F(x,t)] \tag{1.3}\] on Banach space \(\mathscr{X}\), where \(A:\mathbb{R}\to\mathscr{B}(\mathscr{X})\) is a bounded linear operator on \(\mathscr{X}\) and \(F:\mathscr{X}\times\mathbb{R}\to\mathscr{X}\) is Kurzweil integrable. Bonotto, Federson and Santo [20] presented the concept of exponential dichotomy for Eq. (1.2). They further studied the existence and uniqueness of bounded solutions of the linear inhomogeneous GODE \[\frac{dx}{d\tau}=D[A(t)x+g(t)],\] where \(g:\mathbb{R}\to\mathscr{X}\) is a regulated function. Firstly, we construct the formulas for bounded solutions of Eq. (1.3) provided that the linear GODE (1.2) admits an exponential dichotomy. We achieve this using the technique of Picard's stepwise approximation, and obtain the following explicit expression for the bounded solution of a class of nonlinear GODEs: \[\begin{split} x(t)=&\int_{0}^{t}DF(x(\tau),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(\tau),s)\\ &+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(\tau),s),\end{split} \tag{1.4}\] where \(\mathscr{V}(t):=V(t,0)\) is the fundamental operator of Eq. (1.2), \(P\) is a projection and \(I\) is the identity operator. Note that the term \(\int_{0}^{t}DF(x(\tau),s)\) is a form specific to the nonlinear GODEs, since the Kurzweil integral is involved. When the Kurzweil integral reduces to the Riemann integral or the Lebesgue integral, the expression (1.4) degenerates to the case of the classical ODEs; see Coppel [46]. Secondly, we show that Eq. (1.3) is topologically conjugated to Eq. (1.2).
More precisely, by constructing maps that are close to the identity and using the relationship between the integral equations, we verify that these maps are homeomorphisms and establish a bijection between the solutions of Eqs. (1.2) and (1.3). Furthermore, we study the regularity of these maps (or conjugacies): we give a result on Hölder conjugacies with the help of a Gronwall-type inequality (in the Perron-Stieltjes integral sense) and other nontrivial estimation techniques. As applications, we derive versions of the Hartman-Grobman theorem for MDEs and IDEs.

### Outline

This paper is organized as follows. In Section 2, we give the preliminary results for GODEs. Subsections 2.1 and 2.2 introduce regulated functions, bounded variation functions and the Kurzweil integral. Subsection 2.3 presents the fundamental theory for GODEs and introduces a concept of strong exponential dichotomy. In Section 3, we analyze the bounded solution of Eq. (1.3). Section 4 presents results which characterize the topological conjugacy between Eq. (1.2) and Eq. (1.3). In Section 5, we derive the Hölder continuity of the conjugacies: we present some useful inequalities and then give rigorous proofs in Subsections 5.2 and 5.3, respectively. Finally, we apply the results on GODEs to MDEs and IDEs.

## 2 Preliminaries for the GODEs

### Regulated and bounded variation functions

Let \(\mathscr{X}\) be a Banach space with the norm \(\|\cdot\|\). A function \(g:[a_{1},a_{2}]\to\mathscr{X}\) is said to be _regulated_ if the following limits exist:

\[g(t^{+}):=\lim_{\tau\to t^{+}}g(\tau),\quad t\in[a_{1},a_{2})\quad\text{and}\quad g(t^{-}):=\lim_{\tau\to t^{-}}g(\tau),\quad t\in(a_{1},a_{2}].\]

Let \(G([a_{1},a_{2}],\mathscr{X})\) denote the space of all regulated functions \(g:[a_{1},a_{2}]\to\mathscr{X}\). Equipped with the supremum norm \(\|g\|_{\infty}:=\sup_{t\in[a_{1},a_{2}]}\|g(t)\|\), it is a Banach space (see [74], Theorem 3.6). A finite point set \(D=\{t_{0},t_{1},\cdots,t_{j}\}\subset[a_{1},a_{2}]\) such that \(a_{1}=t_{0}\leq t_{1}\leq\cdots\leq t_{j}=a_{2}\) is called a _division_ of \([a_{1},a_{2}]\). We write \(D=\{t_{0},\cdots,t_{|D|}\}\) if \(|D|\) is the number of subintervals \([t_{j-1},t_{j}]\) of a division \(D\) of \([a_{1},a_{2}]\), and we denote by \(\mathscr{D}[a_{1},a_{2}]\) the set of all divisions of \([a_{1},a_{2}]\). The _variation_ of a map \(g:[a_{1},a_{2}]\to\mathscr{X}\) is defined by

\[\text{var}_{a_{1}}^{a_{2}}g:=\sup_{D\in\mathscr{D}[a_{1},a_{2}]}\sum_{j=1}^{|D|}\|g(t_{j})-g(t_{j-1})\|.\]

If \(\text{var}_{a_{1}}^{a_{2}}g<\infty\), then \(g\) is a _bounded variation_ function on \([a_{1},a_{2}]\). Let \(BV([a_{1},a_{2}],\mathscr{X})\) be the set of all bounded variation functions \(g:[a_{1},a_{2}]\to\mathscr{X}\) equipped with the variation norm \(\|g\|_{BV}:=\|g(a_{1})\|+\text{var}_{a_{1}}^{a_{2}}g\). Then \((BV([a_{1},a_{2}],\mathscr{X}),\|\cdot\|_{BV})\) is a Banach space, and \(BV([a_{1},a_{2}],\mathscr{X})\subset G([a_{1},a_{2}],\mathscr{X})\), see [74].

### Kurzweil integral

We recall the concept of the Kurzweil integral, as defined in [13] and [1]. To begin, we recall some basic notions. A _tagged division_ of \([a_{1},a_{2}]\subset\mathbb{R}\) is a finite collection of point-interval pairs \(D=\{(\tau_{j},[s_{j-1},s_{j}]):j=1,2,\cdots,|D|\}\), where \(a_{1}=s_{0}\leq s_{1}\leq\cdots\leq s_{|D|}=a_{2}\) is a division of \([a_{1},a_{2}]\) and the tag \(\tau_{j}\in[s_{j-1},s_{j}]\).
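To make the sums behind this construction tangible, here is a minimal Python sketch (our own illustration; it uses a uniform division with midpoint tags rather than the gauge-fine divisions introduced next, and all function names are ours) of the Riemann-Stieltjes-type sum \(K(V,D)\) appearing in Definition 2.1 below, in the special case \(V(\tau,t)=f(\tau)h(t)\). A jump of the integrator \(h\) is picked up by exactly one subinterval, which is why such integrals handle discontinuous and impulsive behavior naturally.

```python
import numpy as np

def stieltjes_sum(f, h, tags, points):
    """Sum_j f(tau_j) * (h(s_j) - h(s_{j-1})) over a tagged division;
    this is K(V, D) of Definition 2.1 for V(tau, t) = f(tau) * h(t)."""
    return sum(f(tau) * (h(b) - h(a))
               for tau, a, b in zip(tags, points[:-1], points[1:]))

# Left-continuous integrator with a unit jump at t = 1/2.
h = lambda t: t + (1.0 if t > 0.5 else 0.0)
f = np.cos

s = np.linspace(0.0, 1.0, 2001)      # division points of [0, 1]
tags = 0.5 * (s[:-1] + s[1:])        # midpoint tags (a gauge-fine division
                                     # would force the tag at the jump point)
print(stieltjes_sum(f, h, tags, s))  # approx. sin(1) + cos(1/2) = 1.7190...
```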
A _gauge_ on \([a_{1},a_{2}]\) is an arbitrary positive function \(\varepsilon:[a_{1},a_{2}]\to(0,\infty)\). Given a gauge \(\varepsilon\) on \([a_{1},a_{2}]\), a tagged division \(D=\{(\tau_{j},[s_{j-1},s_{j}]),j=1,2,\cdots,|D|\}\) of \([a_{1},a_{2}]\) is \(\varepsilon\)-fine if for every \(j\),

\[[s_{j-1},s_{j}]\subset(\tau_{j}-\varepsilon(\tau_{j}),\tau_{j}+\varepsilon(\tau_{j})).\]

**Definition 2.1**.: _(Kurzweil integrability) A function \(V:[a_{1},a_{2}]\times[a_{1},a_{2}]\to\mathscr{X}\) is said to be Kurzweil integrable on \([a_{1},a_{2}]\) if there exists an element \(J\in\mathscr{X}\) such that for any \(\epsilon>0\) there exists a gauge \(\varepsilon\) on \([a_{1},a_{2}]\) such that for each \(\varepsilon\)-fine tagged division \(D=\{(\tau_{j},[s_{j-1},s_{j}]),j=1,2,\cdots,|D|\}\) of \([a_{1},a_{2}]\), we have_

\[\|K(V,D)-J\|<\epsilon,\]

_where \(K(V,D)=\sum_{j=1}^{|D|}[V(\tau_{j},s_{j})-V(\tau_{j},s_{j-1})]\). Such an element \(J\) is unique, and we write \(J=\int_{a_{1}}^{a_{2}}DV(\tau,t)\) in this situation._

### The fundamental theory for the GODEs

We review the fundamental theory of GODEs. Let \(\mathscr{B}(\mathscr{X})\) be the set of all bounded linear operators on \(\mathscr{X}\) with the operator norm \(\|\cdot\|\). Given \(\mathbb{I}\subseteq\mathbb{R}\), consider the linear GODEs

\[\frac{dx}{d\tau}=D[A(t)x], \tag{2.1}\]

where \(A:\mathbb{I}\to\mathscr{B}(\mathscr{X})\). As pointed out in [1], a function \(x:[a_{1},a_{2}]\to\mathscr{X}\) is said to be a solution of (2.1) iff

\[x(a_{2})=x(a_{1})+\int_{a_{1}}^{a_{2}}D[A(s)x(\tau)]. \tag{2.2}\]

The integral on the right-hand side of (2.2) is a Kurzweil integral, which is also denoted by the Perron-Stieltjes integral \(\int_{a_{1}}^{a_{2}}d[A(s)]x(s)\) (see [1, 15]), since we represent \(\int_{a_{1}}^{a_{2}}D[A(t)x(\tau)]\) as a Riemann-Stieltjes sum of the form \(\sum_{j}[A(t_{j})-A(t_{j-1})]x(\tau_{j})\). Let \(I\) be the identity operator, \(A(t^{+})=\lim\limits_{s\to t^{+}}A(s)\) and \(A(t^{-})=\lim\limits_{s\to t^{-}}A(s)\). We suppose the following conditions throughout this paper:

(H1) \(A\in BV([a_{1},a_{2}],\mathscr{B}(\mathscr{X}))\) for every \([a_{1},a_{2}]\subset\mathbb{I}\);

(H2) \((I+[A(t^{+})-A(t)])^{-1}\in\mathscr{B}(\mathscr{X})\) for \(t\in\mathbb{I}\backslash\{\sup\mathbb{I}\}\) and \((I-[A(t)-A(t^{-})])^{-1}\in\mathscr{B}(\mathscr{X})\) for \(t\in\mathbb{I}\backslash\{\inf\mathbb{I}\}\).

The above assumptions guarantee the existence and uniqueness of the solution of (2.1).

**Lemma 2.2**.: _([16], Theorem 2.10) Assume that (H1) and (H2) hold and let \((t_{0},x_{0})\in\mathbb{I}\times\mathscr{X}\). Then the linear GODE_

\[\begin{cases}\frac{dx}{d\tau}=D[A(t)x(\tau)],\\ x(t_{0})=x_{0},\end{cases} \tag{2.3}\]

_admits a unique solution on \(\mathbb{I}\)._

**Lemma 2.3**.: _([17], Theorem 4.3) An operator \(V:\mathbb{I}\times\mathbb{I}\to\mathscr{B}(\mathscr{X})\) is said to be a fundamental operator of Eq. (2.1) if_

\[V(t,s)=I+\int_{s}^{t}d[A(r)]V(r,s),\quad t,s\in\mathbb{I}, \tag{2.4}\]

_and, for any fixed \(s\in\mathbb{I}\), \(V(\cdot,s)\) is of locally bounded variation on \(\mathbb{I}\).
Furthermore, the unique solution of (2.3) is given by \(x(t)=V(t,t_{0})x_{0}\)._

**Lemma 2.4**.: _([17], Theorem 4.4) The operator \(V:\mathbb{I}\times\mathbb{I}\to\mathscr{B}(\mathscr{X})\) has the following properties: (1) \(V(t,t)=I\); (2) for any \([a_{1},a_{2}]\subset\mathbb{I}\), there exists a positive constant \(N>0\) satisfying_

\[\|V(t,s)\|\leq N,\quad t,s\in[a_{1},a_{2}],\quad\mathrm{var}_{a_{1}}^{a_{2}}V(t,\cdot)\leq N,\quad t\in[a_{1},a_{2}],\]
\[\mathrm{var}_{a_{1}}^{a_{2}}V(\cdot,s)\leq N,\quad s\in[a_{1},a_{2}];\]

_(3) \(V(t,s)=V(t,r)V(r,s)\) for any \(t,r,s\in\mathbb{I}\); (4) \(V^{-1}(t,s)\in\mathscr{B}(\mathscr{X})\) and \(V^{-1}(t,s)=V(s,t)\)._

**Definition 2.5**.: _(exponential dichotomy [20]) We say that the linear GODE (2.1) admits an exponential dichotomy on \(\mathbb{I}\) if there exist a projection \(P:\mathscr{X}\to\mathscr{X}\) and constants \(K,\alpha>0\) satisfying_

\[\begin{cases}\|\mathscr{V}(t)P\mathscr{V}^{-1}(s)\|\leq Ke^{-\alpha(t-s)}\quad\mathrm{for}\;t\geq s,\\ \|\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(s)\|\leq Ke^{\alpha(t-s)}\quad\mathrm{for}\;t<s,\end{cases} \tag{2.5}\]

_where \(\mathscr{V}(t)=V(t,0)\) and \(\mathscr{V}^{-1}(t)=V(0,t)\)._

**Definition 2.6**.: _(strong exponential dichotomy) We say that the linear GODE (2.1) admits a strong exponential dichotomy on \(\mathbb{I}\) if (2.5) holds and there exists a constant \(\widetilde{\alpha}\geq\alpha\) such that_

\[\|\mathscr{V}(t)\mathscr{V}^{-1}(s)\|\leq Ke^{\widetilde{\alpha}(t-s)},\quad\mathrm{for}\;t,s\in\mathbb{I}. \tag{2.6}\]

Now we give results on the perturbation theory of GODEs.

**Lemma 2.7**.: _([20], Proposition 4.5) Suppose that the linear homogeneous GODE (2.1) satisfies (H1)-(H2) and admits an exponential dichotomy. If \(g\in G(\mathbb{R},\mathscr{X})\), the Perron-Stieltjes integrals_

\[\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)](g(\sigma)-g(0)) \tag{2.7}\]

_and_

\[\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)](g(\sigma)-g(0)) \tag{2.8}\]

_exist for all \(t\in\mathbb{R}\), and the maps (2.7) and (2.8) are bounded, then the nonhomogeneous GODE_

\[\frac{dx}{d\tau}=D[A(t)x+g(t)] \tag{2.9}\]

_has a unique bounded solution._

We observe, as in Remark 4.11 of [20], that if \(f\) is bounded with \(\|f(t)\|\leq M\) and

\[V_{A}:=\sup\{\mathrm{var}_{a}^{b}A:a,b\in\mathbb{R},a<b\}<\infty,\]

then the integrals (2.7) and (2.8) exist. Furthermore, Bonotto et al. [20] obtained that

\[\sup_{t\in\mathbb{R}}\left\|\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)](f(\sigma)-f(0))\right\|\leq 2MK\|P\|C^{3}e^{3CV_{A}}V_{A}^{2} \tag{2.10}\]

and

\[\sup_{t\in\mathbb{R}}\left\|\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)](f(\sigma)-f(0))\right\|\leq 2MK(1+\|P\|)C^{3}e^{3CV_{A}}V_{A}^{2}. \tag{2.11}\]

**Lemma 2.8**.: _([17], Theorem 4.10) Assume that (H1) and (H2) hold. If \(F:\mathscr{X}\times[a_{1},a_{2}]\to\mathscr{X}\) is Kurzweil integrable, \([\tilde{a_{1}},\tilde{a_{2}}]\subseteq[a_{1},a_{2}]\) and \(t_{0}\in[\tilde{a_{1}},\tilde{a_{2}}]\), then the GODE_

\[\begin{cases}\frac{dx}{d\tau}=D[A(t)x+F(x,t)],\\ x(t_{0})=x_{0},\end{cases}\]

_is equivalent to the integral equation_

\[x(t)=V(t,t_{0})x_{0}+\int_{t_{0}}^{t}DF(x(\tau),\gamma)-\int_{t_{0}}^{t}d_{\sigma}[V(t,\sigma)]\left(\int_{t_{0}}^{\sigma}DF(x(\tau),\gamma)\right).\]

We then define a special class of functions \(F:\mathscr{X}\times\mathbb{I}\to\mathscr{X}\). For convenience, we write \(\Omega:=\mathscr{X}\times\mathbb{I}\).
**Definition 2.9**.: _[20] Let \(h:\mathbb{I}\to\mathbb{R}\) be a nondecreasing function. We say that a function \(F:\Omega\to\mathscr{X}\) belongs to the class \(\mathscr{F}(\Omega,h)\) if_

\[\|F(x,t_{2})-F(x,t_{1})\|\leq|h(t_{2})-h(t_{1})| \tag{2.12}\]

_for any \((x,t_{2})\) and \((x,t_{1})\in\Omega\), and_

\[\|F(x,t_{2})-F(x,t_{1})-F(z,t_{2})+F(z,t_{1})\|\leq\|x-z\||h(t_{2})-h(t_{1})| \tag{2.13}\]

_for any \((x,t_{2})\), \((x,t_{1})\), \((z,t_{2})\) and \((z,t_{1})\in\Omega\)._

## 3 The formulas for bounded solutions of the nonlinear GODEs

In the present paper, we consider the following nonlinear GODEs

\[\frac{dx}{d\tau}=D[A(t)x+F(x,t)], \tag{3.1}\]

where \(A:\mathbb{R}\to\mathscr{B}(\mathscr{X})\) is a map and \(F:\mathscr{X}\times\mathbb{R}\to\mathscr{X}\) is Kurzweil integrable. Furthermore, we make the following assumptions on \(A\) and \(F\):

(A1) Eq. (2.1) admits an exponential dichotomy;

(A2) there exists a positive constant \(C>0\) such that \(\|[I-(A(t)-A(t^{-}))]^{-1}\|\leq C\), \(\|[I-(A(t^{+})-A(t))]^{-1}\|\leq C\) and

\[V_{A}:=\sup\{\mathrm{var}_{a}^{b}A:a,b\in\mathbb{R},a<b\}<\infty;\]

(A3) the function \(F\in\mathscr{F}(\Omega,h)\), where \(h:\mathbb{R}\to\mathbb{R}\) is a nondecreasing function such that

\[V_{h}:=\sup\{\mathrm{var}_{a}^{b}h:a,b\in\mathbb{R},a<b\}<\infty.\]

Then we state the main result of this section.

**Theorem 3.1**.: _If conditions (A1)-(A3) hold, then the nonlinear GODEs (3.1) have a unique bounded solution, which is given by_

\[x(t)= \int_{0}^{t}DF(x(\tau),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(\tau),s)\]
\[+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(\tau),s).\]

**Remark 3.2**.: _We note that, in contrast to the expression for the bounded solution of classical ODEs, Theorem 3.1 presents the formula for the bounded solution of nonlinear GODEs in the sense of the Kurzweil integral. If the nonlinear equation (3.1) is in a Riemann-integrable or Lebesgue-integrable setting, then the formula for the bounded solution is_

\[x(t)=\int_{-\infty}^{t}\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)F(x(\sigma),\sigma)d\sigma-\int_{t}^{\infty}\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)F(x(\sigma),\sigma)d\sigma.\]

_Obviously, this coincides with the classical ODE result._

**Remark 3.3**.: _As pointed out by Schwabik [1], a GODE often does not involve differentiation. Therefore, it is indispensable to deal with the relationship between various integral equations throughout the proof._

Before giving the proof, we present a proposition that plays an important role in Theorem 3.1 and in the subsequent linearization results.

**Proposition 3.4**.: _Suppose that the linear homogeneous GODEs (2.1) have an exponential dichotomy. Then the GODEs (2.1) have no nontrivial bounded solutions._

Proof.: Differently from Bonotto et al. [20] (see Proposition 4.3), we prove this by contradiction. (i) If \(x(t)=0\), the result is obvious. (ii) Suppose now that \(x(t)\) is a bounded solution with \(x(t)\neq 0\). Let \(\zeta=x(0)\). Notice that

\[x(t)=\mathscr{V}(t)P\zeta+\mathscr{V}(t)(I-P)\zeta.\]

Consider the case \(t\leq 0\). From the first inequality of (2.5), we have

\[\|P\zeta\|= \|\mathscr{V}(0)P\zeta\|=\|\mathscr{V}(0)P\mathscr{V}^{-1}(t)\mathscr{V}(t)P\zeta\|\]
\[\leq \|\mathscr{V}(0)P\mathscr{V}^{-1}(t)\|\|\mathscr{V}(t)P\zeta\|\leq Ke^{\alpha t}\|\mathscr{V}(t)P\zeta\|,\]

thus,

\[\|\mathscr{V}(t)P\zeta\|\geq K^{-1}e^{-\alpha t}\|P\zeta\|. \tag{3.2}\]

On the other hand, we get (also using (2.5))

\[\begin{split}\|\mathscr{V}(t)(I-P)\zeta\|=&\|\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(0)\mathscr{V}(0)(I-P)\zeta\|\\ \leq&\|\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(0)\|\|\mathscr{V}(0)(I-P)\zeta\|\leq K\|(I-P)\zeta\|e^{\alpha t}.\end{split} \tag{3.3}\]

Combining (3.2) and (3.3), we have

\[\begin{split}\|x(t)\|=&\|\mathscr{V}(t)P\zeta+\mathscr{V}(t)(I-P)\zeta\|\\ \geq&\|\mathscr{V}(t)P\zeta\|-\|\mathscr{V}(t)(I-P)\zeta\|\\ \geq& K^{-1}e^{-\alpha t}\|P\zeta\|-Ke^{\alpha t}\|(I-P)\zeta\|,\end{split}\]

which implies, whenever \(P\zeta\neq 0\), that

\[\lim_{t\to-\infty}\|x(t)\|=\infty,\]

so \(x(t)\) would be an unbounded solution, contradicting the boundedness of \(x(t)\). Hence \(P\zeta=0\). A similar argument for \(t\geq 0\) yields \((I-P)\zeta=0\). Therefore \(\zeta=0\) and, by the uniqueness of solutions (Lemma 2.2), \(x(t)=0\), which contradicts the assumption \(x(t)\neq 0\).

Proof of Theorem 3.1.: **Step 1.** We claim that there exists a bounded solution of Eq. (3.1). Indeed, let \(x_{0}(t):=0\) and

\[\begin{split} x_{1}(t):=&\int_{0}^{t}DF(x_{0}(\tau),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x_{0}(\tau),s)\\ &+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x_{0}(\tau),s).\end{split}\]

By the definition of the Kurzweil integral, for each \(\epsilon>0\) there exists a gauge \(\varepsilon\) on \([0,t]\) such that for every \(\varepsilon\)-fine tagged division \(D=\{(\tau_{j},[s_{j-1},s_{j}]),j=1,2,\cdots,|D|\}\) of \([0,t]\), we have

\[\left\|\int_{0}^{t}DF(x_{0}(\tau),s)\right\|=\left\|\sum_{j=1}^{|D|}\left[F(x(\tau_{j}),s_{j})-F(x(\tau_{j}),s_{j-1})\right]\right\|.\]

It follows from condition (A3) that

\[\left\|\sum_{j=1}^{|D|}\left[F(x(\tau_{j}),s_{j})-F(x(\tau_{j}),s_{j-1})\right]\right\|\leq\left|\sum_{j=1}^{|D|}h(s_{j})-h(s_{j-1})\right|\leq|h(t)-h(0)|\leq 2V_{h}. \tag{3.4}\]

Then \(x_{1}(t)\) is well defined, since

\[\begin{split}\|x_{1}(t)\|\leq&|h(t)-h(0)|+\int_{-\infty}^{t}\|d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\||h(\sigma)-h(0)|\\ &+\int_{t}^{\infty}\|d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\||h(\sigma)-h(0)|,\end{split}\]

which together with conditions (A1)-(A2) and (2.10), (2.11) yields that

\[\|x_{1}(t)\|\leq 2V_{h}+2K\|P\|C^{3}e^{3CV_{A}}V_{A}^{2}V_{h}+2K(1+\|P\|)C^{3}e^{3CV_{A}}V_{A}^{2}V_{h}.\]

Taking \(t=s\) in (2.5), we have \(\|P\|\leq K\) and thus \(\|x_{1}(t)\|<\infty\); namely, \(x_{1}(t)\) is bounded and well defined. If, for some fixed \(m\in\mathbb{N}\), \(x_{m}(t)\) is well defined and bounded, then

\[\begin{split} x_{m+1}(t):=&\int_{0}^{t}DF(x_{m}(\tau),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x_{m}(\tau),s)\\ &+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x_{m}(\tau),s)\end{split}\]

is also well defined and bounded. By the induction principle, the function sequence \(\{x_{m}(t)\}_{m=0}^{\infty}\) is bounded.
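Before turning to the contraction estimate, the scheme above can be visualized numerically. The following Python sketch (our own illustration, not part of the proof) runs the Picard iteration in the classical Riemann-integrable special case of Remark 3.2, for a scalar equation \(x^{\prime}=-\alpha x+f(x,t)\) with \(\mathscr{V}(t)=e^{-\alpha t}\) and \(P=I\), where the iteration reduces to \(x_{m+1}(t)=\int_{-\infty}^{t}e^{-\alpha(t-\sigma)}f(x_{m}(\sigma),\sigma)\,d\sigma\); the iterates stay bounded and converge, as Theorem 3.1 predicts.

```python
import numpy as np

alpha = 1.0
f = lambda x, t: 0.2 * np.sin(x) + 0.3 * np.cos(t)  # bounded, small Lipschitz constant

t = np.linspace(-30.0, 30.0, 6001)   # truncation of (-infty, t]; edge effects near t = -30
dt = t[1] - t[0]
decay = np.exp(-alpha * dt)

x = np.zeros_like(t)                 # x_0 = 0
for m in range(100):                 # Picard iterates x_{m+1} = T x_m
    g = f(x, t)
    x_new = np.zeros_like(t)
    acc = 0.0                        # running value of int e^{-alpha(t_i - s)} g(s) ds
    for i in range(1, t.size):
        acc = acc * decay + g[i] * dt
        x_new[i] = acc
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

print(f"converged after {m} iterations, sup|x| = {np.abs(x).max():.4f}")
```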
Moreover,

\[\begin{split}\|x_{m+1}(t)-x_{m}(t)\|\leq&\int_{0}^{t}\|DF(x_{m}(\tau),s)-DF(x_{m-1}(\tau),s)\|\\ &+\int_{-\infty}^{t}\|d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|DF(x_{m}(\tau),s)-DF(x_{m-1}(\tau),s)\|\\ &+\int_{t}^{\infty}\|d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|DF(x_{m}(\tau),s)-DF(x_{m-1}(\tau),s)\|.\end{split}\]

From the definition of the Kurzweil integral and condition (A3), it follows that

\[\begin{split}&\left\|\int_{0}^{t}DF(x(\tau),s)-DF(y(\tau),s)\right\|\\ =&\left\|\sum_{j=1}^{|D|}\left[F(x(\tau_{j}),s_{j})-F(x(\tau_{j}),s_{j-1})\right]-\left[F(y(\tau_{j}),s_{j})-F(y(\tau_{j}),s_{j-1})\right]\right\|\\ \leq&\sum_{j=1}^{|D|}\|x(\tau_{j})-y(\tau_{j})\|\cdot|h(s_{j})-h(s_{j-1})|\\ =&\int_{0}^{t}\|x(\tau)-y(\tau)\|dh(s).\end{split} \tag{3.5}\]

By using condition (A3), we have

\[\begin{split}\|x_{m+1}(t)-x_{m}(t)\|\leq&\int_{0}^{t}\|x_{m}(\tau)-x_{m-1}(\tau)\|dh(s)\\ &+\int_{-\infty}^{t}\|d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|x_{m}(\tau)-x_{m-1}(\tau)\|dh(s)\\ &+\int_{t}^{\infty}\|d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|x_{m}(\tau)-x_{m-1}(\tau)\|dh(s).\end{split}\]

Set \(T_{m}:=\sup_{t\in\mathbb{R}}\|x_{m+1}(t)-x_{m}(t)\|\). We derive from conditions (A1)-(A2) and (2.10), (2.11) that

\[T_{m}\leq |h(t)-h(0)|\cdot T_{m-1}+K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}|h(t)-h(0)|\cdot T_{m-1}\]
\[+K(1+K)C^{3}e^{3CV_{A}}V_{A}^{2}|h(t)-h(0)|\cdot T_{m-1}.\]

Taking \(V_{h}\) sufficiently small, there exists a positive constant \(\delta<1\) such that \(T_{m}\leq\delta\cdot T_{m-1}\). Hence, \(\sum_{m=1}^{\infty}\|x_{m+1}(t)-x_{m}(t)\|\) converges uniformly on \(\mathbb{R}\), which means that the function sequence \(\{x_{m}(t)\}_{m=0}^{\infty}\) also converges uniformly on \(\mathbb{R}\). We write

\[\lim_{m\to\infty}x_{m}(t)=\widetilde{x}(t),\]

thus \(\widetilde{x}(t)\) is bounded and

\[\begin{split}\widetilde{x}(t)=&\int_{0}^{t}DF(\widetilde{x}(\tau),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(\widetilde{x}(\tau),s)\\ &+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(\widetilde{x}(\tau),s).\end{split} \tag{3.6}\]

**Step 2.** We claim the uniqueness of the bounded solution of GODE (3.1). Let \(y(t)\) be another bounded solution of GODE (3.1) satisfying the initial value \(y(0)=y\).
From Lemma 2.8, we have

\[\begin{split} y(t)=&\mathscr{V}(t)y+\int_{0}^{t}DF(y(\tau),s)-\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\\ =&\mathscr{V}(t)y+\int_{0}^{t}DF(y(\tau),s)-\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\\ &\pm\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\\ &-\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\\ &\pm\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\\ =&\mathscr{V}(t)y+\int_{0}^{t}DF(y(\tau),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\\ &+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\\ &+\mathscr{V}(t)\left(\int_{-\infty}^{0}d_{\sigma}[P\mathscr{V}^{-1}(\sigma)]-\int_{0}^{\infty}d_{\sigma}[(I-P)\mathscr{V}^{-1}(\sigma)]\right)\int_{0}^{\sigma}DF(y(\tau),s).\end{split}\]

Set

\[y_{1}:=\int_{-\infty}^{0}d_{\sigma}[P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s) \tag{3.7}\]

and

\[y_{2}:=\int_{0}^{\infty}d_{\sigma}[(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s). \tag{3.8}\]

It is not difficult to show that these two integrals are well defined. Note that \(y(t)\) is bounded and, by (3.4) and conditions (A1)-(A3), the expression

\[\int_{0}^{t}DF(y(\tau),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\]
\[+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\]

is bounded. Then \(\mathscr{V}(t)(y+y_{1}+y_{2})\) is also bounded; since it is a solution of the linear GODE (2.1), Proposition 3.4 implies that

\[\mathscr{V}(t)(y+y_{1}+y_{2})=0.\]

Therefore,

\[\begin{split} y(t)=&\int_{0}^{t}DF(y(\tau),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s)\\ &+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(y(\tau),s).\end{split} \tag{3.9}\]

Combining (3.6) and (3.9), we have

\[\begin{split}\|\widetilde{x}(t)-y(t)\|\leq&\int_{0}^{t}\|DF(\widetilde{x}(\tau),s)-DF(y(\tau),s)\|\\ &+\int_{-\infty}^{t}\|d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|DF(\widetilde{x}(\tau),s)-DF(y(\tau),s)\|\\ &+\int_{t}^{\infty}\|d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|DF(\widetilde{x}(\tau),s)-DF(y(\tau),s)\|,\end{split} \tag{3.10}\]

which implies

\[\sup_{t\in\mathbb{R}}\|\widetilde{x}(t)-y(t)\|\leq\delta\sup_{t\in\mathbb{R}}\|\widetilde{x}(t)-y(t)\|,\]

due to conditions (A1)-(A3) and (2.10), (2.11). Since \(V_{h}\) is sufficiently small so that \(\delta<1\), we conclude that \(\widetilde{x}(t)=y(t)\). Consequently, the uniqueness is proved.

## 4 Topological conjugacy for the GODEs

In this section, we present a version of the Hartman-Grobman theorem in the framework of GODEs.

**Theorem 4.1**.: _Assume that conditions (A1)-(A3) hold. For sufficiently small \(V_{h}\), the nonlinear GODEs (3.1) are topologically conjugated to the linear GODEs (2.1)._

**Remark 4.2**.: _Theorem 4.1 states the Hartman-Grobman theorem in the framework of GODEs.
We construct two maps \(\Phi\) and \(\Psi\) close to the identity and then: (i) prove that \(\Phi\) sends solutions of the linear GODEs to solutions of the nonlinear GODEs; (ii) prove that \(\Psi\) maps solutions of the nonlinear GODEs to solutions of the linear GODEs; (iii) verify that \(\Phi\) is a homeomorphism with inverse \(\Psi\), that is, \(\Phi\circ\Psi=I\) and \(\Psi\circ\Phi=I\). In this way, the definition of topological conjugacy for nonautonomous systems proposed by Palmer [45] is verified. Unlike Palmer, in the GODE setting we use the Kurzweil integral theory to handle the relationship between more complicated integral equations._

Proof.: **Step 1.** We establish the existence of the map \(\Phi\), where \(\Phi(t,x):=x+\phi(t,x)\). Let \(\Theta\) be the space of all maps \(\phi:\mathbb{R}\times\mathscr{X}\to\mathscr{X}\) such that

\[\|\phi\|_{\infty}:=\sup_{t,x}\|\phi(t,x)\|<\infty.\]

Then \((\Theta,\|\cdot\|_{\infty})\) is a Banach space. Given \(\phi\in\Theta\), we define an operator \(\mathscr{T}:\Theta\to\Theta\) as follows:

\[(\mathscr{T}\phi)(\tau,\xi)= \int_{0}^{\tau}DF(p(r),s)-\int_{-\infty}^{\tau}d_{\sigma}[\mathscr{V}(\tau)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(p(r),s)\]
\[+\int_{\tau}^{\infty}d_{\sigma}[\mathscr{V}(\tau)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(p(r),s),\]

where

\[p(r)=\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi+\phi(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi) \tag{4.1}\]

and \((\tau,\xi)\in\mathbb{R}\times\mathscr{X}\). Similarly to (3.4), we have

\[C_{\tau}:= \left\|\int_{0}^{\tau}DF(p(r),s)\right\|=\left\|\sum_{j=1}^{|D|}[F(p(r_{j}),s_{j})-F(p(r_{j}),s_{j-1})]\right\|\]
\[\leq |h(\tau)-h(0)|\leq 2V_{h}.\]

Thus,

\[\|(\mathscr{T}\phi)(\tau,\xi)\|\leq C_{\tau}+\int_{-\infty}^{\tau}\|d_{\sigma}[\mathscr{V}(\tau)P\mathscr{V}^{-1}(\sigma)]\|\cdot C_{\sigma}+\int_{\tau}^{\infty}\|d_{\sigma}[\mathscr{V}(\tau)(I-P)\mathscr{V}^{-1}(\sigma)]\|\cdot C_{\sigma}<\infty,\]

by the estimates used in Theorem 3.1. This implies that \(\mathscr{T}\phi\in\Theta\).
Set

\[q_{i}(r)=\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi+\phi_{i}(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi),\quad i=1,2.\]

Then, taking any \(\phi_{1},\phi_{2}\in\Theta\) and using (3.5), we have

\[\|(\mathscr{T}\phi_{1})(\tau,\xi)-(\mathscr{T}\phi_{2})(\tau,\xi)\|\]
\[\leq \int_{0}^{\tau}\|DF(q_{1}(r),s)-DF(q_{2}(r),s)\|\]
\[+\int_{-\infty}^{\tau}\|d_{\sigma}[\mathscr{V}(\tau)P\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|DF(q_{1}(r),s)-DF(q_{2}(r),s)\|\]
\[+\int_{\tau}^{\infty}\|d_{\sigma}[\mathscr{V}(\tau)(I-P)\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|DF(q_{1}(r),s)-DF(q_{2}(r),s)\|\]
\[\leq \int_{0}^{\tau}\|\phi_{1}(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi)-\phi_{2}(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi)\|dh(s)\]
\[+\int_{-\infty}^{\tau}\|d_{\sigma}[\mathscr{V}(\tau)P\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|\phi_{1}(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi)-\phi_{2}(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi)\|dh(s)\]
\[+\int_{\tau}^{\infty}\|d_{\sigma}[\mathscr{V}(\tau)(I-P)\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|\phi_{1}(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi)-\phi_{2}(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)\xi)\|dh(s).\]

It follows from (A1)-(A3) and (2.10), (2.11) that

\[\|\mathscr{T}\phi_{1}-\mathscr{T}\phi_{2}\|_{\infty}\leq 2V_{h}(1+K\|P\|C^{3}e^{3CV_{A}}V_{A}^{2}+K(1+\|P\|)C^{3}e^{3CV_{A}}V_{A}^{2})\|\phi_{1}-\phi_{2}\|_{\infty}.\]

Take \(V_{h}\) sufficiently small so that

\[2V_{h}(1+K\|P\|C^{3}e^{3CV_{A}}V_{A}^{2}+K(1+\|P\|)C^{3}e^{3CV_{A}}V_{A}^{2})<1.\]

Then the map \(\mathscr{T}:\Theta\to\Theta\) is a contraction and consequently \(\mathscr{T}\) has a unique fixed point \(\phi\in\Theta\) such that \(\mathscr{T}\phi=\phi\). Hence,

\[\phi(\tau,\xi)= \int_{0}^{\tau}DF(p(r),s)-\int_{-\infty}^{\tau}d_{\sigma}[\mathscr{V}(\tau)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(p(r),s)\]
\[+\int_{\tau}^{\infty}d_{\sigma}[\mathscr{V}(\tau)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(p(r),s),\]

where \(p(r)\) is given by (4.1). By using the identity

\[\mathscr{V}(t)\mathscr{V}^{-1}(r)\mathscr{V}(r)\mathscr{V}^{-1}(\tau)x=\mathscr{V}(t)\mathscr{V}^{-1}(\tau)x,\]

we have

\[\int_{0}^{t}DF(\mathscr{V}(r)\mathscr{V}^{-1}(t)\mathscr{V}(t)\mathscr{V}^{-1}(\tau)x+\phi(r,\mathscr{V}(r)\mathscr{V}^{-1}(t)\mathscr{V}(t)\mathscr{V}^{-1}(\tau)x),s)\]
\[= \int_{0}^{t}DF(\mathscr{V}(r)\mathscr{V}^{-1}(\tau)x+\phi(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)x),s),\]

and thus

\[\phi(t,\mathscr{V}(t)\mathscr{V}^{-1}(\tau)x)\]
\[= \int_{0}^{t}DF(\mathscr{V}(r)\mathscr{V}^{-1}(\tau)x+\phi(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)x),s)\]
\[-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(\mathscr{V}(r)\mathscr{V}^{-1}(\tau)x+\phi(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)x),s)\]
\[+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(\mathscr{V}(r)\mathscr{V}^{-1}(\tau)x+\phi(r,\mathscr{V}(r)\mathscr{V}^{-1}(\tau)x),s).\]

If \(t\mapsto x(t)\) is a solution of (2.1), then

\[\phi(t,x(t))= \int_{0}^{t}DF(x(r)+\phi(r,x(r)),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r)+\phi(r,x(r)),s)\]
\[+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r)+\phi(r,x(r)),s).\]

We next prove that \(\Phi(t,x(t)):=x(t)+\phi(t,x(t))\) sends the solution of the linear GODEs (2.1) onto the solution of the nonlinear GODEs (3.1).
In fact,

\[\Phi(t,x(t))= \mathscr{V}(t)x+\int_{0}^{t}DF(x(r)+\phi(r,x(r)),s)\]
\[-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r)+\phi(r,x(r)),s)\]
\[+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r)+\phi(r,x(r)),s)\]
\[= \mathscr{V}(t)x+\int_{0}^{t}DF(\Phi(r,x(r)),s)-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(\Phi(r,x(r)),s)\]
\[+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(\Phi(r,x(r)),s)\]
\[:= \mathscr{V}(t)x+\int_{0}^{t}DF(\Phi(r,x(r)),s)-I_{1}+I_{2}.\]

We divide the integrals \(I_{1}\) and \(I_{2}\) into two parts:

\[I_{1}=\int_{0}^{t}+\int_{-\infty}^{0}:=I_{11}+\mathscr{V}(t)\cdot I_{12}\quad\text{and}\quad I_{2}=\int_{t}^{0}+\int_{0}^{\infty}:=I_{21}+\mathscr{V}(t)\cdot I_{22}.\]

It is easy to obtain that

\[-I_{11}+I_{21}=\int_{t}^{0}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(\Phi(r,x(r)),s).\]

Similar to (3.7) and (3.8), we see that \(I_{12}\) and \(I_{22}\) are bounded, and we denote them by \(x_{1}\) and \(x_{2}\). Consequently,

\[\Phi(t,x(t))= \mathscr{V}(t)(x-x_{1}+x_{2})+\int_{0}^{t}DF(\Phi(r,x(r)),s)-\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(\Phi(r,x(r)),s),\]

which implies that \(\Phi(t,x(t))\) is a solution of the nonlinear GODEs (3.1).

**Step 2.** We establish the existence of the map \(\Psi\), where \(\Psi(t,x):=x+\psi(t,x)\). Let \(x(t,\tau,\xi)\) be the solution of the nonlinear GODEs (3.1) with the initial value \((\tau,\xi)\in\mathbb{R}\times\mathscr{X}\). Set

\[\psi(\tau,\xi)= -\int_{0}^{\tau}DF(x(r,\tau,\xi),s)+\int_{-\infty}^{\tau}d_{\sigma}[\mathscr{V}(\tau)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r,\tau,\xi),s)\]
\[-\int_{\tau}^{\infty}d_{\sigma}[\mathscr{V}(\tau)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r,\tau,\xi),s).\]

Similarly to the procedure for \(\phi\), one shows that \(\psi\in\Theta\). Then,

\[\psi(t,x(t,\tau,x))= -\int_{0}^{t}DF(x(r,\tau,x),s)+\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r,\tau,x),s)\]
\[-\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r,\tau,x),s),\]

by the identity

\[x(t,s,x(s,\tau,x))=x(t,\tau,x).\]

We next prove that \(\Psi(t,x(t)):=x(t)+\psi(t,x(t))\) is a solution of the linear GODEs (2.1). In fact,

\[\Psi(t,x(t))= \mathscr{V}(t)x+\int_{0}^{t}DF(x(r),s)-\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r),s)\]
\[-\int_{0}^{t}DF(x(r),s)+\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r),s)\]
\[-\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r),s)\]
\[:= \mathscr{V}(t)x-\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r),s)+J_{1}-J_{2}.\]

We also divide the integrals \(J_{1}\) and \(J_{2}\) into two parts:

\[J_{1}=\int_{0}^{t}+\int_{-\infty}^{0}:=J_{11}+\mathscr{V}(t)\cdot J_{12}\quad\text{and}\quad J_{2}=\int_{t}^{0}+\int_{0}^{\infty}:=J_{21}+\mathscr{V}(t)\cdot J_{22}.\]

Then

\[J_{11}-J_{21}=\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(x(r),s),\]

and \(J_{12}\) and \(J_{22}\) are bounded; we denote them by \(x_{3}\) and \(x_{4}\). Hence,

\[\Psi(t,x(t))=\mathscr{V}(t)(x+x_{3}-x_{4}),\]

which implies that \(\Psi(t,x(t))\) maps the solution of the nonlinear GODEs (3.1) onto the solution of the linear GODEs (2.1).
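For orientation, it may help to record what the maps of Steps 1 and 2 look like in the classical scalar setting of Remark 3.2 (our own illustration, assuming \(\mathscr{V}(t)=e^{-\alpha t}\), \(P=I\) and a bounded Lipschitz nonlinearity \(f\)); there the fixed-point equation \(\mathscr{T}\phi=\phi\) and the map \(\psi\) reduce to

\[\phi(\tau,\xi)=\int_{-\infty}^{\tau}e^{-\alpha(\tau-\sigma)}f\big(e^{-\alpha(\sigma-\tau)}\xi+\phi(\sigma,e^{-\alpha(\sigma-\tau)}\xi),\sigma\big)\,d\sigma,\qquad\psi(\tau,\xi)=-\int_{-\infty}^{\tau}e^{-\alpha(\tau-\sigma)}f\big(x(\sigma,\tau,\xi),\sigma\big)\,d\sigma,\]

so \(\Phi=\mathrm{id}+\phi\) adds the nonlinear correction along linear solutions while \(\Psi=\mathrm{id}+\psi\) removes it along nonlinear solutions; Steps 3 and 4 verify that these corrections cancel each other in the general Kurzweil setting.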
**Step 3.** This step is to prove \(\Psi\circ\Phi=I\), that is, \(\Psi(t,\Phi(t,x(t)))=x(t)\), where \(x(t)\) is the solution of the linear GODEs (2.1). Indeed, by Step 1, we see that \(\Phi(t,x(t))\) is the solution of the nonlinear GODEs (3.1), and by Step 2, \(\Psi(t,\Phi(t,x(t)))\) is the solution of the linear GODEs (2.1). We write \(\widehat{x}(t)=\Psi(t,\Phi(t,x(t)))\). Set \(L(t)=\widehat{x}(t)-x(t)\). Then \(L(t)\) is also the solution of the linear GODEs (2.1). Thus \[\|L(t)\|= \|\Psi(t,\Phi(t,x(t)))-x(t)\|\leq\|\Psi(t,\Phi(t,x(t)))-\Phi(t,x(t ))\|+\|\Phi(t,x(t))-x(t)\|\] \[\leq \|\psi(t,\Phi(t,x(t)))\|+\|\phi(t,x(t))\|<\infty,\] since \(\phi,\psi\in\Theta\). From Proposition 3.4, it follows that \(L(t)=0\), namely, \(\Psi(t,\Phi(t,x(t)))=x(t)\). **Step 4.** We claim that \(\Phi\circ\Psi=I\), that is, \(\Phi(t,\Psi(t,x(t)))=x(t)\), where \(x(t)\) is the solution of the nonlinear GODE (3.1). In fact, it follows from Step 2 and Step 1 that \(\Psi(t,x(t))\) is the solution of the linear GODEs (2.1) and \(\Phi(t,\Psi(t,x(t)))\) is the solution of the nonlinear GODEs (3.1). We write \(\widetilde{x}(t):=\Phi(t,\Psi(t,x(t)))\). Set \(H(t)=\widetilde{x}(t)-x(t)\). Then \(H(t)\) is the solution of the following integral equation \[\begin{split} H(t)=&\mathscr{V}(t)(\widetilde{x}( 0)-x(0))+\int_{0}^{t}(DF(H(r)+x(r),s)-DF(x(r),s))\\ &-\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)] \int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF(x(r),s))\\ :=& K_{1}+K_{2}-K_{3}.\end{split} \tag{4.2}\] In addition, \[\begin{split}\|H(t)\|=&\|\Phi(t,\Psi(t,x(t)))-x(t)\| \leq\|\Phi(t,\Psi(t,x(t)))-\Psi(t,x(t))\|+\|\Psi(t,x(t))-x(t)\|\\ \leq&\|\phi(t,\Psi(t,x(t)))\|+\|\psi(t,x(t))\|<\infty, \end{split}\] due to \(\phi,\psi\in\Theta\). Since \[\begin{split} K_{3}=&\int_{0}^{t}d_{\sigma}[ \mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF(x (r),s))\\ =&\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V} ^{-1}(\sigma)]\int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF(x(r),s))\\ &+\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}( \sigma)]\int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF(x(r),s))\\ :=& K_{31}+\mathscr{V}(t)\cdot K_{32}+K_{33}+ \mathscr{V}(t)\cdot K_{34},\end{split}\] where \[\begin{split} K_{31}=&\int_{-\infty}^{t}d_{\sigma}[ \mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF( x(r),s)),\\ K_{32}=&\int_{0}^{-\infty}d_{\sigma}[P\mathscr{V} ^{-1}(\sigma)]\int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF(x(r),s)),\\ K_{33}=&-\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t) (I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF(x(r),s)),\\ K_{34}=&\int_{0}^{\infty}d_{\sigma}[(I-P)\mathscr{V} ^{-1}(\sigma)]\int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF(x(r),s)).\end{split}\] Similar to (3.7) and (3.8), the integrals \(K_{32}\) and \(K_{34}\) are bounded, we denote them by \(x_{5}\) and \(x_{6}\). By using Theorem 3.1, it is obvious that the integral \(K_{2}+K_{31}+K_{33}\) is bounded. It means that \(K_{1}+\mathscr{V}(t)(x_{5}+x_{6})\) is bounded, but \(K_{1}+\mathscr{V}(t)(x_{5}+x_{6})\) is a solution of the linear GODE (2.1), which further implies that \(K_{1}+\mathscr{V}(t)(x_{5}+x_{6})=0\) (by using Proposition 3.4). 
Thus, (4.2) can be written as

\[\begin{split} H(t)=&\int_{0}^{t}(DF(H(r)+x(r),s)-DF(x(r),s))\\ &-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF(x(r),s))\\ &+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}(DF(H(r)+x(r),s)-DF(x(r),s)).\end{split}\]

Similar to (3.10), we have

\[\sup_{t\in\mathbb{R}}\|H(t)\|\leq\delta\sup_{t\in\mathbb{R}}\|H(t)\|,\]

where \(\delta<1\). Therefore, \(H(t)=0\), namely, \(\Phi(t,\Psi(t,x(t)))=x(t)\). Consequently, it follows from Steps 1-4 that the nonlinear GODEs (3.1) are topologically conjugated to the linear GODEs (2.1).

## 5 Hölder conjugacies

In this section, we consider the Hölder regularity of the conjugacies \(\Phi\) and \(\Psi\). We impose a stronger hypothesis:

\[\|F(x,s)-F(x,t)\|\leq e^{-\alpha|s-t|}|h(s)-h(t)|,\quad\text{for }t,s\in\mathbb{R}\text{ and }x\in\mathscr{X}, \tag{5.1}\]

where \(\alpha\) is defined in (2.5). Notice that this hypothesis does not change our linearization results since

\[e^{-\alpha|s-t|}|h(s)-h(t)|\leq|h(s)-h(t)|.\]

Then we have the main result of this section.

**Theorem 5.1**.: _Suppose that all conditions of Theorem 4.1 hold. If further \(F(x,t)\) satisfies (5.1), then_

_(i) there exists a positive constant_ \(C>0\) _such that for any_ \(t\in\mathbb{R}\) _and_ \(x,\widetilde{x}\in\mathscr{X}\) _satisfying_ \(0<\|x-\widetilde{x}\|<1\)_, we have_

\[\|\Psi(t,x)-\Psi(t,\widetilde{x})\|\leq C\|x-\widetilde{x}\|^{\frac{\alpha}{\alpha+\widetilde{\alpha}}},\]

_where_ \(\widetilde{\alpha}\) _is given by Definition_ 2.6_;_

_(ii) there exist positive constants_ \(\widehat{C}>0\) _and_ \(0<q\leq\frac{\alpha}{\alpha+\widetilde{\alpha}}\) _such that for any_ \(t\in\mathbb{R}\) _and_ \(x,\widetilde{x}\in\mathscr{X}\) _satisfying_ \(0<\|x-\widetilde{x}\|<1\)_, we have_

\[\|\Phi(t,x)-\Phi(t,\widetilde{x})\|\leq\widehat{C}\|x-\widetilde{x}\|^{q}.\]

**Remark 5.2**.: _We emphasize that condition (5.1) helps us deal with the integral term \(\int_{0}^{t}DF(X(\tau),s)\) in the Kurzweil-integrable setting. In the Riemann- or Lebesgue-integrable setting, condition (5.1) is not required; one can refer to Pinto and Robledo [69]._

To prove this, we first present some useful inequalities and then give rigorous proofs in Subsections 5.2 and 5.3, respectively. For the sake of convenience, given \(t,t_{0}\in\mathbb{R}\) and \(x,y\in\mathscr{X}\), let \(X(t,t_{0},x)\) be the solution of Eq. (3.1) satisfying the initial value \(x(t_{0})=x\), and let \(Y(t,t_{0},y)=\mathscr{V}(t)\mathscr{V}^{-1}(t_{0})y\) be the solution of Eq. (2.1) such that \(Y(t_{0})=y\).

### Some useful estimates

**Lemma 5.3**.: _([21], Corollary 1.43) Suppose that \(h:[a,\infty)\to(0,\infty)\) is a nondecreasing left continuous function. If \(u:[a,\infty)\to(0,\infty)\) is bounded and Perron-Stieltjes integrable with respect to \(h\) such that_

\[u(t)\leq c_{1}+c_{2}\int_{a}^{t}u(s)dh(s),\quad t\in[a,\infty),\]

_where \(c_{1},c_{2}>0\) are constants, then_

\[u(t)\leq c_{1}e^{c_{2}|h(t)-h(0)|}.\]

**Lemma 5.4**.: _Suppose that the linear GODE (2.1) admits a strong exponential dichotomy and conditions (A1)-(A3) hold.
Then we have_

\[\|X(t,0,x)-X(t,0,\widetilde{x})\|\leq K\|x-\widetilde{x}\|e^{(1+\mathscr{L})|h(t)-h(0)|}e^{\widetilde{\alpha}|t|},\quad t\in\mathbb{R},\]

_where \(\mathscr{L}:=KC^{3}e^{3CV_{A}}V_{A}^{2}\)._

Proof.: Since \(X(t,0,x)\) is the solution of the nonlinear GODE (3.1), we obtain that for any \(x,\widetilde{x}\in\mathscr{X}\) and \(t\geq 0\),

\[X(t,0,x)-X(t,0,\widetilde{x})= \mathscr{V}(t)(x-\widetilde{x})+\int_{0}^{t}\left(DF(X(\tau,0,x),s)-DF(X(\tau,0,\widetilde{x}),s)\right)\]
\[-\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}\left(DF(X(\tau,0,x),s)-DF(X(\tau,0,\widetilde{x}),s)\right).\]

By taking the supremum norm and using the strong exponential dichotomy (see Definition 2.6) and (A2), (A3), we have

\[\|X(t,0,x)-X(t,0,\widetilde{x})\|\leq Ke^{\widetilde{\alpha}t}\|x-\widetilde{x}\|+\int_{0}^{t}\|X(s,0,x)-X(s,0,\widetilde{x})\|dh(s)\]
\[+\int_{0}^{t}\|d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\|\int_{0}^{\sigma}\|X(s,0,x)-X(s,0,\widetilde{x})\|dh(s).\]

Set \(Z(t):=e^{-\widetilde{\alpha}t}\|X(t,0,x)-X(t,0,\widetilde{x})\|\). Then

\[Z(t)\leq K\|x-\widetilde{x}\|+\int_{0}^{t}e^{\widetilde{\alpha}(s-t)}Z(s)dh(s)+\left\|\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\right\|\int_{0}^{\sigma}e^{\widetilde{\alpha}(s-t)}Z(s)dh(s).\]

We deal with the integral \(\left\|\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\right\|\). Given an \(\epsilon\)-fine tagged division \(D=\{(\tau_{j},[s_{j-1},s_{j}]),j=1,2,\cdots,|D|\}\) of \([0,t]\), we deduce that

\[\left\|\int_{0}^{t}d_{\sigma}[\mathscr{V}(t)\mathscr{V}^{-1}(\sigma)]\right\|= \left\|\sum_{j=1}^{|D|}[\mathscr{V}(t)\mathscr{V}^{-1}(s_{j})-\mathscr{V}(t)\mathscr{V}^{-1}(s_{j-1})]\right\|\]
\[\leq \sum_{j=1}^{|D|}\|\mathscr{V}(t)\|\|\mathscr{V}^{-1}(s_{j})-\mathscr{V}^{-1}(s_{j-1})\|.\]

By using Theorem 6.15 in [1], one has

\[\|V(t,s)\|\leq Ce^{C\mathrm{var}_{s}^{t}A},\quad\mathrm{var}_{s}^{t}V(\cdot,s)\leq Ce^{C\mathrm{var}_{s}^{t}A}\mathrm{var}_{s}^{t}A\quad\text{and}\quad\mathrm{var}_{s}^{t}V(t,\cdot)\leq C^{2}e^{2C\mathrm{var}_{s}^{t}A}\mathrm{var}_{s}^{t}A.\]

Recall that \(\mathscr{V}^{-1}(t)=V(0,t)\). Then by condition (A2), the following estimate holds:

\[\sum_{j=1}^{|D|}\|\mathscr{V}(t)\|\|\mathscr{V}^{-1}(s_{j})-\mathscr{V}^{-1}(s_{j-1})\|\leq Ke^{\widetilde{\alpha}(t-s)}\|\mathscr{V}(s)\|C^{2}e^{2C\mathrm{var}_{s}^{t}A}\mathrm{var}_{s}^{t}A\]
\[\leq Ke^{\widetilde{\alpha}(t-s)}C^{3}e^{3CV_{A}}V_{A}^{2}.\]

Therefore,

\[Z(t)\leq K\|x-\widetilde{x}\|+\int_{0}^{t}e^{\widetilde{\alpha}(s-t)}Z(s)dh(s)+\int_{0}^{\sigma}KC^{3}e^{3CV_{A}}V_{A}^{2}\cdot Z(s)dh(s)\]
\[\leq K\|x-\widetilde{x}\|+\int_{0}^{t}Z(s)dh(s)+\int_{0}^{t}KC^{3}e^{3CV_{A}}V_{A}^{2}\cdot Z(s)dh(s)\]
\[\leq K\|x-\widetilde{x}\|+\int_{0}^{t}(1+KC^{3}e^{3CV_{A}}V_{A}^{2})\cdot Z(s)dh(s),\]

which implies that

\[Z(t)\leq K\|x-\widetilde{x}\|e^{(1+KC^{3}e^{3CV_{A}}V_{A}^{2})(h(t)-h(0))},\]

due to Lemma 5.3. Hence, for all \(t\geq 0\), we conclude that

\[\|X(t,0,x)-X(t,0,\widetilde{x})\|\leq K\|x-\widetilde{x}\|e^{(1+KC^{3}e^{3CV_{A}}V_{A}^{2})(h(t)-h(0))}e^{\widetilde{\alpha}t}.\]

A similar conclusion is reached for \(t\leq 0\).

### Hölder continuity of the conjugacy \(\Psi\)

We claim in this subsection that \(\Psi(t,x):=x+\psi(t,x)\) is Hölder continuous with respect to \(x\). For this purpose, we estimate \(\psi(t,x)-\psi(t,\widetilde{x})\) for any \(t\in\mathbb{R}\) and \(x,\widetilde{x}\in\mathscr{X}\).
In fact,

\[\psi(t,x)-\psi(t,\widetilde{x})= -\int_{0}^{t}(DF(X(r,t,x),s)-DF(X(r,t,\widetilde{x}),s))\]
\[+\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}(DF(X(r,t,x),s)-DF(X(r,t,\widetilde{x}),s))\]
\[-\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}(DF(X(r,t,x),s)-DF(X(r,t,\widetilde{x}),s))\]
\[:= \mathcal{I}_{1}+\mathcal{I}_{2}+\mathcal{I}_{3}.\]

We then deal with \(\mathcal{I}_{1}\), \(\mathcal{I}_{2}\) and \(\mathcal{I}_{3}\). Without loss of generality, assume that \(0<\|x-\widetilde{x}\|<1\). Set \(\theta=\frac{1}{\widetilde{\alpha}+\alpha}\ln\frac{1}{\|x-\widetilde{x}\|}\) such that \(t-\theta\geq\theta\), where \(\widetilde{\alpha}\) is given by Definition 2.6. Then we split \(\mathcal{I}_{1}\), \(\mathcal{I}_{2}\) and \(\mathcal{I}_{3}\) into several parts:

\[\mathcal{I}_{1}= -\int_{0}^{\theta}-\int_{\theta}^{t-\theta}-\int_{t-\theta}^{t}=\mathcal{I}_{11}+\mathcal{I}_{12}+\mathcal{I}_{13},\]
\[\mathcal{I}_{2}= \int_{-\infty}^{t-\theta}+\int_{t-\theta}^{t}=\mathcal{I}_{21}+\mathcal{I}_{22},\]
\[\mathcal{I}_{3}= -\int_{t+\theta}^{\infty}-\int_{t}^{t+\theta}=\mathcal{I}_{31}+\mathcal{I}_{32}.\]

From the definition of the Kurzweil integral and conditions (A3), (5.1), there is an \(\epsilon\)-fine tagged division of \([0,\theta]\) such that

\[\|\mathcal{I}_{11}\|\leq 2\left\|\int_{0}^{\theta}DF(X(r,t,x),s)\right\|\leq 2\sum_{j=1}^{|D|}e^{-\alpha(s_{j}-s_{j-1})}|h(s_{j})-h(s_{j-1})|\]
\[\leq 4V_{h}e^{-\alpha\theta}\leq 4V_{h}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}.\]

Similarly, we also have

\[\|\mathcal{I}_{12}\|\leq 2\left\|\int_{\theta}^{t-\theta}DF(X(r,t,x),s)\right\|\leq 2\left\|\int_{0}^{t-\theta}DF(X(r,t,x),s)\right\|\]
\[\leq 4V_{h}e^{-\alpha(t-\theta)}\leq 4V_{h}e^{-\alpha\theta}\leq 4V_{h}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}.\]

By using conditions (A1)-(A3) and (2.10), (2.11), we deduce that

\[\|\mathcal{I}_{21}\|\leq 2\left\|\int_{-\infty}^{t-\theta}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(X(r,t,x),s)\right\|\]
\[\leq 4e^{-\alpha\sigma}V_{h}\lim_{\eta\to\infty}\left\|\int_{-\eta}^{t-\theta}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\right\|\]
\[\leq 4V_{h}e^{-\alpha\sigma}K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}\lim_{\eta\to\infty}e^{-\alpha(t-\theta+\eta)}\]
\[\leq 4V_{h}e^{-\alpha\sigma}K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}e^{-\alpha\theta}\]
\[\leq 4V_{h}e^{-\alpha\sigma}K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}.\]

By a similar procedure, we have

\[\|\mathcal{I}_{31}\|\leq 2\left\|\int_{t+\theta}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(X(r,t,x),s)\right\|\]
\[\leq 4V_{h}e^{-\alpha\sigma}K(1+K)C^{3}e^{3CV_{A}}V_{A}^{2}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}.\]

It remains to estimate \(\mathcal{I}_{13}\), \(\mathcal{I}_{22}\) and \(\mathcal{I}_{32}\).
From condition (A3) and Lemma 5.4, we have

\[\|\mathcal{I}_{13}\|\leq \int_{t-\theta}^{t}\|X(s,t,x)-X(s,t,\widetilde{x})\|dh(s)\]
\[\leq \int_{t-\theta}^{t}K\|x-\widetilde{x}\|e^{2(1+\mathscr{L})V_{h}}e^{\widetilde{\alpha}|s-t|}dh(s)\]
\[\leq \sum_{j=1}^{|D|}K\|x-\widetilde{x}\|e^{2(1+\mathscr{L})V_{h}}e^{\widetilde{\alpha}(t-s_{j})}|h(s_{j})-h(s_{j-1})|\]
\[\leq 2Ke^{2(1+\mathscr{L})V_{h}}e^{\widetilde{\alpha}\theta}\theta V_{h}\|x-\widetilde{x}\|\]
\[\leq 2K\theta V_{h}e^{2(1+\mathscr{L})V_{h}}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}},\]

\[\|\mathcal{I}_{22}\|\leq \left\|\int_{t-\theta}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\right\|\int_{0}^{\sigma}\|X(s,t,x)-X(s,t,\widetilde{x})\|dh(s)\]
\[\leq 2K\theta V_{h}e^{2(1+\mathscr{L})V_{h}}K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}e^{(\widetilde{\alpha}-\alpha)\theta}\|x-\widetilde{x}\|\]
\[\leq 2\theta K^{3}C^{3}e^{3CV_{A}}V_{A}^{2}V_{h}e^{2(1+\mathscr{L})V_{h}}\|x-\widetilde{x}\|^{\frac{2\alpha}{\widetilde{\alpha}+\alpha}}\]

and

\[\|\mathcal{I}_{32}\|\leq \left\|\int_{t-\theta}^{t}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\right\|\int_{0}^{\sigma}\|X(s,t,x)-X(s,t,\widetilde{x})\|dh(s)\]
\[\leq 2\theta(1+K)K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}V_{h}e^{2(1+\mathscr{L})V_{h}}\|x-\widetilde{x}\|^{\frac{2\alpha}{\widetilde{\alpha}+\alpha}}.\]

Hence, it follows from all the above inequalities that

\[\|\psi(t,x)-\psi(t,\widetilde{x})\|\leq \|\mathcal{I}_{1}\|+\|\mathcal{I}_{2}\|+\|\mathcal{I}_{3}\|\]
\[\leq 8V_{h}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}+8V_{h}e^{-\alpha\sigma}K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}\]
\[+2K\theta V_{h}e^{2(1+\mathscr{L})V_{h}}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}+2\theta K^{3}C^{3}e^{3CV_{A}}V_{A}^{2}V_{h}e^{2(1+\mathscr{L})V_{h}}\|x-\widetilde{x}\|^{\frac{2\alpha}{\widetilde{\alpha}+\alpha}}\]
\[+2\theta(1+K)K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}V_{h}e^{2(1+\mathscr{L})V_{h}}\|x-\widetilde{x}\|^{\frac{2\alpha}{\widetilde{\alpha}+\alpha}}\]
\[\leq C_{1}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}},\]

for some constant \(C_{1}>0\). Consequently, if \(\|x-\widetilde{x}\|<1\), then

\[\|\Psi(t,x)-\Psi(t,\widetilde{x})\|\leq\|x-\widetilde{x}\|+\|\psi(t,x)-\psi(t,\widetilde{x})\|\leq C\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}},\]

for some constant \(C>0\).

### Hölder continuity of the conjugacy \(\Phi\)

Next, we claim that \(\Phi(t,x):=x+\phi(t,x)\) is Hölder continuous with respect to \(x\). Define \(\phi_{0}(t,x)=0\) for any \(t\in\mathbb{R}\) and \(x\in\mathscr{X}\), and by recursion define

\[\phi_{k+1}(t,x)= \int_{0}^{t}DF(Y(r,t,x)+\phi_{k}(r,Y(r,t,x)),s)\]
\[-\int_{-\infty}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(Y(r,t,x)+\phi_{k}(r,Y(r,t,x)),s)\]
\[+\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(Y(r,t,x)+\phi_{k}(r,Y(r,t,x)),s).\]

By Step 1 in the proof of Theorem 4.1, we see that

\[\lim_{k\to\infty}\phi_{k}(t,x)=\phi(t,x).\]

Now we prove that if \(0<\|x-\widetilde{x}\|<1\), then

\[\|\phi_{k}(t,x)-\phi_{k}(t,\widetilde{x})\|\leq\widetilde{C}\|x-\widetilde{x}\|^{q}, \tag{5.2}\]

where \(\widetilde{C}>0\) and \(0<q\leq\frac{\alpha}{\widetilde{\alpha}+\alpha}\). If \(k=0\), then the inequality (5.2) holds obviously.
Making the inductive assumption that (5.2) holds, we have \[\|\phi_{k+1}(t,x)-\phi_{k+1}(t,\widetilde{x})\|\leq \left\|\int_{0}^{t}\hbar(r)\right\|+\left\|\int_{-\infty}^{t}d_{ \sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}\hbar(r)\right\|\] \[+\left\|\int_{t}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{ V}^{-1}(\sigma)]\int_{0}^{\sigma}\hbar(r)\right\|\] \[:= \mathcal{J}_{1}+\mathcal{J}_{2}+\mathcal{J}_{3},\] where \[\hbar(r)=DF(Y(r,t,x)+\phi_{k}(r,Y(r,t,x)),s)-DF(Y(r,t,\widetilde{x})+\phi_{k}( r,Y(r,t,\widetilde{x})),s).\] We also split \(\mathcal{J}_{1}\), \(\mathcal{J}_{2}\) and \(\mathcal{J}_{3}\) into several parts: \[\mathcal{J}_{1}= \int_{0}^{\theta}+\int_{\theta}^{t-\theta}+\int_{t-\theta}^{t}= \mathcal{J}_{11}+\mathcal{J}_{12}+\mathcal{J}_{13},\] \[\mathcal{J}_{2}= -\int_{-\infty}^{t-\theta}-\int_{t-\theta}^{t}=\mathcal{J}_{21}+ \mathcal{J}_{22},\] \[\mathcal{J}_{3}= \int_{t+\theta}^{\infty}+\int_{t}^{t+\theta}=\mathcal{J}_{31}+ \mathcal{J}_{32},\] where \(\theta=\frac{1}{\widetilde{\alpha}+\alpha}\ln\frac{1}{\|x-\widetilde{x}\|}\) such that \(t-\theta\geq\theta\). Similar to \(\mathcal{I}_{11}\), \(\mathcal{I}_{12}\), \(\mathcal{I}_{21}\) and \(\mathcal{I}_{31}\), the following estimates are valid: \[\|\mathcal{J}_{11}\|\leq 2\left\|\int_{0}^{\theta}DF(Y(r,t,x)+\phi_{k}(r,Y(r,t,x)),s)\right\|\] \[\leq 2\sum_{j=1}^{|D|}e^{-\alpha(s_{j}-s_{j-1})}|h(s_{j})-h(s_{j-1})| \leq 4V_{h}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}},\] \[\|\mathcal{J}_{12}\|\leq 2\left\|\int_{\theta}^{t-\theta}DF(Y(r,t,x)+\phi_{k}(r,Y(r,t,x)),s)\right\|\leq 4V_{h}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+ \alpha}},\] \[\|\mathcal{J}_{21}\|\leq 2\left\|\int_{-\infty}^{t-\theta}d_{\sigma}[\mathscr{V}(t)P \mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(Y(r,t,x)+\phi_{k}(r,Y(r,t,x)),s)\right\|\] \[\leq 4V_{h}e^{-\alpha\sigma}K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}\|x- \widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}},\] \[\|\mathcal{J}_{31}\|\leq 2\left\|\int_{t+\theta}^{\infty}d_{\sigma}[\mathscr{V}(t)(I-P) \mathscr{V}^{-1}(\sigma)]\int_{0}^{\sigma}DF(Y(r,t,x)+\phi_{k}(r,Y(r,t,x)),s)\right\|\] \[\leq 4V_{h}e^{-\alpha\sigma}(1+K)KC^{3}e^{3CV_{A}}V_{A}^{2}\|x- \widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}.\] Notice that \[\|Y(s,t,x)-Y(s,t,\widetilde{x})\|\leq Ke^{\widetilde{\alpha}|t-s|}\|x- \widetilde{x}\|\] and by (5.2), \[\|\phi_{k}(s,Y(s,t,x))-\phi_{k}(s,Y(s,t,\widetilde{x}))\|\leq\widetilde{C}K^{ q}e^{\widetilde{\alpha}q|t-s|}\|x-\widetilde{x}\|^{q},\] for \(t,s\in\mathbb{R}\) and \(x,\widetilde{x}\in\mathscr{X}\). 
Then we have

\[\|\mathcal{J}_{13}\|\leq \int_{t-\theta}^{t}\|Y(s,t,x)-Y(s,t,\widetilde{x})+\phi_{k}(s,Y(s,t,x))-\phi_{k}(s,Y(s,t,\widetilde{x}))\|dh(s)\]
\[\leq \int_{t-\theta}^{t}Ke^{\widetilde{\alpha}(t-s)}\|x-\widetilde{x}\|dh(s)+\int_{t-\theta}^{t}\widetilde{C}K^{q}e^{\widetilde{\alpha}q(t-s)}\|x-\widetilde{x}\|^{q}dh(s)\]
\[\leq K\|x-\widetilde{x}\|\sum_{j=1}^{|D|}e^{\widetilde{\alpha}(t-s_{j})}|h(s_{j})-h(s_{j-1})|+\widetilde{C}K^{q}\|x-\widetilde{x}\|^{q}\sum_{j=1}^{|D|}e^{\widetilde{\alpha}q(t-s_{j})}|h(s_{j})-h(s_{j-1})|\]
\[\leq 2K\theta V_{h}\|x-\widetilde{x}\|^{\frac{\widetilde{\alpha}}{\widetilde{\alpha}+\alpha}}+2\widetilde{C}K^{q}\theta V_{h}\|x-\widetilde{x}\|^{\frac{\widetilde{\alpha}}{\widetilde{\alpha}+\alpha}},\]

and

\[\|\mathcal{J}_{22}\|\leq \left\|\int_{t-\theta}^{t}d_{\sigma}[\mathscr{V}(t)P\mathscr{V}^{-1}(\sigma)]\right\|\int_{0}^{\sigma}\|Y(s,t,x)-Y(s,t,\widetilde{x})+\phi_{k}(s,Y(s,t,x))-\phi_{k}(s,Y(s,t,\widetilde{x}))\|dh(s)\]
\[\leq 2K^{3}\theta C^{3}e^{3CV_{A}}V_{h}V_{A}^{2}e^{(\widetilde{\alpha}-\alpha)\theta}\|x-\widetilde{x}\|+2\widetilde{C}K^{2+q}\theta C^{3}e^{3CV_{A}}V_{h}V_{A}^{2}e^{(\widetilde{\alpha}q-\alpha)\theta}\|x-\widetilde{x}\|^{q}\]
\[\leq 2K^{3}\theta C^{3}e^{3CV_{A}}V_{h}V_{A}^{2}\|x-\widetilde{x}\|^{\frac{2\alpha}{\widetilde{\alpha}+\alpha}}+2\widetilde{C}K^{2+q}\theta C^{3}\|x-\widetilde{x}\|^{\frac{\alpha q+\alpha}{\widetilde{\alpha}+\alpha}},\]

and

\[\|\mathcal{J}_{32}\|\leq \left\|\int_{t}^{t+\theta}d_{\sigma}[\mathscr{V}(t)(I-P)\mathscr{V}^{-1}(\sigma)]\right\|\int_{0}^{\sigma}\|Y(s,t,x)-Y(s,t,\widetilde{x})+\phi_{k}(s,Y(s,t,x))-\phi_{k}(s,Y(s,t,\widetilde{x}))\|dh(s)\]
\[\leq 2(1+K)K^{2}\theta C^{3}e^{3CV_{A}}V_{h}V_{A}^{2}\|x-\widetilde{x}\|^{\frac{2\alpha}{\widetilde{\alpha}+\alpha}}+2\widetilde{C}(1+K)K^{1+q}\theta C^{3}\|x-\widetilde{x}\|^{\frac{\alpha q+\alpha}{\widetilde{\alpha}+\alpha}}.\]

Since \(0<q\leq\frac{\alpha}{\widetilde{\alpha}+\alpha}\), it follows from all the above inequalities that

\[\|\phi_{k+1}(t,x)-\phi_{k+1}(t,\widetilde{x})\|\leq \|\mathcal{J}_{1}\|+\|\mathcal{J}_{2}\|+\|\mathcal{J}_{3}\|\]
\[\leq 8V_{h}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}+4V_{h}e^{-\alpha\sigma}K^{2}C^{3}e^{3CV_{A}}V_{A}^{2}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}\]
\[+4V_{h}e^{-\alpha\sigma}(1+K)KC^{3}e^{3CV_{A}}V_{A}^{2}\|x-\widetilde{x}\|^{\frac{\alpha}{\widetilde{\alpha}+\alpha}}+2K\theta V_{h}\|x-\widetilde{x}\|^{\frac{\widetilde{\alpha}}{\widetilde{\alpha}+\alpha}}\]
\[+2\widetilde{C}K^{q}\theta V_{h}\|x-\widetilde{x}\|^{\frac{\widetilde{\alpha}q}{\widetilde{\alpha}+\alpha}}+2K^{3}\theta C^{3}e^{3CV_{A}}V_{h}V_{A}^{2}\|x-\widetilde{x}\|^{\frac{2\alpha}{\widetilde{\alpha}+\alpha}}\]
\[+2\widetilde{C}K^{2+q}\theta C^{3}\|x-\widetilde{x}\|^{\frac{\alpha q+\alpha}{\widetilde{\alpha}+\alpha}}+2(1+K)K^{2}\theta C^{3}e^{3CV_{A}}V_{h}V_{A}^{2}\|x-\widetilde{x}\|^{\frac{2\alpha}{\widetilde{\alpha}+\alpha}}\]
\[+2\widetilde{C}(1+K)K^{1+q}\theta C^{3}\|x-\widetilde{x}\|^{\frac{\alpha q+\alpha}{\widetilde{\alpha}+\alpha}}\]
\[\leq \widetilde{C}\|x-\widetilde{x}\|^{q},\]

for some constant \(\widetilde{C}>0\), which implies that (5.2) holds for all \(k\). Consequently, if \(\|x-\widetilde{x}\|<1\), then

\[\|\Phi(t,x)-\Phi(t,\widetilde{x})\|\leq\widehat{C}\|x-\widetilde{x}\|^{q}.\]

## 6 Applications

### A Hartman-Grobman theorem for measure differential equations (MDEs)

#### 6.1.1 Fundamental theory for MDEs

Let \(\mathscr{X}\) be a Banach space and \(\mathbb{I}\subset\mathbb{R}\) an interval.
Consider the linear MDE

\[Dx=\mathscr{A}(t)x+\mathscr{C}(t)Du, \tag{6.1}\]

where \(Dx\) and \(Du\) denote the distributional derivatives of \(x\) and \(u\), and the functions \(\mathscr{A}:\mathbb{I}\to\mathscr{B}(\mathscr{X})\), \(\mathscr{C}:\mathbb{I}\to\mathscr{B}(\mathscr{X})\) and \(u:\mathbb{I}\to\mathbb{R}\) satisfy the following conditions:

(D1) \(\mathscr{A}(t)\) is Perron integrable for any \(t\in\mathbb{I}\).

(D2) \(u(t)\) is of locally bounded variation for any \(t\in\mathbb{I}\) and continuous from the left on \(\mathbb{I}\backslash\{\inf\mathbb{I}\}\).

(D3) \(du\) denotes the Lebesgue-Stieltjes measure generated by the function \(u\), and \(\mathscr{C}(t)\) is Perron-Stieltjes integrable with respect to \(u\) for any \(t\in\mathbb{I}\).

Moreover, we consider some additional assumptions:

(D4) There is a Lebesgue measurable function \(m_{1}:\mathbb{I}\to\mathbb{R}\) such that for any \(c,d\in\mathbb{I}\), we have \(\int_{c}^{d}m_{1}(s)ds<\infty\) and

\[\left\|\int_{c}^{d}\mathscr{A}(s)ds\right\|\leq\int_{c}^{d}m_{1}(s)ds.\]

(D5) There exists a \(du\)-measurable function \(m_{2}:\mathbb{I}\to\mathbb{R}\) such that for any \(c,d\in\mathbb{I}\), we have \(\int_{c}^{d}m_{2}(s)du(s)<\infty\) and

\[\left\|\int_{c}^{d}\mathscr{C}(s)du(s)\right\|\leq\int_{c}^{d}m_{2}(s)du(s).\]

(D6) For every point of discontinuity \(t\) of \(u\), we have

\[\left(I+\lim_{r\to t^{+}}\int_{t}^{r}\mathscr{C}(s)du(s)\right)^{-1}\in\mathscr{B}(\mathscr{X}).\]

By conditions (D1)-(D3), we say that \(x:[c,d]\subset\mathbb{I}\to\mathscr{X}\) is a solution of (6.1) satisfying the initial value \(x(t_{0})=x_{0}\) if

\[x(t)=x_{0}+\int_{t_{0}}^{t}\mathscr{A}(s)x(s)ds+\int_{t_{0}}^{t}\mathscr{C}(s)x(s)du(s).\]

If all conditions (D1)-(D6) hold, then the existence and uniqueness of a solution of (6.1) associated with the initial value \(x(t_{0})=x_{0}\) follows immediately from Theorem 5.2 in [20]. Hence, conditions (D1)-(D6) are assumed throughout this subsection.

**Lemma 6.1**.: _([1] Theorem 5.17) Given \(t_{0}\in[c,d]\), the function \(x:[c,d]\subset\mathbb{I}\to\mathscr{X}\) is a solution of (6.1) iff \(x\) is a solution of_

\[\begin{cases}\frac{dx}{d\tau}=D[A(t)x+G(t)x],\\ x(t_{0})=x_{0},\end{cases} \tag{6.2}\]

_where \(A(t)=\int_{t_{0}}^{t}\mathscr{A}(s)ds\) and \(G(t)=\int_{t_{0}}^{t}\mathscr{C}(s)du(s)\)._

The fundamental operator \(U:\mathbb{I}\times\mathbb{I}\to\mathscr{B}(\mathscr{X})\) of the MDE (6.1) was given in [20]; it satisfies

\[U(t,s)=I+\int_{s}^{t}\mathscr{A}(r)U(r,s)dr+\int_{s}^{t}\mathscr{C}(r)U(r,s)du(r),\quad t,s\in\mathbb{I}. \tag{6.3}\]

Moreover, the function \(x(t)=U(t,t_{0})x_{0}\) is the solution of (6.1) satisfying the initial value \(x(t_{0})=x_{0}\in\mathscr{X}\).
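As a minimal illustration (our own, not taken from [20]): for a scalar MDE with constant coefficients \(\mathscr{A}(t)\equiv a\) and \(\mathscr{C}(t)\equiv c\), and \(u\) left-continuous with unit jumps at the integers, relation (6.3) forces \(U(t,s)\) to be the ODE flow \(e^{a(t-s)}\) interleaved with a factor \(1+c\) at each jump time. The Python sketch below computes \(U(t,s)\) and compares it with the contraction bound of Definition 6.2 below (with \(P=I\), \(K=1\), \(\alpha=-a\)), which holds here since \(a<0\) and \(|1+c|\leq 1\).

```python
import math

a, c = -1.0, -0.5   # continuous drift a < 0; multiplicative jump factor 1 + c = 1/2

def U(t, s):
    """Fundamental operator of the scalar MDE Dx = a*x + c*x*Du for u with unit
    jumps at the integers (illustrative special case of (6.3), assuming t >= s):
    ODE flow between jumps, times (1 + c) per jump time in [s, t)."""
    n_jumps = sum(1 for k in range(math.floor(s), math.floor(t) + 1) if s <= k < t)
    return math.exp(a * (t - s)) * (1.0 + c) ** n_jumps

for t, s in [(5.0, 0.0), (7.3, 2.1), (10.0, 9.5)]:
    print(f"|U({t},{s})| = {abs(U(t, s)):.3e} <= e^(a(t-s)) = {math.exp(a * (t - s)):.3e}")
```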
**Definition 6.2**.: _(exponential dichotomy, [20]) The MDEs (6.1) have an exponential dichotomy with \((P,K,\alpha)\) on \(\mathbb{I}\) if there exist a projection \(P:\mathscr{X}\to\mathscr{X}\) and constants \(K,\alpha>0\) satisfying_

\[\begin{cases}\|\mathscr{U}(t)P\mathscr{U}^{-1}(s)\|\leq Ke^{-\alpha(t-s)},\quad t\geq s;\\ \|\mathscr{U}(t)(I-P)\mathscr{U}^{-1}(s)\|\leq Ke^{\alpha(t-s)},\quad t\leq s,\end{cases} \tag{6.4}\]

_where \(\mathscr{U}(t)=U(t,0)\) and \(\mathscr{U}^{-1}(t)=U(0,t)\)._

**Definition 6.3**.: _(strong exponential dichotomy) The MDEs (6.1) have a strong exponential dichotomy on \(\mathbb{I}\) if (6.4) holds and there exists a positive constant \(\widetilde{\alpha}>\alpha\) such that_

\[\|\mathscr{U}(t)\mathscr{U}^{-1}(s)\|\leq Ke^{\widetilde{\alpha}|t-s|},\quad\mathrm{for}\;t,s\in\mathbb{R}. \tag{6.5}\]

**Lemma 6.4**.: _([20] Proposition 5.7) The MDEs (6.1) have an exponential dichotomy with \((P,K,\alpha)\) iff the GODEs_

\[\frac{dx}{d\tau}=D[A(t)x+F(t)x],\quad t\in\mathbb{I}, \tag{6.6}\]

_have an exponential dichotomy with \((P,K,\alpha)\), where \(A(t)=\int_{t_{0}}^{t}\mathscr{A}(s)ds\) and \(F(t)=\int_{t_{0}}^{t}\mathscr{C}(s)du(s)\)._

#### 6.1.2 Main results for MDEs

Consider the following nonlinear MDE:

\[Dx=\mathscr{A}(t)x+\mathscr{C}(t)xDu+\mathscr{H}(t,x)Du, \tag{6.7}\]

where \(\mathscr{H}(t,x):\mathbb{R}\times\mathscr{X}\to\mathscr{X}\) is Lebesgue-Stieltjes integrable with respect to \(u\) and \(\mathscr{H}(t,0)=0\). Suppose that:

(a) for every point of discontinuity \(t\) of \(u\), there exists a positive constant \(C_{g}>0\) satisfying

\[\left\|\left(I+\lim_{r\to t^{+}}\int_{t}^{r}\mathscr{C}(s)du(s)\right)^{-1}\right\|\leq C_{g};\]

(b) \(u\) is a bounded variation function on \(\mathbb{R}\), i.e.,

\[V_{u}:=\sup\{\mathrm{var}_{c}^{d}u:c,d\in\mathbb{R},c<d\}<\infty;\]

(c) \(\mathscr{H}(t,x)\) is uniformly bounded in \(t\in\mathbb{R}\) with constant \(M_{h}>0\) for any \(x\in\mathscr{X}\), i.e.,

\[\|\mathscr{H}(t,x)\|\leq M_{h};\]

(d) there is a sufficiently small Lipschitz constant \(L_{h}\) such that for any \(t\in\mathbb{R}\) and \(x,\widetilde{x}\in\mathscr{X}\),

\[\|\mathscr{H}(t,x)-\mathscr{H}(t,\widetilde{x})\|\leq L_{h}\|x-\widetilde{x}\|.\]

Then we establish our main results for MDEs.

**Theorem 6.5**.: _Suppose that the linear MDEs (6.1) possess an exponential dichotomy of the form (6.4). Further assume that conditions (a), (b), (c), (d) hold. If_

\[2L_{h}V_{u}\left(1+K(1+2K)C_{g}^{3}e^{3C_{g}V_{A+F}}V_{A+F}^{2}\right)<1,\]

_then Eq. (6.7) has a unique bounded solution._

**Theorem 6.6**.: _Suppose that all conditions of Theorem 6.5 hold. Then the linear MDEs (6.1) are topologically conjugated to the nonlinear MDEs (6.7)._

**Theorem 6.7**.: _Suppose that the linear MDEs (6.1) admit a strong exponential dichotomy of the form (6.5) and that all conditions of Theorem 6.5 hold. If, in addition, the bounded variation function \(u\) satisfies \(|u(t)-u(s)|\leq e^{-\alpha|t-s|}\), then both conjugacies are Hölder continuous._

We only prove Theorem 6.5, verifying that its assumptions imply all the conditions of Theorem 3.1. The proofs of the other two theorems are similar.

Proof of Theorem 6.5.: Since (6.1) has an exponential dichotomy with \((P,K,\alpha)\), we derive from Lemma 6.4 that

\[\frac{dx}{d\tau}=D[A(t)x+F(t)x]\]

has an exponential dichotomy with the same \((P,K,\alpha)\) on \(\mathbb{R}\).
Proof of Theorem 6.5.: Since (6.1) has an exponential dichotomy with \((P,K,\alpha)\), we derive from Lemma 6.4 that \[\frac{dx}{d\tau}=D[A(t)x+F(t)x]\] has an exponential dichotomy with the same \((P,K,\alpha)\) on \(\mathbb{R}\). Note that the solution of Eq. (6.7) with the initial value \(x(t_{0})=x_{0}\) is given by \[x(t)=x_{0}+\int_{t_{0}}^{t}\mathscr{A}(s)x(s)ds+\int_{t_{0}}^{t}\mathscr{C}(s)x(s)du(s)+\int_{t_{0}}^{t}\mathscr{H}(s,x(s))du(s).\] Given \(t,t_{0}\in\mathbb{R}\), set \(A(t)=\int_{t_{0}}^{t}\mathscr{A}(s)ds\) and \(F(t)=\int_{t_{0}}^{t}\mathscr{C}(s)du(s)\). We define the Kurzweil integrable map \(\mathcal{N}:\mathscr{X}\times\mathbb{R}\to\mathscr{X}\) by \[\mathcal{N}(x(t),t):=\int_{t_{0}}^{t}\mathscr{H}(s,x(s))du(s).\] Then we have \[x(t)=x_{0}+\int_{t_{0}}^{t}d[A(s)]x(s)+\int_{t_{0}}^{t}d[F(s)]x(s)+\int_{t_{0}}^{t}D\mathcal{N}(x(s),s),\] which is a solution of the following nonlinear GODEs with the initial value \(x(t_{0})=x_{0}\): \[\frac{dx}{d\tau}=D[A(t)x+F(t)x+\mathcal{N}(x,t)]. \tag{6.8}\] We now claim that \(\mathcal{N}(x,t)\) belongs to the class \(\mathscr{A}(\Omega,u)\), where \(\Omega=\mathscr{X}\times\mathbb{R}\). In fact, for any \(t,\widetilde{t}\in\mathbb{R}\) and \(x\in\mathscr{X}\), by using condition (c), we have \[\|\mathcal{N}(x,t)-\mathcal{N}(x,\widetilde{t})\|=\left\|\int_{\widetilde{t}}^{t}\mathscr{H}(s,x)du(s)\right\|\leq\sup_{s}\|\mathscr{H}(s,x)\|\,|u(t)-u(\widetilde{t})|\leq M_{h}|u(t)-u(\widetilde{t})|,\] and for any \(t,\widetilde{t}\in\mathbb{R}\) and \(x,\widetilde{x}\in\mathscr{X}\), by using conditions (c) and (d), we have \[\|\mathcal{N}(x,t)-\mathcal{N}(x,\widetilde{t})-\mathcal{N}(\widetilde{x},t)+\mathcal{N}(\widetilde{x},\widetilde{t})\|= \left\|\int_{\widetilde{t}}^{t}\left(\mathscr{H}(s,x)-\mathscr{H}(s,\widetilde{x})\right)du(s)\right\|\] \[\leq \int_{\widetilde{t}}^{t}\|\mathscr{H}(s,x)-\mathscr{H}(s,\widetilde{x})\|du(s)\] \[\leq \sup_{s}\|\mathscr{H}(s,x)-\mathscr{H}(s,\widetilde{x})\|\,|u(t)-u(\widetilde{t})|\] \[\leq L_{h}\|x-\widetilde{x}\|\,|u(t)-u(\widetilde{t})|.\] From Theorem 5.2 of [20], it follows that \(\mathrm{var}_{c}^{d}(A+F)<\infty\) for all \(c,d\in\mathbb{R}\) with \(c<d\). For convenience, we write \(V_{A+F}=\mathrm{var}_{c}^{d}(A+F)\). Indeed, let \(D=\{t_{0},t_{1},\cdots,t_{|D|}\}\) be a division of \([c,d]\). Then \[\sum_{j=1}^{|D|}\|A(t_{j})+F(t_{j})-A(t_{j-1})-F(t_{j-1})\|\leq\sum_{j=1}^{|D|}\left\|\int_{t_{j-1}}^{t_{j}}\mathscr{A}(s)ds\right\|+\sum_{j=1}^{|D|}\left\|\int_{t_{j-1}}^{t_{j}}\mathscr{C}(s)du(s)\right\|,\] and by using conditions (D4) and (D5), we deduce that \[\sum\limits_{j=1}^{|D|}\left\|\int_{t_{j-1}}^{t_{j}}\mathscr{A}(s)ds\right\|+\sum\limits_{j=1}^{|D|}\left\|\int_{t_{j-1}}^{t_{j}}\mathscr{C}(s)du(s)\right\|\leq\int_{c}^{d}m_{1}(s)ds+\int_{c}^{d}m_{2}(s)du(s)<\infty,\] that is, \(V_{A+F}<\infty\). Hence, taking \(L_{h}\) sufficiently small, and by using conditions (a), (b), (c) and (d), we can ensure that \[2L_{h}V_{u}(1+K(1+2K))C_{g}^{3}e^{3C_{g}V_{A+F}}V_{A+F}^{2}<1.\] Consequently, all conditions of Theorem 3.1 are satisfied, and we conclude that Eq. (6.7) has a unique bounded solution. ### A Hartman-Grobman theorem for the impulsive differential equations (IDEs) #### 6.2.1 Fundamental theory for IDEs Throughout this subsection, \(\mathscr{X}\) is a Banach space and \(\mathbb{I}\subset\mathbb{R}\) is an interval.
We consider the linear IDEs \[\begin{cases}\dot{x}(t)=\widetilde{A}(t)x(t),\quad t\neq t_{i},\\ \vartriangle x(t_{i})=x(t_{i}^{+})-x(t_{i})=B_{i}x(t_{i}),\quad i\in\mathscr{I}:=\{i\in\mathbb{Z}:t_{i}\in\mathbb{I}\},\end{cases} \tag{6.9}\] where \(\widetilde{A}:\mathbb{I}\to\mathscr{B}(\mathscr{X})\) and \(B_{i}\in\mathscr{B}(\mathscr{X})\) satisfy the following assumptions: (B1) \(\widetilde{A}(t)\) is Perron integrable for any \(t\in\mathbb{I}\); (B2) there is a Lebesgue measurable function \(m:\mathbb{I}\to\mathbb{R}\) such that for any \(c,d\in\mathbb{I}\) with \(c<d\), the Lebesgue integral \(\int_{c}^{d}m(s)ds\) is finite and \[\left\|\int_{c}^{d}\widetilde{A}(s)ds\right\|\leq\int_{c}^{d}m(s)ds.\] (B3) \((I+B_{i})^{-1}\in\mathscr{B}(\mathscr{X})\), where \(i\in\mathscr{I}\). In addition, let \(\{t_{k}\}_{k\in\mathbb{Z}}\) be the impulsive points satisfying the relation \[\cdots<t_{-k}<\cdots<t_{-1}<t_{0}=0<t_{1}<\cdots<t_{k}<\cdots,\] and \(\lim\limits_{k\to\pm\infty}t_{k}=\pm\infty\). Set \(\mathscr{I}_{c}^{d}:=\{i\in\mathscr{I}:c\leq t_{i}\leq d\}\) for \(c,d\in\mathbb{I}\). Define the Heaviside function \(H_{l}\): \[H_{l}(t)=\begin{cases}0,\quad\text{for }t\leq l,\\ 1,\quad\text{for }t>l.\end{cases}\] Then, the solution of Eq. (6.9) with the initial value \(x(t_{0})=x_{0}\) satisfies the following integral equation \[x(t)=\begin{cases}x_{0}+\int_{t_{0}}^{t}\widetilde{A}(s)x(s)ds+\sum\limits_{i\in\mathscr{I}_{t_{0}}^{t}}B_{i}x(t_{i})H_{t_{i}}(t),\quad t\geq t_{0}(t\in\mathbb{I}),\\ x_{0}+\int_{t_{0}}^{t}\widetilde{A}(s)x(s)ds-\sum\limits_{i\in\mathscr{I}_{t}^{t_{0}}}B_{i}x(t_{i})(1-H_{t_{i}}(t)),\quad t<t_{0}(t\in\mathbb{I}).\end{cases}\] Then by using Theorem 5.20 from [1], \(x(t)\) is a solution of Eq. (6.9) iff \(x(t)\) is a solution of the linear GODE \(\frac{dx}{d\tau}=D[A(t)x]\), where \(A\) is given by \[A(t)=\begin{cases}\int_{t_{0}}^{t}\widetilde{A}(s)ds+\sum\limits_{i\in\mathscr{I}_{t_{0}}^{t}}B_{i}H_{t_{i}}(t),\quad t\geq t_{0},\\ \int_{t_{0}}^{t}\widetilde{A}(s)ds-\sum\limits_{i\in\mathscr{I}_{t}^{t_{0}}}B_{i}(1-H_{t_{i}}(t)),\quad t<t_{0}.\end{cases} \tag{6.10}\] Let \(W:\mathbb{I}\times\mathbb{I}\to\mathscr{B}(\mathscr{X})\) be the evolution operator of the IDE (6.9); it has the following form: if \(t\geq s\), \(t\in(t_{i},t_{i+1}]\) and \(s\in(t_{j-1},t_{j}]\), then \[W(t,s)=\Upsilon(t,t_{i})\left(\prod_{k=i}^{j+1}[Id+B_{k}]\Upsilon(t_{k},t_{k-1})\right)[Id+B_{j}]\Upsilon(t_{j},s),\] where \(\Upsilon:\mathbb{I}\times\mathbb{I}\to\mathscr{B}(\mathscr{X})\) is the evolution operator of \(\dot{x}=\widetilde{A}(t)x\), and if \(t<s\), \(s\in(t_{j},t_{j+1}]\) and \(t\in(t_{i-1},t_{i}]\), then \[W(t,s)=[W(s,t)]^{-1}=\Upsilon(t,t_{i})[Id+B_{i}]^{-1}\cdots[Id+B_{j}]^{-1}\Upsilon(t_{j},s).\] **Definition 6.8**.: _[_20_]_ _The IDEs (6.9) possess an exponential dichotomy with \((P,K,\alpha)\) on \(\mathbb{I}\) if there exist a projection \(P\) and constants \(K,\alpha\) such that_ \[\begin{cases}\|\mathscr{W}(t)P\mathscr{W}^{-1}(s)\|\leq Ke^{-\alpha(t-s)},\quad t\geq s,\\ \|\mathscr{W}(t)(I-P)\mathscr{W}^{-1}(s)\|\leq Ke^{\alpha(t-s)},\quad t<s,\end{cases} \tag{6.11}\] _where \(\mathscr{W}(t)=W(t,t_{0})\) and \(\mathscr{W}^{-1}(t)=W(t_{0},t)\)._ **Definition 6.9**.: _The IDEs (6.9) admit a strong exponential dichotomy on \(\mathbb{I}\), if (6.11) holds and there exists a positive constant \(\widetilde{\alpha}>\alpha\) such that_ \[\|\mathscr{W}(t)\mathscr{W}^{-1}(s)\|\leq Ke^{\widetilde{\alpha}|t-s|},\quad\mathrm{for}\;t,s\in\mathbb{I}. \tag{6.12}\]
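For orientation, a minimal scalar example (ours): take \(\mathscr{X}=\mathbb{R}\), \(\widetilde{A}(t)\equiv a\) and \(B_{i}\equiv b\) with \(1+b\neq 0\). Then \(\Upsilon(t,s)=e^{a(t-s)}\) and, for \(s<t\) which are not impulsive points, the product formula above collapses to \[W(t,s)=e^{a(t-s)}(1+b)^{n(s,t)},\qquad n(s,t):=\#\{i:s<t_{i}<t\},\] so each impulse contributes one factor \(1+b\). In particular, for \(a<0\) and \(|1+b|\leq 1\), one obtains an exponential dichotomy (6.11) with \(P=I\), \(K=1\) and \(\alpha=|a|\).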
**Lemma 6.10**.: _([20] Proposition 5.21) The IDEs (6.9) possess an exponential dichotomy with \((P,K,\alpha)\) iff the GODEs_ \[\frac{dx}{d\tau}=D[A(t)x],\quad t\in\mathbb{I},\] _have an exponential dichotomy with \((P,K,\alpha)\), where \(A\) is defined by (6.10)._ #### 6.2.2 Main results for IDEs Consider the nonlinear IDEs as follows: \[\begin{cases}\dot{x}(t)=\widetilde{A}(t)x(t)+f(t,x(t)),\quad t\neq t_{i},\\ \triangle\;x(t_{i})=B_{i}x(t_{i}),\quad i\in\mathbb{Z},\end{cases} \tag{6.13}\] where \(f:\mathbb{R}\times\mathscr{X}\to\mathscr{X}\) is Perron integrable and \(f(t,0)=0\). We suppose that the following conditions hold: (a) there exists a positive constant \(C_{b}\) such that \[\sum_{i\in\mathbb{Z}}\|B_{i}\|\leq C_{b}\quad\text{and}\quad\|(Id+B_{i})^{-1}\|\leq C_{b}\quad\text{for all }i\in\mathbb{Z};\] (b) there exists a Lebesgue measurable function \(m:\mathbb{R}\to\mathbb{R}\) such that the Lebesgue integral \(\int_{\mathbb{R}}m(s)ds\) is finite and \[\left\|\int_{\mathbb{R}}\widetilde{A}(s)ds\right\|\leq\int_{\mathbb{R}}m(s)ds;\] (c) there exists a Lebesgue measurable function \(\gamma:\mathbb{R}\to\mathbb{R}\) such that the Lebesgue integral \(\int_{\mathbb{R}}\gamma(s)ds\) is finite and, for any \(t\in\mathbb{R}\) and \(x,y\in\mathscr{X}\), \[\|f(t,x)\|\leq\gamma(t)\quad\text{and}\quad\|f(t,x)-f(t,y)\|\leq\gamma(t)\|x-y\|.\] Then we give our main results for IDEs. **Theorem 6.11**.: _Suppose that the IDEs (6.9) possess an exponential dichotomy with the form (6.11). If conditions (a), (b), (c) hold and \(\int_{\mathbb{R}}\gamma(s)ds\) is sufficiently small, then Eq. (6.13) has a unique bounded solution._ **Theorem 6.12**.: _Suppose that all conditions of Theorem 6.11 hold. Then Eq. (6.9) is topologically conjugated to Eq. (6.13)._ **Theorem 6.13**.: _Suppose that the linear IDEs (6.9) admit a strong exponential dichotomy with the form (6.12) and all conditions of Theorem 6.11 hold. If further there exists a bounded nondecreasing function \(\mu:\mathbb{R}\to\mathbb{R}\) such that \(|\mu(t)-\mu(s)|\leq e^{-\alpha|t-s|}\), then the conjugacies are both Hölder continuous._ We only prove Theorem 6.11, by verifying that it satisfies all the conditions of Theorem 3.1. The proofs of the other two theorems are similar to that of Theorem 6.11. Proof.: Since the IDEs (6.9) possess an exponential dichotomy with \((P,K,\alpha)\), we derive from Lemma 6.10 that \[\frac{dx}{d\tau}=D[A(t)x]\] has an exponential dichotomy with the same \((P,K,\alpha)\) on \(\mathbb{R}\). For any \(t\geq t_{0}\), we note that the solution of Eq. (6.13) with the initial value \(x(t_{0})=x_{0}\) is given by \[x(t)=x_{0}+\int_{t_{0}}^{t}\widetilde{A}(s)x(s)ds+\sum_{i\in\mathscr{I}_{t_{0}}^{t}}B_{i}x(t_{i})H_{t_{i}}(t)+\int_{t_{0}}^{t}f(s,x(s))ds.\] Given \(t,t_{0}\in\mathbb{R}\), set \(A(t)=\int_{t_{0}}^{t}\widetilde{A}(s)ds+\sum\limits_{i\in\mathscr{I}_{t_{0}}^{t}}B_{i}H_{t_{i}}(t)\). We define the Kurzweil integrable map \(\mathcal{Q}:\mathscr{X}\times\mathbb{R}\rightarrow\mathscr{X}\) by \[\mathcal{Q}(x(t),t):=\int_{t_{0}}^{t}f(s,x(s))ds.\] Then we obtain that for every \(t\geq t_{0}\), \[x(t)=x_{0}+\int_{t_{0}}^{t}d[A(s)]x(s)+\int_{t_{0}}^{t}D\mathcal{Q}(x(s),s),\] which is a solution of the following nonlinear GODEs with the initial value \(x(t_{0})=x_{0}\): \[\frac{dx}{d\tau}=D[A(t)x+\mathcal{Q}(x,t)]. \tag{6.14}\] Now we claim that the function \(\mathcal{Q}(x,t)\in\mathscr{A}(\Omega,\mu)\), where \(\Omega=\mathscr{X}\times\mathbb{R}\).
In fact, for any \(t,\widetilde{t}\in\mathbb{R}\) and \(x\in\mathscr{X}\), by condition (c), there exists a bounded nondecreasing function \(\mu:\mathbb{R}\rightarrow\mathbb{R}\) (one may take \(\mu(t)=\int_{-\infty}^{t}\gamma(s)ds\), which is bounded and nondecreasing since \(\gamma\geq 0\) and \(\int_{\mathbb{R}}\gamma(s)ds<\infty\)) such that \[\|\mathcal{Q}(x,t)-\mathcal{Q}(x,\widetilde{t})\|=\left\|\int_{\widetilde{t}}^{t}f(s,x)ds\right\|\leq\int_{\widetilde{t}}^{t}\gamma(s)ds\leq|\mu(t)-\mu(\widetilde{t})|,\] and for any \(t,\widetilde{t}\in\mathbb{R}\) and \(x,\widetilde{x}\in\mathscr{X}\), by also using condition (c), we have \[\|\mathcal{Q}(x,t)-\mathcal{Q}(x,\widetilde{t})-\mathcal{Q}(\widetilde{x},t)+\mathcal{Q}(\widetilde{x},\widetilde{t})\|=\left\|\int_{\widetilde{t}}^{t}\left(f(s,x)-f(s,\widetilde{x})\right)ds\right\|\leq\|x-\widetilde{x}\|\,|\mu(t)-\mu(\widetilde{t})|.\] We now show that \(V_{A}:=\text{var}_{c}^{d}A<\infty\) for all \(c,d\in\mathbb{R}\) with \(c<d\). Indeed, let \(D=\{t_{0},t_{1},\cdots,t_{|D|}\}\) be a division of \([c,d]\). Then \[\sum_{j=1}^{|D|}\|A(t_{j})-A(t_{j-1})\|\leq\sum_{j=1}^{|D|}\left\|\int_{t_{j-1}}^{t_{j}}\widetilde{A}(s)ds\right\|+\sum_{j=1}^{|D|}\sum_{i\in\mathscr{I}_{t_{j-1}}^{t_{j}}}\left\|B_{i}H_{t_{i}}(t)\right\|,\] and by using conditions (a) and (b), we deduce that \[\sum_{j=1}^{|D|}\left\|\int_{t_{j-1}}^{t_{j}}\widetilde{A}(s)ds\right\|+\sum_{j=1}^{|D|}\sum_{i\in\mathscr{I}_{t_{j-1}}^{t_{j}}}\left\|B_{i}H_{t_{i}}(t)\right\|\leq\int_{c}^{d}m(s)ds+C_{b}<\infty,\] that is, \(V_{A}<\infty\). Suppose that \(|\mu(t)|\leq M_{\mu}\) for some sufficiently small \(M_{\mu}>0\); then, by using conditions (a), (b) and (c), we can ensure that \[2M_{\mu}(1+K(1+2K))C_{b}^{3}e^{3C_{b}V_{A}}V_{A}^{2}<1.\] Consequently, all assumptions of Theorem 3.1 are valid, and we conclude that Eq. (6.13) has a unique bounded solution. ## Data Availability Statement No data was used for the research in this article. It is pure mathematics. ## Conflict of Interest The authors declare that they have no conflict of interest. ### Contributions We declare that all the authors have made the same contribution to this paper.
2304.05769
Critical States Generators from Perturbed Flatbands
One-dimensional all-bands-flat lattices are networks with all bands being flat and highly degenerate. They can always be diagonalized by a finite sequence of local unitary transformations parameterized by a set of angles \(\theta_{i}\). In our previous work, Ref.~\onlinecite{lee2023critical}, we demonstrated that quasiperiodic perturbations of the one-dimensional all-bands-flat lattice with \(\theta_{i} = \pi/4\) give rise to a critical-to-insulator transition and fractality edges separating critical from localized states. In this study we consider the full range of angles \(\theta_{i}\)s available for the all-bands-flat model and study the effect of the quasiperiodic perturbation. For weak perturbation, we derive an effective Hamiltonian and we identify the sets of \(\theta_{i}\)s for which the effective model maps to extended or off-diagonal Harper models and hosts critical states. For all the other values of the angles the spectrum is localized. Upon increasing the perturbation strength, the extended Harper model evolves into the system with energy dependent critical-to-insulator transitions, that we dub \emph{fractality edges}. The case where the effective model maps onto the off-diagonal Harper model features a critical-to-insulator transition at a finite disorder strength.
Sanghoon Lee, Sergej Flach, Alexei Andreanov
2023-04-12T11:20:43Z
http://arxiv.org/abs/2304.05769v1
# Critical States Generators from Perturbed Flatbands ###### Abstract One-dimensional all-bands-flat lattices are networks with all bands being flat and highly degenerate. They can always be diagonalized by a finite sequence of local unitary transformations parameterized by a set of angles \(\theta_{i}\). In our previous work, Ref. [1], we demonstrated that quasiperiodic perturbations of the one-dimensional all-bands-flat lattice with \(\theta_{i}=\pi/4\) give rise to a critical-to-insulator transition and fractality edges separating critical from localized states. In this study we consider the full range of angles \(\theta_{i}\)s available for the all-bands-flat model and study the effect of the quasiperiodic perturbation. For weak perturbation, we derive an effective Hamiltonian and we identify the sets of \(\theta_{i}\)s for which the effective model maps to extended or off-diagonal Harper models and hosts critical states. For all the other values of the angles the spectrum is localized. Upon increasing the perturbation strength, the extended Harper model evolves into the system with energy dependent critical-to-insulator transitions, that we dub _fractality edges_. The case where the effective model maps onto the off-diagonal Harper model features a critical-to-insulator transition at a finite disorder strength. **The breaking of the macroscopic degeneracy in flatband systems allows one to realize a variety of interesting and exotic phases depending on the type of the perturbation applied. One particular example is all-bands-flat systems, where all energy bands are flat. This requires a high degree of fine-tuning, but in 1D it is always achieved by applying local unitary transformations to decoupled sites, producing the entire manifold of all-bands-flat Hamiltonians. In our previous work,[1] we studied the effect of a quasiperiodic perturbation on a specific all-bands-flat Hamiltonian (a specific point of the manifold). We found a critical-to-insulator transition analytically for weak perturbation and fractality edges numerically for finite perturbation.** **In this work, we consider the full manifold of all-bands-flat systems in the presence of the quasiperiodic perturbation. Considering the full ABF manifold might be relevant for possible experimental realizations of these perturbed models. We identify analytically the submanifolds with critical eigenstates for weak potential strengths. For finite strengths we confirm numerically the emergence of fractality edges on these submanifolds. In all other cases the perturbed Hamiltonians have all their eigenstates localized.** ## I Introduction Recently, physical systems with macroscopic degeneracies have received a lot of attention. Their study is motivated by the observation that macroscopic degeneracies are fragile and are easily lifted even by weak perturbations, resulting in various exotic and unusual correlated phases. One example of such systems is a flatband Hamiltonian - a translationally invariant tight-binding network with dispersionless Bloch bands \(E(\mathbf{k})=E\).[2; 3; 4] The fine-tuned geometry or the symmetry of the flatband system causes destructive interference which leads to zero group velocity \(\nabla_{\mathbf{k}}E\) and traps particles over a strictly finite number of sites,[5; 6] resulting in the appearance of compact localized states - eigenstates with strictly compact support.
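To fix notation for what follows (our sketch; the precise convention is that of the construction reviewed in Sec. II), each local unitary transformation acts on a pair of sites as a rotation by an angle \(\theta_{i}\), \[U(\theta_{i})=\begin{pmatrix}\cos\theta_{i}&\sin\theta_{i}\\ -\sin\theta_{i}&\cos\theta_{i}\end{pmatrix},\] and a finite sequence of such rotations, applied to a diagonal (fully detangled) two-band Hamiltonian, generates the ABF manifold; \(\theta_{i}=\pi/4\) is the particular point studied in Ref. [1].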
The extreme sensitivity of macroscopically degenerate flatbands to perturbations leads to the emergence of a variety of interesting and exotic phases: flatband ferromagnetism,[7] frustrated magnetism,[8; 9] unconventional Anderson localization,[10] ergodicity breaking[11; 12; 13; 14] and superconductivity.[15] Flatband systems can be further fine-tuned to turn all their energy bands flat, resulting in the _all-bands-flat_ (ABF) networks.[16; 17] Despite the high degree of fine-tuning they can be constructed systematically in 1D via a sequence of local unitary transformations applied to isolated sites.[10; 12; 17; 18; 1] We conjectured that this result also holds in higher dimensions.[17] ABF models are even more sensitive to perturbations, and interesting phenomena have been observed in the presence of perturbations: nonperturbative metal-insulator transitions,[18; 19; 20] ergodicity-breaking[11; 12; 13] and caging of particles.[17; 21; 22; 23; 24] In Ref. [1], we looked at a specific two-band ABF model and studied its behavior in the presence of a _quasiperiodic perturbation_. We identified parameters of the model for which it features critical states with subdiffusive/almost-diffusive transport for weak perturbation, and an exotic phase transition - the critical-to-insulator transition (CIT) - was observed. For finite potential strength, we discovered fractality edges, i.e. energy-dependent CITs. In this paper, we explore the full manifold of this two-band ABF ladder and identify additional submanifolds which also host critical states, CIT and fractality edges under the quasiperiodic perturbation. The outline of the paper is the following: We discuss the construction of the ABF in Sec. II. Then we derive an effective model in Sec. III valid for weak quasiperiodic perturbation and use it to locate the ABF submanifolds supporting critical
2307.04127
Self-healing unitarity is an optical illusion
Among the vast variety of proposals put forward by the community to resolve tree-level unitarity violations in Higgs inflation models, there exists the concept of self-healing. It heals the theory from supposed tree-level violations for elastic scattering processes by summing over successive vacuum polarization loop corrections. In this work, we examine this technique to check whether unitarity is indeed restored and find that there exist underlying constraints in self-healing unitarity that pose the same perturbative unitarity bounds that it was expected to heal.
Archit Vidyarthi
2023-07-09T08:50:36Z
http://arxiv.org/abs/2307.04127v2
# Self-healing unitarity is an Optical illusion: Comment on 'Self-healing of unitarity in effective field theories and the onset of new physics' ###### Abstract Among the vast variety of proposals put forward by the community to resolve tree-level unitarity violations in Higgs inflation models, there exists the concept of self-healing. This mechanism helps cancel out tree-level violations for elastic scattering processes by summing over successive vacuum polarization loop corrections. In this comment, we shall see how self-healing is a manifestation of the optical theorem for a theory tailored to behave in a certain way. ## I Introduction Unitarity is one of several properties at the heart of a quantum theory, and essentially implies that the probability of an event cannot exceed unity. Along with other properties such as positivity, causality, etc., it helps provide us with useful bounds on a theory (for example: perturbative bounds, Froissart bounds, etc.) in the form of constraints on a parameter, or on the domain within which the theory is valid, without needing to introduce new degrees of freedom (DsOF). Tree-level unitarity violations, estimated using perturbative unitarity bounds, are immensely helpful in pointing out missing pieces in a theory. For a non-renormalizable theory, these may imply that the loop corrections might become relevant as we approach the apparent violation scale in describing the complete process [1]. For others, they may indicate that the theory is incomplete. Beyond Standard Model (BSM) physics helps fill in gaps stemming from the incompatibility of the Standard Model and gravity, and provides us with possible candidates for the missing DsOF, often motivated by the existence of dark matter and dark energy that make up the majority of the energy content of the universe. Given how Higgs driven inflation has been one of the prime candidates for a theory describing the birth of the universe, the fact that it faces unitarity violations far below the Planck scale is something the scientific community has been trying to explain away for a long time (see our recent work [2] and cited works for more info). After several decades of search, though, we have as yet not been able to resolve the issue completely. Among the several approaches suggested towards resolving the issue is 'self-healing' of unitarity, proposed in [3] and later applied in the context of Higgs inflation in [4], which are at the heart of what we discuss in this work. This paper is organized as follows: in Sec.II, we introduce the reader to the optical theorem and partial wave unitarity bounds as presented in [1]; in Sec.III, we briefly review the idea of self-healing as it was put forward in [3]; followed by our explanation of how self-healing is a special case of the optical theorem in Sec.IV; and lastly, some discussions in Sec.V. ## II Brief recap Imposing that the scattering matrix is unitary, we obtain the famous optical theorem, which relates the imaginary part of the forward scattering amplitude to the total scattering cross section: \[\mathcal{M}(i\to f)-\mathcal{M}^{*}(f\to i)=i\sum_{X}\int d\Pi_{X}(2\pi)^{4}\delta^{4}(p_{i}-p_{X})\mathcal{M}(i\to X)\mathcal{M}^{*}(f\to X). \tag{1}\] In its generalized form (1), this theorem states that order-by-order in perturbation theory, imaginary parts of higher loop amplitudes are determined by lower loop amplitudes. For instance, the imaginary part of the one-loop amplitude could be determined by the tree-level amplitude.
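To see how the elastic special case below arises (a standard textbook step, supplied here by us): setting \(f=i=A\) in (1) gives \(2\,\mathrm{Im}\,\mathcal{M}(A\to A)\) on the left-hand side, so that \[2\,\mathrm{Im}\,\mathcal{M}(A\to A)=\sum_{X}\int d\Pi_{X}(2\pi)^{4}\delta^{4}(p_{A}-p_{X})|\mathcal{M}(A\to X)|^{2}=4E_{CM}|\vec{p}_{i}|\sum_{X}\sigma(A\to X),\] where the last equality uses the two-body cross section in the center-of-mass frame, \(\sigma(A\to X)=\frac{1}{4E_{CM}|\vec{p}_{i}|}\int d\Pi_{X}(2\pi)^{4}\delta^{4}(p_{A}-p_{X})|\mathcal{M}(A\to X)|^{2}\). This is the content of Eq. (2) below.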
A special case arises from this using the assumption that the initial and final states are the same state \(|A\rangle\): \[\text{Im}\mathcal{M}(A\to A)=2E_{CM}|\vec{p}_{i}|\sum_{X}\sigma(A\to X). \tag{2}\] The optical theorem puts a constraint on how large a scattering amplitude can be. Schematically, \[|\mathcal{M}|^{2}\lesssim\text{Im}\mathcal{M}\implies|\mathcal{M}|\lesssim 1. \tag{3}\] Now, we use the partial wave expansion of the scattering amplitude to impose constraints on the coefficients of the Legendre polynomials. To recap, we first expand the scattering amplitude as: \[\mathcal{M}(\theta)=16\pi\sum_{j}a_{j}(2j+1)P_{j}(\cos\theta), \tag{4}\] where \(P_{j}(\cos\theta)\) are Legendre polynomials, with \(P_{j}(1)=1\), and \[\int_{-1}^{1}P_{j}(\cos\theta)P_{k}(\cos\theta)d\cos\theta=\frac{2}{2j+1}\delta_{jk}. \tag{5}\] For a case where the initial and final states are the same, we can write the cross section in the center of mass frame as: \[\sigma_{CM_{tot}}=\frac{16\pi}{E_{CM}^{2}}\sum_{j}|a_{j}|^{2}(2j+1). \tag{6}\] Employing the optical theorem at \(\theta=0\), we have \[\text{Im}\mathcal{M}(AB\to AB\text{ at }\theta=0) =2E_{\text{CM}}\left|\vec{p}_{i}\right|\sum_{X}\sigma_{\text{tot}}(AB\to X)\] \[\geq 2E_{\text{CM}}\left|\vec{p}_{i}\right|\sigma_{\text{tot}}(AB\to AB), \tag{7}\] where the inequality follows from the fact that \(|AB\rangle\) is among the states \(|X\rangle\). Then, \[\sum_{j=0}^{\infty}(2j+1)\,\text{Im}\left(a_{j}\right)\geq\frac{2\left|\vec{p}_{i}\right|}{E_{\text{CM}}}\sum_{j=0}^{\infty}(2j+1)\left|a_{j}\right|^{2}. \tag{8}\] This, coupled with the inequality \(|a_{j}|\geq\text{Im}(a_{j})\), means that the magnitude of \(a_{j}\) is now constrained as \(|a_{j}|\leq 1\), \(0\leq\text{Im}(a_{j})\leq 1\), and \(|\text{Re}(a_{j})|\leq 1/2\), as seen in Fig. (1).
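For completeness, the geometric content of these bounds (our remark; this is the standard step usually drawn as the Argand diagram of Fig. (1)): in the relativistic limit \(2|\vec{p}_{i}|/E_{CM}\to 1\), and treating each partial wave separately, (8) reduces to \(\text{Im}(a_{j})\geq|a_{j}|^{2}\), i.e. \[\left(\text{Re}\,a_{j}\right)^{2}+\left(\text{Im}\,a_{j}-\tfrac{1}{2}\right)^{2}\leq\tfrac{1}{4},\] so \(a_{j}\) must lie inside the circle of radius \(1/2\) centered at \(i/2\); the bounds \(|a_{j}|\leq 1\), \(0\leq\text{Im}(a_{j})\leq 1\) and \(|\text{Re}(a_{j})|\leq 1/2\) quoted above follow immediately.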
## III Proposition: self-healing unitarity Preceding [3], authors of [5] worked with a set of complex scalar fields, nonminimally coupled with gravity, and tried to estimate the scattering amplitude for the process \(s\bar{s}\to s^{\prime}\bar{s^{\prime}}\), where they set \(s\neq s^{\prime}\) to make sure that only the \(s\)-channel graviton exchange diagram contributed to the process, and they could avoid collinear divergences in the \(t\) and \(u\) channels. They then try to estimate the scale at which the standard model of particle physics and the minimal supersymmetric standard model, both coupled with gravity, would similarly violate unitarity at tree-level. They claim that in the limit where the number of particles is large, the leading order loop corrections are successive vacuum polarization diagrams and that these violations could be fixed by considering such higher-loop corrections. Following this, authors of [3] consider a similar Lagrangian as [5], involving a nonminimal coupling between gravity and multiple scalar fields, and provide a useful confirmation for the results presented in [5]. They first use partial wave analysis to do so, and then verify the result using a summation of the infinite series of loop diagrams. Please note that [3] focused on \(j=2\) partial wave amplitudes only. Authors of [4] expanded on the work of the preceding authors and verified the results for a theory involving the Higgs doublet. Instead of sticking to just one process, however, the authors considered certain combinations of these scalars for initial and final states, making sure the combinations adhered to the rules set forth in [5] mentioned earlier. Later, they summed over the contributions from all of these processes to show explicitly that the self-healing phenomenon could be applied at the \(j=0\) level as well. ## IV Equalities The most important step in order to proceed with Eq.(2) is to fix the initial state \(|AB\rangle\). \(|X\rangle\) would, then, contain all possible states that \(|AB\rangle\) could transform to, with \(|AB\rangle\) itself being one of them. This is what causes the inequality in Eq.(7). What happens if we constrain the theory in such a way that the only possible state is \(|AB\rangle\)? Then, instead of an inequality we'd get the equality \(|a_{j}|=\text{Im}(a_{j})\) for all \(j\). This is exactly what's observed in [3; 5], though they only show it for \(j=2\). So while the iterative sums are novel and useful in explicitly demonstrating the idea of self-healing, it is, for all intents and purposes, simply an artefact of the optical theorem. This can be visualized easily in Fig. (1). Additionally, if we fix the initial state, find all the elements of the corresponding scattering matrix, and sum over their contributions, we revert to the initial form of the optical theorem Eq.(2) and, again, instead of a partial wave bound (inequality), we get an equality as proposed originally. This can be seen in the result of [4] for all \(j\), though it was shown explicitly only for \(j=0\). Again, this is another manifestation of the optical theorem. This latter result covers theories that could be transformed to different forms using field transformations, where the 'ideal' structure of these theories (as required in [3; 5]) could get ruined as more interaction terms show up, meaning more varied final states. Also note that it was stated in [5] that the contribution from vacuum polarization corrections far exceeds that from other sources only when the number of particles is large. For a limited number of DsOF, contributions from other loop diagrams, such as vertex corrections or embedded loops, might be of the same order as the vacuum polarization corrections. Nevertheless, the optical theorem should still be able to restore unitarity in those theories. One example of this is [4] where, as previously mentioned, the authors have considered four DsOF in the form of the Higgs doublet. ## V Discussion Well-behaved gravitational theories are expected to face unitarity violations close to the Planck scale, where the loop diagrams start to contribute. Any violations below this scale imply either that the loops from DsOF (other than the graviton) already present in the theory may be contributing, or that some new DsOF need to be introduced. It was seen to be the former in the theories mentioned in this work, where summing over loop contributions was able to restore unitarity through the self-healing mechanism, which turned out to be a special case of the optical theorem. The results of the optical theorem Eq.(1) and Eq.(2), and even the partial wave analysis Eq.(8), are independent of whether the collisions are elastic or inelastic. Therefore, this analysis should be applicable to those cases as well, i.e. even the inelastic versions of the processes considered in [3; 4] should be able to 'self-heal' adequately. This could be explicitly verified as an independent work. ## Acknowledgement This work is partially funded by DST (Govt. of India), Grant No. SERB/PHY/2021057.
2305.13222
Measure-theoretic Uniformly Positive Entropy on the Space of Probability Measures
For a homeomorphism $T$ on a compact metric space $X$, a $T$-invariant Borel probability measure $\mu$ on $X$ and a measure-theoretic quasifactor $\widetilde{\mu}$ of $\mu$, we study the relationship between the local entropy of the system $(X,\mu,T)$ and of its induced system $(\mathcal{M}(X),\widetilde{\mu},\widetilde{T})$, where $\widetilde{T}$ is the homeomorphism induced by $T$ on the space $\mathcal{M}(X)$ of all Borel probability measures defined on $X$.
Rômulo M. Vermersch
2023-05-22T16:54:28Z
http://arxiv.org/abs/2305.13222v4
# Measure-theoretic uniformly positive entropy on the space of probability measures ###### Abstract. For a homeomorphism \(T\) on a compact metric space \(X\), a \(T\)-invariant Borel probability measure \(\mu\) on \(X\) and a measure-theoretic quasifactor \(\widetilde{\mu}\) of \(\mu\), we study the relationship between the local entropy of the system \((X,\mu,T)\) and of its induced system \((\mathcal{M}(X),\widetilde{\mu},\widetilde{T})\), where \(\widetilde{T}\) is the homeomorphism induced by \(T\) on the space \(\mathcal{M}(X)\) of all Borel probability measures defined on \(X\). Key words and phrases: homeomorphisms, probability measures, entropy, quasifactors, dynamics 2010 Mathematics Subject Classification: Primary 28D20; Secondary 28A33 ## 1. Introduction By a _topological dynamical system_ (TDS) we mean a pair \((X,T)\) consisting of a compact metric space \(X\) with metric \(d\) and a homeomorphism \(T:X\to X\). Such a TDS induces, in a natural way, the TDS \((\mathcal{M}(X),\widetilde{T}).\) Here, \(\mathcal{M}(X)\) denotes the space of all Borel probability measures on \(X\) endowed with the _Prohorov metric_ \[d_{P}(\mu,\nu):=\inf\{\delta>0:\mu(A)\leq\nu(A^{\delta})+\delta\text{ for all }A\in\mathcal{X}\},\] where \(\mathcal{X}\) is the \(\sigma\)-algebra of all Borel subsets of \(X\), and \(\widetilde{T}:\mathcal{M}(X)\to\mathcal{M}(X)\) is the homeomorphism given by \[(\widetilde{T}(\mu))(A):=\mu(T^{-1}(A))\quad(\mu\in\mathcal{M}(X),A\in\mathcal{X}).\] It is well known that \(\mathcal{M}(X)\) is a compact metric space. We refer the reader to the books [4, 9] for a study of the space \(\mathcal{M}(X)\). The investigation of the relations between the dynamics of the TDS \((X,T)\) and the dynamics of the induced TDS \((\mathcal{M}(X),\widetilde{T})\) was initiated by Bauer and Sigmund [1], and later developed by several authors; see [2, 3, 13, 14, 22, 24], for instance. By a _measure-theoretic dynamical system_ (MDS) we mean a triple \((X,\mu,T)\), where \((X,T)\) is a TDS and \(\mu\) is a \(T\)-invariant Borel probability measure on \(X\). Now, recall the notion of a measure-theoretic quasifactor of a MDS \(\mathfrak{X}=(X,\mu,T)\) due to Glasner [10] (see also [13, 14]): A measure-theoretic _quasifactor_ of \(\mathfrak{X}\) is a \(\widetilde{T}\)-invariant Borel probability measure \(\widetilde{\mu}\) on \(\mathcal{M}(X)\) which satisfies the so-called _barycenter equation_: \[\mu=\int_{\mathcal{M}(X)}\theta d\widetilde{\mu}(\theta).\] Equivalently, we say that \(\mu\) is the barycenter of \(\widetilde{\mu}\). The barycenter equation means that, by choosing any compact topology on \(X\) compatible with its Borel structure, one has \[\int_{X}f(x)d\mu(x)=\int_{\mathcal{M}(X)}\int_{X}f(x)d\theta(x)d\widetilde{\mu}(\theta)\] for every continuous function \(f:X\to\mathbb{R}\). This definition is independent of the choice of the compact topology compatible with the Borel structure [10]. We denote by \(Q(\mu)\) the set of all measure-theoretic quasifactors of \(\mu\) and, sometimes, we will say that the induced MDS \(\widetilde{\mathfrak{X}}=(\mathcal{M}(X),\widetilde{\mu},\widetilde{T})\) is a measure-theoretic quasifactor of \(\mathfrak{X}=(X,\mu,T)\). In this work we are concerned with the relationship between the local entropy of the measure-theoretic dynamical systems \((X,\mu,T)\) and \((\mathcal{M}(X),\widetilde{\mu},\widetilde{T})\), where \(\widetilde{\mu}\) is a quasifactor of \(\mu\).
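Two elementary observations may help fix ideas (ours, immediate from the definitions): \[\widetilde{T}(\delta_{x})=\delta_{T(x)}\quad(x\in X),\] so \(x\mapsto\delta_{x}\) embeds the TDS \((X,T)\) into \((\mathcal{M}(X),\widetilde{T})\); consequently, the pushforward \(\widetilde{\mu}:=(x\mapsto\delta_{x})_{*}\mu\) is \(\widetilde{T}\)-invariant and satisfies the barycenter equation, \(\int_{\mathcal{M}(X)}\theta\,d\widetilde{\mu}(\theta)=\int_{X}\delta_{x}\,d\mu(x)=\mu\), so every MDS admits at least this trivial quasifactor.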
The research on the relationship between the entropy of a MDS and of a measure-theoretic quasifactor of it can be traced back to a deep result due to Glasner and Weiss [13] which asserts that if \((X,\mu,T)\) has zero entropy, then so does \((\mathcal{M}(X),\widetilde{\mu},\widetilde{T})\) for every \(\widetilde{\mu}\in Q(\mu)\). By using the variational principle, it follows that if \((X,T)\) has zero topological entropy, then so does \((\mathcal{M}(X),\widetilde{T})\). Qiao and Zhou [22] obtained such a result for the notion of sequence entropy. We point out that our second result in this work is a version of the aforementioned Glasner and Weiss result in the context of local entropy theory [15]. In another direction, Glasner and Weiss [14] proved that any ergodic system of positive entropy admits _every_ ergodic system of positive entropy as a measure-theoretic quasifactor. So, in particular, one sees that the set of measure-theoretic quasifactors of an ergodic system of positive entropy is very large. In the present work we study, with the powerful tools of local entropy theory [15, 18], the relationship between the local entropy of a MDS and that of a measure-theoretic quasifactor of it. Recently, Liu and Wei [20] studied the relationship between the local entropy of a topological factor of a dynamical system and the local entropy of the topological system induced by that factor on the space of Borel probability measures. We now recall the notion of _entropy pairs_ for a TDS/MDS. By following [5], we say that a pair of distinct points \(x,x^{\prime}\in X\) is an _entropy pair_ for the TDS \((X,T)\) if, for every open cover \(\mathcal{U}=\{U,V\}\) of \(X\) with \(x\in U^{c}\) and \(x^{\prime}\in V^{c}\), one has \(h_{top}(T,\mathcal{U})>0\). In [5] Blanchard proved that \((X,T)\) has positive topological entropy if and only if there exists an entropy pair \((x,x^{\prime})\) for \((X,T)\). So, entropy pairs are a tool for localising the topological entropy in the Cartesian product \(X\times X\). We denote by \(E_{X}\) the set of all entropy pairs for the TDS \((X,T)\). By following [8], we say that a pair of distinct points \(x,x^{\prime}\in X\) is a \(\mu\)_-entropy pair_ for the MDS \((X,\mu,T)\) if, for any measurable partition \(\mathcal{P}=\{F,F^{c}\}\) of \(X\) such that \(F\) contains a neighborhood of \(x\) and \(F^{c}\) contains a neighborhood of \(x^{\prime}\), one has \(h_{\mu}(T,\mathcal{P})>0\). We denote by \(E_{\mu}\) the set of all \(\mu\)-entropy pairs for the MDS \((X,\mu,T)\). It was proved in [8] that \(E_{\mu}\subset E_{X}\) for each \(T\)-invariant Borel probability measure \(\mu\) on \(X\) and that the reverse inclusion holds when \(\mu\) is uniquely ergodic. On the other hand, in [7] it was proved that if \((X,T)\) has positive topological entropy, then there exists a \(T\)-invariant Borel probability measure \(\mu\) on \(X\) such that \(E_{\mu}=E_{X}\). Now we recall the notion of _uniformly positive entropy_ (UPE). We say that a TDS \((X,T)\) has UPE if every pair of distinct points \(x,x^{\prime}\in X\) is in \(E_{X}\). This notion was introduced by Blanchard [5] as a candidate for an analogue in topological dynamics of the notion of a \(K\)-process in ergodic theory. As it was shown by Blanchard in [5] and [6], the notion of UPE is more adequate than the notion of _topological completely positive entropy_ (topological CPE - that is, every non-trivial topological factor of the TDS has positive entropy) in that respect.
Indeed, in [5] he proved that every non-trivial factor of an UPE system has positive topological entropy and in [6] he proved that an UPE system is disjoint from every minimal zero entropy system, while also in [5] he showed that topological CPE does not imply any degree of topological mixing, not even transitivity. Although an UPE system is topologically weakly mixing, it need not be strongly mixing [6]; nevertheless, Glasner and Weiss [12] proved that UPE is a necessary condition for a TDS \((X,T)\) to have a \(T\)-invariant probability measure \(\mu\) of full support whose corresponding measurable dynamical system \((X,\mu,T)\) is a \(K\)-process. Moreover, Huang and Ye [16] generalized this by proving that if a topological dynamical system admits an invariant \(K\)-measure with full support, then it has UPE of all orders; so, UPE of all orders is the topological analogue of the \(K\)-property from Ergodic Theory. We now introduce the notion of a \(\mu\)-UPE system. Let \(\mathfrak{X}=(X,\mu,T)\) be a MDS. We say that a two-set Borel partition \(\mathcal{P}=\{P_{0},P_{1}\}\) of \(X\) is a _replete partition_ if \(\mathrm{int}P_{0}\neq\varnothing\) and \(\mathrm{int}P_{1}\neq\varnothing\) [8]. We say that \(\mathfrak{X}\) has \(\mu\)-UPE if any pair of distinct points \(x,x^{\prime}\in X\) is in \(E_{\mu}\). Equivalently, \(\mathfrak{X}\) has \(\mu\)-UPE if \(h_{\mu}(T,\mathcal{P})>0\) for every replete partition \(\mathcal{P}\). In the proofs of our results we will use local entropy theory as in the unifying work due to Kerr and Li [18]. Finally, we remark that the present work represents the measure-theoretic counterpart of [2] and complements the works [13] and [14] in the context of local entropy. ## 2. Preliminaries Recall that the Prohorov metric \(d_{P}\) on \(\mathcal{M}(X)\) induces the so-called _weak*-topology_, that is, the topology whose basic open neighborhoods of \(\mu\in\mathcal{M}(X)\) are the sets of the form \[\mathbb{V}(\mu;f_{1},\ldots,f_{k};\varepsilon):=\Big{\{}\nu\in\mathcal{M}(X):\Big{|}\int_{X}f_{i}\,d\nu-\int_{X}f_{i}\,d\mu\Big{|}<\varepsilon\text{ for }i=1,\ldots,k\Big{\}},\] where \(k\geq 1\), \(f_{1},\ldots,f_{k}:X\to\mathbb{R}\) are continuous functions and \(\varepsilon>0\). In [2] Bernardes, Darji and the author proved the following result that gives another basis for the weak*-topology on \(\mathcal{M}(X)\): **Lemma 1**.: _The sets of the form_ \[\mathbb{W}(U_{1},\ldots,U_{k};\eta_{1},\ldots,\eta_{k}):=\{\nu\in\mathcal{M}(X):\nu(U_{i})>\eta_{i}\text{ for }i=1,\ldots,k\},\] _where \(k\geq 1\), \(U_{1},\ldots,U_{k}\) are nonempty disjoint open sets in \(X\) and \(\eta_{1},\ldots,\eta_{k}\) are positive real numbers with \(\eta_{1}+\cdots+\eta_{k}<1\), form a basis for the weak\({}^{*}\)-topology on \(\mathcal{M}(X)\)._ Let us now recall some definitions and notations from entropy theory. In what follows, all logarithms are in base 2. Let \((X,\mu,T)\) be a MDS.
Given finite measurable partitions \(\mathcal{P}_{1},\ldots,\mathcal{P}_{n}\) of \(X\), let \[\mathcal{P}_{1}\vee\cdots\vee\mathcal{P}_{n}:=\{P_{1}\cap\cdots\cap P_{n}:P_{1 }\in\mathcal{P}_{1},\ldots,P_{n}\in\mathcal{P}_{n}\}.\] The _entropy_ of a finite partition \(\mathcal{P}\) of \(X\) is defined by \[H_{\mu}(\mathcal{P}):=-\sum_{P\in\mathcal{P}}\mu(P)\log\mu(P).\] The _entropy of \(T\) with respect to \(\mathcal{P}\)_ is defined by \[h_{\mu}(T,\mathcal{P}):=\lim_{n\to\infty}\frac{1}{n}H_{\mu}(\mathcal{P}^{n-1}),\] where \(\mathcal{P}^{n-1}:=\mathcal{P}\lor T^{-1}\mathcal{P}\vee\cdots\lor T^{-(n-1)} \mathcal{P}\), and the _entropy_ of \(T\) is given by \[h_{\mu}(T):=\sup_{\mathcal{P}}h_{\mu}(T,\mathcal{P}),\] where the supremum is taken over all finite measurable partitions \(\mathcal{P}\) of \(X\). The notion of entropy was introduced in Ergodic Theory by Kolmogorov [19]. It plays a major role in dynamics, notably in the Isomorphism Theory [21]. We now introduce some elements of Kerr-Li machinery [18] that we shall use in the sequel. Let \(\mathfrak{X}=(X,\mu,T)\) be a MDS and let \(\boldsymbol{A}=(A_{1},\ldots,A_{k})\) be a tuple of subsets of \(X\). For a subset \(D\) of \(X\), we say that \(J\subset\mathbb{N}\) is an _independence set for \(\boldsymbol{A}\) relative to \(D\)_ if for every nonempty finite subset \(I\subset J\) and every map \(\sigma:I\to\{1,\ldots,k\}\), we have \[D\cap\bigcap_{j\in I}T^{-j}A_{\sigma(j)}\neq\emptyset.\] For each \(\delta>0\), we denote by \(\mathcal{B}(\mu,\delta)\) the collection of all Borel subsets \(D\) of \(X\) such that \(\mu(D)\geq 1-\delta\). For each \(m\geqslant 1\) and \(\delta>0\), we define \[\varphi(\boldsymbol{A},\delta,m)=\min_{D\in\mathcal{B}(\mu,\delta)}\max\big{\{} |\{1,\ldots,m\}\cap J|:J\text{ is an independence set for }\boldsymbol{A}\text{ relative to }D\big{\}}.\] Now, put \[\overline{\mathsf{I}}_{\mu}(\boldsymbol{A},\delta):=\limsup_{m\to\infty}\frac {\varphi(\boldsymbol{A},\delta,m)}{m}.\] Finally, let us define the _upper \(\mu\)-independence density_ of \(\boldsymbol{A}\) as \[\overline{\mathsf{I}}_{\mu}(\boldsymbol{A}):=\sup_{\delta>0}\overline{\mathsf{ I}}_{\mu}(\boldsymbol{A},\delta).\] The following useful characterization of \(\mu\)-UPE is due to Kerr and Li [18]. **Theorem 2**.: _Let \((X,\mu,T)\) be a MDS. Then, \((X,\mu,T)\) has \(\mu\)-UPE if and only if for every pair \(\boldsymbol{U}=(U_{0},U_{1})\) of nonempty disjoint open sets in \(X\), one has \(\overline{\mathsf{I}}_{\mu}(\boldsymbol{U})>0\)._ ## 3. Our results For each \(n\in\mathbb{N}\), let \[\mathcal{M}_{n}(X):=\Big{\{}\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}}\in \mathcal{M}(X):x_{1},\ldots,x_{n}\in X\text{ not necessarily distinct}\Big{\}},\] where \(\delta_{x}\) denotes the unit mass concentrated at the point \(x\) of \(X\). It is classical that \(\bigcup_{n\in\mathbb{N}}\mathcal{M}_{n}(X)\) is dense in \(\mathcal{M}(X)\). Since \(\mathcal{M}_{n}(X)\) is \(\widetilde{T}\)-invariant, we can consider the TDS \((\mathcal{M}_{n}(X),\widetilde{T})\), where we are also denoting by \(\widetilde{T}\) the corresponding restricted map. For each \(n\in\mathbb{N}\), let us consider \((X^{(n)},\mu^{(n)})\) the canonical symmetric \(n\)-fold joining of \((X,\mu)\)[11], where \(\mu^{(n)}:=\mu\times\cdots\times\mu\) is the product measure on \(X^{(n)}:=X\times\cdots\times X\). 
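For orientation (our remark): for \(n=2\), the map \(\psi\) introduced below sends \[(x_{1},x_{2})\mapsto\tfrac{1}{2}(\delta_{x_{1}}+\delta_{x_{2}})=\psi(x_{2},x_{1}),\] so \(\psi\) is invariant under permutations of the coordinates; this is precisely why one passes to the symmetric quotient \(\hat{X}^{(n)}:=X^{(n)}/S_{n}\), on which the induced map \(\hat{\psi}\) becomes injective.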
We also consider \(T_{n}:=T\times\cdots\times T\) and, given any \(\widetilde{\mu}\in Q(\mu)\), we consider the MDS \((\mathcal{M}_{n}(X),\widetilde{\mu},\widetilde{T})\), where we are also denoting by \(\widetilde{\mu}\) the corresponding normalized induced measure. Denote by \(S_{n}\) the group of all permutations of \(n\) elements and let us consider \(\tau:X^{(n)}\to\hat{X}^{(n)}:=X^{(n)}/S_{n}\) the quotient map. A typical element of \(\hat{X}^{(n)}\) will be denoted by \(\langle x_{1},\ldots,x_{n}\rangle\). Moreover, we can consider the quotient measure \(\tau_{*}(\mu^{(n)}):=\mu^{(n)}\circ\tau^{-1}\) on \(\hat{X}^{(n)}\). Now let us consider the maps \[\psi:(x_{1},\ldots,x_{n})\in X^{(n)}\mapsto(1/n)\sum_{l=1}^{n}\delta_{x_{l}}\in\mathcal{M}_{n}(X)\] and \[\hat{\psi}:\langle x_{1},\ldots,x_{n}\rangle\in\hat{X}^{(n)}\mapsto(1/n)\sum_{l=1}^{n}\delta_{x_{l}}\in\mathcal{M}_{n}(X).\] Clearly, \(\hat{\psi}\) is a Borel isomorphism and we can consider the measure \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})\) on \(\mathcal{M}_{n}(X)\). It follows from Lemma 3.5 in [14] that \((\mathcal{M}(X),(\hat{\psi}\circ\tau)_{*}(\mu^{(n)}),\widetilde{T})\) is a quasifactor of \((X,\mu,T)\) which is isomorphic to \((\hat{X}^{(n)},\hat{\mu}^{(n)},\hat{T}_{n})\), where we are denoting by \(\hat{T}_{n}\) the map defined by \(\hat{T}_{n}\circ\tau=\tau\circ T_{n}\) and \(\hat{\mu}^{(n)}:=\tau_{*}(\mu^{(n)})\). Moreover, it is easy to verify that if \((X^{(n)},\mu^{(n)},T_{n})\) has \(\mu^{(n)}\)-UPE, then \((\hat{X}^{(n)},\hat{\mu}^{(n)},\hat{T}_{n})\) has \(\hat{\mu}^{(n)}\)-UPE. We are ready to state and prove our first result: **Theorem 3**.: _For every ergodic MDS \((X,\mu,T)\), the following assertions are equivalent:_ 1. \((X,\mu,T)\) _has_ \(\mu\)_-UPE;_ 2. \((\mathcal{M}_{n}(X),(\hat{\psi}\circ\tau)_{*}(\mu^{(n)}),\widetilde{T})\) _has_ \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})\)_-UPE for some_ \(1\leq n<\infty\)_;_ 3. \((\mathcal{M}_{n}(X),(\hat{\psi}\circ\tau)_{*}(\mu^{(n)}),\widetilde{T})\) _has_ \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})\)_-UPE for every_ \(1\leq n<\infty\)_._ Proof.: (i) \(\Rightarrow\) (iii): Suppose that \((X,\mu,T)\) has \(\mu\)-UPE. Fix any \(1\leq n<\infty\) and let \(\widetilde{\boldsymbol{U}}^{(n)}:=(\widetilde{U}_{0}^{(n)},\widetilde{U}_{1}^{(n)})\) be any pair of nonempty disjoint open sets in \(\mathcal{M}_{n}(X)\). By Lemma 1, there exist nonempty open sets \(U_{0,1},\ldots,U_{0,n},U_{1,1},\ldots,U_{1,n}\) in \(X\) such that \[\psi(U_{0,1}\times\cdots\times U_{0,n})\subset\widetilde{U}_{0}^{(n)}\ \ \text{and}\ \ \psi(U_{1,1}\times\cdots\times U_{1,n})\subset\widetilde{U}_{1}^{(n)}. \tag{1}\] Put \[U_{0}^{(n)}:=U_{0,1}\times\cdots\times U_{0,n}\ \text{and}\ U_{1}^{(n)}:=U_{1,1}\times\cdots\times U_{1,n},\] which are nonempty open sets in \(X^{(n)}\). Thus, \[\hat{U}_{0}^{(n)}:=\tau(U_{0}^{(n)})\ \text{and}\ \hat{U}_{1}^{(n)}:=\tau(U_{1}^{(n)})\] are nonempty open sets in \(\hat{X}^{(n)}\) such that \[\hat{\psi}(\hat{U}_{0}^{(n)})\subset\widetilde{U}_{0}^{(n)}\ \text{and}\ \hat{\psi}(\hat{U}_{1}^{(n)})\subset\widetilde{U}_{1}^{(n)}. \tag{2}\] Note that \(\widetilde{T}\circ\hat{\psi}=\hat{\psi}\circ\hat{T}_{n}\). Since \(\widetilde{U}_{0}^{(n)}\cap\widetilde{U}_{1}^{(n)}=\varnothing\), (2) implies that \[\hat{U}_{0}^{(n)}\cap\hat{U}_{1}^{(n)}=\varnothing. \tag{3}\]
Now, since any \(n\)-product of \(\mu\)-UPE systems is a \(\mu^{(n)}\)-UPE system [16] (see also [18]), by (3) and Theorem 2, there exist \(d>0\), \(\delta>0\) and \(m_{k}\to\infty\) such that, for every \(k\geqslant 1\) and every \(\hat{D}^{(n)}\subset\hat{X}^{(n)}\) with \(\hat{\mu}^{(n)}(\hat{D}^{(n)})\geqslant 1-\delta\), there exists \(J\subset\mathbb{N}\) with \(|J\cap\{1,\ldots,m_{k}\}|\geqslant d.m_{k}\) such that \[\hat{D}^{(n)}\cap\bigcap_{j\in I}\hat{T}_{n}^{-j}(\hat{U}_{\sigma(j)}^{(n)})\neq\varnothing,\] for every nonempty finite subset \(I\subset J\) and for every map \(\sigma:I\to\{0,1\}\). Suppose that \(k\geqslant 1\) and \(\widetilde{D}^{(n)}\subset\mathcal{M}_{n}(X)\) is any Borel set such that \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})(\widetilde{D}^{(n)})\geqslant 1-\delta\). So, \(\hat{D}^{(n)}:=\hat{\psi}^{-1}(\widetilde{D}^{(n)})\) is such that \(\hat{\psi}(\hat{D}^{(n)})=\widetilde{D}^{(n)}\) and \(\hat{\mu}^{(n)}(\hat{D}^{(n)})\geqslant 1-\delta\). Let \(J\subset\mathbb{N}\) be as above and pick any nonempty finite set \(I\subset J\). Thus, for any map \(\sigma:I\to\{0,1\}\), we have \[\varnothing\neq\hat{\psi}\Big{(}\hat{D}^{(n)}\cap\bigcap_{j\in I}\hat{T}_{n}^{-j}(\hat{U}_{\sigma(j)}^{(n)})\Big{)} \subset\hat{\psi}(\hat{D}^{(n)})\cap\bigcap_{j\in I}(\hat{\psi}\circ\hat{T}_{n}^{-j})\big{(}\hat{U}_{\sigma(j)}^{(n)}\big{)}\] \[\subset\hat{\psi}(\hat{D}^{(n)})\cap\bigcap_{j\in I}\widetilde{T}^{-j}\big{(}\hat{\psi}\big{(}\hat{U}_{\sigma(j)}^{(n)}\big{)}\big{)}\] \[\subset\widetilde{D}^{(n)}\cap\bigcap_{j\in I}\widetilde{T}^{-j}(\widetilde{U}_{\sigma(j)}^{(n)}),\] where we used the relation \(\widetilde{T}\circ\hat{\psi}=\hat{\psi}\circ\hat{T}_{n}\) in the second inclusion and (2) in the last one. We conclude that \(\overline{\mathsf{I}}_{(\hat{\psi}\circ\tau)_{*}(\mu^{(n)})}(\widetilde{\boldsymbol{U}}^{(n)})\geqslant d\). Hence, by Theorem 2, we see that \((\mathcal{M}_{n}(X),(\hat{\psi}\circ\tau)_{*}(\mu^{(n)}),\widetilde{T})\) has \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})\)-UPE. (ii) \(\Rightarrow\) (i): Suppose that \((\mathcal{M}_{n}(X),(\hat{\psi}\circ\tau)_{*}(\mu^{(n)}),\widetilde{T})\) has \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})\)-UPE for some \(2\leq n<\infty\) (for \(n=1\), the system \((\mathcal{M}_{1}(X),(\hat{\psi}\circ\tau)_{*}(\mu),\widetilde{T})\) is isomorphic to \((X,\mu,T)\), so the implication is trivial). Let \(\boldsymbol{U}:=(U_{0},U_{1})\) be any pair of nonempty disjoint open sets in \(X\) and consider \[\widetilde{U}_{0}^{(n)}:=\Big{\{}\mu\in\mathcal{M}_{n}(X):\mu(U_{0})>\frac{n-1}{n}\Big{\}}\ \ \text{and}\ \ \widetilde{U}_{1}^{(n)}:=\Big{\{}\mu\in\mathcal{M}_{n}(X):\mu(U_{1})>\frac{n-1}{n}\Big{\}},\] which are nonempty disjoint open sets in \(\mathcal{M}_{n}(X)\). Since \((\mathcal{M}_{n}(X),(\hat{\psi}\circ\tau)_{*}(\mu^{(n)}),\widetilde{T})\) has \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})\)-UPE, the pair \(\widetilde{\boldsymbol{U}}:=(\widetilde{U}_{0}^{(n)},\widetilde{U}_{1}^{(n)})\) has positive upper \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})\)-independence density, that is, there are \(d>0\), \(\delta>0\) and \(m_{k}\to\infty\) such that, for every \(k\geqslant 1\) and every \(\widetilde{D}^{(n)}\subset\mathcal{M}_{n}(X)\) with \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})(\widetilde{D}^{(n)})\geqslant 1-\delta\), there exists \(J\subset\mathbb{N}\) with \(|J\cap\{1,\ldots,m_{k}\}|\geqslant d.m_{k}\) such that \[\widetilde{D}^{(n)}\cap\bigcap_{j\in I}\widetilde{T}^{-j}(\widetilde{U}_{\sigma(j)}^{(n)})\neq\varnothing,\] for every nonempty finite subset \(I\subset J\) and for every map \(\sigma:I\to\{0,1\}\). Fix any \(0<\alpha<1-\sqrt[n]{1-\delta}\) and pick \(k\geqslant 1\) and \(D\subset X\) with \(\mu(D)\geqslant 1-\alpha\).
If \(D^{(n)}:=D\times\cdots\times D\), then \(\mu^{(n)}(D^{(n)})\geqslant 1-\delta\), and notice that the set \(\widetilde{D}_{n}:=(\hat{\psi}\circ\tau)(D^{(n)})\subset\mathcal{M}_{n}(X)\) is such that \((\hat{\psi}\circ\tau)_{*}(\mu^{(n)})(\widetilde{D}_{n})=\mu^{(n)}(D^{(n)})\geqslant 1-\delta\). So, there exists some \(J\subset\mathbb{N}\) with \(|J\cap\{1,\ldots,m_{k}\}|\geqslant d.m_{k}\) such that \[\widetilde{D}_{n}\cap\bigcap_{j\in I}\widetilde{T}^{-j}(\widetilde{U}^{(n)}_{\sigma(j)})\neq\varnothing,\] for every nonempty finite subset \(I\subset J\) and for every map \(\sigma:I\to\{0,1\}\). Fix any nonempty finite subset \(I\subset J\) and any map \(\sigma:I\to\{0,1\}\), and pick any \(\nu\in\widetilde{D}_{n}\cap\bigcap_{j\in I}\widetilde{T}^{-j}(\widetilde{U}^{(n)}_{\sigma(j)})\). So, there are \(x_{1},\ldots,x_{n}\in D\) such that \[\nu=\frac{1}{n}\sum_{l=1}^{n}\delta_{x_{l}}\] and \[\left(\frac{1}{n}\sum_{l=1}^{n}\delta_{T^{j}x_{l}}\right)(U_{\sigma(j)})>\frac{n-1}{n}\] for every \(j\in I\). Therefore, \(\{x_{1},\ldots,x_{n}\}\subset D\cap\bigcap_{j\in I}T^{-j}(U_{\sigma(j)})\). Since \(\mu(D)\geqslant 1-\alpha\) we obtain \(\overline{\mathsf{I}}_{\mu}(\boldsymbol{U})\geqslant d\) and so, by Theorem 2, we conclude that \((X,\mu,T)\) has \(\mu\)-UPE. Since (iii) trivially implies (ii), the proof is complete. In order to prove our next result we will need to recall some notation and also two important results. For each \(n\geqslant 1\), \(\delta>0\) and \(\mathcal{P}\) a finite partition, let us denote by \(N(T,\mathcal{P},n,\delta)\) the minimal cardinality of a Borel subcollection of \(\mathcal{P}\lor T^{-1}\mathcal{P}\lor\cdots\lor T^{-(n-1)}\mathcal{P}\) needed to cover a set \(D\subset X\) with \(\mu(D)\geqslant 1-\delta\). The following result will be essential to our argument (see [17, 23]): **Theorem 4**.: _If \((X,\mu,T)\) is ergodic, then \(h_{\mu}(T,\mathcal{P})=\lim_{n\to\infty}(1/n)\log(N(T,\mathcal{P},n,\delta))\), for each fixed \(0<\delta<1\)._ Now, let us denote by \(\ell_{1}^{k}\) the vector space \(\mathbb{R}^{k}\) endowed with the \(\ell_{1}\)-norm, that is, \(\|(r_{1},\ldots,r_{k})\|:=|r_{1}|+\cdots+|r_{k}|\), and \(\ell_{\infty}^{m}\) the vector space \(\mathbb{R}^{m}\) endowed with the \(\ell_{\infty}\)-norm, that is, \(\|(s_{1},\ldots,s_{m})\|:=\max\{|s_{1}|,\ldots,|s_{m}|\}\). Moreover, let us denote by \(B_{\ell_{1}^{k}}\) the closed unit ball of the Banach space \(\ell_{1}^{k}\). The following quantitative technique connecting combinatorics to linear maps on finite-dimensional spaces was developed by Glasner and Weiss (Proposition 2.1 from [13]): **Lemma 5**.: _Given constants \(\varepsilon>0\) and \(b>0\), there exist constants \(m_{0}\in\mathbb{N}\) and \(c>0\) such that the following property holds for every \(m\geq m_{0}\): if \(\varphi:\ell_{1}^{k}\to\ell_{\infty}^{m}\) is a linear map with \(\|\varphi\|\leq 1\), and if \(\varphi(B_{\ell_{1}^{k}})\) contains more than \(2^{bm}\) vectors that are \(\varepsilon\)-separated, then \(k\geq 2^{cm}\)._ Let us now establish our next result, which was inspired by its topological counterpart from [2]: **Theorem 6**.: _For every ergodic MDS \((X,\mu,T)\) and every \(\widetilde{\mu}\in Q(\mu)\), if \((\mathcal{M}(X),\widetilde{\mu},\widetilde{T})\) has \(\widetilde{\mu}\)-UPE, then \((X,\mu,T)\) has \(\mu\)-UPE._ Proof.: Suppose that \((X,\mu,T)\) is ergodic and that \(\widetilde{\mu}\in Q(\mu)\) is such that \((\mathcal{M}(X),\widetilde{\mu},\widetilde{T})\) has \(\widetilde{\mu}\)-UPE.
Let \(\mathcal{P}:=\{P_{0},P_{1}\}\) be a replete partition of \(X\). We have to prove that \(h_{\mu}(T,\mathcal{P})>0\). For this, take any pair \(A_{0}\), \(A_{1}\) of nonempty open sets in \(X\) with \[A_{0}\subset P_{0}\backslash\overline{P_{1}},\ \ A_{1}\subset P_{1}\backslash\overline{P_{0}}\ \ \ \text{and}\ \ \ \overline{A_{0}}\cap\overline{A_{1}}=\varnothing.\] Define \[\widetilde{A}_{0}:=\{\mu\in\mathcal{M}(X):\mu(A_{0})>0.9\}\ \ \text{and}\ \ \widetilde{A}_{1}:=\{\mu\in\mathcal{M}(X):\mu(A_{1})>0.9\},\] which are nonempty disjoint open sets in \(\mathcal{M}(X)\). By Theorem 2, there are \(d>0\), \(\delta>0\) and \(m_{l}\to\infty\) such that, for every \(l\geqslant 1\) and every \(\widetilde{D}\subset\mathcal{M}(X)\) with \(\widetilde{\mu}(\widetilde{D})\geqslant 1-\delta\), there exists \(J\subset\mathbb{N}\) with \(|J\cap\{1,\ldots,m_{l}\}|\geqslant d.m_{l}\) such that \[\widetilde{D}\cap\bigcap_{j\in I}\widetilde{T}^{-j}(\widetilde{A}_{\sigma(j)})\neq\varnothing \tag{4}\] for every nonempty finite subset \(I\subset J\) and for every map \(\sigma:I\to\{0,1\}\). Let \(n_{0}\in\mathbb{N}\) and \(c>0\) be the constants associated to \(\varepsilon:=0.5\) and \(b:=d>0\) according to Lemma 5. Fix \(l\geqslant 1\) large enough so that \(m:=m_{l}\geq n_{0}\). Let \(D\subset X\) be any Borel set with \(\mu(D)\geqslant 1-\delta\) and take \(\{C_{1},\ldots,C_{k_{m}}\}\) a subcollection of \(\mathcal{P}^{m-1}\) with minimal cardinality that covers \(D\). Let us consider \[B_{1}:=C_{1}\ \ \text{and}\ \ B_{i}:=C_{i}\backslash(C_{1}\cup\ldots\cup C_{i-1})\ \ \text{for}\ 2\leq i\leq k_{m}.\] As \(\{C_{1},C_{2},\ldots,C_{k_{m}}\}\) is minimal, we have \(B_{i}\cap D\neq\varnothing\) for every \(i\). Let \[M:=\big{[}t_{i,j}\big{]}_{1\leq i\leq k_{m},0\leq j\leq m-1}\] be a \(k_{m}\times m\) matrix of \(0\)'s and \(1\)'s such that \[B_{i}\subset P_{t_{i,0}}\cap T^{-1}(P_{t_{i,1}})\cap T^{-2}(P_{t_{i,2}})\cap\ldots\cap T^{-(m-1)}(P_{t_{i,m-1}}),\] for all \(1\leq i\leq k_{m}\). Consider the linear map \(\varphi:\ell_{1}^{k_{m}}\to\ell_{\infty}^{m}\) given by \[\varphi(r_{1},\ldots,r_{k_{m}}):=[r_{1}\ \cdots\ r_{k_{m}}]\,M.\] Clearly, \(\|\varphi\|\leq 1\). Since \(\widetilde{\mu}(\mathcal{M}(X))=1\geqslant 1-\delta\), by (4) there exists some \(J\subset\mathbb{N}\) with \(|J\cap\{1,\ldots,m\}|\geqslant d.m\) such that \[\bigcap_{j\in I}\widetilde{T}^{-j}(\widetilde{A}_{\sigma(j)})\neq\varnothing\] for every nonempty finite subset \(I\subset J\) and for every map \(\sigma:I\to\{0,1\}\). Take \(I:=J\cap\{1,\ldots,m\}\). For any \(\sigma:I\to\{0,1\}\), there exists \(\nu_{\sigma}\in\mathcal{M}(X)\) such that \(\widetilde{T}^{j}\nu_{\sigma}\in\widetilde{A}_{\sigma(j)}\) for each \(j\in I\). Let \(\sigma,\sigma^{\prime}:I\to\{0,1\}\) be distinct functions and let \(s\in I\) be such that \(\sigma(s)=1\) and \(\sigma^{\prime}(s)=0\) (say).
Then, \[\nu_{\sigma}(T^{-s}(A_{1}))>0.9\ \ \ \text{and}\ \ \ \nu_{\sigma^{\prime}}(T^{-s}(A_{0}))>0.9.\] Since \(T^{-s}(A_{1})\cap T^{-s}(P_{0})=\varnothing\), we have \[T^{-s}(A_{1})\subset\bigcup\{B_{i}:t_{i,s}=1\}\subset T^{-s}(P_{1}).\] Thus, we obtain \[\nu\big{(}T^{-s}(A_{1})\big{)}\leq\sum_{i=1}^{k_{m}}t_{i,s}\,\nu(B_{i})\leq\nu\big{(}T^{-s}(P_{1})\big{)}\ \ \ \ (\nu\in\mathcal{M}(X)).\] Hence, the \(s^{\text{th}}\) coordinates of the vectors \[\varphi\big{(}\nu_{\sigma}(B_{1}),\ldots,\nu_{\sigma}(B_{k_{m}})\big{)}\ \ \ \text{and}\ \ \ \varphi\big{(}\nu_{\sigma^{\prime}}(B_{1}),\ldots,\nu_{\sigma^{\prime}}(B_{k_{m}})\big{)}\] are greater than \(0.9\) and smaller than \(0.1\), respectively, showing that these vectors are \(\varepsilon\)-separated. Since \(|I|\geqslant d.m\), there are at least \(2^{dm}\) functions \(\sigma\). This shows that \(\varphi(B_{\ell_{1}^{k_{m}}})\) contains at least \(2^{dm}\) vectors that are \(\varepsilon\)-separated. Thus, by Lemma 5, \(k_{m}\geq 2^{cm}\). By Theorem 4, this implies that \(h_{\mu}(T,\mathcal{P})\geq c>0\), as desired. **Remark 7**.: It was proved in [14] that any ergodic system of positive entropy admits _every_ ergodic system of positive entropy as a measure-theoretic quasifactor. As a consequence, for each ergodic \(\mu\)-UPE system \((X,\mu,T)\) there exists some ergodic quasifactor of it _without_ the measure-theoretic UPE property. This shows that, unlike its topological counterpart from [2], the converse of Theorem 6 does _not_ hold. ## Acknowledgement The author would like to thank Nilson Bernardes Jr. for helpful comments that improved the text.
2307.05239
The regularity of difference divisors
For a prime number $p>2$, we explain the construction of the difference divisors on the unitary Rapoport-Zink spaces of hyperspecial level and the GSpin Rapoport-Zink spaces of hyperspecial level associated to a minuscule cocharacter $\mu$ and a basic element $b$. We prove the regularity of the difference divisors and find the formally smooth locus of both the special cycles and the difference divisors by a purely deformation-theoretic approach.
Baiqing Zhu
2023-07-11T13:12:46Z
http://arxiv.org/abs/2307.05239v3
# The regularity of difference divisors ###### Abstract. We prove the regularity of difference divisors on unitary and GSpin Rapoport-Zink spaces. ###### Contents * 1 Introduction * 2 Special cycles on Rapoport-Zink spaces * 3 Deformation theory * 4 Regularity of the difference divisor ## 1. Introduction ### Background For every positive integer \(n\geq 1\), let \(\mathcal{N}_{1,n-1}\) be the unitary Rapoport-Zink space of signature \((1,n-1)\), which has been defined and extensively studied in the works of Kudla and Rapoport [11], [12]. Terstiege introduced the difference divisors on the formal scheme \(\mathcal{N}_{1,n-1}\) and proved their regularity in [13]. A key step in the proof is a previous result in his joint work with Rapoport and Zhang [11, Theorem 10.7], whose proof is based on the windows theory developed by Zink in [14]. For a self-dual quadratic lattice \(V\) of rank \(m=n+1\), a Hodge type Rapoport-Zink space \(\mathcal{N}(V)\) has been constructed and studied in the works of Kim [15] and Howard-Pappas [16]. When \(V\) is one of the self-dual lattices of rank \(4\), Terstiege defined difference divisors on the formal scheme \(\mathcal{N}(V)\simeq\mathcal{M}^{HB}\) and studied their intersection numbers in [13], where he also proved the regularity of these difference divisors. The notion of difference divisors can be defined similarly on \(\mathcal{N}(V)\) for general \(V\), but the regularity of these difference divisors was previously not known to us. It is difficult to prove an analogue of [11, Theorem 10.7] since the \(p\)-divisible groups parameterized by the formal scheme \(\mathcal{N}(V)\) have dimension \(2^{n}\) and height \(2^{n+1}\), so the windows theory becomes much more complicated in this case. ### Novelty of the method In this article, we give a unified proof of the regularity of difference divisors on \(\mathcal{N}_{1,n-1}\) and \(\mathcal{N}(V)\); see Theorem 4.2.4 and its Corollary 4.2.5. In the unitary case, the proof is based on the Grothendieck-Messing deformation theory of \(p\)-divisible groups of signature \((1,n-1)\), see Lemma 3.4.2, and is simpler than Terstiege's proof. In the GSpin case, similar deformation results are available in Madapusi's work [16, Proposition 5.16]; see Lemma 3.4.3. Both Lemma 3.4.2 and Lemma 3.4.3 are of Grothendieck-Messing type, which makes computations easier. Our main result is obtained by combining these two lemmas with a commutative algebra result, Lemma 4.2.2. We also identify the formally smooth locus of difference divisors. ### Applications In [15, Lemma 2.8.1], Li and Zhang proved the linear invariance of the derived intersections on the Rapoport-Zink space \(\mathcal{N}_{1,n-1}\) based on the regularity of difference divisors, which guarantees that the local arithmetic intersection numbers on \(\mathcal{N}_{1,n-1}\) are well-defined. The parallel result in the GSpin case is proved by Terstiege [13, Lemma 4.1, Proposition 4.2] (see also [15, Lemma 4.11.1]) using globalization. Our result can be applied to give an alternative proof of the linear invariance of the derived intersections on the Rapoport-Zink space \(\mathcal{N}(V)\) along the lines of [15, Lemma 2.8.1] (see [15, footnote 3]), which guarantees that the local arithmetic intersection numbers on \(\mathcal{N}(V)\) are well-defined. 
### Acknowledgements The author is grateful to Professor Michael Rapoport and Professor Chao Li for their careful reading of the original manuscript and many helpful comments. The author is supported by the Department of Mathematics at Columbia University in the city of New York. ## 2. Special cycles on Rapoport-Zink spaces ### Rapoport-Zink spaces Fix an odd prime \(p\). Let \(F=\mathbb{Q}_{p}\), and \(\mathbb{F}=\overline{\mathbb{F}}_{p}\). Let \(K\) be the completion of the maximal unramified extension of \(F\) with the Frobenius automorphism \(\sigma\), and let \(W\) be the integer ring of \(K\). Let \(\operatorname{Nilp}_{W}\) be the category of \(W\)-schemes on which \(p\) is locally nilpotent; for an object \(S\) in \(\operatorname{Nilp}_{W}\), we use \(\overline{S}\) to denote the scheme \(S\times_{W}\mathbb{F}\). #### 2.1.1. Unitary Rapoport-Zink space \(\mathcal{N}^{u}=\mathcal{N}_{1,n-1}\) Let \(E=\mathbb{Q}_{p^{2}}\) be the unramified quadratic extension of \(F\), and \(\mathcal{O}_{E}\) be the integer ring of \(E\). Let \(\varpi=p\) be the uniformizer of \(\mathcal{O}_{F}\) and \(\mathcal{O}_{E}\). Fix an \(\mathcal{O}_{F}\)-algebra embedding \(\phi_{0}:\mathcal{O}_{E}\to W\) and denote by \(\phi_{1}\) the embedding \(\sigma\circ\phi_{0}:\mathcal{O}_{E}\to W\). The embedding \(\phi_{0}\) induces an embedding between the residue fields \(\mathbb{F}_{p^{2}}\to\mathbb{F}\), which we shall think of as the natural embedding. For any \(\mathcal{O}_{E}\)-module \(\Lambda\), we shall write \(\Lambda_{W}\) for \(\Lambda\otimes_{\mathcal{O}_{E},\phi_{0}}W\). Let \(n\) be a positive integer. For a \(W\)-scheme \(S\), a unitary \(p\)-divisible group of signature \((1,n-1)\) over \(S\) is a triple \((X,\iota,\lambda)\), where (1) \(X\) is a \(p\)-divisible group of dimension \(n\) and height \(2n\) over \(S\); (2) \(\iota:\mathcal{O}_{E}\to\operatorname{End}(X)\) is an action satisfying the signature \((1,n-1)\) condition, i.e., for \(\alpha\in\mathcal{O}_{E}\), \[\operatorname{char}(\iota(\alpha):\operatorname{Lie}X)(T)=(T-\phi_{0}(\alpha) )(T-\phi_{1}(\alpha))^{n-1}\in\mathcal{O}_{S}[T];\] (3) \(\lambda:X\to X^{\vee}\) is a principal polarization such that the associated Rosati involution induces \(\alpha\mapsto\sigma(\alpha)\) on \(\mathcal{O}_{E}\) via \(\iota\). Over \(\mathbb{F}\), there is a unique such triple \(\mathbb{X}=(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})\) with \(\mathbb{X}\) supersingular, up to \(\mathcal{O}_{E}\)-linear isogeny preserving the polarization up to scalar. The unitary Rapoport-Zink space of signature \((1,n-1)\) is the following functor \(\mathcal{N}_{1,n-1}:\operatorname{Nilp}_{W}\to\mathbf{Sets}\): for a scheme \(S\) in the category \(\operatorname{Nilp}_{W}\), the set \(\mathcal{N}_{1,n-1}(S)\) consists of the isomorphism classes of quadruples \((X,\iota,\lambda,\rho)\), where \((X,\iota,\lambda)\) is a unitary \(p\)-divisible group over \(S\) of signature \((1,n-1)\) and \(\rho:X\times_{S}\overline{S}\to\mathbb{X}\times_{\mathbb{F}}\overline{S}\) is an \(\mathcal{O}_{E}\)-linear quasi-isogeny of height zero which respects \(\lambda\) and \(\lambda_{\mathbb{X}}\) up to a scalar \(c(\rho)\in\mathcal{O}_{F}^{\times}\) (i.e., \(\rho^{\vee}\circ\lambda_{\mathbb{X}}\circ\rho=c(\rho)\cdot\lambda\)). 
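To unpack the signature condition, here is a small worked instance (an illustration of ours, not taken from the source): for \(n=2\), condition (2) pins down the characteristic polynomial of \(\iota(\alpha)\) on the rank-\(2\) module \(\operatorname{Lie}X\).

```latex
% Illustration (ours): the signature (1,1) case, i.e. n = 2.
% For a unitary p-divisible group (X, \iota, \lambda) over S and \alpha \in \mathcal{O}_E,
% condition (2) specializes to
\[
  \operatorname{char}\bigl(\iota(\alpha)\mid\operatorname{Lie}X\bigr)(T)
  \;=\;\bigl(T-\phi_{0}(\alpha)\bigr)\bigl(T-\phi_{1}(\alpha)\bigr)\in\mathcal{O}_{S}[T],
\]
% so the value \phi_0(\alpha) occurs with multiplicity 1 and \phi_1(\alpha) with
% multiplicity 1; for general n the multiplicities are 1 and n-1, which is exactly
% what the label "signature (1, n-1)" records.
```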
**Theorem 2.1.1**.: _The functor \(\mathcal{N}_{1,n-1}\) is represented by a formal scheme which is formally locally of finite type and formally smooth of relative dimension \(n-1\) over \(\operatorname{Spf}W\)._ Proof.: These facts about \(\mathcal{N}_{1,n-1}\) are proved in [1, Theorem 2.16], [1, §2.1] and [20]. **Remark 2.1.2**.: For a general \(p\)-adic local field \(F\), let \(\mathcal{O}_{\tilde{F}}\) be the integer ring of the maximal unramified extension of \(F\); the relative unitary Rapoport-Zink space is constructed in Mihatsch's work [16], where analogues of Theorem 2.1.1 are also proved (replace \(W\) by \(\mathcal{O}_{\tilde{F}}\); see Proposition 2.17 of _loc. cit._). Our method of proving the regularity of difference divisors can be extended to the relative setting by [16, Lemma 2.10] if we work with the divided power thickenings \((\mathcal{O}_{\tilde{F}}/\varpi^{n+1}\to\mathcal{O}_{\tilde{F}}/\varpi^{n})\) successively in proving Lemma 4.2.3 below. In the following we denote \(\mathcal{N}^{u}\coloneqq\mathcal{N}_{1,n-1}\); it has relative dimension \(n-1\) over \(\operatorname{Spf}W\). Let \(\mathbb{Y}=(\mathbb{Y},\iota_{\mathbb{Y}},\lambda_{\mathbb{Y}})\) be the framing object for \(n=1\), and let \((\overline{\mathbb{Y}},\iota_{\overline{\mathbb{Y}}},\lambda_{\overline{ \mathbb{Y}}})=(\mathbb{Y},\iota_{\mathbb{Y}}\circ\sigma,\lambda_{\mathbb{Y}})\) be its conjugate. #### 2.1.2. GSpin Rapoport-Zink space \(\mathcal{N}^{o}=\mathcal{N}(V)\) Let \(k\geq 1\) be an integer. A quadratic lattice of rank \(k\) is a pair \((L,q_{L})\) such that \(L\) is a free \(\mathcal{O}_{F}\)-module of rank \(k\) and \(q_{L}:L\to F\) is a quadratic form on \(L\). The quadratic form \(q_{L}\) induces a symmetric bilinear form \((\cdot,\cdot):L\times L\to F\) by \((x,y)=\frac{1}{2}(q_{L}(x+y)-q_{L}(x)-q_{L}(y))\). Let \(L^{\sharp}=\{x\in L\otimes_{\mathcal{O}_{F}}F:(x,L)\subset\mathcal{O}_{F}\}\). We say a quadratic lattice is integral if \(q_{L}(x)\in\mathcal{O}_{F}\) for all \(x\in L\), and self-dual if it is integral and \(L=L^{\sharp}\). Let \(n\geq 2\) be an integer, and \(V\) be a self-dual quadratic lattice of rank \(m=n+1\); associated to \(V\) we have a local unramified Shimura-Hodge data \((G,b,\mu,C)\) (in the sense of [10, Definition 2.2.4]) constructed in [10, Proposition 4.2.6], where \(G=\operatorname{GSpin}(V)\), \(b\in G(K)\) is a basic element, \(\mu:\mathbb{G}_{m}\to G\) is a certain cocharacter, and \(C=C(V)\) is the Clifford algebra of \(V\). Let \(C^{\vee}=\operatorname{Hom}_{\mathcal{O}_{F}}(C,\mathcal{O}_{F})\) be the linear dual of \(C\). By [10, Lemma 2.2.5], this local unramified Shimura-Hodge data gives rise to a (unique up to isomorphism) supersingular \(p\)-divisible group \(\mathbb{X}_{V}\) over \(\mathbb{F}\) whose contravariant Dieudonné module \(\mathbb{D}(\mathbb{X}_{V})(W)\) is given by \(C^{\vee}_{W}:=C^{\vee}\otimes_{\mathcal{O}_{F}}W\) with Frobenius \(\mathbf{F}=b\circ\sigma\); it is also equipped with a \(p\)-principal polarization \(\lambda_{0}:\mathbb{X}_{V}\to\mathbb{X}_{V}^{\vee}\). Associated to the local unramified Shimura-Hodge data \((G,b,\mu,C)\), we have a \(\operatorname{GSpin}\) Rapoport-Zink space \(\operatorname{RZ}(V)=\operatorname{RZ}(G,b,\mu,C)\) of Hodge type ([10, §4.3], see also [11]) parametrizing \(p\)-divisible groups with crystalline Tate tensors. 
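As a sanity check on the input datum (our example, not from the source), the simplest self-dual quadratic lattice of rank \(m\) over \(\mathcal{O}_{F}=\mathbb{Z}_{p}\) with \(p\) odd is the unit diagonal form.

```latex
% Example (ours): a self-dual quadratic lattice of rank m over O_F = Z_p, p odd.
\[
  V=\mathcal{O}_{F}^{\,m},\qquad
  q_{V}(x_{1},\dots,x_{m})=x_{1}^{2}+\dots+x_{m}^{2},\qquad
  (x,y)=\tfrac{1}{2}\bigl(q_{V}(x+y)-q_{V}(x)-q_{V}(y)\bigr)=\sum_{i=1}^{m}x_{i}y_{i}.
\]
% The Gram matrix of (.,.) is the identity, hence invertible over O_F, so V = V^\sharp:
% V is integral and self-dual, and therefore an admissible input for the space N(V).
```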
**Theorem 2.1.3**.: _The \(\operatorname{GSpin}\) Rapoport-Zink space \(\operatorname{RZ}(V)\) is formally locally of finite type and formally smooth of relative dimension \(n-1\) over \(\operatorname{Spf}W\)._ Proof.: This is Theorem B in [10]. Let \(X^{\operatorname{univ}}\) be the universal \(p\)-divisible group over \(\operatorname{RZ}(V)\) with the universal quasi-isogeny \(\rho^{\operatorname{univ}}:\mathbb{X}_{V}\times_{\mathbb{F}}\overline{ \operatorname{RZ}(V)}\to X^{\operatorname{univ}}\times_{\operatorname{RZ}(V)} \overline{\operatorname{RZ}(V)}\). Let \(\mathcal{N}^{o}=\mathcal{N}(V)\) be the connected component of \(\operatorname{RZ}(V)\) such that the \(p\)-principal polarization \(\lambda_{0}\) of \(\mathbb{X}_{V}\) lifts to a \(p\)-principal polarization \(\lambda^{\operatorname{univ}}\) on \(X^{\operatorname{univ}}\) via the universal quasi-isogeny \(\rho^{\operatorname{univ}}\) (cf. [1, §4.5]). ### The space of special quasi-homomorphisms #### 2.2.1. The unitary case The space of special homomorphisms is the \(\mathcal{O}_{E}\)-module \(\operatorname{Hom}_{\mathcal{O}_{E}}(\overline{\mathbb{Y}},\mathbb{X})\). There is a natural \(\mathcal{O}_{E}\)-valued \(\sigma\)-Hermitian form on \(\operatorname{Hom}_{\mathcal{O}_{E}}(\overline{\mathbb{Y}},\mathbb{X})\) given by \[(x,y)\mapsto\lambda_{\overline{\mathbb{Y}}}^{-1}\circ y^{\vee}\circ\lambda_{ \mathbb{X}}\circ x\in\operatorname{End}_{\mathcal{O}_{E}}(\overline{\mathbb{Y }})\stackrel{{\sim}}{{\to}}\mathcal{O}_{E}.\] In the following we denote \(\mathbb{V}^{u}=\operatorname{Hom}_{\mathcal{O}_{E}}^{\circ}(\overline{\mathbb{ Y}},\mathbb{X})\coloneqq\operatorname{Hom}_{\mathcal{O}_{E}}(\overline{ \mathbb{Y}},\mathbb{X})[\frac{1}{\varpi}]\). #### 2.2.2. The \(\operatorname{GSpin}\) case The inclusion \(V\subset C^{\operatorname{op}}\) realizes \(V\subset\operatorname{End}_{W}(C^{\vee}_{W})\) as special homomorphisms of \(C^{\vee}_{W}\). Tensoring with \(K\) gives a subspace \(V_{K}\subset\operatorname{End}_{K}(C^{\vee}_{K})\). Define the \(\sigma\)-linear operator \(\Phi=\overline{b}\circ\sigma\) on \(V_{K}\), where \(\overline{b}\in\operatorname{SO}(V)(K)\) is the image of \(b\in G(K)\) under the natural quotient map \(G=\operatorname{GSpin}(V)\to\operatorname{SO}(V)\). Then \((V_{K},\Phi)\) is an isocrystal. The \(\Phi\)-fixed vectors form an \(F\)-vector subspace of dimension \(m=n+1\), \[\mathbb{V}^{o}\coloneqq V_{K}^{\Phi}\subset\operatorname{End}^{\circ}(\mathbb{ X})\coloneqq\operatorname{End}(\mathbb{X})[1/\varpi];\] this space is called the space of special quasi-homomorphisms of \(\mathbb{X}_{V}\). The restriction of the quadratic form to \(\mathbb{V}^{o}\) satisfies \(x\circ x=q_{\mathbb{V}^{o}}(x)\cdot\operatorname{id}_{\mathbb{X}}\) for \(x\in\mathbb{V}^{o}\), where \(q_{\mathbb{V}^{o}}\) is the quadratic form on \(\mathbb{V}^{o}\) defined by the base change of the quadratic form \(q_{V}\) on \(V\). ### Special cycles and difference divisors 
**Definition 2.3.1**.: For any subset \(L\subset\mathbb{V}^{u}\), define the special cycle \(\mathcal{Z}(L)\) in \(\mathcal{N}^{u}\) as the following subfunctor: for any object \(S\) in \(\operatorname{Nilp}_{W}\), the set \(\mathcal{Z}(L)(S)\) consists of elements \((X,\iota_{X},\lambda_{X},\rho)\in\mathcal{N}^{u}(S)\) such that the quasi-homomorphism \(\rho^{-1}\circ x\circ\rho_{\overline{\mathbb{Y}}}:\overline{\mathbb{Y}}\times_{\mathbb{F}}\overline{S} \to X\times_{S}\overline{S}\) extends to a homomorphism from \(\overline{\mathbb{Y}}\) to \(X\) for all \(x\in L\). For any subset \(L\subset\mathbb{V}^{o}\), define the special cycle \(\mathcal{Z}(L)\subset\mathcal{N}^{o}\) to be the closed formal subscheme cut out by the condition \[\rho^{\operatorname{univ}}\circ x\circ(\rho^{\operatorname{univ}})^{-1} \subset\operatorname{End}(X^{\operatorname{univ}}),\] for all \(x\in L\). **Remark 2.3.2**.: We use the same symbol \(\mathcal{Z}(L)\) for special cycles, although they may lie in different formal schemes. In the following discussion, we will always make clear in which space the special cycle lies. **Proposition 2.3.3**.: _The special cycle functor \(\mathcal{Z}(L)\) is represented by a closed formal subscheme of \(\mathcal{N}^{u}\) (resp. \(\mathcal{N}^{o}\)). In fact, for any \(x\in\mathbb{V}^{u}\) (resp. \(x\in\mathbb{V}^{o}\)) such that \((x,x)\in\mathcal{O}_{E}\backslash\{0\}\) (resp. \(q_{\mathbb{V}^{o}}(x)\in\mathcal{O}_{F}\backslash\{0\}\)), the special cycle \(\mathcal{Z}(x)\) is an effective Cartier divisor in \(\mathcal{N}^{u}\) (resp. \(\mathcal{N}^{o}\)) and flat over \(W\)._ Proof.: For the unitary case, this is proved in [11, Proposition 3.5, Lemma 3.7]. For the GSpin case, this is proved in [15, Proposition 4.10.1]. For a point \(z\in\mathcal{N}^{u}(\mathbb{F})\) (resp. \(\mathcal{N}^{o}(\mathbb{F})\)), let \(\mathcal{O}_{\mathcal{N}^{u},z}\) (resp. \(\mathcal{O}_{\mathcal{N}^{o},z}\)) be the completed local ring of \(\mathcal{N}^{u}\) (resp. \(\mathcal{N}^{o}\)) at \(z\); then Theorem 2.1.1 (resp. Theorem 2.1.3) implies that \(\mathcal{O}_{\mathcal{N}^{u},z}\) (resp. \(\mathcal{O}_{\mathcal{N}^{o},z}\)) \(\simeq W[[t_{1},t_{2},\cdots,t_{n-1}]]\). For a special quasi-homomorphism \(x\), the completed local ring \(\mathcal{O}_{\mathcal{Z}(x),z}\) of \(\mathcal{Z}(x)\) at \(z\) is cut out by a single equation \(f_{x,z}\in W[[t_{1},t_{2},\cdots,t_{n-1}]]\) such that \(\varpi\nmid f_{x,z}\), i.e., \(\mathcal{O}_{\mathcal{Z}(x),z}\simeq W[[t_{1},t_{2},\cdots,t_{n-1}]]/(f_{x,z})\). If \(z\notin\mathcal{Z}(x)(\mathbb{F})\), we can take \(f_{x,z}=1\); otherwise \(f_{x,z}\) belongs to the maximal ideal \(\mathfrak{m}_{z}=(\varpi,t_{1},\cdots,t_{n-1})\) and is determined up to a unit in the ring \(W[[t_{1},t_{2},\cdots,t_{n-1}]]\). For an element \(x\in\mathbb{V}^{u}\) (resp. \(x\in\mathbb{V}^{o}\)) such that \((x,x)\in\mathcal{O}_{E}\backslash\{0\}\) (resp. \(q_{\mathbb{V}^{o}}(x)\in\mathcal{O}_{F}\backslash\{0\}\)), we have a natural closed immersion of closed formal schemes \(\mathcal{Z}(\varpi^{-1}x)\hookrightarrow\mathcal{Z}(x)\) by Definition 2.3.1. The closed immersion \(\mathcal{Z}(\varpi^{-1}x)\hookrightarrow\mathcal{Z}(x)\) implies the divisibility \(f_{\varpi^{-1}x,z}\mid f_{x,z}\) at every point \(z\in\mathcal{N}^{u}(\mathbb{F})\) (resp. \(\mathcal{N}^{o}(\mathbb{F})\)). **Definition 2.3.4**.: Let \(x\in\mathbb{V}^{u}\) (resp. \(x\in\mathbb{V}^{o}\)) be an element such that \((x,x)\in\mathcal{O}_{E}\backslash\{0\}\) (resp. 
\(q_{\mathbb{V}^{o}}(x)\in\mathcal{O}_{F}\backslash\{0\}\)). Define the difference divisor associated to \(x\) to be the following Cartier divisor on \(\mathcal{N}^{u}\) (resp. \(\mathcal{N}^{o}\)), \[\mathcal{D}(x)\coloneqq\mathcal{Z}(x)-\mathcal{Z}(\varpi^{-1}x),\] i.e., at a point \(z\in\mathcal{Z}(x)(\mathbb{F})\), suppose \(\mathcal{Z}(x)\) is cut out by \(f_{x,z}\in\mathcal{O}_{\mathcal{N}^{u},z}\) (resp. \(\mathcal{O}_{\mathcal{N}^{o},z}\)) and \(\mathcal{Z}(\varpi^{-1}x)\) is cut out by \(f_{\varpi^{-1}x,z}\in\mathcal{O}_{\mathcal{N}^{u},z}\) (resp. \(\mathcal{O}_{\mathcal{N}^{o},z}\)); then \(\mathcal{D}(x)\) is cut out by \(d_{x,z}\coloneqq f_{x,z}/f_{\varpi^{-1}x,z}\in\mathcal{O}_{\mathcal{N}^{u},z}\) (resp. \(\mathcal{O}_{\mathcal{N}^{o},z}\)). ## 3. Deformation theory ### Preliminaries on linear algebra **Definition 3.1.1**.: Let \(R\) be a commutative ring and let \(D\) be a free \(R\)-module of finite rank \(n\). A hyperplane \(P\) in \(D\) is a direct summand of \(D\) which is free of rank \(n-1\) over \(R\); a line \(L\) in \(D\) is a direct summand of \(D\) which is free of rank \(1\) over \(R\). **Lemma 3.1.2**.: _Let \(R\) be a commutative ring and let \(D\) be a free \(R\)-module of finite rank \(n\). Then we have a bijection_ \[K:\{\text{Lines in }D^{\vee}\coloneqq\operatorname{Hom}_{R}(D,R)\} \longrightarrow\{\text{Hyperplanes in }D\},\] \[L=R\cdot l \longmapsto P=\ker(l).\] Proof.: This follows from elementary linear algebra. ### The Dieudonné module of an \(\mathbb{F}\)-point of \(\mathcal{N}^{u}\) A point \(z\in\mathcal{N}^{u}(\mathbb{F})\) corresponds to a \(p\)-divisible group \(X\) of signature \((1,n-1)\) over \(\mathbb{F}\) which is isogenous to the basic framing object \(\mathbb{X}\); the dimension of \(X\) is \(n\) and the height of \(X\) is \(2n\), hence the Dieudonné module \(\mathbb{D}(X)\) of \(X\) is a free \(W\)-module of rank \(2n\). Let \(D\coloneqq\mathbb{D}(X)\); it is equipped with the following exact sequence: \[0\longrightarrow\mathbf{F}^{1}D_{\mathbb{F}}\longrightarrow D_{\mathbb{F}} \longrightarrow\operatorname{Lie}(X)\longrightarrow 0,\] where both \(\mathbf{F}^{1}D_{\mathbb{F}}\) and \(\operatorname{Lie}(X)\) are \(\mathbb{F}\)-vector spaces of dimension \(n\). The \(W\)-module \(D\) admits an action of \(\mathcal{O}_{E}\otimes W\simeq W\times W\) through \(\iota:\mathcal{O}_{E}\to\operatorname{End}(X)\) and decomposes into \[D=\mathbb{D}_{0}(X)\oplus\mathbb{D}_{1}(X),\] where both \(\mathbb{D}_{0}(X)\) and \(\mathbb{D}_{1}(X)\) are free \(W\)-modules of rank \(n\), and \(\mathcal{O}_{E}\) acts on \(\mathbb{D}_{0}(X)\) (resp. \(\mathbb{D}_{1}(X)\)) by multiplication through \(\phi_{0}:\mathcal{O}_{E}\to W\) (resp. \(\phi_{1}:\mathcal{O}_{E}\to W\)). For simplicity, let \(\mathbf{D}\coloneqq\mathbb{D}_{0}(X)\) and \(\overline{\mathbf{D}}\coloneqq\mathbb{D}_{1}(X)\). The signature \((1,n-1)\) condition implies that the \(\mathbb{F}\)-vector space \(\mathbf{F}^{1}D_{\mathbb{F}}\) decomposes as \[\mathbf{F}^{1}D_{\mathbb{F}}=\mathbf{F}^{1}D_{\mathbb{F}}\cap\mathbf{D}_{ \mathbb{F}}\oplus\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{ \mathbb{F}},\] where \(\dim_{\mathbb{F}}\mathbf{F}^{1}D_{\mathbb{F}}\cap\mathbf{D}_{\mathbb{F}}=n-1\) and \(\dim_{\mathbb{F}}\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{ \mathbb{F}}=1\). 
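To keep track of the numerology (our bookkeeping, not part of the original text), consider the smallest interesting case \(n=2\).

```latex
% Dimension bookkeeping (ours) for n = 2:
% D_F has dimension 2n = 4 and splits as D_F \oplus \bar{D}_F, each summand of dimension 2;
% the Hodge filtration F^1 D_F has dimension n = 2 and meets the two summands in dimensions
\[
  \dim_{\mathbb{F}}\bigl(\mathbf{F}^{1}D_{\mathbb{F}}\cap\mathbf{D}_{\mathbb{F}}\bigr)=n-1=1,
  \qquad
  \dim_{\mathbb{F}}\bigl(\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{\mathbb{F}}\bigr)=1.
\]
% So the deformation problem amounts to moving a single line inside \bar{D}, matching the
% relative dimension n - 1 of N^u from Theorem 2.1.1 (and Lemma 3.4.2 below).
```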
The \(p\)-principal polarization \(\lambda:X\to X^{\vee}\) induces a non-degenerate symplectic form \(\langle\cdot,\cdot\rangle\) on the rank \(2n\) free \(W\)-module \(\mathbb{D}(X)\); the compatibility with the \(\mathcal{O}_{E}\)-action implies that both \(\mathbf{D}\) and \(\overline{\mathbf{D}}\) are totally isotropic subspaces of \(\mathbb{D}(X)\) under this symplectic form, hence there is a perfect pairing \[\mathbf{D}\times\overline{\mathbf{D}}\stackrel{{\langle\cdot, \cdot\rangle}}{{\longrightarrow}}W,\] i.e., we have isomorphisms \(\mathbf{D}\simeq\overline{\mathbf{D}}^{\vee}\coloneqq\operatorname{Hom}_{W}( \overline{\mathbf{D}},W)\) and \(\overline{\mathbf{D}}\simeq\mathbf{D}^{\vee}\coloneqq\operatorname{Hom}_{W}( \mathbf{D},W)\), both induced by the symplectic form \(\langle\cdot,\cdot\rangle\). The pairing \(\langle\cdot,\cdot\rangle_{\mathbb{F}}:\mathbf{D}_{\mathbb{F}}\times \overline{\mathbf{D}}_{\mathbb{F}}\to\mathbb{F}\) is also perfect. Moreover, the hyperplane \(\mathbf{F}^{1}D_{\mathbb{F}}\cap\mathbf{D}_{\mathbb{F}}\subset\mathbf{D}_{ \mathbb{F}}\) and the line \(\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{\mathbb{F}}\subset \overline{\mathbf{D}}_{\mathbb{F}}\) are annihilators of each other under this pairing. The Frobenius action \(\Phi\) on \(\mathbb{D}(X)\) has degree \(1\), i.e., \(\Phi(\mathbf{D})\subset\overline{\mathbf{D}}\) and \(\Phi(\overline{\mathbf{D}})\subset\mathbf{D}\), hence \(\Phi^{2}(\mathbf{D})\subset\mathbf{D}\). The isocrystal \((\mathbf{D}[\frac{1}{\varpi}],\Phi)\) is basic since the framing object \(\mathbb{X}\) is supersingular. Let \(C\) be the set of \(\varpi^{-1}\Phi^{2}\)-invariant elements in the \(K\)-space \(\mathbf{D}[\frac{1}{\varpi}]\), i.e., \(C=(\mathbf{D}[\frac{1}{\varpi}])^{\varpi^{-1}\Phi^{2}}\); it is an \(n\)-dimensional \(E\)-vector space. Fix \(\delta\in\mathcal{O}_{E}^{\times}\) such that \(\sigma(\delta)=-\delta\), and define a non-degenerate \(\sigma\)-Hermitian form on \(C\) by \[\{x,y\}\coloneqq(\varpi\delta)^{-1}\langle x,\Phi y\rangle.\] Then we have an isomorphism of \(\sigma\)-Hermitian spaces by [11, Lemma 3.9] (see also [17, §2.3]), \[i_{\operatorname{crys},z}:\mathbb{V}^{u}=\operatorname{Hom}_{\mathcal{O}_{E}}^{\circ}( \overline{\mathbb{Y}},\mathbb{X})\stackrel{{\sim}}{{\to}}C.\] Therefore we may view elements of \(C\) as special quasi-homomorphisms. If \(z\in\mathcal{Z}(x)(\mathbb{F})\) for some \(x\in\mathbb{V}^{u}\), then \(x_{\operatorname{crys},z}:=i_{\operatorname{crys},z}(x)\in C\). In fact, we have \(x_{\operatorname{crys},z}\in C\cap\mathbf{D}\), and its image \(\overline{x_{\operatorname{crys},z}}\) in \(\mathbf{D}_{\mathbb{F}}\) is contained in the hyperplane \(\mathbf{F}^{1}D_{\mathbb{F}}\cap\mathbf{D}_{\mathbb{F}}\), i.e., orthogonal to the line \(\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{\mathbb{F}}\) under the pairing \(\langle\cdot,\cdot\rangle_{\mathbb{F}}\). ### The Dieudonné module of an \(\mathbb{F}\)-point of \(\mathcal{N}^{o}\) Let \(S\) be an affine \(W\)-scheme such that \(\mathcal{O}_{S}\) is a noetherian \(\varpi\)-adically complete \(W\)-algebra. For any point \(z\in\mathcal{N}^{o}(S)\), there is an \(\mathcal{O}_{S/W}^{\operatorname{crys}}\)-module crystal \(\mathbf{V}_{\operatorname{crys},z}\) of rank \(m\) constructed in [11, §4.5, §4.6] satisfying the following properties. \(\bullet\) The projective \(S\)-module \(\mathbf{V}_{\operatorname{crys},z}(S)\) contains a canonical isotropic line \(\operatorname{Fil}^{1}\mathbf{V}_{\operatorname{crys},z}(S)\). 
\(\bullet\) For any surjection \(R\to S\) in \(\operatorname{Alg}_{W}\) whose kernel admits divided powers, the evaluation \(\mathbf{V}_{\operatorname{crys},z}(R)\) is a projective \(R\)-module of rank \(m\), equipped with a non-degenerate symmetric \(R\)-bilinear form. \(\bullet\) Suppose \(z\in\mathcal{Z}(x)(S)\) for some \(x\in\mathbb{V}^{o}\); then the crystalline realization \(x_{\operatorname{crys},z}\) of the special quasi-homomorphism \(x\) belongs to \(\mathbf{V}_{\operatorname{crys},z}(S)\) and is orthogonal to the line \(\operatorname{Fil}^{1}\mathbf{V}_{\operatorname{crys},z}(S)\). \(\bullet\) Let \(S=\operatorname{Spec}\mathbb{F}\). The evaluation \(\mathbf{V}\coloneqq\mathbf{V}_{\operatorname{crys},z}(W)\) at the divided power thickening morphism \(\operatorname{Spec}\mathbb{F}\hookrightarrow\operatorname{Spec}W\), viewed as a \(W\)-module, has the structure of a self-dual \(W\)-quadratic lattice which can also be viewed as a sub-lattice of \(V_{K}\), hence carries the Frobenius action \(\Phi\). #### 3.3.1. Strong divisibility Let \(z\in\mathcal{N}^{o}(\mathbb{F})\) be a point, and let \(z^{\prime}\in\mathcal{N}^{o}(W)\) be a lift of \(z\) to \(W\). The \(W\)-module \(\mathbf{V}=\mathbf{V}_{\operatorname{crys},z}(W)\) is a quadratic \(W\)-lattice containing an isotropic line \(L^{1}=\operatorname{Fil}^{1}\mathbf{V}_{\operatorname{crys},z}(W)\). Let \(L^{0}\coloneqq(L^{1})^{\perp}\subset\mathbf{V}\); we have a filtration \(0\subset L^{1}\subset L^{0}\subset\mathbf{V}\), and the following lemma shows that this filtration is strongly divisible. **Lemma 3.3.1**.: _The \(\Phi\)-action on \(V_{K}\) induces the following Frobenius-linear isomorphism of \(W\)-lattices,_ \[\Phi:\varpi^{-1}L^{1}+L^{0}+\varpi\mathbf{V}\stackrel{{\sim}}{{ \longrightarrow}}\mathbf{V}\] Proof.: This is proved in [11, §4.8]. **Lemma 3.3.2**.: _Let \(\nu_{\varpi}:K\to\mathbb{Z}\cup\{\infty\}\) be the \(\varpi\)-adic valuation of \(K\). Let \((\cdot,\cdot)\) be the bilinear pairing induced by the self-dual quadratic form \(q_{\mathbf{V}}\) on \(\mathbf{V}\), and let \(l\in\mathbf{V}\) be a generator of the isotropic line \(L^{1}\). Then_ \[L^{0}+\varpi\mathbf{V}=\{x\in\mathbf{V}:\nu_{\varpi}((x,l))\geq 1\}.\] Proof.: By the definition of \(L^{0}\), we have \(\nu_{\varpi}((x,l))\geq 1\) for any \(x\in L^{0}+\varpi\mathbf{V}\). Now suppose \(x\in\mathbf{V}\) satisfies \(\nu_{\varpi}((x,l))\geq 1\). Since \(l\notin\varpi\mathbf{V}\) and \(\mathbf{V}\) is a self-dual quadratic lattice, there exists \(x^{\prime}\in\mathbf{V}\) such that \((l,x^{\prime})=\varpi^{-1}(l,x)\); therefore \(x-\varpi\cdot x^{\prime}\in L^{0}\), hence \(x\in L^{0}+\varpi\mathbf{V}\). ### Deformation theory **Definition 3.4.1**.: Let \(\mathscr{C}\) be the following category: \(\bullet\) Objects in \(\mathscr{C}\) are triples \((\mathcal{O},\mathcal{O}\to\mathbb{F},\delta)\), where \(\mathcal{O}\) is a local Artinian \(W\)-algebra, \(\mathcal{O}\to\mathbb{F}\) is a \(W\)-algebra map, and \(\delta\) is a nilpotent divided power structure on \(\ker(\mathcal{O}\to\mathbb{F})\) (cf. [1, Definitions 3.1, 3.27]). \(\bullet\) Morphisms in \(\mathscr{C}\) are \(W\)-algebra maps that are compatible with the structure maps to \(\mathbb{F}\) and the divided power structures.
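It may help to record a standard family of objects of \(\mathscr{C}\) (our example, not spelled out in the source; it is the family used implicitly in the proofs of Lemma 4.2.3 and Theorem 4.2.4 below).

```latex
% Example (ours): objects of \mathscr{C} of the form W/\varpi^t.
% For t >= 1, take O = W/\varpi^t with its reduction map O -> F; the kernel
% (\varpi)/(\varpi^t) carries the canonical divided powers
\[
  \gamma_{k}(\varpi a)\;=\;\frac{(\varpi a)^{k}}{k!}\;=\;\frac{\varpi^{k}}{k!}\,a^{k},
  \qquad k\geq 1,
\]
% which make sense because \nu_\varpi(\varpi^k/k!) \geq k - (k-1)/(p-1) \geq 1 for p > 2
% (recall \varpi = p and W is absolutely unramified); moreover \gamma_k vanishes for k
% large, so the divided power structure is nilpotent, as Definition 3.4.1 requires.
```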
#### 3.4.1. The unitary case For an \(\mathbb{F}\)-point \(z\in\mathcal{N}^{u}(\mathbb{F})\) which corresponds to a \(p\)-divisible group \(X\), let \(D=\mathbb{D}(X)\) be the Dieudonné module of \(X\), and let \(\mathbf{D}:=\mathbb{D}_{0}(X)\) and \(\overline{\mathbf{D}}:=\mathbb{D}_{1}(X)\) be the two sublattices of \(D\) as in §3.2. Let \(\widehat{\mathcal{N}}_{z}^{u}\) be the completion of the formal scheme \(\mathcal{N}^{u}\) at \(z\). Let \(\mathcal{O}\in\mathscr{C}\); an element \(\tilde{z}\in\widehat{\mathcal{N}}_{z}^{u}(\mathcal{O})\) corresponds to a \(p\)-divisible group of signature \((1,n-1)\) over \(\mathcal{O}\) deforming that over \(\mathbb{F}\) defined by \(z\). Let \(D_{\mathcal{O}}=D\otimes_{W}\mathcal{O}\); the \(p\)-divisible group corresponding to \(\tilde{z}\) gives rise to a filtration \(\mathbf{F}_{\tilde{z}}^{1}D_{\mathcal{O}}\subset D_{\mathcal{O}}\) by Grothendieck-Messing theory. Define \(f_{\mathcal{O}}(\tilde{z})\) to be the intersection \(\mathbf{F}_{\tilde{z}}^{1}D_{\mathcal{O}}\cap\mathbf{D}_{\mathcal{O}}\) inside \(D_{\mathcal{O}}\). By the signature \((1,n-1)\) condition, \(f_{\mathcal{O}}(\tilde{z})\) is a hyperplane in \(\mathbf{D}_{\mathcal{O}}\). It also lifts \(\mathbf{F}^{1}D_{\mathbb{F}}\cap\mathbf{D}_{\mathbb{F}}\) by construction. Thus we have defined a map \[f_{\mathcal{O}}:\widehat{\mathcal{N}}_{z}^{u}(\mathcal{O})\to\{\text{Hyperplanes in $\mathbf{D}_{\mathcal{O}}$ lifting $\mathbf{F}^{1}D_{\mathbb{F}}\cap\mathbf{D}_{\mathbb{F}}$}\}.\] By Lemma 3.1.2, we have the following bijection \[\{\text{Lines in $\overline{\mathbf{D}}_{\mathcal{O}}$ lifting the line $\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{\mathbb{F}}\subset\overline{\mathbf{D}}_{\mathbb{F}}$}\}\overset{\sim}{\to}\] \[\{\text{Hyperplanes in $\mathbf{D}_{\mathcal{O}}$ lifting the hyperplane $\mathbf{F}^{1}D_{\mathbb{F}}\cap\mathbf{D}_{\mathbb{F}}\subset\mathbf{D}_{ \mathbb{F}}$}\}.\] **Lemma 3.4.2**.: _Let \(\mathcal{O}\in\mathscr{C}\), let \(z\in\mathcal{N}^{u}(\mathbb{F})\) be an \(\mathbb{F}\)-point, then there is a natural bijection_ \[\left\{\text{Lifts $z^{\prime}\in\widehat{\mathcal{N}}_{z}^{u}(\mathcal{O})$ of $z$}\right\}\overset{\sim}{\longleftrightarrow}\left\{\text{Lines in $\overline{\mathbf{D}}_{\mathcal{O}}$ lifting the line $\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{\mathbb{F}}$}\right\}.\] _Let \(L\subset\mathbb{V}^{u}\) be an \(\mathcal{O}_{E}\)-lattice of rank \(r\geq 1\). Let \(z\in\mathcal{Z}(L)(\mathbb{F})\) and \(\widehat{\mathcal{Z}(L)}_{z}\) be the completion of \(\mathcal{Z}(L)\) at \(z\), then there is a natural bijection_ \[\left\{\text{Lifts $z^{\prime}\in\widehat{\mathcal{Z}(L)}_{z}(\mathcal{O})$ of $z$}\right\}\overset{\sim}{\longleftrightarrow}\left\{\begin{array}{l} \text{Lines in $\overline{\mathbf{D}}_{\mathcal{O}}$ lifting $\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{\mathbb{F}}$}\\ \text{ and orthogonal to $x_{\operatorname{crys},z}$ for any $x\in L$}.\end{array}\right\}.\] Proof.: This is proved by combining [10, Theorem 3.1.3] and Lemma 3.1.2. #### 3.4.2. The GSpin case Let \(z\in\mathcal{N}^{o}(\mathbb{F})\) be an \(\mathbb{F}\)-point; we use \(\mathbf{V}\) to denote the \(\mathcal{O}_{\mathbb{F}/W}^{\operatorname{crys}}\)-module constructed in §3.3, equipped with a filtration \(0\subset\operatorname{Fil}^{1}\mathbf{V}\subset\mathbf{V}\). Let \(\widehat{\mathcal{N}}^{o}_{z}\) be the completion of \(\mathcal{N}^{o}\) at \(z\). For an object \(\mathcal{O}\in\mathscr{C}\), let \(\mathbf{V}_{\mathcal{O}}=\mathbf{V}\otimes_{W}\mathcal{O}\). 
**Lemma 3.4.3**.: _Let \(\mathcal{O}\in\mathscr{C}\), let \(z\in\mathcal{N}^{o}(\mathbb{F})\) be an \(\mathbb{F}\)-point, then there is a natural bijection_ \[\left\{\text{Lifts $z^{\prime}\in\widehat{\mathcal{N}}_{z}^{o}(\mathcal{O})$ of $z$}\right\}\overset{\sim}{\longleftrightarrow}\left\{\text{Isotropic $\mathcal{O}$-lines in $\mathbf{V}_{\mathcal{O}}$ lifting $\operatorname{Fil}^{1}\mathbf{V}(\mathbb{F})$}\right\}.\] _Let \(L\subset\mathbb{V}^{o}\) be a \(\mathbb{Z}_{p}\)-lattice of rank \(r\geq 1\). Let \(z\in\mathcal{Z}(L)(\mathbb{F})\) and \(\widehat{\mathcal{Z}(L)}_{z}\) be the completion of \(\mathcal{Z}(L)\) at \(z\), then there is a natural bijection_ \[\left\{\text{Lifts $z^{\prime}\in\widehat{\mathcal{Z}(L)}_{z}(\mathcal{O})$ of $z$}\right\}\overset{\sim}{\longleftrightarrow}\left\{\begin{array}{l} \text{Isotropic $\mathcal{O}$-lines in $\mathbf{V}_{\mathcal{O}}$ lifting $\operatorname{Fil}^{1}\mathbf{V}(\mathbb{F})$}\\ \text{ and orthogonal to $x_{\operatorname{crys},z}$ for any $x\in L$}.\end{array}\right\}.\] Proof.: This is proved in [11, Proposition 5.16] (see also [10, Lemma 4.6.2]). ## 4. Regularity of the difference divisor ### Formally smooth locus of the special cycle \(\mathcal{Z}(L)\) **Lemma 4.1.1**.: _Let \(n\geq 2\) be an integer, let \(R=\mathcal{O}[[t_{1},\cdots,t_{n-1}]]\) where \(\mathcal{O}\) is a discrete valuation ring of characteristic \((0,p)\) with uniformizer \(\pi\) and residue field \(\mathbf{k}\); the ring \(R\) has maximal ideal \(\mathfrak{m}_{R}:=(\pi,t_{1},\cdots,t_{n-1})\). Let \(f_{1},f_{2},\cdots,f_{r}\) be \(r\) elements in \(\mathfrak{m}_{R}\). Then the quotient ring \(\overline{R}=R/(f_{1},\cdots,f_{r})\) is formally smooth over \(\mathcal{O}\) of relative dimension \(n-r-1\) if and only if its base change \(\overline{R}_{\mathbf{k}}=\overline{R}\otimes_{\mathcal{O}}\mathbf{k}\) is formally smooth over \(\mathbf{k}\) of relative dimension \(n-r-1\)._ Proof.: Let \(\mathbf{0}:R\rightarrow\mathcal{O}\) be the continuous homomorphism which sends all the \(t_{j}\) to \(0\). Let \(J=(\frac{\partial f_{i}}{\partial t_{j}}(\mathbf{0}))_{\begin{subarray}{c}1 \leq i\leq r\\ 1\leq j\leq n-1\end{subarray}}\) be the Jacobian matrix of \(f_{1},f_{2},\cdots,f_{r}\) at \(\mathbf{0}\). The quotient ring \(\overline{R}\) is formally smooth over \(\mathcal{O}\) of relative dimension \(n-r-1\) if and only if the matrix \(J\) has an \(r\times r\) minor which is invertible in \(\mathcal{O}\), because in this case we can choose a system of parameters \(t_{1}^{\prime},\cdots,t_{n-1}^{\prime}\) of \(R\) such that \(t_{i}^{\prime}=f_{i}\) for \(1\leq i\leq r\). Let \(J_{\mathbf{k}}=J\otimes_{\mathcal{O}}\mathbf{k}\) be the base change of \(J\) to \(\mathbf{k}\). The matrix \(J\) has an \(r\times r\) minor which is invertible in \(\mathcal{O}\) if and only if the rank of \(J_{\mathbf{k}}\) is \(r\), which is further equivalent to \(\overline{R}_{\mathbf{k}}\) being formally smooth over \(\mathbf{k}\) of relative dimension \(n-r-1\); hence the quotient ring \(\overline{R}=R/(f_{1},\cdots,f_{r})\) is formally smooth over \(\mathcal{O}\) of relative dimension \(n-r-1\) if and only if \(\overline{R}_{\mathbf{k}}\) is formally smooth over \(\mathbf{k}\) of relative dimension \(n-r-1\).
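Two toy instances may clarify how the Jacobian criterion is applied (our illustration; the polynomials are hypothetical, chosen only to exercise Lemma 4.1.1). Take \(n=3\), \(r=1\), \(R=\mathcal{O}[[t_{1},t_{2}]]\).

```latex
% Toy instances of Lemma 4.1.1 (ours), with n = 3, r = 1, R = O[[t_1, t_2]]:
% (a) f_1 = t_1 + \pi t_2. The Jacobian at 0 is
\[
  J=\begin{pmatrix}1 & \pi\end{pmatrix},\qquad
  J_{\mathbf{k}}=\begin{pmatrix}1 & 0\end{pmatrix}\ \text{of rank }1,
\]
% so \bar{R} = R/(f_1) \cong O[[t_2]] is formally smooth over O of relative dimension 1.
% (b) f_1 = \pi + t_1 t_2. Here
\[
  J=\begin{pmatrix}t_{2}(\mathbf{0}) & t_{1}(\mathbf{0})\end{pmatrix}
   =\begin{pmatrix}0 & 0\end{pmatrix}
\]
% has rank 0 < 1: \bar{R} = O[[t_1, t_2]]/(\pi + t_1 t_2) is a regular local ring (the
% relation lies in m_R \ m_R^2), but its special fiber k[[t_1, t_2]]/(t_1 t_2) is not
% regular, so \bar{R} is not formally smooth over O -- a shape similar to what occurs
% for difference divisors in Theorem 4.2.4 below.
```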
#### 4.1.1. The unitary case For a point \(z\in\mathcal{N}^{u}(\mathbb{F})\), let \(\widehat{\mathcal{N}}_{z}^{u}\) be the completion of the formal scheme \(\mathcal{N}^{u}\) at \(z\), let \(X\) be the \(p\)-divisible group corresponding to \(z\), let \(D=\mathbb{D}(X)\) be the Dieudonné module of \(X\), and let \(\mathbf{D}\coloneqq\mathbb{D}_{0}(X)\) and \(\overline{\mathbf{D}}\coloneqq\mathbb{D}_{1}(X)\) be the two sublattices of \(D\) as in §3.2. Let \(L\subset\mathbb{V}^{u}\) be an \(\mathcal{O}_{E}\)-lattice and suppose that \(z\in\mathcal{Z}(L)(\mathbb{F})\); then we have an isometric map \(i_{\mathrm{crys},z}:L_{W}:=L\otimes_{\mathcal{O}_{E}}W\rightarrow\mathbf{D}\) by extending \(W\)-linearly the map \(i_{\mathrm{crys},z}:\mathbb{V}^{u}\xrightarrow{\sim}C\subset\mathbf{D}[ \frac{1}{\varpi}]\) and restricting to \(L_{W}\). We say the isometric map \(i_{\mathrm{crys},z}\) is primitive at \(z\) if the induced map \(\overline{i_{\mathrm{crys},z}}:L_{\mathbb{F}}\rightarrow\mathbf{D}_{\mathbb{F}}\) is injective. **Lemma 4.1.2**.: _Let \(L\subset\mathbb{V}^{u}\) be an \(\mathcal{O}_{E}\)-lattice of rank \(1\leq r\leq n-1\) and \(z\in\mathcal{Z}(L)(\mathbb{F})\), then \(\mathcal{Z}(L)\) is formally smooth over \(W\) of relative dimension \(n-r-1\) at \(z\) if and only if the isometric map \(i_{\mathrm{crys},z}\) is primitive at \(z\)._ Proof.: The special cycle \(\mathcal{Z}(L)\) is cut out by \(r\) equations in the deformation space \(\widehat{\mathcal{N}}_{z}^{u}\simeq\operatorname{Spf}\mathcal{O}_{\mathcal{N}^{u},z}\) by Proposition 2.3.3; therefore the special cycle \(\mathcal{Z}(L)\) is formally smooth over \(W\) of relative dimension \(n-r-1\) at \(z\) if and only if \(\mathcal{Z}(L)_{\mathbb{F}}\coloneqq\mathcal{Z}(L)\times_{W}\mathbb{F}\) is formally smooth over \(\mathbb{F}\) of relative dimension \(n-r-1\) at \(z\), by Lemma 4.1.1. Let \(\mathcal{N}_{\mathbb{F}}^{u}=\mathcal{N}^{u}\times_{W}\mathbb{F}\). Let \(l\in\overline{\mathbf{D}}\) be an element such that its image \(\overline{l}\) in \(\overline{\mathbf{D}}_{\mathbb{F}}\) is a generator of the line \(\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{\mathbb{F}}\). Let \(\mathbb{F}[\epsilon]=\mathbb{F}[X]/X^{2}\). By Schlessinger's criterion in [11, Theorem 2.11], the tangent space \(\mathrm{Tgt}_{z}(\mathcal{N}_{\mathbb{F}}^{u})\) of \(\mathcal{N}_{\mathbb{F}}^{u}\) at \(z\) can be identified with the set \(\widehat{\mathcal{N}}_{z}^{u}(\mathbb{F}[\epsilon])\). This set is in natural bijection with the set of lines in \(\overline{\mathbf{D}}\otimes_{W}\mathbb{F}[\epsilon]\) which lift the line \(\mathbb{F}\cdot\overline{l}\) in \(\overline{\mathbf{D}}_{\mathbb{F}}\), by Lemma 3.4.2; therefore any such line has a generator of the form \(\overline{l}+\epsilon\cdot w\) for some element \(w\in\overline{\mathbf{D}}_{\mathbb{F}}\). Two elements \(\overline{l}+\epsilon\cdot w\) and \(\overline{l}+\epsilon\cdot w^{\prime}\) generate the same line if and only if \(w^{\prime}-w\in\mathbb{F}\cdot\overline{l}\), hence \(\mathrm{Tgt}_{z}(\mathcal{N}_{\mathbb{F}}^{u})\simeq\overline{\mathbf{D}}_{ \mathbb{F}}/\mathbb{F}\cdot\overline{l}\), and \(\dim_{\mathbb{F}}\mathrm{Tgt}_{z}(\mathcal{N}_{\mathbb{F}}^{u})=n-1\). The tangent space \(\mathrm{Tgt}_{z}(\mathcal{Z}(L)_{\mathbb{F}})\) of \(\mathcal{Z}(L)_{\mathbb{F}}\) at \(z\) can be identified with the set \(\widehat{\mathcal{Z}(L)}_{z}(\mathbb{F}[\epsilon])\). For any \(x\in L\), let \(\overline{x_{\mathrm{crys},z}}\) be the image of \(x_{\mathrm{crys},z}\) in \(\mathbf{D}_{\mathbb{F}}\). 
The set \(\widehat{\mathcal{Z}(L)}_{z}(\mathbb{F}[\epsilon])\) is in natural bijection with the set of lines in \(\overline{\mathbf{D}}\otimes_{W}\mathbb{F}[\epsilon]\) which lift the line \(\mathbb{F}\cdot\overline{l}\) in \(\overline{\mathbf{D}}_{\mathbb{F}}\) and are orthogonal to \(\overline{x_{\mathrm{crys},z}}\) for any \(x\in L\). Let \(\overline{l}+\epsilon\cdot w\) be a generator of a line in the set \(\widehat{\mathcal{Z}(L)}_{z}(\mathbb{F}[\epsilon])\); then \(\langle\overline{x_{\mathrm{crys},z}},\overline{l}+\epsilon\cdot w\rangle= \epsilon\langle\overline{x_{\mathrm{crys},z}},w\rangle=0\), hence \(\langle\overline{x_{\mathrm{crys},z}},w\rangle=0\); therefore \(\mathrm{Tgt}_{z}(\mathcal{Z}(L)_{\mathbb{F}})\) is the subspace of \(\overline{\mathbf{D}}_{\mathbb{F}}/\mathbb{F}\cdot\overline{l}\) orthogonal to the image of \(L_{W}\) in \(\mathbf{D}_{\mathbb{F}}\) under the pairing \(\langle\cdot,\cdot\rangle_{\mathbb{F}}\). Let \(\overline{L}_{W}:=(i_{\mathrm{crys},z}(L_{W})+p\mathbf{D})/p\mathbf{D}\) be the image of \(L_{W}\) in \(\mathbf{D}_{\mathbb{F}}\). Since the pairing \(\langle\cdot,\cdot\rangle_{\mathbb{F}}\) is non-degenerate, we have \[\dim_{\mathbb{F}}\mathrm{Tgt}_{z}(\mathcal{Z}(L)_{\mathbb{F}})=n-1-\dim_{ \mathbb{F}}(\overline{L}_{W}). \tag{1}\] Therefore \(\mathcal{Z}(L)\) is formally smooth over \(W\) of relative dimension \(n-r-1\) at \(z\) if and only if \(\dim_{\mathbb{F}}(\overline{L}_{W})=r\), which is equivalent to the fact that the map \(\overline{i_{\mathrm{crys},z}}:L_{\mathbb{F}}\to\mathbf{D}_{\mathbb{F}}\) is injective, i.e., the isometric map \(i_{\mathrm{crys},z}\) is primitive. #### 4.1.2. The GSpin case For a point \(z\in\mathcal{N}^{o}(\mathbb{F})\), let \(\widehat{\mathcal{N}}^{o}_{z}\) be the completion of the formal scheme \(\mathcal{N}^{o}\) at \(z\). Let \(\mathbf{V}=\mathbf{V}_{\mathrm{crys},z}(W)\) be the \(W\)-module defined in §3.3. Let \(L\subset\mathbb{V}^{o}\) be an \(\mathcal{O}_{F}\)-lattice and suppose that \(z\in\mathcal{Z}(L)(\mathbb{F})\); then for every \(x\in L\) the crystalline realization \(x_{\mathrm{crys},z}\) belongs to \(\mathbf{V}\), therefore we have an isometric map \(i_{\mathrm{crys},z}:L\to\mathbf{V}\). We say the isometric map \(i_{\mathrm{crys},z}\) is primitive at \(z\) if the induced map \(\overline{i_{\mathrm{crys},z}}:L_{\mathbb{F}}\to\mathbf{V}_{\mathbb{F}}\) is injective. **Lemma 4.1.3**.: _Let \(L\subset\mathbb{V}^{o}\) be an \(\mathcal{O}_{F}\)-lattice of rank \(r\geq 1\) and \(z\in\mathcal{Z}(L)(\mathbb{F})\), then \(\mathcal{Z}(L)\) is formally smooth over \(W\) of relative dimension \(n-r-1\) at \(z\) if and only if the following two assertions hold: (i) The isometric map \(i_{\mathrm{crys},z}\) is primitive at \(z\); (ii) There exists a lift of \(z\) to \(z^{\prime}\in\mathcal{Z}(L)(W/\varpi^{2})\)._ Proof.: The special cycle \(\mathcal{Z}(L)\) is cut out by \(r\) equations in the deformation space \(\widehat{\mathcal{N}}^{o}_{z}\simeq\operatorname{Spf}\mathcal{O}_{\mathcal{N}^{o},z}\) by Proposition 2.3.3; therefore the special cycle \(\mathcal{Z}(L)\) is formally smooth over \(W\) of relative dimension \(n-r-1\) at \(z\) if and only if \(\mathcal{Z}(L)_{\mathbb{F}}\coloneqq\mathcal{Z}(L)\times_{W}\mathbb{F}\) is formally smooth over \(\mathbb{F}\) of relative dimension \(n-r-1\) at \(z\), by Lemma 4.1.1. Let \(\mathcal{N}^{o}_{\mathbb{F}}=\mathcal{N}^{o}\times_{W}\mathbb{F}\). 
Let \(l\in\mathbf{V}\) be an isotropic element such that its image \(\overline{l}\in\mathbf{V}_{\mathbb{F}}\) generates the line \(\mathrm{Fil}^{1}(\mathbf{V}_{\mathbb{F}})\). Let \((\cdot,\cdot)\) be the bilinear form on \(\mathbf{V}\) and \((\cdot,\cdot)_{\mathbb{F}}\) be the bilinear form on \(\mathbf{V}_{\mathbb{F}}\). Let \(\mathbb{F}[\epsilon]=\mathbb{F}[X]/X^{2}\). By Schlessinger's criterion in [1, Theorem 2.11], the tangent space \(\mathrm{Tgt}_{z}(\mathcal{N}^{o}_{\mathbb{F}})\) of \(\mathcal{N}^{o}_{\mathbb{F}}\) at \(z\) can be identified with the set \(\widehat{\mathcal{N}}^{o}_{z}(\mathbb{F}[\epsilon])\). This set is in natural bijection with the set of isotropic lines in \(\mathbf{V}\otimes_{W}\mathbb{F}[\epsilon]\) which lift the line \(\mathbb{F}\cdot\overline{l}\) in \(\mathbf{V}_{\mathbb{F}}\), by Lemma 3.4.3; therefore any such line has a generator of the form \(\overline{l}+\epsilon\cdot w\) for some \(w\in\mathbf{V}_{\mathbb{F}}\). Note that \[0=q_{\mathbf{V}\otimes_{W}\mathbb{F}[\epsilon]}(\overline{l}+\epsilon\cdot w)=q _{\mathbf{V}_{\mathbb{F}}}(\overline{l})+\epsilon(\overline{l},w)_{\mathbb{F} }=\epsilon(\overline{l},w)_{\mathbb{F}},\] hence the line generated by \(\overline{l}+\epsilon\cdot w\) is isotropic if and only if the vector \(w\in\mathbf{V}_{\mathbb{F}}\) is orthogonal to \(\overline{l}\). Two elements \(\overline{l}+\epsilon\cdot w\) and \(\overline{l}+\epsilon\cdot w^{\prime}\) generate the same line if and only if \(w^{\prime}-w\in\mathbb{F}\cdot\overline{l}\), hence \(\mathrm{Tgt}_{z}(\mathcal{N}^{o}_{\mathbb{F}})\simeq\{\overline{l}\}^{\perp} /\mathbb{F}\cdot\overline{l}\), and \(\dim_{\mathbb{F}}\mathrm{Tgt}_{z}(\mathcal{N}^{o}_{\mathbb{F}})=n-1\). There is a bilinear pairing on the space \(\{\overline{l}\}^{\perp}/\mathbb{F}\cdot\overline{l}\) induced by \((\cdot,\cdot)_{\mathbb{F}}\); we use \(\overline{(\cdot,\cdot)}_{\mathbb{F}}\) to denote it. It is non-degenerate since the bilinear pairing \((\cdot,\cdot)_{\mathbb{F}}\) on \(\mathbf{V}_{\mathbb{F}}\) is non-degenerate. The tangent space \(\mathrm{Tgt}_{z}(\mathcal{Z}(L)_{\mathbb{F}})\) of \(\mathcal{Z}(L)_{\mathbb{F}}\) at \(z\) can be identified with the set \(\widehat{\mathcal{Z}(L)}_{z}(\mathbb{F}[\epsilon])\). For any \(x\in L\), let \(\overline{x_{\mathrm{crys},z}}\) be the image of \(x_{\mathrm{crys},z}\) in \(\mathbf{V}_{\mathbb{F}}\). The set \(\widehat{\mathcal{Z}(L)}_{z}(\mathbb{F}[\epsilon])\) is in natural bijection with the set of isotropic lines in \(\mathbf{V}\otimes_{W}\mathbb{F}[\epsilon]\) which lift the line \(\mathbb{F}\cdot\overline{l}\) in \(\mathbf{V}_{\mathbb{F}}\) and are orthogonal to \(\overline{x_{\mathrm{crys},z}}\) for any \(x\in L\). 
Let \(\overline{l}+\epsilon\cdot w\) be a generator of a line in the set \(\widehat{\mathcal{Z}(L)}_{z}(\mathbb{F}[\epsilon])\); then \((\overline{x_{\mathrm{crys},z}},\overline{l}+\epsilon\cdot w)=\epsilon( \overline{x_{\mathrm{crys},z}},w)=0\), hence \((\overline{x_{\mathrm{crys},z}},w)=0\); therefore \(\mathrm{Tgt}_{z}(\mathcal{Z}(L)_{\mathbb{F}})\) is the subspace of \(\{\overline{l}\}^{\perp}/\mathbb{F}\cdot\overline{l}\) orthogonal to the image of \(L_{W}\) in \(\mathbf{V}_{\mathbb{F}}\) under the pairing \(\overline{(\cdot,\cdot)}_{\mathbb{F}}\). Let \(\overline{L}_{W}\coloneqq(i_{\mathrm{crys},z}(L_{W})+p\mathbf{V})/p\mathbf{V}\) be the image of \(L_{W}\) in \(\mathbf{V}_{\mathbb{F}}\); it is contained in the subspace \(\{\overline{l}\}^{\perp}\) of \(\mathbf{V}_{\mathbb{F}}\), and we have \(\mathrm{Tgt}_{z}(\mathcal{Z}(L)_{\mathbb{F}})=\bigl(\bigl(\overline{L}_{W}+ \mathbb{F}\cdot\overline{l}\bigr)/\mathbb{F}\cdot\overline{l}\bigr)^{\perp} \subset\{\overline{l}\}^{\perp}/\mathbb{F}\cdot\overline{l}\). Since the pairing \(\overline{(\cdot,\cdot)}_{\mathbb{F}}\) is non-degenerate on \(\{\overline{l}\}^{\perp}/\mathbb{F}\cdot\overline{l}\), we have \(\dim_{\mathbb{F}}\,\mathrm{Tgt}_{z}(\mathcal{Z}(L)_{\mathbb{F}})=n-1-\dim_{ \mathbb{F}}\,\bigl(\bigl(\overline{L}_{W}+\mathbb{F}\cdot\overline{l}\bigr)/ \mathbb{F}\cdot\overline{l}\bigr)\); therefore \(\mathcal{Z}(L)\) is formally smooth over \(W\) of relative dimension \(n-r-1\) at \(z\) if and only if \(\dim_{\mathbb{F}}\left((\overline{L}_{W}+\mathbb{F}\cdot\overline{l})/\mathbb{F} \cdot\overline{l}\right)=r\), which is further equivalent to \(\dim_{\mathbb{F}}\overline{L}_{W}=r\) and \(\overline{l}\notin\overline{L}_{W}\). The condition \(\dim_{\mathbb{F}}\overline{L}_{W}=r\) is equivalent to (i), which says that the isometric map \(i_{\mathrm{crys},z}\) is primitive at \(z\). Assume now that (i) and (ii) are true; we want to show that the element \(\overline{l}\) does not belong to the space \(\overline{L}_{W}\). Let \(z^{\prime}\in\mathcal{Z}(L)(W/\varpi^{2})\) be a lift of \(z\) to \(W/\varpi^{2}\), and \(\tilde{z}\in\widehat{\mathcal{N}}^{o}_{z}(W)\) be a lift of \(z^{\prime}\) to \(W\). The point \(\tilde{z}\) corresponds to an isotropic line \(L^{1}\); let \(l\in\mathbf{V}\) be a generator of \(L^{1}\) whose image in \(\mathbf{V}_{\mathbb{F}}\) is \(\overline{l}\). Assume the contrary; then there exist \(x\in L_{W}\) and \(v\in\mathbf{V}\) such that \[l=x_{\mathrm{crys},z}+\varpi\cdot v\in\mathbf{V}. \tag{2}\] Let \(\{x_{i}\}_{i=1}^{r}\) be an \(\mathcal{O}_{F}\)-basis of \(L\) and \(x=\sum\limits_{i=1}^{r}a_{i}x_{i}\) for some \(a_{i}\in W\); then \(\overline{l}=\sum\limits_{i=1}^{r}\overline{a_{i}}\cdot\overline{x_{i,\mathrm{crys},z}}\neq 0\), hence there exists at least one \(i\) such that \(a_{i}\in W^{\times}\) and \(x_{i,\mathrm{crys},z}\notin\varpi\mathbf{V}\). Let \(\Phi\) be the Frobenius action on \(\mathbf{V}\). If \(v\in L^{0}+\varpi\mathbf{V}\), then \(\Phi(v)\in\mathbf{V}\) by Lemma 3.3.1; the same lemma also implies that \(\Phi(l)\in\varpi\mathbf{V}\), therefore \(\Phi(x_{\mathrm{crys},z})=\Phi(l)-\varpi\Phi(v)\in\varpi\mathbf{V}\). 
However, \[\Phi(x_{\mathrm{crys},z})=\sum\limits_{i=1}^{r}\sigma(a_{i})\cdot\Phi(x_{i, \mathrm{crys},z})=\sum\limits_{i=1}^{r}\sigma(a_{i})\cdot x_{i,\mathrm{crys},z},\] and \(\Phi(x_{\mathrm{crys},z})\notin\varpi\mathbf{V}\): indeed, by (i) the images \(\overline{x_{i,\mathrm{crys},z}}\) are linearly independent in \(\mathbf{V}_{\mathbb{F}}\), and there exists an integer \(1\leq i\leq r\) such that \(\sigma(a_{i})\in W^{\times}\). Hence \(v\notin L^{0}+\varpi\mathbf{V}\), and therefore \((l,v)\in W^{\times}\) by Lemma 3.3.2. Recall that \(\nu_{\varpi}\) is the \(\varpi\)-adic valuation on \(K\). The fact that \(z^{\prime}\in\mathcal{Z}(L)(W/\varpi^{2})\) is a lift of \(z\) to \(W/\varpi^{2}\) implies that \(\nu_{\varpi}((l,x_{i,\mathrm{crys},z}))\geq 2\) by Lemma 3.4.3, hence \(\nu_{\varpi}((l,x_{\mathrm{crys},z}))=\nu_{\varpi}(\sum\limits_{i=1}^{r}a_{i}(l, x_{i,\mathrm{crys},z}))\geq 2\). However, \((l,x_{\mathrm{crys},z})=(l,l-\varpi\cdot v)=-\varpi(l,v)\) by (2), since \(l\) is isotropic; hence \(\nu_{\varpi}((l,x_{\mathrm{crys},z}))=1\) because \((l,v)\in W^{\times}\), a contradiction. Therefore the element \(\overline{l}\) does not belong to the space \(\overline{L}_{W}\) if (i) and (ii) are true. ### Regularity of the difference divisor **Lemma 4.2.1**.: _Let \(n\geq 2\) be an integer, let \(R=\mathcal{O}[[t_{1},\cdots,t_{n-1}]]\) where \(\mathcal{O}\) is a discrete valuation ring of characteristic \((0,p)\) with uniformizer \(\pi\) and \(\pi\)-adic valuation \(\nu_{\pi}\); the ring \(R\) has maximal ideal \(\mathfrak{m}_{R}:=(\pi,t_{1},\cdots,t_{n-1})\). In the following the symbol \((\mathrm{unit})\) means an element in \(R^{\times}\)._ \(\bullet\)_Let \(g\in\mathfrak{m}_{R}\). If for any continuous ring homomorphism \(f:R\to\mathcal{O}\), we have \(\nu_{\pi}(f(g))=1\), then \(g\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\)._ \(\bullet\)_Let \(d\in\mathfrak{m}_{R}\backslash\mathfrak{m}_{R}^{2}\), and let \(h\) be an element in \(\mathfrak{m}_{R}\) such that \(h\not\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\). Then there exists a continuous ring homomorphism \(f:R\to\mathcal{O}\) such that \(f(d)\neq 0\) and \(\nu_{\pi}(f(h))\geq 2\)._ Proof.: For the first assertion, let \(g\equiv a_{0}\pi+\sum\limits_{j=1}^{n-1}a_{j}t_{j}\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\) where \(a_{j}\in\mathcal{O}\) for every \(0\leq j\leq n-1\). If \(\nu_{\pi}(a_{0})\geq 1\), consider the continuous homomorphism \(f:R\to\mathcal{O}\) such that \(f(t_{j})=\pi^{2}\) for every \(1\leq j\leq n-1\); then \(\nu_{\pi}(f(g))\geq 2\), which is a contradiction, therefore \(\nu_{\pi}(a_{0})=0\). If \(g\not\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\), then there exists at least one \(j\geq 1\) such that \(\nu_{\pi}(a_{j})=0\). Suppose that \(\nu_{\pi}(a_{j_{0}})=0\) for some \(j_{0}\geq 1\), and consider the continuous homomorphism \(f:R\to\mathcal{O}\) such that \(f(t_{j_{0}})=-a_{j_{0}}^{-1}a_{0}\pi\) and \(f(t_{j})=\pi^{2}\) for \(j\neq j_{0}\); then \(\nu_{\pi}(f(g))\geq 2\), which is a contradiction, hence \(g\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\). For the second assertion, we consider the following two cases. Case 1: \(d\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\). 
Since \(h\not\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\), there exists a continuous ring homomorphism \(f:R\to\mathcal{O}\) such that \(\nu_{\pi}(f(h))\geq 2\) by the proof of the first assertion; for this \(f\), we have \(\nu_{\pi}(f(d))=1\), hence \(f(d)\neq 0\). Case 2: \(d\not\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\); then we can choose another system of parameters \(t_{1},\cdots,t_{n-1}\) of \(R\) such that \(d=t_{1}\). Let \(h\equiv b_{0}\pi+\sum\limits_{j=1}^{n-1}b_{j}t_{j}\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\) where \(b_{j}\in\mathcal{O}\) for every \(0\leq j\leq n-1\). If \(\nu_{\pi}(b_{0})\geq 1\), consider the continuous homomorphism \(f:R\to\mathcal{O}\) such that \(f(t_{j})=\pi^{2}\) for every \(1\leq j\leq n-1\); then \(\nu_{\pi}(f(h))\geq 2\) and \(f(d)=f(t_{1})\neq 0\). If \(\nu_{\pi}(b_{0})=0\), then there exists at least one \(j\geq 1\) such that \(\nu_{\pi}(b_{j})=0\), since \(h\not\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\). Suppose that \(\nu_{\pi}(b_{j_{0}})=0\) for some \(j_{0}\geq 1\), and consider the continuous homomorphism \(f:R\to\mathcal{O}\) such that \(f(t_{j_{0}})=-b_{j_{0}}^{-1}b_{0}\pi\) and \(f(t_{j})=\pi^{2}\) for \(j\neq j_{0}\); then \(\nu_{\pi}(f(h))\geq 2\) and \(f(d)=f(t_{1})\neq 0\). **Lemma 4.2.2**.: _Let \(n\geq 2\) be an integer, let \(R=\mathcal{O}[[t_{1},\cdots,t_{n-1}]]\) where \(\mathcal{O}\) is a discrete valuation ring of characteristic \((0,p)\) with uniformizer \(\pi\) and \(\pi\)-adic valuation \(\nu_{\pi}\). Let \((d_{a})_{a\geq 0}\) be a sequence of elements in \(R\) such that \(d_{a}\in\mathfrak{m}_{R}:=(\pi,t_{1},\cdots,t_{n-1})\) for any \(a\geq 0\) and \(d_{0}\in\mathfrak{m}_{R}\backslash\mathfrak{m}_{R}^{2}\). Let \(f_{a}=\prod\limits_{i=0}^{a}d_{i}\) and \(\mathcal{Z}(f_{a}):=\operatorname{Spf}R/(f_{a})\) be the closed formal subscheme of \(\operatorname{Spf}R\). For any morphism \(z:\operatorname{Spf}\mathcal{O}\to\operatorname{Spf}R\), we use \(z^{\sharp}\) to denote the corresponding ring homomorphism \(R\to\mathcal{O}\). If for any \(a\geq 1\) and any morphism \(z:\operatorname{Spf}\mathcal{O}\to\operatorname{Spf}R\), there exists a Cartesian diagram identifying the fiber product \(\mathcal{Z}(f_{a})\times_{\operatorname{Spf}R,\,z}\operatorname{Spf}\mathcal{O}\) with \(\operatorname{Spec}\mathcal{O}/(\pi^{a}\cdot z^{\sharp}(d_{0}))\), then \(d_{a}\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\) for all \(a\geq 1\)._ Proof.: Assume on the contrary that there exists \(a\geq 1\) such that \(d_{a}\not\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\), and let \(k\geq 1\) be the least integer such that \(d_{i}\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\) for all \(1\leq i<k\) and \(d_{k}\not\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\). By Lemma 4.2.1 there exists a morphism \(z:\operatorname{Spf}\mathcal{O}\to\operatorname{Spf}R\) such that \(z^{\sharp}(d_{0})\neq 0\) and \(\nu_{\pi}(z^{\sharp}(d_{k}))\geq 2\); then \(\mathcal{Z}(f_{k})\times_{\operatorname{Spf}R,z}\operatorname{Spf}\mathcal{O}\) is cut out in \(\operatorname{Spf}\mathcal{O}\) by \((\pi^{m})\) for some \(m\in\mathbb{Z}\cup\{\infty\}\) with \(m\geq k+\nu_{\pi}(z^{\sharp}(d_{0}))+1\), hence the fiber product \(\mathcal{Z}(f_{k})\times_{\operatorname{Spf}R,z}\operatorname{Spf}\mathcal{O}\) is not isomorphic to \(\operatorname{Spec}\mathcal{O}/(\pi^{k}\cdot z^{\sharp}(d_{0}))\), which is a contradiction. Therefore such \(k\) does not exist, hence \(d_{a}\equiv(\mathrm{unit})\cdot\pi\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\) for all \(a\geq 1\). **Lemma 4.2.3**.: _Let \(x\in\mathbb{V}^{u}\) (resp. 
\(x\in\mathbb{V}^{o}\)) be a special quasi-homomorphism such that \((x,x)\in\mathcal{O}_{E}\backslash\{0\}\) (resp. \(q_{\mathbb{V}^{o}}(x)\in\mathcal{O}_{F}\backslash\{0\}\)). Let \(z\in\mathcal{Z}(x)(\mathbb{F})\) be a point, let \(\tilde{z}\in\mathcal{N}^{u}(W)\) (resp. \(\mathcal{N}^{o}(W)\)) be a lift of \(z\) to \(W\), and let \(\tilde{z}^{\sharp}:\mathcal{O}_{\mathcal{N}^{u},z}\) (resp. \(\mathcal{O}_{\mathcal{N}^{o},z}\)) \(\to W\) be the corresponding continuous homomorphism. Let \(f_{x,z}\in\mathcal{O}_{\mathcal{N}^{u},z}\) (resp. \(\mathcal{O}_{\mathcal{N}^{o},z}\)) be the equation of the Cartier divisor \(\mathcal{Z}(x)\) at \(z\). \(\bullet\) In the unitary case, let \(\mathbf{D}\) and \(\overline{\mathbf{D}}\) be the two \(W\)-lattices of rank \(n\) corresponding to \(z\) defined in §3.2, and let \(\langle\cdot,\cdot\rangle:\mathbf{D}\times\overline{\mathbf{D}}\to W\) be the pairing induced by the \(p\)-principal polarization. The lift \(\tilde{z}\) corresponds to a line \(L^{1}\subset\overline{\mathbf{D}}\) which has a generator \(l\); then_ \[(\tilde{z}^{\sharp}(f_{x,z}))=(\langle x_{\mathrm{crys},z},l\rangle)\subset W. \tag{3}\] \(\bullet\) _In the GSpin case, let \(\mathbf{V}=\mathbf{V}_{\mathrm{crys},z}(W)\) be the \(W\)-module defined in §3.3, and let \((\cdot,\cdot):\mathbf{V}\times\mathbf{V}\to W\) be the bilinear pairing induced by the quadratic form \(q_{\mathbf{V}}\) on \(\mathbf{V}\). The lift \(\tilde{z}\) corresponds to an isotropic line \(L^{1}\subset\mathbf{V}\) which has a generator \(l\); then_ \[(\tilde{z}^{\sharp}(f_{x,z}))=((x_{\mathrm{crys},z},l))\subset W. \tag{4}\] Proof.: We only give the proof in the unitary case; the proof of the GSpin case is similar. Let \(\widehat{\mathcal{Z}(x)}_{z}\) be the completion of the special cycle \(\mathcal{Z}(x)\) at \(z\). There exists a Cartesian diagram identifying the base change \(\widehat{\mathcal{Z}(x)}_{z}\times_{\widehat{\mathcal{N}}^{u}_{z},\,\tilde{z}}\operatorname{Spf}W\) with \(\operatorname{Spec}W/(\tilde{z}^{\sharp}(f_{x,z}))\). Let \(m=\nu_{\varpi}(\tilde{z}^{\sharp}(f_{x,z}))\in\mathbb{Z}\cup\{\infty\}\) and \(t=\nu_{\varpi}(\langle x_{\mathrm{crys},z},l\rangle)\in\mathbb{Z}\cup\{\infty\}\); the equality in (3) is equivalent to \(t=m\). The surjection \(W/(\tilde{z}^{\sharp}(f_{x,z}))\to\mathbb{F}\) has a natural divided power structure on its kernel, therefore the fact that the special quasi-homomorphism \(x\) can be lifted to \(\operatorname{Spec}W/(\tilde{z}^{\sharp}(f_{x,z}))\) implies that \(\langle x_{\mathrm{crys},z},l\rangle\equiv 0\ (\mathrm{mod}\ \varpi^{m})\) by Lemma 3.4.2, hence \(t\geq m\). On the other hand, the surjection \(W/(\varpi^{t})\to\mathbb{F}\) has a natural divided power structure on its kernel, and the quasi-homomorphism \(x\) can be lifted to \(W/(\varpi^{t})\) because \(\langle x_{\mathrm{crys},z},l\rangle\equiv 0\ (\mathrm{mod}\ \varpi^{t})\), by Lemma 3.4.2; therefore we have a morphism \(\operatorname{Spec}W/(\varpi^{t})\to\widehat{\mathcal{Z}(x)}_{z}\) which also factors through \(\tilde{z}\), hence there is a morphism \(\operatorname{Spec}W/(\varpi^{t})\to\operatorname{Spec}W/(\tilde{z}^{\sharp}(f_ {x,z}))=\operatorname{Spec}W/(\varpi^{m})\), so \(m\geq t\). **Theorem 4.2.4**.: _Let \(x\in\mathbb{V}^{u}\) (resp. \(x\in\mathbb{V}^{o}\)) be a special quasi-homomorphism such that \((x,x)\in\mathcal{O}_{E}\backslash\{0\}\) (resp. \(q_{\mathbb{V}^{o}}(x)\in\mathcal{O}_{F}\backslash\{0\}\)). Let \(z\in\mathcal{N}^{u}(\mathbb{F})\) (resp. 
\(z\in\mathcal{N}^{o}(\mathbb{F})\)) be a point such that \(z\in\mathcal{Z}(x)(\mathbb{F})\) but \(z\notin\mathcal{Z}(\varpi^{-1}x)(\mathbb{F})\)._ \(\bullet\) _If there is no lift of \(z\) to \(z^{\prime}\in\mathcal{Z}(x)(W/\varpi^{2})\), then \(z\in\mathcal{D}(\varpi^{a}x)(\mathbb{F})\) for any integer \(a\geq 0\), and the difference divisor \(\mathcal{D}(\varpi^{a}x)\) is regular but not formally smooth over \(W\) at \(z\)._ \(\bullet\) _If there exists a lift of \(z\) to \(z^{\prime}\in\mathcal{Z}(x)(W/\varpi^{2})\), then the special divisor \(\mathcal{Z}(x)\) is formally smooth over \(W\) at \(z\), \(z\in\mathcal{D}(\varpi^{a}x)(\mathbb{F})\) for any integer \(a\geq 0\), and the difference divisor \(\mathcal{D}(\varpi^{a}x)\) is regular at \(z\). Furthermore, the difference divisor \(\mathcal{D}(x)\) is formally smooth over \(W\) at \(z\), while \(\mathcal{D}(\varpi^{a}x)\) is not formally smooth over \(W\) at \(z\) for \(a\geq 1\)._ Proof of the unitary case.: Let \(\mathbf{D}\coloneqq\mathbb{D}_{0}(X)\) and \(\overline{\mathbf{D}}\coloneqq\mathbb{D}_{1}(X)\) be the two \(W\)-lattices of rank \(n\) corresponding to \(z\) defined in §3.2, and let \(\langle\cdot,\cdot\rangle:\mathbf{D}\times\overline{\mathbf{D}}\to W\) be the pairing induced by the \(p\)-principal polarization. Let \(\tilde{z}\in\mathcal{N}^{u}(W)\) be an arbitrary lift of \(z\) to \(W\); it corresponds to a line \(L^{1}\subset\overline{\mathbf{D}}\) which lifts \(\mathbf{F}^{1}D_{\mathbb{F}}\cap\overline{\mathbf{D}}_{\mathbb{F}}\), and it also corresponds to a continuous homomorphism \(\tilde{z}^{\sharp}:\mathcal{O}_{\mathcal{N}^{u},z}\simeq W[[t_{1},\cdots,t_{n-1}]]\to W\). Let \(l\) be a generator of the line \(L^{1}\) and let \(f_{\varpi^{a}x}\) be the equation defined by the quasi-isogeny \(\varpi^{a}x\) in the ring \(\mathcal{O}_{\mathcal{N}^{u},z}\) for any \(a\geq 0\); we have the following equality of ideals of \(W\) by the formula (3) in Lemma 4.2.3: \[(\tilde{z}^{\sharp}(f_{\varpi^{a}x}))=(\langle\varpi^{a}\cdot x_{\mathrm{crys},z},l\rangle)=(\varpi^{a}\cdot\langle x_{\mathrm{crys},z},l\rangle)=(\varpi^{a}\cdot\tilde{z}^{\sharp}(f_{x})).\] Let \(\mathfrak{m}_{z}=(\varpi,t_{1},\cdots,t_{n-1})\) be the maximal ideal of \(\mathcal{O}_{\mathcal{N}^{u},z}\). We first prove that \(d_{x}=f_{x}\in\mathfrak{m}_{z}\backslash\mathfrak{m}_{z}^{2}\). \(\bullet\) If there is no lift of \(z\) to \(z^{\prime}\in\mathcal{Z}(x)(W/\varpi^{2})\), then \(\nu_{\varpi}(\langle x_{\mathrm{crys},z},l\rangle)=1\), because otherwise \(\nu_{\varpi}(\langle x_{\mathrm{crys},z},l\rangle)\geq 2\) and \(z\) could be lifted to some \(z^{\prime}\in\mathcal{Z}(x)(W/\varpi^{2})\). Therefore \(\nu_{\varpi}(\tilde{z}^{\sharp}(f_{x}))=1\) for any continuous homomorphism \(\tilde{z}^{\sharp}:W[[t_{1},\cdots,t_{n-1}]]\to W\), hence \(f_{x}\equiv(\mathrm{unit})\cdot\varpi\ (\mathrm{mod}\ \mathfrak{m}_{z}^{2})\) by Lemma 4.2.1. Note that \(z\notin\mathcal{Z}(\varpi^{-1}x)\) implies that \(f_{\varpi^{-1}x}=1\) by our convention, then \(d_{x}=f_{x}\equiv(\mathrm{unit})\cdot\varpi\ (\mathrm{mod}\ \mathfrak{m}_{z}^{2})\). \(\bullet\) If there exists a lift of \(z\) to \(z^{\prime}\in\mathcal{Z}(x)(W/\varpi^{2})\), let \(\tilde{z}\in\mathcal{N}^{u}(W)\) be a lift of \(z^{\prime}\) to \(W\) whose corresponding line in \(\overline{\mathbf{D}}\) has a generator \(l\); then \(\nu_{\varpi}(\langle x_{\mathrm{crys},z},l\rangle)\geq 2\).
We must have \(x_{\mathrm{crys},z}\notin\varpi\mathbf{D}\), because otherwise \(\varpi^{-1}x_{\mathrm{crys},z}\in\mathbf{D}\) and \(\nu_{\varpi}(\langle\varpi^{-1}x_{\mathrm{crys},z},l\rangle)\geq 1\), hence \(z\in\mathcal{Z}(\varpi^{-1}x)(\mathbb{F})\) by Lemma 3.4.2, which is a contradiction. The fact that the element \(x_{\mathrm{crys},z}\) does not belong to the lattice \(\varpi\mathbf{D}\) is equivalent to the isometric map \(i_{\mathrm{crys},z}:\mathcal{O}_{E}\cdot x\to\mathbf{D}\) being primitive at \(z\); therefore, by Lemma 4.1.2, the special cycle \(\mathcal{Z}(x)\) is formally smooth over \(W\) of relative dimension \(n-2\) at \(z\), so \(d_{x}=f_{x}\in\mathfrak{m}_{z}\backslash\mathfrak{m}_{z}^{2}\). Let \(\widehat{\mathcal{N}}_{z}^{u}\) (resp. \(\widehat{\mathcal{Z}(\varpi^{a}x)}_{z}\)) be the completion of the formal scheme \(\mathcal{N}^{u}\) (resp. \(\mathcal{Z}(\varpi^{a}x)\)) at \(z\); there exists a Cartesian diagram by (3). Recall that by the definition of difference divisors, we have \(f_{\varpi^{a}x}=\prod\limits_{i=0}^{a}d_{\varpi^{i}x}\) for any integer \(a\geq 0\), hence Lemma 4.2.2 implies that \(d_{\varpi^{a}x}\equiv(\mathrm{unit})\cdot\varpi\ (\mathrm{mod}\ \mathfrak{m}_{z}^{2})\) for any \(a\geq 1\) since \(d_{x}\in\mathfrak{m}_{z}\backslash\mathfrak{m}_{z}^{2}\). Proof of the GSpin case.: The proof is almost identical to the unitary case: we use the same notation as §3.3 and replace the lattice \(\mathbf{D}\) by the lattice \(\mathbf{V}\), replace the pairing \(\langle\cdot,\cdot\rangle:\mathbf{D}\times\overline{\mathbf{D}}\to W\) by the symmetric pairing \((\cdot,\cdot):\mathbf{V}\times\mathbf{V}\to W\) induced by the quadratic form \(q_{\mathbf{V}}\), and the element \(l\in\mathbf{V}\) is taken to be the generator of the isotropic line \(L^{1}\subset\mathbf{V}\) which corresponds to a lift of \(z\) to \(W\) as in §3.3.1. **Corollary 4.2.5**.: _Let \(x\in\mathbb{V}^{u}\) (resp. \(x\in\mathbb{V}^{o}\)) be an element such that \((x,x)\in\mathcal{O}_{E}\backslash\{0\}\) (resp. \(q_{\mathbb{V}^{o}}(x)\in\mathcal{O}_{F}\backslash\{0\}\)); then \(\mathcal{D}(x)(\mathbb{F})=\mathcal{Z}(x)(\mathbb{F})\), and the difference divisor \(\mathcal{D}(x)\) is regular. Moreover, the difference divisor \(\mathcal{D}(x)\) is formally smooth over \(W\) at a point \(z\in\mathcal{Z}(x)(\mathbb{F})\) if and only if \(z\notin\mathcal{Z}(\varpi^{-1}x)(\mathbb{F})\) and there exists a lift of \(z\) to \(z^{\prime}\in\mathcal{Z}(x)(W/\varpi^{2})\)._ Proof.: Obviously \(\mathcal{D}(x)(\mathbb{F})\subset\mathcal{Z}(x)(\mathbb{F})\). Conversely, if \(z\in\mathcal{Z}(x)(\mathbb{F})\), there exists an integer \(a\geq 0\) such that \(z\in\mathcal{Z}(\varpi^{-a}x)(\mathbb{F})\) but \(z\notin\mathcal{Z}(\varpi^{-a-1}x)(\mathbb{F})\). Theorem 4.2.4 then implies that the local equation of \(\mathcal{D}(x)\) at \(z\) is nontrivial and that \(\mathcal{D}(x)\) is regular at \(z\); moreover, \(\mathcal{D}(x)\) is formally smooth over \(W\) at \(z\) if and only if \(z\notin\mathcal{Z}(\varpi^{-1}x)(\mathbb{F})\) and there exists a lift of \(z\) to \(z^{\prime}\in\mathcal{Z}(x)(W/\varpi^{2})\).
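To make the mechanism of Lemma 4.2.1 concrete, here is a small worked instance (our own illustration, not part of the source argument) of Case 2 in the simplest case \(n=2\): take \(R=\mathcal{O}[[t_{1}]]\), \(d=t_{1}\), and \(h\equiv b_{0}\pi+b_{1}t_{1}\ (\mathrm{mod}\ \mathfrak{m}_{R}^{2})\) with \(\nu_{\pi}(b_{0})=\nu_{\pi}(b_{1})=0\). The continuous homomorphism \(f:R\to\mathcal{O}\) with \(f(t_{1})=-b_{1}^{-1}b_{0}\pi\) gives \[f(h)\equiv b_{0}\pi+b_{1}\cdot(-b_{1}^{-1}b_{0}\pi)=0\ (\mathrm{mod}\ \pi^{2}),\] so \(\nu_{\pi}(f(h))\geq 2\), while \(f(d)=f(t_{1})=-b_{1}^{-1}b_{0}\pi\neq 0\), exactly as the lemma requires.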
2307.03890
Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots
High-quality datasets can speed up breakthroughs and reveal potential developing directions in SLAM research. To support the research on corner cases of visual SLAM systems, this paper presents Ground-Challenge: a challenging dataset comprising 36 trajectories with diverse corner cases such as aggressive motion, severe occlusion, changing illumination, few textures, pure rotation, motion blur, wheel suspension, etc. The dataset was collected by a ground robot with multiple sensors including an RGB-D camera, an inertial measurement unit (IMU), a wheel odometer and a 3D LiDAR. All of these sensors were well-calibrated and synchronized, and their data were recorded simultaneously. To evaluate the performance of cutting-edge SLAM systems, we tested them on our dataset and demonstrated that these systems are prone to drift and fail on specific sequences. We will release the full dataset and relevant materials upon paper publication to benefit the research community. For more information, visit our project website at https://github.com/sjtuyinjie/Ground-Challenge.
Jie Yin, Hao Yin, Conghui Liang, Zhengyou Zhang
2023-07-08T03:46:28Z
http://arxiv.org/abs/2307.03890v1
# Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots ###### Abstract High-quality datasets can speed up breakthroughs and reveal potential developing directions in SLAM research. To support the research on corner cases of visual SLAM systems, this paper presents Ground-Challenge: a challenging dataset comprising 36 trajectories with diverse corner cases such as aggressive motion, severe occlusion, changing illumination, few textures, pure rotation, motion blur, wheel suspension, etc. The dataset was collected by a ground robot with multiple sensors including an RGB-D camera, an inertial measurement unit (IMU), a wheel odometer and a 3D LiDAR. All of these sensors were well-calibrated and synchronized, and their data were recorded simultaneously. To evaluate the performance of cutting-edge SLAM systems, we tested them on our dataset and demonstrated that these systems are prone to drift and fail on specific sequences. We will release the full dataset and relevant materials upon paper publication to benefit the research community. For more information, visit our project website at [https://github.com/sjtuyinjie/Ground-Challenge](https://github.com/sjtuyinjie/Ground-Challenge). Data Sets for SLAM, Data Sets for Robotic Vision ## I Introduction Intelligent ground robots have been widely used in industrial production and daily life, such as logistics, cleaning, warehouses, security, and food delivery. Navigation is the fundamental capability these robots need to execute such diverse tasks. To achieve reliable navigation, the visual SLAM (Simultaneous Localization and Mapping) problem has been researched for decades, with quite a few classical methods proposed [1]. A recent developing trend in visual SLAM is low-cost multi-sensor fusion, which has been verified to be a practical approach [2] to enhance the robustness to diverse scenarios. Different sensors can complement each other, maximizing the perceptual awareness of environments. One of the best examples is that visual-inertial odometry (VIO) algorithms can significantly improve the tracking stability and accuracy in aggressive motion and textureless scenarios. While VIO systems have performed well in most cases, [3] has proven that this does not apply to ground vehicles. For generic movement patterns, a VIO system has only four unobservable directions (three for global translation and one for global yaw). However, ground vehicles are restricted to moving in a 2D plane, mostly along a straight line or a circular arc, and thus the IMU is not sufficiently activated. Therefore, the VIO system on a ground robot will suffer from additional DoF unobservability, such as the scale. To address this issue, [4] extends VINS-Mono [5] to incorporate low-frequency wheel-encoder data and keep the scale observable. Similarly, [6] proposes an RGB-D Encoder SLAM system for differential-drive robots. Most recently, [7] proposes an optimization-based visual-inertial-wheel tightly coupled odometry, which claims to work robustly in dark or overexposed conditions. Nonetheless, its performance has not been tested on any public dataset with ground truth trajectories. We believe that progress in SLAM, like in the AI field, is highly data-driven [8]. Although extensive public datasets are available for evaluating different SLAM algorithms, most of these datasets are outdated and do not challenge cutting-edge SLAM algorithms.
In our opinion, datasets focusing on challenging cases can more efficiently reveal the defects and limitations of existing algorithms. We notice that corner case detection in autonomous driving receives extensive attention from researchers [9][10] because such cases could easily cause the navigation system to drift. Similarly, once the localization module of a robot fails, it might cause industrial accidents and even pose potential threats to human safety. Nonetheless, to our knowledge, there is currently not much literature discussing the corner cases of robot navigation, which is not conducive to the safety of real-world robot applications. To fill this gap, we present a novel SLAM dataset for ground robots, which aims to challenge existing cutting-edge SLAM systems with corner cases and thus promote the progress of multi-sensor fusion SLAM algorithms. The challenges of our dataset lie in two areas: specific movement patterns and sensor failures, which will be elaborated in subsequent sections. Some scenarios covered in our dataset are visualized in Figure 1. Fig. 1: Diverse scenarios included in our dataset: (a) a room under the motion capture system; (b) a richly-textured and well-lit office; (c) a wall that lacks texture; (d) a hall with smooth floors; (e) a narrow corridor; (f) an outdoor slope; (g) aisles with carpet; (h) hanging the robot on a bracket. Our major contributions are summarized as follows: * We collect a novel visual SLAM dataset for ground robots with a rich pool of sensors in diverse environments both indoors and outdoors. Particularly, the dataset covers a series of challenging sequences including sensor failures and specific movement patterns. * State-of-the-art SLAM algorithms of different settings are tested on our benchmark. The results indicate that these systems are not robust enough for situations such as sensor failures. * To facilitate the research on corner cases of robot navigation, we will release the full dataset with ground truth trajectories and the configuration file of each tested algorithm upon paper publication. ## II Related Works ### _SLAM Datasets for Ground Robots_ Most existing SLAM datasets are collected by UAVs [11] or cars [12], but only a few are targeted at ground robots. For instance, Rawseeds [13] and UTIAS [14] provide RGB images only, thus making them unsuitable for evaluating multi-sensor fusion systems. The Rosario dataset [15] is rich in sensor variety, yet is specifically designed for agricultural environments. M2DGR [2] captures diverse indoor and outdoor scenarios, including some challenging scenes like elevators and darkrooms, but doesn't contain wheel odometer information, which is essential for multi-sensor fusion SLAM algorithms due to its low cost and high precision. OpenLORIS [16] offers rich sensor types in visually challenging scenarios such as highly dynamic markets and poorly exposed corridors, but wheel challenges or motion challenges are not included. ### _Corner Cases_ Corner cases, i.e., extreme and non-predictable situations, are a popular research topic in autonomous driving [20]. Although infrequent, these cases can potentially threaten the security and reliability of autonomous navigation systems. Corner cases exist in robot navigation tasks as well.
To address such challenging scenarios, researchers have proposed various methods, such as RGB-D SLAM [21] and DS-SLAM [22] to handle dynamic environments, and GVINS [23] to deal with degenerate cases including low-speed movement, fewer than four visible satellites, and GNSS-denied environments. Additionally, [24] proves that their method is robust in aggressive motions and against visually texture-less white walls. Nonetheless, we note that there are still plenty of corner cases that tend to be overlooked, such as wheel slippage, motion blur, and complete visual occlusion. There is a lack of SLAM datasets specifically designed for studying these corner cases, which is a gap yet to be filled. To sum up, it is urgent and critical to collect a novel SLAM dataset with rich sensor types, precise calibration, and sufficient challenge to support studies on corner cases, particularly sensor failures. ## III The Ground-Challenge DATASET ### _Sensor setup_ We construct a ground robot for data collection, and the sensor locations on the robot are shown in Figure 2. Fig. 2: Our ground robot for data collection. Red is the x-axis, green is the y-axis, and blue is the z-axis. The chassis is equipped with a front-view VI-Sensor (Visual-Inertial Sensor) that captures RGB and depth images along with 6-axis IMU measurements. Driven by two driving wheels providing odometer information and four assisting wheels, the robot also has a high-precision 9-axis Xsens IMU and a 16-beam 3D LiDAR. The ground truth trajectories and point clouds are generated by the Velodyne LiDAR and the Xsens IMU using Fast-LIO2 [25], a state-of-the-art LiDAR-based SLAM system. To evaluate its performance, we compared the high-precision trajectories generated by a motion capture system with 16 infrared cameras to those generated by Fast-LIO2. The experiment revealed that Fast-LIO2 can reach a positioning accuracy of 3 cm in a small-scale (15 m x 15 m) indoor room. Additionally, as reported in [25], Fast-LIO2 can achieve less than 0.1 m end-to-end error in an outdoor trajectory spanning 1000 meters. Thus, considering that it is difficult for visually-based SLAM algorithms to achieve similar accuracy in challenging scenarios, we use the result of Fast-LIO2 as the pseudo-ground-truth trajectory. ### _Synchronization and Calibration_ We capture all the data using the rosbag tool in the Robot Operating System (ROS). The RGB camera and the 6-axis IMU embedded in the Realsense D435I are hard-synchronized, while the depth images are pixel-by-pixel aligned to the RGB images. The 3D LiDAR and the 9-axis IMU are software-synchronized by triggering data capture at the same instant. To calculate the camera intrinsics of the pinhole cameras, we use the MATLAB Camera Calibration Toolbox. To calibrate the internal parameters of the IMU, we use the toolbox from [28], which estimates the white noise and random walk of both the gyroscope and accelerometer measurements. We choose the IMU frame as the reference to calibrate the extrinsic parameters (relative poses) between sensors, and employ the toolbox from [29] for calibrating the extrinsic parameters between the cameras and the IMU. ### _Data collection_ We provide an overview of our dataset in Table III. All data was captured using the rosbag tool within ROS. The recording process is as follows: First, we recorded Office and Room sequences, where the robot moves slowly in a well-lit and textured office or room, respectively, to test the performance of different algorithms in normal situations.
Subsequently, we designed a series of corner case experiments covering three aspects: visual challenge, wheel odometer challenge, and particular movement patterns, which are presented as follows: #### III-C1 Visual Challenge In our experiments, we manipulate the robot to move in a room with poor illumination (Darkroom sequences), back and forth in front of walls lacking texture (Wall sequences), and through scenarios of varying degrees of occlusion (Occlusion sequences). Figure 3 (a) shows sequences Occlusion1\(\sim\)2, which involve a person walking in front of the robot and causing intermittent partial occlusion. Figure 3 (b) displays sequence Occlusion3, in which the camera is covered with a palm repeatedly. In sequence Occlusion4 (Figure 3 (c)), a piece of black tape is attached to the camera's lens to completely block its view, disabling feature extraction and matching for visual SLAM. Furthermore, Motionblur sequences are generated by rapidly translating and rotating the robot, creating motion blur for the cameras (Figure 3 (d)). Fig. 3: (a) Moving feet. (b) Occluding the camera with a palm. (c) Complete occlusion. (d) Motion blur. #### III-C2 Wheel Odometer Challenge The Hall and Loop sequences are collected in a hall with smooth ground and a heavily carpeted aisle loop, respectively, where the wheels slip significantly. Moreover, we record Roughroad sequences to test the performance of the localization algorithms on rough roads. #### III-C3 Particular Moving Patterns In Sequences Corridor1 and Corridor2, the robot moves forward in a zigzag shape and straight forward, respectively. In the zigzag route, motion blur and less overlap between adjacent image frames will lead to errors in feature matching. In the Rotation sequence, the robot only rotates and hardly translates, which makes it difficult for vision-based algorithms to estimate the depth of feature points by triangulation. In the Static sequences, the robot stands still on a bracket, and we control its wheels to move in different directions through the handle. This experiment aims to test whether SLAM systems coupled with the wheel odometer can work well when the robot's wheels are suspended. Finally, we operate the robot from one flat surface to another, passing through a slope. In this experiment, since the wheel odometer only provides two-dimensional speed observations, it can be misleading for estimating three-dimensional trajectories. ## IV Evaluation The features of all the sequences are described on our project website. We evaluated several SLAM systems with different sensor configurations on twelve representative sequences from our dataset. The tested algorithms are ORB-SLAM3 [30], an optimization-based SLAM system; VINS-Mono [5], one of the state-of-the-art monocular visual-inertial systems; VINS-RGBD [26], a fusion algorithm of RGB-D and IMU information based on the VINS-Mono [5] framework; and VIW-Fusion [7], a tightly-coupled visual-inertial-wheel system featuring online extrinsic calibration and wheel-aided initialization. Also, we use an EKF algorithm [27] for the fusion of the IMU and the wheel odometer. The EVO tool [31] was used to align all the estimated trajectories with the ground truth trajectories to obtain the ATE RMSE [17]. The quantitative results are shown in Table IV, with the estimated trajectories in 2D plotted in Figure 4.
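For readers unfamiliar with this metric, the sketch below shows the standard computation behind EVO-style evaluation: a closed-form (Umeyama) alignment of the estimated trajectory to the ground truth, followed by the root-mean-square of the absolute trajectory error. This is our own minimal illustration, not EVO's actual implementation; the optional scale factor is only needed for methods with unobservable scale.

```python
import numpy as np

def umeyama(est, gt, with_scale=False):
    """Closed-form alignment (rotation R, translation t, optional scale s)
    of an estimated trajectory to ground truth; est, gt: (N, 3) arrays."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    U, D, Vt = np.linalg.svd(G.T @ E / len(est))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # enforce a proper rotation
    R = U @ S @ Vt
    s = (D * np.diag(S)).sum() / E.var(0).sum() if with_scale else 1.0
    t = mu_g - s * R @ mu_e
    return s, R, t

def ate_rmse(est, gt, with_scale=False):
    """RMSE of the absolute trajectory error after alignment."""
    s, R, t = umeyama(est, gt, with_scale)
    aligned = (s * (R @ est.T)).T + t
    return float(np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean()))
```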
Fig. 4: Estimated and ground-truth (GT) trajectories of 12 sample sequences, visualized on the x-y plane. Since most of the selected sequences are highly challenging (even with sharp turns), ORB-SLAM3 (both the monocular-inertial and RGBD-inertial versions) performed poorly on most of our test sequences, with frequent tracking failures (less than 50% of frames successfully tracked), initialization failures, or scale drift. In contrast, SLAM algorithms with multi-sensor fusion (like VIW-Fusion [7]) achieved better localization results but failed in some specific scenarios as well. We discuss the experiment results in detail as follows: _Normal Situation:_ The ATE RMSE results on Sequence Office3 indicate that existing localization methods can perform well when the motion mode matches the assumptions of these algorithms and all the sensors work well. _Vision Challenge:_ In Sequences Darkroom2 and Motionblur3, VINS-Mono [5] and VINS-RGBD [26] drift a lot due to visual failures, while wheel-odometer-based algorithms work more robustly in these cases. In Sequence Occlusion4, all the vision-based methods, including VIW-Fusion [7], fail to initialize because of poor feature extraction. This finding indicates that VIW-Fusion [7] has not been adequately designed to handle adverse conditions. A more prudent strategy may be to combine the wheel odometer and IMU to output a trajectory when a visual sensor failure is detected. _Wheel Odometer Challenge:_ In the sequences Roughroad3 and Slope1, vision-based systems perform worse than wheel-odometer-based algorithms due to inaccurate scale estimation in aggressive motion. In Sequence Hall1, VINS-Mono [5] and VINS-RGBD [26] drift significantly due to ground reflection and faraway feature points. Here, VIW-Fusion [7] maintains satisfactory positioning performance even with slight wheel slippage, demonstrating the advantages and necessity of multi-sensor fusion in complex scenarios. However, when the wheels slip more severely, as in Sequence Loop2, the significant deviation caused by the wheel odometer increases the localization error of the estimated trajectories. This can be attributed to two main reasons: current algorithms lack the ability to detect wheel slippage, and the angular velocity provided by the wheel odometer is not accurate, leading to long-term divergence of the estimated trajectory. To reduce the accumulation of errors, it is suggested that the IMU's angular velocity measurement be used instead of the wheel odometer's. _Particular Movement Patterns:_ In Sequence Corridor1, the zigzag movement of the robot not only causes feature extraction to fail but also leads to severe wheel slippage. Therefore, none of the tested algorithms can accurately estimate the trajectory. In Sequence Rotation1, pure rotation causes severe errors in depth estimation by VINS-Mono's triangulation, while the remaining tested systems perform well thanks to measurements from other sensors. Finally, in Sequence Static1, VIO systems cannot be initialized successfully due to the lack of IMU excitation. Since the wheels are still moving after suspension, the wheel-odometer-based methods mistakenly judge the robot to be in motion. In summary, VINS-Mono [5] is most likely to generate catastrophic localization results in corner cases, and VINS-RGBD [26] can also inevitably fail when severe camera failures occur. We have noticed that the wheel odometer alone can achieve good results in most situations, except under severe wheel slippage.
Integrating the IMU and the wheel odometer through the EKF [27] can achieve higher accuracy than the raw odometer. Nonetheless, the trajectory of the EKF can shake violently in the initialization phase due to inaccuracy in the initial covariance estimation (this part was manually eliminated in our experiment). VIW-Fusion [7] achieves satisfactory accuracy and robustness in most sequences, but its initialization under visual failure needs improvement. Furthermore, it lacks consideration for wheel slippage, and its adopted dead-reckoning model will diverge over a long trajectory due to inaccurate angular velocity estimates. The experiments conducted demonstrate the validity and value of our dataset as a benchmark for existing SLAM systems. The results further suggest that there is still much room for improvement in current cutting-edge multi-sensor fusion algorithms for real-world applications. Sensor failures, such as complete occlusion and wheel suspension, can be fatal for single-sensor-based methods; however, multi-sensor fusion systems should be designed to be more robust in these cases. For instance, we posit that a reliable visual-IMU-wheel system should be able to explicitly identify scenarios where visual observations are inaccurate and respond accordingly (e.g., disable visual information and rely only on the wheel odometer and IMU). Nevertheless, to our knowledge, corner case identification and troubleshooting have been scarcely addressed in prior work. Therefore, we provide this dataset to support relevant research. ## V Conclusion We present Ground-Challenge, a novel ground robot dataset to encourage breakthroughs in multi-sensor fusion SLAM algorithms. Specifically, we have crafted a series of corner case experiments, including sensor failures in diverse environments, to challenge current cutting-edge SLAM systems. We have tested these systems on our dataset and analyzed their limitations in various scenarios, thus providing potential developing directions for SLAM. We are committed to continually updating our benchmark dataset. Specifically, we will mount 2D and 3D LiDARs on the robot, design experiments to induce corner cases, and utilize higher-precision equipment such as motion capture systems to ensure accurate ground truth for LiDAR SLAM in our future work. **Acknowledgement** We thank Tencent Robotics X Lab for supporting this work.
2307.12745
Concept-based explainability for an EEG transformer model
Deep learning models are complex due to their size, structure, and inherent randomness in training procedures. Additional complexity arises from the selection of datasets and inductive biases. Addressing these challenges for explainability, Kim et al. (2018) introduced Concept Activation Vectors (CAVs), which aim to understand deep models' internal states in terms of human-aligned concepts. These concepts correspond to directions in latent space, identified using linear discriminants. Although this method was first applied to image classification, it was later adapted to other domains, including natural language processing. In this work, we attempt to apply the method to electroencephalogram (EEG) data for explainability in Kostas et al.'s BENDR (2021), a large-scale transformer model. A crucial part of this endeavor involves defining the explanatory concepts and selecting relevant datasets to ground concepts in the latent space. Our focus is on two mechanisms for EEG concept formation: the use of externally labeled EEG datasets, and the application of anatomically defined concepts. The former approach is a straightforward generalization of methods used in image classification, while the latter is novel and specific to EEG. We present evidence that both approaches to concept formation yield valuable insights into the representations learned by deep EEG models.
Anders Gjølbye, William Lehn-Schiøler, Áshildur Jónsdóttir, Bergdís Arnardóttir, Lars Kai Hansen
2023-07-24T12:36:05Z
http://arxiv.org/abs/2307.12745v2
# Concept-based Explainability for an EEG Transformer Model ###### Abstract Deep learning models are complex due to their size, structure, and inherent randomness in training procedures. Additional complexity arises from the selection of datasets and inductive biases. Addressing these challenges for explainability, Kim et al. (2018) introduced Concept Activation Vectors (CAVs), which aim to understand deep models' internal states in terms of human-aligned concepts. These concepts correspond to directions in latent space, identified using linear discriminants. Although this method was first applied to image classification, it was later adapted to other domains, including natural language processing. In this work, we attempt to apply the method to electroencephalogram (EEG) data for explainability in Kostas et al.'s BENDR (2021), a large-scale transformer model. A crucial part of this endeavor involves defining the explanatory concepts and selecting relevant datasets to ground concepts in the latent space. Our focus is on two mechanisms for EEG concept formation: the use of externally labeled EEG datasets, and the application of anatomically defined concepts. The former approach is a straightforward generalization of methods used in image classification, while the latter is novel and specific to EEG. We present evidence that both approaches to concept formation yield valuable insights into the representations learned by deep EEG models. Anders Gjolbye Madsen\({}^{\star\dagger}\) William Theodor Lehn-Schioler\({}^{\star\dagger}\) Ashildur Jonsdottir\({}^{\star}\) Bergdis Arnardottir\({}^{\star}\) Lars Kai Hansen\({}^{\star}\) \({}^{\star}\)Technical University of Denmark, Department of Applied Mathematics and Computer Science, 2800 Kgs. Lyngby, Denmark Footnote \({}^{\dagger}\): This work is supported by the Pioneer Centre for AI, DNRF grant number P1, the Novo Nordisk Foundation grant NNF22OC0076907 "Cognitive spaces - Next generation explainability", and travel grants from the Danish Data Science Academy awarded to AGM and WLS. Explainable AI, EEG Concepts, TCAV, BENDR ## 1 Introduction We investigate representations of electroencephalogram (EEG) data obtained by self-supervised learning methods. Self-supervision is motivated by the lack of labeling in large-scale EEG datasets, as labeling is both time-consuming and requires highly specialised EEG expertise.
Self-supervised models, such as BERT-inspired Neural Data Representations (BENDR) [1], have the potential to overcome this challenge by learning informative representations from raw, unlabeled data. Such models can subsequently be fine-tuned for downstream classification tasks. We apply the Testing Concept Activation Vectors (TCAV) approach of Kim et al. [2], an interpretability method introduced in 2018, to BENDR-based models, to provide insights into their structure and decision-making processes. See Figure 1 for a conceptual overview. A better understanding of EEG transformer models using TCAV could support the use of these models as diagnostic support tools for identifying EEG abnormalities, such as seizures. However, the question that arises is, what constitutes human-friendly concepts in this context? To address this, we present the following scientific contributions: * The first TCAV workflows for EEG data, proposing concepts based on human-annotated data as well as concepts defined by human anatomy and EEG frequency ranges. * Sanity checks for TCAV to ensure valid explanations in simple EEG settings. * Two practical applications: seizure prediction and brain-computer interfacing. All code used in this research, along with references to the datasets, has been made publicly accessible for validation and replication1. Footnote 1: [https://github.com/AndersGHadsen/TCAV-BENDR](https://github.com/AndersGHadsen/TCAV-BENDR) ## 2 Theory ### BERT-inspired Neural Data Representations BENDR [1] is inspired by language modeling techniques that have found success also outside text analysis, in self-supervised end-to-end speech recognition and image recognition. It aims to develop EEG models for better brain-computer interface (BCI) classification, diagnosis support, and other EEG-based analyses. Importantly, since the approach is based on self-supervision, it can learn from any EEG data without requiring labels. The main goal of BENDR is to create self-supervised representations that are robust to context boundaries such as datasets and human subjects. The approach is expected to be transferable to future unseen EEG datasets recorded from unseen subjects, different hardware, and different tasks. It can be used as-is or fine-tuned for various downstream EEG classification tasks. The architecture is based on wav2vec 2.0 [3], developed for speech processing, and consists of two stages. The first stage takes raw data and down-samples it using a stack of short-receptive-field 1D convolutions, resulting in a sequence of vectors called BENDR. The second stage uses a transformer encoder [4] to map BENDR to a new sequence related to the target task. Down-sampling is achieved through strides, and the transformer follows the standard implementation with some modifications. The entire sequence is then classified, with a fixed token implemented as the first input for downstream tasks [5]. BENDR differs from the speech-specific architecture in two ways: (1) BENDR is not quantized for pre-training targets, and (2) it has many incoming channels, unlike wav2vec 2.0, which uses quantization and is based on a single channel of raw audio. The 1D convolutions are preserved in BENDR to reduce complexity. We note that BENDR down-samples at a lower factor than wav2vec 2.0, here resulting in an effective sampling rate of \(\approx 2.67\) Hz, equivalent to a feature window of \(\approx 375\) ms.
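To make the down-sampling arithmetic concrete, here is a minimal PyTorch sketch of such a convolutional stage. It is our own illustration, not the released BENDR code: the text fixes the input channel count (20), the feature dimension (512), and the overall stride of 96 (3 x 2^5, turning 256 Hz input into ~2.67 Hz features), while the per-block kernel widths and the group-norm configuration are assumptions.

```python
import torch
import torch.nn as nn

class ConvEncoderSketch(nn.Module):
    """Six conv blocks (temporal conv + group norm + GELU), overall
    stride 3 * 2**5 = 96: 256 Hz input -> ~2.67 Hz feature sequence."""
    def __init__(self, in_channels=20, width=512):
        super().__init__()
        layers, ch = [], in_channels
        for stride in (3, 2, 2, 2, 2, 2):   # kernel = stride is an assumption
            layers += [nn.Conv1d(ch, width, kernel_size=stride, stride=stride),
                       nn.GroupNorm(width // 2, width),
                       nn.GELU()]
            ch = width
        self.encoder = nn.Sequential(*layers)

    def forward(self, x):                    # x: (batch, 20, samples)
        return self.encoder(x)               # (batch, 512, ~samples // 96)

bendr = ConvEncoderSketch()(torch.randn(1, 20, 256 * 4))  # one 4 s window
print(bendr.shape)                           # torch.Size([1, 512, 10])
```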
### Linear Head BENDR For downstream fine-tuning, we use a version where the pre-trained transformer modules are ignored, such that the pre-trained convolutional BENDR stage is used as the representation, see [1]. A consistent-length representation is created by dividing the BENDRs into four contiguous sub-sequences, averaging each sub-sequence, and concatenating them. A new linear layer with softmax activation is added to classify the downstream targets based on this concatenated vector of averaged BENDR. We call this the Linear Head BENDR (LHB) model, and the structure is illustrated in Figure 2. The final LHB architecture consists of the following components: 1. **Feature encoder:** Fine-tunes the pre-trained parameters and uses six convolution blocks, each containing a temporal convolution, group normalization, and a GELU activation function, to produce a BENDR of length 512. 2. **Encoding augment:** Involves masking and contextualizing the BENDR, with 10% of the BENDR masked and 10% of the channels dropped, while relative positional embeddings from the pre-trained task are added to the BENDR and further preprocessed. 3. **Summarizer:** Applies adaptive average pooling to create four contiguous sub-sequences, averaging each sub-sequence to ensure the model's independence from the input length of EEG recordings. 4. **Extended classifier:** Flattens the four sub-sequences, passes them through a fully connected layer to reduce their dimension, applies a dropout layer, uses a ReLU activation function, and normalizes the output using batch normalization. 5. **Classifier:** Consists of a linear layer with a softmax activation function, which performs the classification task. ### Testing with Concept Activation Vectors (TCAV) Testing with Concept Activation Vectors (TCAV) is a technique used to quantify the degree to which layers of neural networks align with human-defined concepts [2]. The method is general in the sense that it is confined neither to the particular structure of the network nor to the data type. In its essence, TCAV can be broken down into five steps. First, the process involves defining human-aligned concepts and representing them in the data. Alongside these, data from the target class must also be present for evaluation purposes. Furthermore, to establish the directions of the concept activation vectors in the latent space, it is necessary to have a collection of concept-negative or random examples. Second, the layer activations of the concept input and the random input, respectively, are collected and separated by training a binary linear classifier. Then, the concept activation vector \(\mathbf{v}_{C}^{l}\) is defined as the normal vector to the hyperplane that separates the two classes (concept vs. random). Third, for a layer \(l\) in the network, the directional derivative for the target class \(k\) along the learned activation vector for concept \(C\) is used to calculate how sensitive the prediction of the network is to changes in the input data in the direction of \(C\). We can quantify the sensitivity by \[S_{C,k,l}(\mathbf{x})=\nabla h_{l,k}(f_{l}(\mathbf{x}))\cdot\mathbf{v}_{C}^{l}, \tag{1}\] where \(h_{l,k}\) is defined as the function that maps activations in layer \(l\) through the remaining network and predicts class \(k\).
Fourth, computing the sensitivity for several target examples, \(\mathbf{x}\in X_{k}\), the TCAV score is defined as the ratio of examples that have positive sensitivity, i.e., \[\text{TCAV}_{C,k,l}=\frac{\left|\{\mathbf{x}\in X_{k}:S_{C,k,l}(\mathbf{x})>0\}\right|}{\left|X_{k}\right|}. \tag{2}\] In this way, concept activation vectors that are positively aligned with target activations have a TCAV score close to 1, and concept activation vectors that are negatively aligned with target activations have a TCAV score close to 0. Fifth and finally, collecting samples of TCAV scores over several training runs, a suitable statistical test is used to assess the statistical significance of concept activation vectors aligning with the activation of target examples. The null hypothesis of the test is that half of the examples have positive sensitivity and the other half have negative or zero sensitivity, i.e., \[H_{0}:\text{TCAV}_{C,k,l}=0.5. \tag{3}\] Concepts \(C\) for which the null hypothesis is rejected thus relate to the target class prediction, and may bring positive or negative evidence for the given target \(k\). ### Source localization Source localization for EEG data involves mapping electrical signals recorded on the scalp surface to corresponding regions on the cortical surface of the brain. This process uses a head model and the EEG data collected from electrodes placed on the scalp. The reconstruction is a grid of dipolar sources. The solution to this ill-posed problem is called the lead field, and there exist many different ways to obtain this solution. Figure 1: An overview of using the TCAV method for EEG classification tasks with the Linear Head BENDR model: (1) Explanatory concepts are defined as either event-based EEG labels or frequency-based cortical activity, (2) Layer activations are extracted from a fine-tuned Linear Head BENDR, (3) Concept Activation Vectors (CAVs) are defined as the normal vector to the hyperplane separating layer activations for concept data from those of random examples, and (4) The sensitivity of class data for a specific bottleneck of a concept is defined as the directional derivative in the direction of the respective CAV.
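As a concrete (and deliberately minimal) numerical sketch of steps two through five, the snippet below learns a CAV with scikit-learn's SGDClassifier — matching the regularized linear model with SGD learning used in §3.4 — computes the TCAV score of Eq. (2) from precomputed gradients, and tests significance with the Mann-Whitney U test also described in §3.4. It is our own illustration rather than the authors' released code; the gradient array is assumed to be computed elsewhere (e.g., by automatic differentiation of \(h_{l,k}\)).

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import SGDClassifier

def learn_cav(concept_acts, random_acts, alpha=0.1):
    """Step two: fit a regularized linear boundary between concept and
    random activations; the CAV is the (normalized) normal vector."""
    X = np.vstack([concept_acts, random_acts])
    y = np.r_[np.ones(len(concept_acts)), np.zeros(len(random_acts))]
    clf = SGDClassifier(alpha=alpha, max_iter=1000, tol=1e-3).fit(X, y)
    v = clf.coef_.ravel()
    return v / np.linalg.norm(v)

def tcav_score(target_grads, cav):
    """Steps three and four (Eqs. (1)-(2)): fraction of target examples
    whose directional derivative along the CAV is positive.
    target_grads[i] is the gradient of h_{l,k} at example i's activation."""
    return float(np.mean(target_grads @ cav > 0))

def significant(concept_scores, random_scores, n_tests, level=0.05):
    """Step five (cf. Sec. 3.4): Mann-Whitney U test of concept TCAV
    scores against random-concept scores, with Bonferroni correction."""
    _, p = mannwhitneyu(concept_scores, random_scores)
    return min(1.0, p * n_tests) < level
```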
The TUH EEG Artifact Corpus, a labeled subset of the TUH EEG Corpus, includes annotations for five distinct artifacts including eye movement artifact (_eyem_). The TUEV is a subset of the TUH EEG Corpus and includes annotations of event-based EEG segments. There are numerous categories, but we primarily focus on five key classes: (1) technical artifacts (_artf_), (2) background (_bckg_), (3) generalized periodic epileptiform discharge (_gpred_), (4) periodic lateralized epileptiform discharge (_pled_), and (5) spike and slow wave (_spsw_). The TUSZ contains EEG signals with manually annotated data for seizure events. The MMIDB EEG dataset consists of data from 109 participants who are performing or imagining specific motor tasks; our main interest is the moments when subjects either close or imagine closing their left or right fist following a visual cue. We are excluding participants S088, S090, S092, and S100 due to missing data, resulting in 105 participants. In the construction of brain anatomy concepts, it is imperative to obtain an extensive collection of resting-state EEG data. Due to the limited availability of public datasets with the requisite size and reliability, we utilized The TUH EEG Corpus and source localization to develop a dedicated anatomically labeled resting-state dataset. A set of predefined criteria were employed, including the number of EEG channels, minimum duration, minimum sampling frequency, scaling, and the exclusion of extreme values, which led to the elimination of approximately 90% of the initial EEG recordings. Following this, a manual examination of a part of the remaining data was performed, ultimately yielding 200 human-verified resting-state EEG recordings, corresponding to an aggregate of about 70 hours of EEG data. In the process of downstream fine-tuning and concept formation, we employ 19 EEG channels, namely _Fp1_, _Fp2_, _F7_, _F3_, _Fz_, _F4_, _F8_, _T7_, _C3_, _Cz_, _C4_, _T8_, _T5_, _P3_, _Pz_, _P4_, _T6_, _O1_, and _O2_ (see the MNE documentation [6] for more information). These channels originate from the initial pre-training of BENDR using The TUH EEG Corpus. In instances where the datasets lack these channels, we establish the following mapping: \(T3\mapsto T7\), \(T4\mapsto T8\), \(P7\mapsto T5\), and \(P8\mapsto T6\). We also resample the corresponding EEG data to a 256 Hz sampling frequency and apply a high-pass FIRWIN filter with a 0.1 Hz cutoff, a low-pass FIRWIN filter with a 100.0 Hz cutoff, and a 60 Hz FIRWIN notch filter to eliminate powerline noise. In situations where preprocessing cannot be performed, the EEG is excluded. Finally, we scale each trial to the range \([-1,1]\) and append a relative amplitude channel, see [1], resulting in a total of 20 channels. ### Training Pre-training of BENDR is based on the large set of unlabelled EEG data from The TUH EEG Corpus. The pre-training procedure is largely based on wav2vec 2.0 and involves two main stages: The convolutional stage and the transformer stage. The convolutional stage generates a sequence of representations (BENDRs) that summarize the original input. This sequence is then fed into the transformer stage, which adjusts its output to be most similar to the encoded representation at each position. The layers affected during pre-training are the feature encoder and the transformer. Kostas et al. [1] kindly made the pre-trained weights of the encoder and contextualizer publicly available, and this is the model that we have employed here. 
The LHB model architecture described in Figure 2 is used for downstream fine-tuning. Figure 2: The Linear Head BENDR (LHB) model architecture. The model consists of (1) a Feature encoder of six convolutional blocks, (2) an Encoding augment comprised of masking and a convolutional contextualizer, (3) a Summarizer using adaptive average pooling, (4) an Extended classifier for dimensionality reduction, and (5) a Classifier. We aim to optimize the model for two distinct binary classification objectives. First, the model is fine-tuned for the differentiation between _seizure_ and _non-seizure_ events, using the TUSZ Corpus with 60-second window segments. The hyperparameters are determined using Bayesian optimization to maximize the validation \(F_{1}\)-score. The fine-tuning employs a batch size of 80, a learning rate of \(1\times 10^{-4}\), and \(30\) epochs. This results in a model with a balanced accuracy of \(0.73\pm 0.07\). In our second fine-tuning example, the model is adapted for the differentiation between _Left Fist Movement_ and _Right Fist Movement_, using the MMIDB EEG Dataset with 4-second window segments. We use both the imagined and performed task data from the 105 participants. We train the model for 7 epochs with a batch size of \(4\) and a learning rate of \(1\times 10^{-5}\). The hyperparameters were chosen based on the best validation balanced accuracy from leave-one-subject-out cross-validation, where the model was trained for 50 epochs and the best model was retained. The specific hyperparameter configuration aligns with the optimal hyperparameters found by the original authors [1], and we find a similar balanced accuracy of \(0.83\pm 0.02\).
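For orientation, the fine-tuning described above amounts to a standard supervised loop; the sketch below shows its shape. `LinearHeadBENDR` and the data loader are hypothetical stand-ins (the real implementation is in the released repository); only the TUSZ hyperparameters (batch size 80, learning rate \(1\times 10^{-4}\), 30 epochs) come from the text, and the choice of Adam is our assumption.

```python
import torch
from torch import nn, optim

def finetune(model: nn.Module, train_loader, device="cuda",
             lr=1e-4, epochs=30):
    """Schematic fine-tuning loop with the TUSZ settings from the text
    (batch size 80 is assumed to be set in the hypothetical loader)."""
    model = model.to(device).train()
    opt = optim.Adam(model.parameters(), lr=lr)   # optimizer choice assumed
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:                 # x: (80, 20, T), y: (80,)
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
    return model
```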
To define anatomical concepts, EEG data is segmented into 4-second windows, with the first and last 5 seconds of each sequence excluded to minimize artifact contamination. The data is then divided into five frequency bands with a FIRWIN bandpass filter: _delta_ (0.5-4Hz), _theta_ (4-8Hz), _alpha_ (8-12Hz), _beta_ (12-30Hz), and _gamma_ (30-70Hz). The inverse operator for the forward model is computed using eLORETA [6] via the MNE Python library. Since the spatial resolution is not critical, minimal regularization of \(1\times 10^{-4}\) is applied. Using the combined version of the multi-modal parcellation of the human cerebral cortex, HCPMMP1 [10] and the inverse operator, the average power of electrical activity in 23 cortical areas for each hemisphere is determined. Our interest lies in cortical areas exhibiting the greatest deviation from typical activity within a specific frequency band. However, cortical areas are not equidistant from the scalp or consistent in baseline activity across bands. To normalize for these differences in the distribution of cortical activity, we compute the mean and standard deviation of the power in each cortical area for each frequency band on an EEG session level, which will be employed in various ways. We call these the baseline mean and the baseline standard deviation. We explore possible approaches to how the baseline means and standard deviation for each EEG session could be used to normalize the power of 4-second windows within that session. The options include dividing by the baseline standard deviation to account for scalp source variation, subtracting or dividing by the baseline mean to identify the cortical area with the greatest deviation, taking the absolute difference or not, and selecting a single cortical area across all frequency bands or only within a specific band. Identifying a single frequency and cortical area for each 4-second window of EEG data is a challenging task without prior work to guide the process, and each method presents its own limitations. We specifically look for _alpha_ desynchronization in the cerebral cortex during imagined or actual movement and closed or open eyes in the MMIDB EEG dataset, i.e., that _alpha_ activity in cortical areas decreases when activated. Using a paired t-test to examine the presence of lateralization in cortical activities for different methods, we found that the preferred approach is to choose the area which maximizes the absolute difference between the given time window's power and the baseline mean, divided by the baseline standard deviation, only within specific frequency bands. **Random Concepts:** Construction of CAVs calls for data examples that are considered random with respect to the concept of interest. In all experiments, random concepts consisting of 4-second or 60-second windows were randomly sampled from resting-state data obtained from the subset of the TUH EEG Corpus and unannotated sections of the TUAR dataset. ### Experiments We investigate two approaches for defining explanatory concepts in EEG data. The TCAV method is then employed to evaluate whether the LHB model uses specifically defined human-aligned concepts of EEG data. For all concepts, the resulting activation vectors for all five bottlenecks in the LHB model architecture are examined to determine if they significantly align with the latent representations of class data in the model. We conduct the following experiments: 1. 
**Sanity Checks:** We verify that the TCAV method and the construction of concepts function as intended through a series of sanity checks when classifying _Left Fist Movement_. 2. **Event-based Concepts:** We assess whether the LHB model leverages specific EEG events in the classification of _seizure_. 3. **Anatomy/Frequency-based Concepts:** We investigate if the LHB model employs lateralization of cortical activity in the _alpha_ band for classifying _Left Fist Movement_. The chosen cortical areas are based on their relevance to the classification task. In the experiments, we use the TCAV method with a regularized linear model and stochastic gradient descent (SGD) learning, setting the regularization parameter \(\alpha=0.1\), to learn the decision boundary between explanatory and random concepts. We employ 50 random concepts and a maximum of 40 examples per concept. These parameters were chosen to increase statistical power. The mean TCAV scores for the target concept examples and the random examples are compared using the non-parametric Mann-Whitney U rank test, as opposed to the t-test used in the original TCAV method, as we observed a clear violation of the normality assumption for the TCAV scores. To mitigate Type 1 errors, the p-values are corrected for each experiment employing the conservative Bonferroni method, after which we claim significance if the corrected p-value is below \(0.05\). ## 4 Results ### Sanity Checks We first provide evidence that the TCAV method can be applied to explain EEG data and the LHB model. In Figure 3, the high significance of class data as concepts (_Left Fist Movement_ with positive evidence and _Right Fist Movement_ with negative evidence) confirms this. Furthermore, concepts based on maximal activity in either the left or right hemisphere for the _alpha_ frequency band strongly indicate that lateralized cortical activity is detected by several layers in the model, as expected. Moreover, the negative alignment of a concept based on labeled artifacts with the model representation of motor task data implies that artifacts in EEG data significantly influence classification tasks. We find that _eyem_ has a negative impact on the classification of _Left Fist Movement_. Note that this does _not_ mean that _eyem_ positively affects the opposite class, that is, _Right Fist Movement_, as the TCAV score is specific to the "_Left Fist Movement_ dataset". Conversely, _eyem_ could negatively affect the classification of both _Left Fist Movement_ and _Right Fist Movement_, due to the lower signal-to-noise ratio for classification when artifacts are present. ### Event-based concepts We next investigate whether, after fine-tuning the LHB model for seizure classification on the TUSZ dataset, explanatory concepts defined with labeled data from TUEV align with the model's internal representation of data labeled as containing seizures. The target of the investigation is the _seizure_ label, and we test all bottlenecks in the LHB model. The results of this experiment are shown in Figure 4. When compared to EEG data labeled as containing seizures, the epilepsy-related concepts _pled_, which is present in certain brain areas, and _gped_, which is present in most of the brain, exhibit high and positive evidence in nearly all bottlenecks. This observation aligns with existing literature that associates epileptiform discharges with seizures [11], and it is expected that the LHB model will use these properties for classification.
The _spsw_ concept also demonstrates significant positive evidence in the _encoder_ bottleneck but not in the further downstream bottlenecks. Similarly, the _bckg_ concept shows negative evidence in the _encoder_ bottleneck but not in the further downstream bottlenecks. It is interesting that these concepts only reach significance in the initial bottleneck. A possible explanation is that _artf_ and _bckg_ are not informative for the classification, and that BENDR effectively identifies seizure-related concepts and filters out noise. The results also suggest that the model's _classifier_ and _extended classifier_ can be further optimized, as _artf_ is near the significance level in these bottlenecks and, as a result, the noise has not been completely removed. In conclusion, these examples indicate that concept-based explainability can provide valuable model design information. ### Anatomy/Frequency-Based Concepts We have demonstrated that labeled EEG data can generate human-aligned concepts, which are integrated into the LHB model for seizure classification. This comes quite naturally, as labeled data is annotated by humans and tends to align with human-relatable concepts. We now present evidence that defining explanatory concepts based on cortical activity in frequency bands may uncover patterns corresponding to the model's internal representations. In particular, for a motor classification task using the MMIDB EEG dataset and targeting the _Left Fist Movement_ class, we show that cortical activity in the _alpha_ band aligns with the model's internal representation. In Figure 5, we find that the CAV for the _Somatosensory and Motor Cortex_ in the right hemisphere positively aligns with the activations of _Left Fist Movement_ class data across all bottlenecks in the model. The mean TCAV scores are also consistently positively significant. At the same time, the TCAV scores for the same cortical area in the _Left Hemisphere_ are either negatively significant or insignificant. These results strongly suggest that the model's internal representation incorporates lateralization, reflecting the fact that one hemisphere exhibits more electrical activity than the other. It is noteworthy that lateralization is most significant in the _Encoding Augment_ and _Summarizer_ bottlenecks, indicating that it is captured early in the network. Additionally, we observe that the _Primary Visual Cortex (V1)_ areas do not exhibit lateralization, and their TCAV scores are insignificant across all bottlenecks and for both hemispheres. This further supports the conclusion that the LHB model utilizes specific cortical areas in its classification rather than all areas indiscriminately. Figure 4: The results of utilizing TCAV to assess whether event-based EEG labels align with the internal representation of the _seizure_ class data in the LHB model at the five bottlenecks. From the right, the concepts are defined as (1) technical artifacts (_artf_), (2) background (_bckg_), (3) generalized periodic epileptiform discharge (_gped_), (4) periodic lateralized epileptiform discharge (_pled_), and (5) spike and slow wave (_spsw_). Stars indicate either positive (a score above 0.5) or negative (a score below 0.5) statistical significance. Figure 3: Sanity checks for applying the TCAV method to EEG data and the bottlenecks of the LHB model. The figure presents the results of TCAV for the _Left Fist Movement_ class in a binary classification task using the MMIDB EEG dataset. From right to left, concepts are defined as follows: (1) _Left Fist Movement_ and (2) _Right Fist Movement_ class data, maximal mean activity in the alpha frequency band for (3) _Left Hemisphere_ and (4) _Right Hemisphere_, respectively, and (5) _Eye Movement_ artifacts. Stars indicate either positive (a score above 0.5) or negative (a score below 0.5) statistical significance.
From right to left, concepts are defined as follows: (1) _Left Fist Movement_ and (2) _Right Fist Movement_ class data, maximal mean activity in the alpha frequency band for (3) _Left Hemisphere_ and (4) _Right Hemisphere_, respectively, and (5) _Eye Movement_ artifacts. Stars indicate either positive (a score above 0.5) or negative (a score below 0.5) statistical significance. supports the conclusion that the LHB model utilizes specific cortical areas in its classification rather than all areas indiscriminately. While no apparent lateralization is present in the _Premotor Cortex_, this part of the cortex is negatively significant in the _Encoder_ and _Summarizer_ bottlenecks for both the left and right hemispheres. A possible explanation is that the instances we examine involve participants _performing_ movements; therefore, there may not necessarily be relevant activity in the _Premotor Cortex_, which is primarily involved in movement planning [12]. Lastly, we observe significance in the _Classifier_ bottleneck for _Early Visual Cortex_ and _Dorsal Stream Visual Cortex_. We note that the movement is activated by a visual cue; however, further experiments would be required to fully clarify the effect. ## 5 Conclusion Concept-based explainability has proven to be valuable in various domains, such as image classification and natural language understanding, where concepts are naturally defined using labeled data. In this study, we have explored the definition of concepts for EEG models for the first time. We presented two new workflows for concept-based explainability within the TCAV framework for EEG data. First, we adopted an approach akin to the original work of Kim et al. [2], in which concepts are derived from labeled data. In this case, we utilized various annotated EEG databases, e.g., data from the Temple University Hospital EEG database. The second workflow is based on the source location of resting-state EEG data also from the Temple University Hospital database. This enables us to generate datasets for TCAV derived from anatomical brain areas and for specific frequency bands, e.g., the _alpha_ band. We demonstrated a proof of concept through several "sanity check" experiments to verify expected responses in elementary EEG settings, such as EEG lateralization during left- or right-hand movement. Lastly, we examined two practical applications: A case study involving seizure prediction, where TCAV reveals the role of fundamental spike patterns, and a brain-computer interface case, hinting at how the TCAV method can assist in debugging and offer valuable insights into classifier design for EEG data.
2306.05110
Relativistic Mean Field Model parameterizations in the light of GW170817, GW190814, and PSR J0740 + 6620
Three parameterizations DOPS1, DOPS2, and DOPS3 (named after the Department of Physics Shimla) of the Relativistic Mean Field (RMF) model have been proposed with the inclusion of all possible self and mixed interactions between the scalar-isoscalar ($\sigma$), vector-isoscalar ($\omega$) and vector-isovector ($\rho$) mesons up to quartic order. The generated parameter sets are in harmony with the finite and bulk nuclear matter properties. A set of Equations of State (EOSs) composed of pure hadronic (nucleonic) matter and nucleonic with quark matter (hybrid EOSs) for superdense hadron-quark matter in $\beta$-equilibrium is obtained. The quark matter phase is calculated by using the three-flavor Nambu-Jona-Lasinio (NJL) model. The maximum mass of a non-rotating neutron star with the DOPS1 parameterization is found to be around 2.6 M$_\odot$ for pure nucleonic matter, which satisfies the recent gravitational wave analysis of GW190814, Abbott et al. (2020), with its possible maximum mass constraint, indicating that the secondary component of GW190814 could be a non-rotating heaviest neutron star composed of pure nucleonic matter. EOSs computed with the DOPS2 and DOPS3 parameterizations satisfy the X-ray observational data and the recent GW170817 maximum mass constraint of a stable non-rotating neutron star in the range 2.01 $\pm$ 0.04 - 2.16 $\pm$ 0.03 M$_\odot$, and are also in good agreement with constraints on the mass and radius measurement for PSR J0740+6620 (NICER), Riley et al. (2021), Miller et al. (2021). The hybrid EOSs obtained with the NJL model also satisfy astrophysical constraints on the maximum mass of a neutron star from PSR J1614-2230, Demorest et al. (2010). We also present the results for the dimensionless tidal deformability, $\Lambda$, which are consistent with the waveform models analysis of GW170817.
Virender Thakur, Raj Kumar, Pankaj Kumar, Vikesh Kumar, B. K. Agrawal, Shashi K. Dhiman
2023-06-08T11:18:59Z
http://arxiv.org/abs/2306.05110v1
Relativistic Mean Field Model parameterizations in the light of GW170817, GW190814, and PSR J0740 + 6620 ###### Abstract Three parameterizations DOPS1, DOPS2, and DOPS3 (named after the Department of Physics Shimla) of the Relativistic Mean Field (RMF) model have been proposed with the inclusion of all possible self and mixed interactions between the scalar-isoscalar (\(\sigma\)), vector-isoscalar (\(\omega\)) and vector-isovector (\(\rho\)) mesons up to quartic order. The generated parameter sets are in harmony with the finite and bulk nuclear matter properties. A set of Equations of State (EOSs) composed of pure hadronic (nucleonic) matter and nucleonic with quark matter (hybrid EOSs) for superdense hadron-quark matter in \(\beta\)-equilibrium is obtained. The quark matter phase is calculated by using the three-flavor Nambu-Jona-Lasinio (NJL) model. The maximum mass of a non-rotating neutron star with the DOPS1 parameterization is found to be around 2.6 M\({}_{\odot}\) for pure nucleonic matter, which satisfies the recent gravitational wave analysis of GW190814 [Abbott et al., Astrophys. J. Lett. **896**, L44 (2020)] with its possible maximum mass constraint, indicating that the secondary component of GW190814 could be a non-rotating heaviest neutron star composed of pure nucleonic matter. EOSs computed with the DOPS2 and DOPS3 parameterizations satisfy the X-ray observational data [Steiner et al., Astrophys. J. **722**, 33 (2010)] and the recent GW170817 maximum mass constraint of a stable non-rotating neutron star in the range 2.01 \(\pm\) 0.04 - 2.16 \(\pm\) 0.03 M\(\odot\) [Rezzolla et al., Astrophys. J. Lett. **852**, L25 (2018)], and are also in good agreement with constraints on the mass and radius measurement for PSR J0740 + 6620 (NICER) [Riley et al., Astrophys. J. Lett. **918**, L27 (2021), Miller et al., Astrophys. J. Lett. **918**, L28 (2021)]. The hybrid EOSs obtained with the NJL model also satisfy astrophysical constraints on the maximum mass of a neutron star from PSR J1614-2230 [Demorest et al., Nature **467**, 1081 (2010)]. We also present the results for the dimensionless tidal deformability, \(\Lambda\), which are consistent with the waveform models analysis of GW170817. ## I Introduction The knowledge of neutron star properties is necessary to probe the high density behavior of the equations of state (EOSs) for baryonic matter in beta equilibrium. Neutron stars are the densest manifestations of massive objects in the observable universe, and sound knowledge of the EOSs of dense matter is required to understand their properties. Precise gravitational mass and radius measurements of neutron stars are effective ways to constrain the EOSs of the highly dense matter in their interiors. The mass measurement of MSP J0740+6620 [1], with \(2.14^{+0.10}_{-0.09}M_{\odot}\), is likely the most massive neutron star yet observed. Recently, the simultaneous measurements of the gravitational mass M and equatorial circumferential radius R\({}_{eq}\) of PSR J0030+0451 from NICER data by Miller et al. [2] and Riley et al. [3], who used independent methods to actually map the hot regions of the pulsar, have inferred [M = \(1.44^{+0.15}_{-0.14}\)M\({}_{\odot}\), R\({}_{eq}\) = \(13.02^{+1.24}_{-1.06}\)km] and [M = \(1.34^{+0.15}_{-0.16}\)M\({}_{\odot}\), R\({}_{eq}\) = \(12.71^{+1.14}_{-1.19}\)km], respectively.
Theoretically, investigations of the observed masses and radii of Compact Stars (CS) reveal the particle composition and phase transitions of dense nuclear matter at high densities. Several attempts [4; 5; 6; 7] have been made to construct EOSs comprising nucleons, hyperons and quarks under the constraint of global \(\beta\)-equilibrium. The inclusion of hyperons and/or quarks in EOSs softens the high density behavior, leading to a reduction of the maximum gravitational masses of CS. Many recent EOS models include hyperons as well as quark matter [8; 6], and the maximum gravitational masses calculated from them are compatible with \(\approx\) 2M\({}_{\odot}\). The theory of strong interactions, quantum chromodynamics (QCD), and ultra-relativistic heavy-ion collisions predict that at high densities hadronic matter may undergo a transition to a deconfined phase consisting of quarks and gluons. It therefore remains an open question whether the inner core of compact stars (CS) consists of quark matter [9; 10; 11; 12; 13]. However, it has recently been suggested that the dense nuclear matter in the interior of stable compact stars with maximum gravitational masses M\(\approx\) 2.0M\({}_{\odot}\) may exhibit evidence for the presence of quark matter cores [7]. Therefore, hybrid star phenomenology offers a unique tool to address the challenge of understanding the phase transition in dense quantum chromodynamics. Nuclear theory studies [14; 15; 16] mainly focus on understanding the dense matter of compact stars (CS). The recent LIGO and Virgo observations of the binary neutron star merger event GW170817 [17; 18] and the discovery of CS with masses around \(2M_{\odot}\)[2; 3; 19; 20; 21; 22] have intensified the interest in these astonishing objects. The analysis of GW170817 has demonstrated the potential of gravitational wave (GW) observations to yield new information relating to the limits on CS tidal deformability. In addition to these astrophysical observations [5; 23; 24; 25; 26], measurements of pulsar rotation frequencies can be employed to constrain the particle composition and the behavior of the EOSs of dense nuclear matter. However, the direct measurement of the radius and of a quark matter core of a CS is still a great challenge for astrophysics. In many papers, the properties of cold quark matter have been studied in terms of the phenomenological MIT quark bag model and EOSs at zero temperature have been obtained; these are the basis of calculations of the characteristics of hybrid hadron-quark stars, as well as of strange quark stars [27; 28; 29; 30; 31]. The NJL model [32; 33] has recently often been used to describe quark matter; it was originally proposed for explaining the origin of the nucleon mass, taking the spontaneous violation of chiral symmetry into account, and was later reformulated for the description of quark matter [34; 35]. This model successfully reproduces many features of QCD [36; 37]. Combining different modifications of the NJL quark model with different models for describing hadron matter, several authors have constructed hybrid EOSs of cold matter and used these to study the properties of neutron stars containing quark matter [38; 39; 40].
The quark matter phase of EOSs has been treated by employing phenomenological models with some basic features of QCD, such as the MIT bag models [41; 42; 43] with a bag constant and appropriate perturbative QCD corrections, the Nambu-Jona-Lasinio model with chiral symmetry and its breaking [44], the non-local chiral quark model [45] and the constant speed of sound model [46]. The motivation of the present work is to compute a set of EOSs in which the hadronic phase is calculated within the framework of energy density functionals based on RMF theory [4] and the quark matter phase is computed by using the three-flavor Nambu-Jona-Lasinio (NJL) model with scalar-isovector and vector-isovector couplings. A plausible set of EOSs for hadron-quark matter is employed to study the structural properties of non-rotating neutron stars which satisfy the astrophysical constraints of GW170817, GW190814, PSR J0740+6620, and other available observational data. The RMF model used in the present work includes all possible self and mixed interaction terms for the \(\sigma\), \(\omega\), and \(\rho\) mesons. The \(\omega\) meson self-coupling term enables one to vary the high density behavior of the EOS without affecting the bulk nuclear matter properties at saturation density. Mixed interaction terms involving \(\rho\) mesons allow one to significantly vary the density dependence of the symmetry energy coefficient, which plays a crucial role in determining the cooling mechanism of a neutron star. We use the RMF model with three newly generated parameter sets, DOPS1, DOPS2, and DOPS3, to calculate various EOSs composed of nucleons and of nucleons with quarks. The generated parameter sets of the model are calibrated by using the available experimental data [47] on the total binding energy and the charge rms radii for a few closed shell nuclei. We also use the value of the neutron skin thickness for the \({}^{208}Pb\) nucleus in our calibration procedure. We employ our EOSs to study the structural properties of non-rotating compact stars (CSs). The manuscript is organized as follows. In Section II, we describe the theoretical framework used to construct the various EOSs for pure nucleonic matter and nucleonic with quark matter. The RMF model is employed to describe the nucleonic phase, and the quark matter phase is obtained from the NJL model. The coexisting phase of the hybrid EOSs is obtained by using the Glendenning construction based on the Gibbs conditions of equilibrium. In Section III, we present our new parameterizations for the RMF model. In Section IV, we present our results for finite nuclei and bulk nuclear matter properties at saturation density. In this section, we also discuss the quality of the fits to finite nuclei for the newly generated parameterizations. In Section V, we present the set of EOSs generated, and the results for the various properties of non-rotating neutron stars are also discussed. The summary is presented in Section VI. ## II Theoretical formalism In this section, we discuss the theoretical model employed to calculate the various EOSs of dense nuclear matter in different phases. The newly generated parameter sets DOPS1, DOPS2, and DOPS3 of the RMF model have been successfully applied in describing the properties of finite nuclei and bulk nuclear matter at saturation density. These model parameters have been used to construct neutron stars and hybrid CSs. The quark matter phase of the EOS has been calculated by using the NJL model.
The final hybrid EOS comprises two separate EOSs, one for each phase of matter, which are combined by utilizing a Glendenning phase transition construction [48; 49]. ### Hadronic Equation of State In the RMF model, the effective Lagrangian density consists of self and mixed interaction terms for \(\sigma\), \(\omega\) and \(\rho\) mesons up to the quartic order, in addition to the exchange interaction of baryons with \(\sigma\), \(\omega\) and \(\rho\) mesons. The \(\sigma\), \(\omega\), and \(\rho\) mesons are responsible for the ground state properties of finite nuclei ranging from the low mass to the heavy mass region of the periodic table. The mixed interaction terms containing the \(\rho\)-meson field enable us to vary the density dependence of the symmetry energy coefficient and the neutron skin thickness in heavy nuclei over a wide range without affecting the other properties of finite nuclei [50; 51]. In particular, the contribution from the self-interaction of the \(\omega\)-meson determines the high density behavior of the EOS and the structure properties of CSs [4; 52]. The inclusion of the self-interaction of the \(\rho\)-meson hardly affects the ground state properties of heavy nuclei and compact stars [52]. The effective Lagrangian density for the RMF model describes the interaction of the baryons via the exchange of \(\sigma\), \(\omega\) and \(\rho\) mesons up to the quartic order. The Lagrangian density for the RMF model [4; 53] is given by \[\mathcal{L} = \sum_{B}\overline{\Psi}_{B}[i\gamma^{\mu}\partial_{\mu}-(M_{B}-g_{\sigma B}\sigma)-(g_{\omega B}\gamma^{\mu}\omega_{\mu} \tag{1}\] \[+ \frac{1}{2}g_{\rho B}\gamma^{\mu}\tau_{B}\cdot\rho_{\mu})]\Psi_{B}+ \frac{1}{2}(\partial_{\mu}\sigma\partial^{\mu}\sigma-m_{\sigma}^{2}\sigma^{2})\] \[- \frac{\overline{\kappa}}{3!}g_{\sigma N}^{3}\sigma^{3}-\frac{\overline{\lambda}}{4!}g_{\sigma N}^{4}\sigma^{4}-\frac{1}{4}\omega_{\mu\nu}\omega^{\mu\nu}+\frac{1}{2}m_{\omega}^{2}\omega_{\mu}\omega^{\mu}\] \[+ \frac{1}{4!}\zeta g_{\omega N}^{4}(\omega_{\mu}\omega^{\mu})^{2}-\frac{1}{4}\rho_{\mu\nu}\rho^{\mu\nu}+\frac{1}{2}m_{\rho}^{2}\rho_{\mu}\rho^{\mu}\] \[+ \frac{1}{4!}\xi g_{\rho N}^{4}(\rho_{\mu}\rho^{\mu})^{2}\] \[+ g_{\sigma N}g_{\omega N}^{2}\sigma\omega_{\mu}\omega^{\mu}\left(a_{1}+\frac{1}{2}a_{2}\sigma\right)\] \[+ g_{\sigma N}g_{\rho N}^{2}\sigma\rho_{\mu}\rho^{\mu}\left(b_{1}+\frac{1}{2}b_{2}\sigma\right)\] \[+ \frac{1}{2}c_{1}g_{\omega N}^{2}g_{\rho N}^{2}\omega_{\mu}\omega^{\mu}\rho_{\nu}\rho^{\nu}\] The energy density of the uniform matter within the framework of the RMF model is given by \[\mathcal{E} =\sum_{j=B,\ell}\frac{1}{\pi^{2}}\int_{0}^{k_{j}}k^{2}\sqrt{k^{2}+M_{j}^{*2}}dk \tag{2}\] \[+\sum_{B}g_{\omega B}\omega\rho_{B}+\sum_{B}g_{\rho B}\tau_{3B}\rho_{B}\rho+\frac{1}{2}m_{\sigma}^{2}\sigma^{2}\] \[+\frac{\overline{\kappa}}{6}g_{\sigma N}^{3}\sigma^{3}+\frac{\overline{\lambda}}{24}g_{\sigma N}^{4}\sigma^{4}-\frac{\zeta}{24}g_{\omega N}^{4}\omega^{4}\] \[-\frac{\xi}{24}g_{\rho N}^{4}\rho^{4}-\frac{1}{2}m_{\omega}^{2}\omega^{2}-\frac{1}{2}m_{\rho}^{2}\rho^{2}\] \[-a_{1}g_{\sigma N}g_{\omega N}^{2}\sigma\omega^{2}-\frac{1}{2}a_{2}g_{\sigma N}^{2}g_{\omega N}^{2}\sigma^{2}\omega^{2}\] \[-b_{1}g_{\sigma N}g_{\rho N}^{2}\sigma\rho^{2}-\frac{1}{2}b_{2}g_{\sigma N}^{2}g_{\rho N}^{2}\sigma^{2}\rho^{2}\] \[-\frac{1}{2}c_{1}g_{\omega N}^{2}g_{\rho N}^{2}\omega^{2}\rho^{2}.\] The pressure of the uniform matter is given by \[P =\sum_{j=B,\ell}\frac{1}{3\pi^{2}}\int_{0}^{k_{j}}\frac{k^{4}dk}{\sqrt{k^{2}+M_{j}^{*2}}}-\frac{1}{2}m_{\sigma}^{2}\sigma^{2} \tag{3}\] \[-\frac{\overline{\kappa}}{6}g_{\sigma N}^{3}\sigma^{3}-\frac{\overline{\lambda}}{24}g_{\sigma N}^{4}\sigma^{4}+\frac{\zeta}{24}g_{\omega N}^{4}\omega^{4}\] \[+\frac{\xi}{24}g_{\rho N}^{4}\rho^{4}+\frac{1}{2}m_{\omega}^{2}\omega^{2}+\frac{1}{2}m_{\rho}^{2}\rho^{2}\] \[+a_{1}g_{\sigma N}g_{\omega N}^{2}\sigma\omega^{2}+\frac{1}{2}a_{2}g_{\sigma N}^{2}g_{\omega N}^{2}\sigma^{2}\omega^{2}\] \[+b_{1}g_{\sigma N}g_{\rho N}^{2}\sigma\rho^{2}+\frac{1}{2}b_{2}g_{\sigma N}^{2}g_{\rho N}^{2}\sigma^{2}\rho^{2}\] \[+\frac{1}{2}c_{1}g_{\omega N}^{2}g_{\rho N}^{2}\omega^{2}\rho^{2}.\] Here, the sum is taken over nucleons and leptons. The composition of nuclear matter species i = n, p, e\({}^{-}\) and \(\mu^{-}\) at fixed baryon number density \(\rho_{B}\)=\(\sum_{i}B_{i}\rho_{i}\) is determined in such a way that the charge neutrality condition, \[\sum_{i}q_{i}\rho_{i}=0, \tag{4}\] and the chemical equilibrium conditions \[\mu_{i}=B_{i}\mu_{n}-q_{i}\mu_{e}, \tag{5}\] are satisfied, where B\({}_{i}\) and q\({}_{i}\) denote the baryon number and electric charge of the species i. ### Quark Matter Equation of State We use the NJL model [54; 55] to calculate the EOS for the quark phase. By introducing the scalar-isovector and vector-isovector couplings, the Lagrangian of the three-flavour NJL model can be written as \[\mathcal{L}_{NJL} = \overline{q}(i\not{\partial}-\tilde{m})q+\frac{G_{S}}{2}\sum_{a=0}^{8}[(\overline{q}\lambda_{a}q)^{2}+(\overline{q}i\gamma_{5}\lambda_{a}q)^{2}] \tag{6}\] \[+ \frac{G_{V}}{2}\sum_{a=0}^{8}[(\overline{q}\gamma_{\mu}\lambda_{a}q)^{2}+(\overline{q}\gamma_{5}\gamma_{\mu}\lambda_{a}q)^{2}]\] \[- K\left\{\det[\overline{q}(1+\gamma_{5})q]+\det[\overline{q}(1-\gamma_{5})q]\right\}\] \[+ G_{IS}\sum_{a=1}^{3}[(\overline{q}\lambda_{a}q)^{2}+(\overline{q}\gamma_{5}\lambda_{a}q)^{2}]\] \[+ G_{IV}\sum_{a=1}^{3}[(\overline{q}\gamma_{\mu}\lambda_{a}q)^{2}+(\overline{q}\gamma_{5}\gamma_{\mu}\lambda_{a}q)^{2}]\] Here q denotes the quark field with three flavours u, d and s, and three colours; \(\tilde{m}\)=diag\((m_{u},m_{d},m_{s})\) is the current quark mass matrix in three-flavour space; \(\lambda_{a}\) are the flavour SU(3) Gell-Mann matrices, supplemented by \(\lambda_{0}=\sqrt{\frac{2}{3}}\,I\); \(G_{S}\) and \(G_{V}\) are the strengths of the scalar and vector coupling, respectively; and the K term represents the six-point Kobayashi-Maskawa-t'Hooft (KMT) interaction that breaks the axial \(U(1)_{A}\) symmetry. Since the Gell-Mann matrices with a = 1-3 are identical to the Pauli matrices in u and d space, the last two terms represent the scalar-isovector and vector-isovector couplings, breaking the SU(3) symmetry while keeping the isospin symmetry, with \(G_{IS}\) and \(G_{IV}\) the corresponding coupling strengths. In the present study, we employ the parameters \(m_{u}\)= \(m_{d}\) = 3.6 MeV, \(m_{s}\) = 87 MeV, \(G_{S}\Lambda^{2}\) = 3.6, \(K\Lambda^{5}\) = 8.9, and the cut-off value in the momentum integral \(\Lambda\) = 750 MeV, which are taken from the references [54; 56; 57]. In the present work, we have used vector coupling \(G_{V}\) = 0 in order to describe the astrophysical constraints (mass/radius) of MSP 0740+6620 and PSR J1614-2230 [1; 7; 19; 91] as hybrid stars. However, a larger value of the vector coupling \(G_{V}\) can stiffen the resulting EOSs and may lead to different neutron star properties [96].
In the NJL model, the quark masses are dynamically generated as solutions of the gap equation, obtained by imposing that the potential be stationary with respect to variations in the quark condensate \(<\overline{q_{i}}q_{i}>\), thus finding \[M_{i}=m_{i}-2G_{S}\sigma_{i}+2K\sigma_{j}\sigma_{k}-2G_{IS}\tau_{3i}(\sigma_{u}-\sigma_{d}) \tag{7}\] where \(\sigma_{i}\) = \(<\overline{q_{i}}q_{i}>\) stands for the quark condensate, with (i,j,k) being any permutation of (u,d,s), and \(\tau_{3i}\) is the isospin quantum number of the quark, i.e., \(\tau_{3u}\) = 1, \(\tau_{3d}\) = -1 and \(\tau_{3s}\) = 0. As shown in Eq. (7), \(\sigma_{d}\) and \(\sigma_{s}\) contribute to the u quark mass through the KMT interaction as well as the scalar-isovector coupling, called the flavor mixing [58; 59] in the constituent quark mass. The quark condensate \(<\overline{q_{i}}q_{i}>\) and the quark number density \(\rho_{i}\) are given respectively as \[<\overline{q_{i}}q_{i}>=-2N_{c}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{M_{i}}{E_{i}} \tag{8}\] \[\rho_{i}=2N_{c}\int_{0}^{p_{F_{i}}}\frac{d^{3}p}{(2\pi)^{3}}\] The above Eq. (8) has to be evaluated self-consistently with Eq. (7), forming a set of six coupled equations for the constituent masses \(M_{i}\). Once the self-consistent solutions are found, we can calculate the energy density and the pressure in the following form [55], \[\epsilon_{NJL} = -2N_{c}\int_{0}^{\Lambda}\frac{d^{3}p}{(2\pi)^{3}}E_{i}+G_{S}({\sigma_{u}}^{2}+{\sigma_{d}}^{2}+{\sigma_{s}}^{2}) \tag{9}\] \[- 4K\sigma_{u}\sigma_{d}\sigma_{s}+G_{V}({\rho_{u}}^{2}+{\rho_{d}}^{2}+{\rho_{s}}^{2})\] \[+ G_{IS}(\sigma_{u}-\sigma_{d})^{2}+G_{IV}(\rho_{u}-\rho_{d})^{2}-\epsilon_{0}\] In Eq. (9), \(\epsilon_{0}\) is introduced to ensure that \(\epsilon_{NJL}\) = 0 in the vacuum. The pressure of the cold quark matter can be calculated from \[P=\sum_{i=u,d,s}\mu_{i}\rho_{i}-\epsilon_{NJL} \tag{10}\] ### Coexisting Phase We construct the EOS of the coexisting phase (CP), made up of the hadron phase (HP) and the quark matter phase, for the hybrid compact star by implementing the Glendenning construction [48; 49]. The formation of a coexisting phase is favored when the surface tension between the hadronic and quark matter phases (which competes with the Coulomb interaction) is small [60] or negligible. The calculation of the surface tension is very model dependent [60; 61]. For higher values of the surface tension, the phase transition is sharp and is to be described with the Maxwell construction; likewise, for low values the phase transition is continuous and is to be described with the Glendenning construction. Since the value of the surface tension is not yet established, both methods of coexisting phase construction are equally valid. Here, we adopt the Glendenning construction based on the Gibbs conditions of equilibrium. The equilibrium chemical potentials of the coexisting phase, corresponding to the intersection of the two surfaces representing the hadron and quark matter phases, can be calculated from the conditions of mechanical and chemical equilibrium at zero temperature, \[P_{HP}(\mu_{e},\mu_{n})=P_{NJL}(\mu,\mu_{e})=P_{CP}, \tag{11}\] where \(P_{HP}\), \(P_{NJL}\), and \(P_{CP}\) are the pressures of the hadron phase, quark phase, and coexisting phase, respectively.
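To make Eqs. (7) and (8) concrete, here is a minimal numerical sketch of the vacuum gap equation for the parameters quoted above. The normalization of \(G_{S}\) and \(K\) from the quoted dimensionless combinations is an assumption of the sketch, since conventions vary between NJL papers; all quantities are in MeV:

```python
# Vacuum NJL gap equation: M_i = m_i - 2*G_S*sigma_i + 2*K*sigma_j*sigma_k,
# with sigma_i = -2*N_c * int d^3p/(2pi)^3 M_i/E_i, cut off at Lambda.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

LAM = 750.0                 # momentum cutoff (MeV)
G_S = 3.6 / LAM**2          # assumed normalization of G_S*Lambda^2 = 3.6
K = 8.9 / LAM**5            # assumed normalization of K*Lambda^5 = 8.9
m = {"u": 3.6, "d": 3.6, "s": 87.0}
NC = 3                      # number of colors

def condensate(M):
    """Vacuum condensate: -(N_c/pi^2) * int_0^Lambda p^2 M/sqrt(p^2+M^2) dp."""
    val, _ = quad(lambda p: p**2 * M / np.sqrt(p**2 + M**2), 0.0, LAM)
    return -NC / np.pi**2 * val

def gap_residuals(masses):
    """Residuals of the three coupled gap equations (G_IS = 0 here)."""
    Mu, Md, Ms = masses
    su, sd, ss = condensate(Mu), condensate(Md), condensate(Ms)
    return [Mu - m["u"] + 2 * G_S * su - 2 * K * sd * ss,
            Md - m["d"] + 2 * G_S * sd - 2 * K * su * ss,
            Ms - m["s"] + 2 * G_S * ss - 2 * K * su * sd]

M_u, M_d, M_s = fsolve(gap_residuals, x0=[350.0, 350.0, 550.0])
```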
In the coexisting phase, we have considered chemical equilibrium at the hadron-quark interface as well as inside each of the phases [62], so that Eq. (5) implies \[\mu_{u}+\mu_{e}=\mu_{d}=\mu_{s}, \tag{12}\] \[\mu_{p}+\mu_{e}=\mu_{n}=\mu_{u}+2\mu_{d}. \tag{13}\] In the coexisting phase, the local charge neutrality condition is replaced by global charge neutrality, which means that both the hadron and quark matter are allowed to be charged separately. The condition of global charge neutrality determines the volume fraction \(\chi\) of the quark phase, which can be obtained from \[\chi\rho_{c}^{NJL}+(1-\chi)\rho_{c}^{HP}=0, \tag{14}\] where \(\rho_{c}^{NJL}\) and \(\rho_{c}^{HP}\) are the charge densities of the NJL phase and the hadron phase of dense matter, respectively. The value of \(\chi\) increases from zero in the pure hadron phase to \(\chi\) = 1 in the pure quark phase. The energy density \(\mathcal{E}_{CP}\) and the baryon density \(\rho_{CP}\) of the coexisting phase can be calculated as \[\mathcal{E}_{CP}=\chi\mathcal{E}_{NJL}+(1-\chi)\mathcal{E}_{HP}, \tag{15}\] \[\rho_{CP}=\chi\rho_{NJL}+(1-\chi)\rho_{HP}. \tag{16}\] The coexisting phase of the EOSs has been computed by employing the procedure explained above, Eqs. (11)-(16). ### Tidal deformability The tidal influence of its companion in a BNS system deforms each CS, and the resulting change in the gravitational potential modifies the BNS orbital motion and its corresponding gravitational wave (GW) signal. This effect on the GW phasing can be parameterized by the dimensionless tidal deformability parameter \(\Lambda_{i}=\lambda_{i}/M_{i}^{5}\), i = 1, 2. For each CS, its quadrupole moment \(\mathcal{Q}_{j,k}\) is related to the tidal field \(\mathcal{E}_{j,k}\) caused by its companion as \(\mathcal{Q}_{j,k}=-\lambda\mathcal{E}_{j,k}\), where \(j\) and \(k\) are spatial tensor indices. The dimensionless tidal deformability parameter \(\Lambda\) of a static, spherically symmetric compact star depends on the neutron star compactness parameter C and a dimensionless quadrupole Love number k\({}_{2}\) as \(\Lambda\)=(2k\({}_{2}\)/3)\(C^{-5}\). \(\Lambda\) parameterizes the deformation of a CS under the given tidal field and therefore depends on the EOS of dense nuclear matter. At the small orbital separations (and correspondingly high frequencies) reached in BNS systems, the tidal contributions to the energy and luminosity are added linearly to the point-particle energy and luminosity. The leading-order tidal corrections are Newtonian effects, formally entering as 5PN (post-Newtonian) corrections, with next-to-leading-order 6PN corrections to the energy and luminosity [63; 64]. These leading-order tidal corrections are required to be included in the waveform models employed for the analysis of GW signals from the advanced LIGO and Virgo GW detectors at high frequencies, as discussed for the various waveform models by Abbott et al. [18].
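Before turning to the details of the tidal calculation, note that the charge-neutrality bookkeeping of the Glendenning construction in Eqs. (14)-(16) reduces to a few lines. A sketch, with the inputs as hypothetical placeholders for the RMF and NJL phase solutions evaluated at a common pressure:

```python
# Mixed-phase construction: solve Eq. (14) for the quark volume fraction
# chi, then volume-average the energy and baryon densities (Eqs. 15-16).
def mixed_phase(rho_c_hp, rho_c_njl, eps_hp, eps_njl, rho_hp, rho_njl):
    # chi * rho_c_njl + (1 - chi) * rho_c_hp = 0   (Eq. 14)
    chi = rho_c_hp / (rho_c_hp - rho_c_njl)
    assert 0.0 <= chi <= 1.0, "no charge-neutral mixture at this pressure"
    eps_cp = chi * eps_njl + (1.0 - chi) * eps_hp   # Eq. (15)
    rho_cp = chi * rho_njl + (1.0 - chi) * rho_hp   # Eq. (16)
    return chi, eps_cp, rho_cp
```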
To obtain the Love number k\({}_{2}\), we compute y\({}_{2}\) = y(R), with the initial boundary condition y(0) = 2, from the first-order differential equation [65; 66; 67; 68], integrated simultaneously with the TOV equations, \[y^{\prime}=\frac{1}{r}[-r^{2}Q-ye^{\lambda}\{1+4\pi Gr^{2}(P-\mathcal{E})\}-y^{2}], \tag{17}\] where Q \(\equiv\) 4\(\pi\)Ge\({}^{\lambda}\)(5\(\mathcal{E}\)+9P+\(\frac{\mathcal{E}+P}{c_{s}^{2}}\)) - 6\(\frac{\mathrm{e}^{\lambda}}{r^{2}}\) - \(\nu^{\prime^{2}}\), with \(\mathrm{e}^{\lambda}\equiv(1-\frac{2Gm}{r})^{-1}\) and \(\nu^{\prime}\equiv\) 2G e\({}^{\lambda}\) (\(\frac{m+4\pi Pr^{3}}{r^{2}}\)). We first solve Eq. (17) for the boundary value y\({}_{2}\) = y(R); the electric tidal Love number k\({}_{2}\) is then calculated from the expression \[k_{2}=\frac{8}{5}C^{5}(1-2C)^{2}[2C(y_{2}-1)-y_{2}+2]\{2C(4(y_{2}+1)C^{4}\] \[+(6y_{2}-4)C^{3}+(26-22y_{2})C^{2}+3(5y_{2}-8)C-3y_{2}+6)\] \[-3(1-2C)^{2}(2C(y_{2}-1)-y_{2}+2)\log(\frac{1}{1-2C})\}^{-1}. \tag{18}\] ## III New RMF model parameterization There are several relativistic mean field models in which the energy density functional consists of nonlinear \(\sigma\), \(\omega\) and \(\rho\) terms and mixed interaction terms. These models are used to construct EOSs composed of nucleonic matter [69] and of nucleonic along with hyperonic matter [70; 71], and are confronted with the constraints of nuclear matter properties and astrophysical observations of CS masses [20; 22; 3]. Only the RMF models BSR [4] with \(\zeta=0\) and NL3\(\omega\delta\)[72] can sustain the condition of maximum mass M \(\geq\) 2.0M\({}_{\odot}\) when hyperons are included in the EOSs with appropriate meson-hyperon couplings; otherwise, the inclusion of hyperons may lead to the famous hyperon puzzle. However, many RMF models [73] without the inclusion of hyperons satisfy the constraints of astrophysical observations obtained from the binary neutron star merger event GW170817. In the present work, we search for the best fit parameters of the RMF model by using the simulated annealing method to minimise the \(\chi^{2}\)[74; 75], which is given by \[\chi^{2}=\frac{1}{N_{d}-N_{p}}\sum_{i=1}^{N_{d}}\left(\frac{M_{i}^{exp}-M_{i}^{th}}{\delta_{i}}\right)^{2} \tag{19}\] where \(N_{d}\) is the number of experimental data points and \(N_{p}\) the number of fitted parameters. The \(\delta_{i}\) stand for the adopted theoretical errors [97], and \(M_{i}^{exp}\) and \(M_{i}^{th}\) are the experimental and the corresponding theoretical values, respectively, for a given observable. Since the \(M_{i}^{th}\) in Eq. (19) are calculated by using the RMF model, the value of \(\chi^{2}\) depends on the parameters appearing in Eq. (1). The theoretical errors \(\delta_{i}\) in Eq. (19) are taken to be 1.0 MeV for total binding energies, 0.02 fm for the charge rms radii and 0.005 fm for the neutron skin thickness. Three new parameter sets, namely DOPS1, DOPS2, and DOPS3, have been generated by including all possible self and mixed interaction terms for \(\sigma\), \(\omega\) and \(\rho\) mesons up to quartic order for a fixed value of the \(\omega\) meson self-coupling parameter \(\zeta\) = 0.00, 0.01 and 0.02.
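The fit enters only through the scalar objective of Eq. (19); a minimal sketch, in which the observable arrays are hypothetical placeholders for the binding energies, charge radii, and neutron skin datum described next, with the quoted theoretical errors:

```python
# Reduced chi^2 of Eq. (19) over N_d data points and N_p fitted parameters.
import numpy as np

def chi2(m_exp, m_th, delta, n_params):
    """chi^2 = (1/(N_d - N_p)) * sum_i ((M_exp_i - M_th_i)/delta_i)^2."""
    m_exp, m_th, delta = map(np.asarray, (m_exp, m_th, delta))
    return float(np.sum(((m_exp - m_th) / delta) ** 2) / (m_exp.size - n_params))

# Example with the quoted theoretical errors: 1.0 MeV for binding energies,
# 0.02 fm for charge rms radii, 0.005 fm for the neutron skin thickness.
```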
The remaining coupling parameters are determined by fitting the RMF results to the available experimental data [47] for the total binding energies of \({}^{16,24}\)O, \({}^{40,48}\)Ca, \({}^{56,78}\)Ni, \({}^{88}\)Sr, \({}^{90}\)Zr, \({}^{100,116,132}\)Sn, and \({}^{208}\)Pb nuclei and the charge rms radii of \({}^{16}\)O, \({}^{40,48}\)Ca, \({}^{56}\)Ni, \({}^{88}\)Sr, \({}^{90}\)Zr, \({}^{116}\)Sn, and \({}^{208}\)Pb nuclei. In addition, we also fit the value of the neutron skin thickness for the \({}^{208}\)Pb nucleus, which is a very important physical observable. Recently extracted values of the neutron skin thickness for the \({}^{208}\)Pb nucleus from isospin diffusion data lie within 0.16 - 0.33 fm, indicating large uncertainties [76; 77; 98]. It is also shown in ref. [99] that a neutron skin thickness of \(\approx\) 0.18 fm in the \({}^{208}Pb\) nucleus is required to adequately reproduce the centroid energies of the isoscalar giant monopole and isovector giant dipole resonances. We include in our fit the value of neutron skin thickness \(\Delta\)r = 0.18 fm for the \({}^{208}Pb\) nucleus to constrain the linear density dependence of the symmetry energy coefficient. The DOPS parameter sets have been generated for fixed values of the \(\omega\)-meson self-coupling parameter \(\zeta\) = 0.00, 0.01, and 0.02 in the light of the recent observations of GW190814, GW170817, and PSR J0740+6620. The coupling parameter \(\zeta\) affects the high density behavior of the EOS: a large value of \(\zeta\) makes the EOS softer and a smaller value stiffens it. The value of the maximum mass for the GW190814 event lies in the range 2.50-2.67 \(M_{\odot}\)[90], which requires a stiff EOS and hence a very small value of \(\zeta\), which we have taken equal to zero for the DOPS1 parameterization. The astrophysical events GW170817 and PSR J0740+6620 have maximum masses \(\approx 2\)\(M_{\odot}\) and require relatively softer EOSs. For this, we have fixed the value of \(\zeta\) equal to 0.01 and 0.02 for the DOPS2 and DOPS3 parameter sets, respectively. The \(\rho\) meson self-interaction has not been included, as it hardly affects the properties of finite nuclei and neutron stars [52]. In Table 1, the newly generated parameter sets DOPS1, DOPS2 and DOPS3 are listed. We also display the values of the parameters for NL3 [78], FSUGarnet [79], IOPB-1 [80] and Big Apple [77]. The effective field theory imposes the condition of naturalness [81] on the parameters or expansion coefficients appearing in the energy density functional, Eq. (2). According to naturalness, the coefficients of the various terms in the energy density functional should be of the same size when expressed in appropriate dimensionless ratios. The dimensionless ratios are obtained by dividing Eq. (2) by \(M^{4}\) and expressing each term in powers of \(\frac{g_{\sigma}\sigma}{M}\), \(\frac{g_{\omega}\omega}{M}\) and \(\frac{g_{\rho}\rho}{2M}\). This means that the dimensionless ratios \(\frac{1}{2C_{\sigma}^{2}M^{2}}\), \(\frac{1}{2C_{\omega}^{2}M^{2}}\), \(\frac{1}{8C_{\rho}^{2}M^{2}}\), \(\frac{\overline{\kappa}}{6M}\), \(\frac{\overline{\lambda}}{24}\), \(\frac{\zeta}{24}\), \(\frac{a_{1}}{M}\), \(\frac{a_{2}}{2}\), \(\frac{b_{1}}{4M}\), \(\frac{b_{2}}{8}\) and \(\frac{c_{1}}{8}\) should be roughly of the same size, where \(C_{i}^{2}=\frac{g_{i}^{2}}{m_{i}^{2}}\), with i denoting the \(\sigma\), \(\omega\) and \(\rho\) mesons. In Table 2, we present the overall naturalness behavior of the various parameterizations, i.e.,
the value of these parameters when expressed in dimensionless ratios as shown just above. We also display the corresponding values for NL3, FSUGarnet, IOPB-1, and Big Apple parameter sets. It is clear from the table that the DOPS1, DOPS2, and DOPS3 parameterizations closely favor the naturalness behavior. It can also be seen from table 2, that the value of parameter \(c_{1}\) (mixed interaction term of \(\omega^{2}\rho^{2}\)) is very large and equal to 10.75, 6.0 and 11.75 for FSUGarnet, IOPB-1 and Big Apple parameterizations respectively when expressed in appropriate dimensionless ratio. The large value of \(c_{1}\) gives rise to the deviation from the naturalness behavior and this deviation might be attributed to the fact of not including all possible mixed interaction terms of \(\sigma\), \(\omega\) and \(\rho\) mesons in these respective parameterizations,unlike DOPSs parameterizations. As far as NL3 parameterization is concerned, the naturalness behavior is favored very well but it does not include any cross interaction terms of sigma, omega, and rho mesons which are very important for constraining the symmetry energy and its density dependence. DOPS1, DOPS2, and DOPS3 parameterizations show better naturalness behavior as compared to other parameterizations displayed in the table. The naturalness behavior of parameters can be further improved by considering the next higher order terms containing the gradient of fields [81]. ## IV Finite nuclei and infinite nuclear matter In this section, we discuss our results for finite nuclei and infinite nuclear matter. The newly generated parameterizations DOPS1, DOPS2 and DOPS3 give equally good fit to the properties of finite nuclei. In Fig. (1), we display the value of absolute error in binding energy per nucleon which is defined as, \[\delta E=|BE^{exp}-BE^{th}| \tag{20}\] Here, \(BE^{exp}\) and \(BE^{th}\) are the experimental and theoretical values for the binding energy per nucleon respectively. Results for \(\delta E\) are calculated for DOPSs parameterizations. The mean absolute errors in the binding energy per nucleon calculated with the DOPS1, DOPS2, and DOPS3 parameterizations for the finite nuclei used in the fit are 0.027, 0.031, and 0.027 MeV respectively. We also display similar results for NL3, IOPB-1 parameter sets. It is evident that binding energies obtained using DOPSs parameterizations are in good agreement with the available experimental data [47]. In Fig. (2), we present our results for absolute error \(\delta R_{ch}=|R_{ch}^{exp}-R_{ch}^{th}|\) for charge rms radii and also compare them with NL3 and IOPB-1 parameter sets. The value of charge rms radii calculated for various parameterizations displayed in Fig. (2) are more or less same. The mean absolute error in the charge rms radii for DOPS1, DOPS2 and DOPS3 parameterizations for the finite nuclei used in the fit are 0.019, 0.022 and 0.023 fm respectively. We have also calculated the rms errors in the total binding energy and charge radii for the nuclei considered in our fit. The root mean square (rms) errors in total binding energy for all the nuclei considered in our fit are found Figure 1: (Color online) Absolute error in the binding energy per nucleon (\(\delta E\)) plotted against the mass number (A) for newly generated parameter sets DOPS1,DOPS2 and DOPS3. For comparison, the values of \(\delta E\) obtained with parameters NL3 and IOPB-1 are also displayed. 
to be 1.58, 1.63, 1.61, 2.41, and 1.93 MeV for DOPS1, DOPS2, DOPS3, NL3, and IOPB-1 parameterizations respectively. Similarly, the root mean square (rms) errors in charge radii for all nuclei taken in our fit are 0.020, 0.023, 0.024, 0.020, and 0.022 fm for DOPS1, DOPS2, DOPS3, NL3, and IOPB-1 parameter sets respectively. In Table 3, we present our results for the symmetric nuclear matter (SNM) properties such as binding energy per nucleon (E/A), incompressibility (K), symmetry energy coefficient (J), density dependence of symmetry energy (L) and the ratio of effective mass to the mass of nucleon (\(M^{*}/M\)) at the saturation density (\(\rho_{0}\)). These properties are very important for constructing the EOS for nuclear matter. The value of E/A is \(\approx\) -16 MeV for all DOPSs parameterizations. For all newly generated parameterizations, the value of J and L are consistent with the constraints from observational analysis J = 31.6 \(\pm\) 2.66 MeV and L = 58.9 \(\pm\) 16 MeV [82]. The value of K lies in the range 227.5 - 232.733 MeV which is in agreement with the value of K = 240 \(\pm\) 20 MeV determined from isoscalar giant monopole resonance (ISGMR) for \({}^{90}Zr\) and \({}^{208}Pb\) nuclei [83; 84]. The ratio of effective mass to \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Parameters** & **DOPS1** & **DOPS2** & **DOPS3** & **NL3** & **FSUGarnet** & **IOPB-1** & **Big Apple** \\ \hline \(\frac{1}{2C_{g}^{2}\text{M}^{2}}\) & 1.3806 & 1.2076 & 1.2935 & 1.4028 & 1.2690 & 1.3086 & 1.4698 \\ \(\frac{1}{2C_{g}^{2}\text{M}^{2}}\) & 2.0931 & 1.7407 & 1.8959 & 2.0970 & 1.8508 & 1.9383 & 2.2819 \\ \(\frac{1}{8C_{g}^{2}\text{M}^{2}}\) & 0.4207 & 0.3787 & 0.3917 & 1.0306 & 0.4278 & 0.6670 & 0.4121 \\ \(\frac{\pi}{\text{M}\text{M}}\) & 0.9177 & 1.0536 & 0.6908 & 0.6855 & 0.5787 & 0.6499 & 0.9168 \\ \(\frac{\lambda}{24\text{M}}\) & -0.6984 & -0.0329 & -0.0476 & -0.6630 & -0.1472 & -0.3146 & -0.9024 \\ \(\frac{\pi}{24}\) & - & 0.4166 & 0.8333 & - & 0.9785 & 0.7267 & 0.0291 \\ \(\frac{\pi}{\text{M}}\) & 0.2169 & 0.8832 & 0.1911 & - & - & - & - \\ \(\frac{\pi_{2}}{2}\) & 0.0893 & 0.2318 & 0.1349 & - & - & - & - \\ \(\frac{\pi_{1}}{4\text{M}}\) & 1.8388 & 1.9818 & 1.9315 & - & - & - & - \\ \(\frac{\pi_{2}}{8}\) & 1.2318 & 1.2373 & 1.1198 & - & - & - & - \\ \(\frac{\pi_{1}}{4}\) & 1.0864 & 0.7964 & 0.8794 & - & 10.7500 & 6.0000 & 11.7500 \\ \hline \hline \end{tabular} \end{table} Table 2: The values of parameters expressed as dimensionless ratios corresponding to naturalness behavior. All values have been multiplied by \(10^{3}\). 
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **Parameters** & **DOPS1** & **DOPS2** & **DOPS3** & **NL3** & **FSUGarnet** & **IOPB-1** & **Big Apple** \\ \hline \(\mathbf{g}_{\sigma}\) & 10.20651 & 10.67981 & 10.51853 & 10.21743 & 10.50315 & 10.41851 & 9.67810 \\ \(\mathbf{g}_{\omega}\) & 12.87969 & 14.12312 & 13.53456 & 12.86762 & 13.69695 & 13.38412 & 12.33541 \\ \(\mathbf{g}_{\rho}\) & 14.13399 & 14.89809 & 14.64808 & 8.94880 & 13.87880 & 11.11560 & 14.14256 \\ \(\overline{\kappa}\) & 2.62033 & 3.00823 & 1.97233 & 1.95734 & 1.65229 & 1.85581 & 2.61776 \\ \(\overline{\lambda}\) & -1.67616 & -0.07894 & -0.11438 & -1.59137 & -0.35330 & -0.75516 & -2.16586 \\ \(\zeta\) & 0.00000 & 0.01000 & 0.02000 & 0.00000 & 0.23486 & 0.017442 & 0.000699 \\ \(\mathbf{a_{1}}\) & 0.02169 & 0.08832 & 0.01911 & 0.00000 & 0.00000 & 0.00000 & 0.00000 \\ \(\mathbf{a_{2}}\) & 0.01785 & 0.04637 & 0.02699 & 0.00000 & 0.00000 & 0.00000 & 0.00000 \\ \(\mathbf{b_{1}}\) & 0.73554 & 0.79273 & 0.77259 & 0.00000 & 0.00000 & 0.00000 & 0.00000 \\ \(\mathbf{b_{2}}\) & 0.98545 & 0.98986 & 0.89590 & 0.00000 & 0.00000 & 0.00000 & 0.00000 \\ \(\mathbf{c_{1}}\) & 0.86916 & 0.63710 & 0.70353 & 0.00000 & 8.60000 & 4.80000 & 9.40000 \\ \(\mathbf{m}_{\sigma}\) & 503.61992 & 492.84789 & 502.37396 & 508.19400 & 496.93900 & 500.48700 & 492.73000 \\ \(\mathbf{m}_{\omega}\) & 782.50000 & 782.50000 & 782.50000 & 782.50100 & 782.50000 & 782.18700 & 782.50000 \\ \(\mathbf{m}_{\rho}\) & 770.00000 & 770.00000 & 770.00000 & 763.00000 & 762.46800 & 763.00000 \\ \hline \hline \end{tabular} \end{table} Table 1: Newly generated parameter sets DOPS1, DOPS2 and DOPS3 for the Lagrangian of RMF model as given in Eq.(1). The parameters \(\overline{\kappa}\), \(a_{1}\), and \(b_{1}\) are in fm\({}^{-1}\). The masses \(m_{\sigma}\), \(m_{\omega}\) and \(m_{\rho}\) are in MeV. The mass for nucleon is taken as \(M_{N}=939MeV\). The values of \(\overline{\kappa}\), \(\overline{\lambda}\), \(a_{1}\), \(a_{2}\), \(b_{1}\), \(b_{2}\), and \(c_{1}\) are multiplied by \(10^{2}\). The parameter sets NL3, FSUGarnet, IOPB-1 and Big Apple are also presented. the nucleon mass is found to be similar for all DOPSs parameterizations as shown in Table 3. The SNM properties calculated with NL3, FSUGarnet, IOPB-1, and Big Apple are also shown for comparison. In Fig. (3 and 4), we plot the EOS i.e. pressure as a function of baryon density (\(\frac{\rho}{\rho_{0}}\)) for SNM and pure neutron matter (PNM) using DOPS1,DOPS2 and DOPS3 parameterizations which is in good agreement and lie in the allowed region with the EOS extracted from the analysis of particle flow in heavy-ion collision [85]. These results are also compared with the NL3 and IOPB-1 parameterizations. It can be easily seen that the EOSs for SNM and PNM obtained from DOPS1 and NL3 parameterizations are very stiff and are ruled out by heavy ion collision data. The stiffness of the EOSs for DOPS1 and NL3 parameter sets may be due to the fact that the coupling parameter \(\zeta\) which varies the high density behavior of EOS is taken to be equal to zero. The stiff EOS obtained by DOPS1 is required to account for the predicted supermassive neutron star in GW190814 event. The EOSs calculated using DOPS2 and DOPS3 parameter sets are much softer and lie in the allowed region of heavy ion collision data [85]. The softness of EOSs is attributed to the large value of \(\zeta\). DOPS1 gives Stiffest EOS among DOPS parameter sets and hence a large value of energy density and pressure at a given baryon density. 
The parameter sets DOPS2 and DOPS3 give relatively softer EOSs and comparatively smaller value of pressure and energy density. Due to this Figure 3: (Color online) Variation of Pressure as a function of baryon density for symmetric nuclear matter (SNM) computed with DOPS1, DOPS2 and DOPS3 parameterizations along with NL3 and IOPB-1. The shaded region represents the experimental data taken from the reference [85]. Figure 2: (Color online) Absolute error in the charge root mean square radii (\(\delta R_{ch}\)) plotted against the mass number (A) for newly generated parameter sets DOPS1,DOPS2 and DOPS3. For comparison, the values obtained with parameters NL3 and IOPB-1 are also displayed. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **Parameters** & **DOPS1** & **DOPS2** & **DOPS3** & **NL3** & **FSUGarnet** & **IOPB-1** & **Big Apple** \\ \hline \(\rho_{0}\) (\(\mathbf{fm^{-3}}\)) & 0.150 & 0.148 & 0.148 & 0.148 & 0.153 & 0.149 & 0.155 \\ **E/A** (\(\mathbf{MeV}\)) & -16.073 & -16.073 & -16.037 & -16.248 & -16.229 & -16.099 & -16.339 \\ **K** (\(\mathbf{MeV}\)) & 231.204 & 232.733 & 227.500 & 271.565 & 229.623 & 222.571 & 227.093 \\ **J** (\(\mathbf{MeV}\)) & 31.894 & 31.767 & 31.843 & 37.400 & 30.983 & 33.303 & 31.410 \\ **L** (\(\mathbf{MeV}\)) & 65.590 & 66.018 & 66.743 & 118.563 & 50.925 & 63.850 & 40.339 \\ **M\({}^{\star}\)/M** & 0.604 & 0.611 & 0.605 & 0.595 & 0.578 & 0.595 & 0.608 \\ \hline \hline \end{tabular} \end{table} Table 3: The SNM properties at saturation density for the parameter sets DOPS1, DOPS2 and DOPS3 are compared with that obtained using NL3, FSUGarnet, IOPB-1 and Big Apple parameter sets. \(\rho_{0}\), E/A, K, J, L and \(M^{\star}/M\) denotes the saturation density, Binding Energy per nucleon, Nuclear Matter incompressibility coefficient, Symmetry Energy coefficient, density dependence of symmetry energy and ratio of effective nucleon mass to the nucleon mass respectively. fact, the lines for model DOPS1 are so much different than that of DOPS2 and DOPS3 model parameterization as shown in Fig. (3 and 4). ## V Equation of state and neutron star properties Here we discuss the results for the properties of nonrotating neutron stars for a set of EOSs obtained using different parameterizations in the hadronic phase, uds quark phase, and coexisting phase. We employed the Baym-Pethick-Sutherland (BPS) [86] EOS for low density regime from outer crust baryon density (\(\rho\) = 6.3 \(\times 10^{-12}\)) up to the pasta phase (\(\rho\) = \(9.4\times 10^{-2}\)). The crust region and the core region of the EOSs have been matched by using the cubic interpolation method that offers true continuity between the crust and the core. In Table 4, we list the various EOSs, their particle compositions and properties of non-rotating neutron stars like maximum gravitational mass (\(M_{G}\)), radius \(R_{max}\), (\(R_{1.4}\)) and dimensionless tidal deformability \(\Lambda_{1.4}\) of canonical mass. The properties like mass and radius for the neutron star are calculated by integrating the Tolman-Oppenheimer-Volkoff (TOV) equations [87]. TOV equations are solved for various EOSs consisting of nucleonic and nucleonic with quark matter. The composition at any density is so determined that the charge neutrality and beta equilibrium conditions hold good. A set of EOSs used in the present work has been displayed in Table 4 where DOPS1, DOPS2, and DOPS3 are pure hadronic (nucleonic) EOSs computed with the newly generated DOPSs parameterizations. 
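A minimal sketch of the TOV integration referred to above, in geometric units \(G=c=1\) and for a tabulated EOS supplied as a hypothetical interpolant returning the energy density at a given pressure; unit handling and the crust matching are omitted:

```python
# Integrate the TOV equations outward from the centre until P -> 0.
import numpy as np
from scipy.integrate import solve_ivp

def tov_rhs(r, y, eps_of_p):
    """dP/dr and dm/dr of the TOV equations (G = c = 1)."""
    P, mass = y
    eps = eps_of_p(P)
    dPdr = -(eps + P) * (mass + 4 * np.pi * r**3 * P) / (r * (r - 2 * mass))
    dmdr = 4 * np.pi * r**2 * eps
    return [dPdr, dmdr]

def integrate_star(p_central, eps_of_p, r_max=30.0):
    """Return the radius and gravitational mass for a central pressure."""
    surface = lambda r, y, *a: y[0] - 1e-12 * p_central  # P ~ 0 at surface
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (1e-6, r_max), [p_central, 0.0],
                    args=(eps_of_p,), events=surface, rtol=1e-8)
    return sol.t[-1], sol.y[1][-1]
```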
The EOSs namely DOPS1Q, DOPS2Q, and DOPS3Q composed of nucleons and quarks in beta equilibrium with the NJL coexisting phase are also presented. For the sake of comparison, the results for EOSs calculated with NL3, FSUGarnet, IOPB-1 and Big Apple parameters are also presented. In Fig. (5), we display the variation of pressure with energy density for various EOSs composed of pure nucleonic matter. The results are also compared with NL3 and IOPB-1 parameter sets. The shaded region (magenta color) represents the observational constraints from the ref. [88] and regions (brown and grey) denote the EOS of cold dense matter with 95 % confidence limit [89]. It is clear that EOS computed with DOPS1 parameter set is very stiff like NL3 and is required to account for supermassive neutron star as predicted by GW190814 event and is ruled out by the above observational constraints. The EOSs calculated with DOPS2, DOPS3, and IOPB-1 are relatively softer and are in well agreement with the observational constraints as shown in Fig. (5). In Fig. (6), we plot the results for hybrid EOSs composed of nucleons and quark matter with a coexisting phase in \(\beta\)-equilibrium. The nucleonic part of the EOS is calculated by using DOPSs parameterizations and the pure quark phase is described with three flavor NJL model as discussed in Section II. For the coexisting phase of hadronic matter and quark matter, the Glendenning construction method has been employed along with the global charge neutrality condition. The solid circles represent the boundary of coexisting phase region which consists of nucleons and quarks. The coexisting phase Figure 4: (Color online) Variation of Pressure as a function of baryon density for pure neutron matter (PNM) computed with DOPS1, DOPS2 and DOPS3 parameterizations along with NL3 and IOPB-1. The shaded region represents the experimental data taken from the reference [85]. Figure 5: (Color online) Variation of pressure with energy density for EOSs calculated with DOPS1, DOPS2, DOPS3, NL3 and IOPB-1 parameterizations. The shaded region (magenta) represents the observational constrains from Ref. [88] and the regions (orange and cyan) denote the EOS of cold dense matter with 95 % confidence limit [89]. region lies in the density ranging from 2.88\(\rho_{0}\) - 6.5\(\rho_{0}\), 3.49\(\rho_{0}\)- 6.71\(\rho_{0}\) and 3.43\(\rho_{0}\)- 6.57\(\rho_{0}\) for DOPS1, DOPS2 and DOPS3 respectively. The coexisting phase region for the DOPS1 region is large as compared to others. As DOPS1 parameter set produces a stiff EOS and thus the coexisting phase region lies in the higher pressure region. In Fig. (7), we plot the gravitational mass (\(M_{G}\)) of the neutron star as a function of the baryon density for various EOSs considered in the present work. It is evident from the figure that gravitational mass increases with the increase in baryon density to obtain its maximum value. It is quite obvious that the maximum gravitational mass is corresponding to the stiffest EOS and goes on decreasing as the EOS becomes softer. In Fig. (8), we have also shown the proton fraction as a function of baryon density for various EOSs. Fig. (9), presents our results for the gravitational mass of non-rotating neutron star and its radius for DOPSs parameterizations. The results are also displayed for NL3 and IOPB-1 parameter sets. The maximum mass of non-rotating neutron star obtained for EOS calculated with the DOPS1 parameter set is found to be 2.57\(M_{\odot}\) with a radius of 12.36 Km. 
This maximum mass obtained with the DOPS1 parameter set satisfies the constraint from the GW190814 event, which gives the mass range 2.50 - 2.67 \(M_{\odot}\)[90], indicating that the secondary component might be the heaviest neutron star composed of nucleonic matter. This parameter set also satisfies the recently measured radius of 12.39\({}^{+1.30}_{-0.98}\) km for PSR J0740+6620 by NICER [91]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **No.** & **EOS** & **Particle** & **M(M\({}_{\odot}\))** & **R\({}_{max}\)** & **R\({}_{1.4}\)** & \(\Lambda_{1.4}\) \\ & & **composition** & & (km) & (km) & \\ \hline 1 & DOPS1 & n,p,e,\(\mu\) & 2.57 & 12.36 & 13.61 & 627.37 \\ 2 & DOPS2 & n,p,e,\(\mu\) & 2.12 & 11.56 & 13.24 & 546 \\ 3 & DOPS3 & n,p,e,\(\mu\) & 2.05 & 11.61 & 13.25 & 563.2 \\ 4 & DOPS1Q & n,p,e,\(\mu\),u,d,s & 2.23 & 13.22 & 13.60 & 637 \\ 5 & DOPS2Q & n,p,e,\(\mu\),u,d,s & 1.95 & 12.39 & 13.25 & 527.3 \\ 6 & DOPS3Q & n,p,e,\(\mu\),u,d,s & 1.91 & 12.35 & 13.26 & 547.2 \\ 7 & NL3 & n,p,e,\(\mu\) & 2.73 & 13.10 & 14.65 & 1234.8 \\ 8 & FSUGarnet & n,p,e,\(\mu\) & 2.06 & 11.70 & 12.86 & 624.8 \\ 9 & IOPB-1 & n,p,e,\(\mu\) & 2.16 & 12.22 & 14.09 & 833 \\ 10 & Big Apple & n,p,e,\(\mu\) & 2.6 & 12.41 & 12.96 & 717.3 \\ \hline \end{tabular} \end{table} Table 4: The properties of non-rotating compact stars for the various EOSs along with their particle composition computed with the newly generated parameter sets are presented. Results are also displayed for other parameter sets. M\({}_{G}\)(\(M_{\odot}\)) and R\({}_{max}\) denote the maximum gravitational mass and the radius corresponding to the maximum mass of the non-rotating compact stars, respectively. The values R\({}_{1.4}\) and \(\Lambda_{1.4}\) denote the radius and the dimensionless tidal deformability at 1.4M\({}_{\odot}\). Figure 6: (Color online) Variation of pressure with energy density for the hybrid EOSs DOPS1Q, DOPS2Q and DOPS3Q composed of pure nucleonic matter, quark matter and the NJL coexisting phase. The solid circles represent the boundary of the coexisting phase comprised of nucleons and quarks. Figure 7: (Color online) Gravitational mass (\(M_{G}\)) for the non-rotating compact stars as a function of baryon density for various EOSs. The DOPS2 and DOPS3 sets produce non-rotating neutron stars of maximum masses 2.12\(M_{\odot}\) and 2.05\(M_{\odot}\) with radii of 11.56 km and 11.61 km, respectively. The radii at the canonical mass, \(R_{1.4}\), for DOPS2 and DOPS3 are calculated to be 13.24 km and 13.25 km, respectively. The DOPS2 and DOPS3 parameter sets satisfy the mass constraints from GW170817 [92], PSR 0740+6620, NICER [91; 93] and the radius constraints from NICER [2; 3; 7; 91], and are also very close to the upper limit of the radius constraint [93]. The hybrid EOSs, namely DOPS1Q, DOPS2Q, and DOPS3Q, produce hybrid stars with maximum masses of 2.23, 1.95, and 1.91 M\({}_{\odot}\), respectively. The phase transition from hadron to quark matter lowers the maximum mass as the EOS becomes softer; accordingly, the maximum mass reduces from 2.57 to 2.23 \(M_{\odot}\), 2.12 to 1.95 \(M_{\odot}\) and 2.05 to 1.91 \(M_{\odot}\) for the DOPS1, DOPS2, and DOPS3 parameterizations, respectively. The reduction in the maximum mass is found to be largest for the stiffest EOS, i.e. DOPS1. The hybrid EOS DOPS1Q satisfies the mass constraint of MSP 0740+6620 [1]. The DOPS2Q and DOPS3Q EOSs satisfy the mass constraint from PSR J1614-2230 [19]. These hybrid EOSs satisfy the radius constraint from NICER [7; 91].
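The tidal deformability values discussed next follow from Eq. (18) together with \(\Lambda=(2k_{2}/3)C^{-5}\); a direct transcription, taking the compactness \(C\) and the surface value \(y_{2}=y(R)\) (from integrating Eq. (17) with the TOV equations) as given inputs:

```python
# Electric tidal Love number k2 (Eq. 18) and dimensionless Lambda.
import numpy as np

def love_number_k2(C, y2):
    a = 2 * C * (y2 - 1) - y2 + 2
    poly = (4 * (y2 + 1) * C**4 + (6 * y2 - 4) * C**3
            + (26 - 22 * y2) * C**2 + 3 * (5 * y2 - 8) * C - 3 * y2 + 6)
    denom = 2 * C * poly - 3 * (1 - 2 * C)**2 * a * np.log(1 / (1 - 2 * C))
    return (8.0 / 5.0) * C**5 * (1 - 2 * C)**2 * a / denom

def tidal_deformability(C, y2):
    """Lambda = (2*k2/3) * C^-5."""
    return (2.0 / 3.0) * love_number_k2(C, y2) / C**5
```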
The various observational constraints on maximum mass and radius measurements from the recently observed astrophysical events PSR J0740+6620 [91; 93], PSR J1614+2230 [19], PSR J0348+432 [20] and GW190814 [90] are also shown in Fig. (9). Similar results for the NL3 and IOPB-1 parameter sets are also displayed. The dimensionless tidal deformabilities (\(\Lambda_{1.4}\)) obtained by employing the EOSs DOPS2, DOPS3, DOPS2Q and DOPS3Q considered in the present work lie in the range 527 - 563, as one can see in Table 4. These values satisfy the constraint \(\Lambda_{1.4}\) = 190\({}^{+390}_{-120}\)[17; 94]. The dimensionless tidal deformabilities (\(\Lambda_{1.4}\)) for the EOSs DOPS1 and DOPS1Q are 627 and 637, respectively, and are consistent with the constraint on the dimensionless tidal deformability obtained using Bayesian analysis, \(\Lambda_{1.4}\) = 500\({}^{+186}_{-367}\)[95]. In Fig. (10), we plot \(\Lambda\) as a function of gravitational mass. It is obvious from Fig. (10) that \(\Lambda\) decreases with increasing gravitational mass of the compact star and reduces to a very small value at the maximum mass. The recent observational limits on \(\Lambda_{1.4}\)[17; 95] are also displayed in Fig. (10). ## VI Summary Theoretical studies of dense matter have considerable uncertainty in the high density behavior of the EOSs, largely because of the poorly constrained many-body interactions. The theoretical study of the structure of neutron stars is crucial if new observations of masses and radii are to lead to effective constraints on the EOSs of dense matter. GW170817, GW190814, and PSR J0740 + 6620 are some of the recent astrophysical observations (constraints) to which our study is devoted. This study can further prompt theoretical and experimental (astrophysical observation) investigations. Figure 8: (Color online) Proton fraction plotted against the baryon density. Figure 9: (Color online) Mass-Radius profile for pure nucleonic and hybrid non-rotating neutron stars for various EOSs. The constraints on mass and radius measurements from the recently observed astrophysical events PSR J0740+6620 [2; 3; 91; 93], PSR J1614+2230 [19], PSR J0348+432 [20], GW190814 [90] and GW170817 [92] are also depicted. The region excluded by causality (solid green line), the rotation constraint of neutron star XTE J1739-285 (solid cyan line) and the limits on the mass-radius of the compact star from Ozel's analysis of EXO 0748-676 are also shown. The present study demonstrates that the contributions of the self and mixed interactions of \(\sigma\), \(\omega\), and \(\rho\) mesons up to quartic order are important for varying the high density behavior of the EOS and favor the naturalness behavior of the parameters. We have studied the properties of finite nuclei, infinite nuclear matter, and compact stars with the newly generated parameter sets DOPS1, DOPS2, and DOPS3 of field theoretical relativistic mean field (RMF) models, which include all the possible self and mixed interactions between the scalar-isoscalar (\(\sigma\)), vector-isoscalar (\(\omega\)) and vector-isovector (\(\rho\)) mesons up to quartic order. The generated parameter sets are in harmony with the finite and bulk nuclear matter properties. All the generated parameterizations fit the finite nuclear properties equally well and closely favor the naturalness behavior [81].
The mean absolute errors in the binding energy per nucleon calculated with the DOPS1, DOPS2, and DOPS3 parameterizations for the finite nuclei used in the fit are 0.027, 0.031, and 0.027 MeV respectively. Similarly, the mean absolute errors in the charge rms radii for the DOPS1, DOPS2, and DOPS3 parameterizations for the finite nuclei used in the fit are 0.019, 0.022 and 0.023 fm respectively. The maximum mass of a non-rotating star with the DOPS1 parameterization is found to be around 2.6 M\(\odot\) for pure hadronic matter, which satisfies the possible maximum mass constraint from GW190814 [90], indicating that the secondary component of GW190814 could be a non-rotating neutron star consisting of pure nucleonic matter. This parameter set also satisfies the recently measured constraint on the radius for PSR J0740+6620 of \(12.39^{+1.30}_{-0.98}\) Km by NICER [91]. DOPS2 and DOPS3 sets produce non-rotating neutron stars of maximum mass \(2.12M_{\odot}\) and \(2.05M_{\odot}\) with radii of 11.56 Km and 11.61 Km respectively. The radii at the canonical mass, \(R_{1.4}\), for DOPS2 and DOPS3 are calculated to be 13.24 Km and 13.25 Km respectively. The DOPS2 and DOPS3 parameter sets satisfy the mass constraints of PSR J0740+6620, NICER [91; 93] and the radius constraints from NICER [2; 3; 7; 91], and are also very close to the upper limit of the radius constraint [93]. EOSs computed with the DOPS2 and DOPS3 parameterizations also satisfy the X-ray observational data by Steiner [88] and the recent GW170817 constraint on the maximum mass of a stable non-rotating neutron star in the range 2.01 \(\pm\) 0.04 - 2.16 \(\pm\) 0.03 M\(\odot\) [92]. The hybrid EOSs obtained with the NJL model also satisfy the astrophysical constraint on the maximum mass of a neutron star from PSR J1614-2230 [19]. The value of the dimensionless tidal deformability (\(\Lambda_{1.4}\)) calculated by employing the EOSs DOPS2, DOPS3, DOPS2Q, and DOPS3Q is found to be in the range 527.3 - 563.2, which is consistent with the waveform model analysis of GW170817 [17]. The value of \(\Lambda_{1.4}\) calculated with the EOSs DOPS1 and DOPS1Q is 627.37 and 637 respectively, which is consistent with the constraint on \(\Lambda_{1.4}\) obtained using Bayesian analysis [95].

###### Acknowledgements.

Virender Thakur is highly thankful to Himachal Pradesh University and DST-INSPIRE for providing computational facility and financial assistance (Junior/Senior Research Fellowship).
2305.03031
Magic-angle helical trilayer graphene
We propose helical trilayer graphene (HTG), a helical structure featuring identical rotation angles $\theta\approx 1.5^\circ$ between three consecutive layers of graphene, as a unique and experimentally accessible platform for realizing exotic correlated topological states of matter. While nominally forming a supermoiré (or moiré-of-moiré) structure, we show that HTG locally relaxes into large regions of a periodic single-moiré structure in which $C_{2z}$ is broken, giving rise to flat topological bands carrying valley-Chern numbers $C=\pm(1,-2)$. These bands feature near-ideal quantum geometry and are isolated from remote bands by a large gap $E_{\mathrm{gap}}\sim 100$ meV, making HTG a promising platform for experimental realization of correlated topological states such as integer and fractional quantum anomalous Hall states in $C=1$ and $2$ bands.
Trithep Devakul, Patrick J. Ledwith, Li-Qiao Xia, Aviram Uri, Sergio de la Barrera, Pablo Jarillo-Herrero, Liang Fu
2023-05-04T17:53:40Z
http://arxiv.org/abs/2305.03031v1
# Magic-angle helical trilayer graphene

###### Abstract

We propose helical trilayer graphene (HTG), a helical structure featuring identical rotation angles \(\theta\approx 1.5^{\circ}\) between three consecutive layers of graphene, as a unique and experimentally accessible platform for realizing exotic correlated topological states of matter. While nominally forming a supermoire (or moire-of-moire) structure, we show that HTG locally relaxes into large regions of a periodic single-moire structure in which \(C_{2z}\) is broken, giving rise to flat topological bands carrying valley-Chern numbers \(C=\pm(1,-2)\). These bands feature near-ideal quantum geometry and are isolated from remote bands by a large gap \(E_{\rm gap}\sim 100\) meV, making HTG a promising platform for experimental realization of correlated topological states such as integer and fractional quantum anomalous Hall states in \(C=1\) and \(2\) bands.

The intricate interplay of topology and strong electronic interactions is one of the most fascinating and rapidly evolving areas of modern condensed matter physics. Following the discovery of superconductivity and strong correlations in twisted bilayer graphene (TBG) [1; 2], moire materials have risen to the forefront of both theoretical and experimental condensed matter physics research as an ideal platform for exploring strongly correlated physics in topological bands [3]. In the graphene family, significant progress has also been made in multilayer moire heterostructures, such as alternating twist multilayers [4; 5; 6; 7; 8; 9], or single twist multilayers [10; 11; 12; 13; 14; 15; 16; 17] such as twisted monolayer-bilayer graphene. In parallel, moire heterostructures based on semiconductor transition metal dichalcogenides (TMD) have also revealed a trove of complementary physics ranging from generalized Wigner crystals to topological states [18]. The sheer versatility of the moire platform has led to the experimental realization of an extraordinarily diverse array of physical phenomena. In magic-angle TBG, a manifold of nearly flat isolated single-particle bands enables a unique regime of physics dominated by interactions and band geometry. Perhaps the most fascinating and direct observations of strongly correlated topology are the quantum anomalous Hall (QAH) [19; 20; 21; 22] and fractional Chern insulator (FCI) states [23; 24; 25], lattice analogues of the integer and fractional quantum Hall states driven by intrinsic band geometry rather than Landau level physics [29; 30; 31; 32; 33; 34; 35; 36; 37]. However, these topological states in TBG are often fragile and overpowered by competing non-topological states, likely because they require hBN-alignment [19; 38] or spontaneous breaking of \(C_{2z}\mathcal{T}\) symmetry [39]. The FCI states have thus far only been observed in a substrate aligned sample and at finite magnetic field \(B\sim 5\)T[23]. The apparent requirement of substrate alignment poses a significant experimental challenge that severely limits reproducibility of strongly correlated topology in the TBG platform, and it is not clear whether the FCI state can be made stable at zero field. Very recently, evidence of a zero-field FCI was found in a twisted TMD homobilayer [40; 41]. It is therefore an important theoretical task to identify new platforms in which such topological states may appear most robustly, as well as for the realization of further exotic phases of matter.
We propose "helical trilayer graphene" (HTG), a helical structure featuring identical rotation angles between three consecutive layers of graphene, as a promising and experimentally accessible platform for realizing exotic topological states of matter. As we will elaborate, unrelaxed HTG does not realize a single periodic moire superlattice, but instead realizes a supermoire (or "moire-of-moire") structure [42; 43; 44]). Nevertheless, we show that HTG locally relaxes into large regions hosting a single-moire structure featuring a periodic honeycomb lattice of the AA stacking regions (shown in Figs 1a,b). In these regions, which we call h-HTG, \(C_{2z}\) is broken by a fixed lateral shift \(\mathbf{d}=\pm\mathbf{\delta}\) between the two moire superlattices. Remarkably, we find that at a magic angle \(\theta\approx 1.5^{\circ}\), the moire band structure of h-HTG features a pair of flat, isolated, nearly-degenerate topological bands, shown in Fig2b, with valley-contrasting Chern numbers \(C=\pm(1,-2)\). Since each valley carries a net Chern number, even the band insulators are topological quantum valley-Hall states, and valley polarization alone yields a net Chern number. In particular, these bands feature remarkably uniform charge and Berry curvature distributions, as well as "near-ideal quantum geometry"[24; 28; 45; 46; 47; 48; 49; 50; 51], making HTG a promising platform for realizing FCI states in \(|C|=1\) and \(2\) Chern bands. Furthermore, the topological flat band manifold is isolated from remote bands by a very large gap \(E_{\rm gap}\sim 100\) meV, implying a high degree of stability and providing a potential route to higher temperature QAH and FCI states. Zooming out, HTG realizes large regions of h-HTG domains (and its \(C_{2z}\) related counterpart), which form a triangular tiling on the supermoire scale, as shown in Fig 1a. These domains are large (several hundreds of nanometers) so the bulk properties of h-HTG are accessible via local probes such as scanning single-electron transistors [52; 53; 23] and scanning nano superconducting quantum interference devices [54]. Furthermore, the domain size may be tuned via heterostrain engineering [55; 56], and with a small amount of uniform heterostrain (\(\approx 0.03\%\)), the entire device can relax into a single domain of h-HTG, providing a route to a quantized Hall response measurable by transport. When the domain size is finite, a triangular network of interwoven domain walls is realized as shown in Fig1a. When the domains are tuned to incompressible states at integer or fractional filling, including full and empty filling of the flat bands, the low-energy electronic physics is dominated by the network of gapless domain walls. This system therefore provides a natural realization of chiral or counter-propagating edge network models [57; 58] on the supermoire scale. Taken together, our work demonstrates that HTG is a uniquely exciting platform for realizing robust strongly correlated topology, gapless edge networks, and for exploring their interplay. The key ingredient that enables all this richness is lattice relaxation on the supermoire scale, an aspect which was not fully incorporated in previous theoretical studies. Refs [59; 60] focused on the electronic properties of a different single-moire superlattice defined by \(\mathbf{d}=0\), which we find is energetically unfavorable and is minimized in the relaxed structure of Fig1a. 
Refs [42; 43] examined the electronic properties of the full unrelaxed supermoire structure, thus missing the physics of h-HTG. Various extensions to higher number of layers have also been explored [61; 62; 63]. This paper is structured as follows: We first introduce the HTG structure and demonstrate that relaxation favors the formation of a network of large h-HTG domains. We examine the electronic properties of h-HTG and its symmetries via an effective continuum model description, revealing the advertised magic angle, topological flat bands, and large remote band gap. We then study the model in the "chiral limit"[64], which features exactly flat bands with "ideal quantum geometry"[28; 45; 46; 47; 48; 49; 50; 51], explaining the origin of the magic angle. Finally, we examine the features that make h-HTG promising for the realization of strongly correlated topology, and discuss possible correlated states at integer and fractional fillings. ## I Supermoire reconstruction We consider the HTG structure consisting of three graphene layers with the twist configuration \((\theta_{1},\theta_{2},\theta_{3})=(\theta,0,-\theta)\). In the absence of lattice relaxation, the moire superlattices of the lower and upper two graphene layers are themselves misaligned by an angle \(\theta\), which therefore forms a supermoire structure. This results in a parametric separation of lengthscales: the atomic lengthscale \(a_{0}=2.46\)A is much smaller than the moire lengthscale \(a_{m}=a_{0}/(2\sin\frac{\theta}{2})\) which is in turn much smaller than the supermoire lengthscale \(a_{mm}=a_{0}/(2\sin\frac{\theta}{2})^{2}\). Because of the large supermoire lengthscale, lattice relaxation plays a pivotal role in the physics of HTG and cannot be ignored. This is because even a small amount of atomic lattice relaxation can result in a magnified effect on the moire scale, and hence a doubly magnified effect at the supermoire scale. As an analogy, consider the bilayer case. In TBG at \(\theta\approx 1.1^{\circ}\), lattice relaxation is minor and typically accounted for by a phenomenological parameter \(\kappa\) known as the chiral ratio. At very small angles \(\theta\lesssim 1^{\circ}\) (\(a_{m}\gtrsim 14\) nm), however, lattice relaxation results in severe moire lattice reconstruction [65; 66; 67; 68; 69; 70]: the energetically favorable AB and BA stacking regions are enlarged to form large triangular domains of locally atomically-periodic Bernal stacking regions at the expense of the energetically unfavorable AA regions. In HTG, the analogous effect can now occur at the supermoire scale. Indeed, as we will now demonstrate, while the moire scale lattice reconstruction is minor at \(\theta\approx 1.5^{\circ}\), the supermoire scale \(a_{mm}\sim 300-400\)nm is well in the regime of severe supermoire lattice reconstruction. Figure 1: (a) The relaxed structure of HTG at \(\theta=1.5^{\circ}\), where orange and purple dots show the AA stacking regions of adjacent layer pairs, and the red background indicates the moiré aperiodicity \(A(\mathbf{r})\). The system relaxes to large triangular domains of h-HTG (and its \(C_{2z}\) counterpart, \(\hbar\)-HTG), a periodic moiré superlattice with \(A(\mathbf{r})\approx 0\), separated by a network of domain walls. (b) A zoom in to the h-HTG region and a further zoomed in illustration of the atomic scale structure at high symmetry points. (c) The monolayer graphene BZs for each layer are shown. 
In the h-HTG region, the three \(K\) points relax onto a single line and fold to the \(\kappa,\gamma,\kappa^{\prime}\) points on the mBZ as illustrated on the right. We model in-plane lattice relaxation in HTG using the configuration space method developed in Refs [70; 71]: the total intra- and inter-layer energy, with parameters extracted from ab initio theory, is minimized in configuration space which avoids issues associated with real space incommensurability. From this, we extract a real space map of the local shift field \(\mathbf{u}_{l}(\mathbf{r})\) which indicates the in-plane displacement of the relaxed structure relative to the unrelaxed structure, for each layer \(l\). In Fig. 1a, we show the AA stacking regions of adjacent layer pairs, labeled by AA\({}_{12}\) and AA\({}_{23}\), for the relaxed structure. The dramatic effect of supermoire lattice reconstruction is clearly visible by eye: large domains separated by a triangular network of domain walls. Within each domain, the AA\({}_{12}\) and AA\({}_{23}\) regions come together to form the two sublattices of a periodic moire-scale honeycomb lattice, as shown in Fig1b. These are characterized by a finite lateral shift \(\mathbf{d}=\pm\mathbf{\delta}\) (defined later) between the two moire sublattices; since \(\mathbf{d}\) is opposite in two adjacent domains, a domain wall must form between them. Thus, while HTG nominally forms a supermoire structure, it is energetically favorable to relax to large domains of locally periodic regions. We use the term periodic moire superlattice to refer to these periodic structures, and specifically those realized in the upwards and downwards pointing triangular domains as h-HTG and \(\overline{\text{h}}\)-HTG, respectively. On the atomic scale, the high symmetry stacking regions in h-HTG correspond to AAB, ABA, and BAA stacking regions (Fig1b), while in \(\overline{\text{h}}\)-HTG they are ABB, BAB, and BBA. The relaxed structure therefore completely avoids the energetically costly AAA stacking region. It is interesting to contrast our results to that of alternating-twist trilayer graphene, \(\theta_{l}=(0,\theta,0)\), where the A-twist-A configuration (which does contain an AAA region) was shown, using the exact same method and energetic parameters, to be favorable [72; 73]. Our results demonstrate that the favorable stacking configuration is not obvious _a priori_, and depends on subtle energetic properties. The periodic structure of h-HTG can be understood from the fact that unrelaxed HTG is already very close to a periodic moire superlattice. In Fig. 1c, we illustrate the monolayer Brillouin zone (BZ) of each graphene layer. The moire BZ (mBZ) for each layer pair, the edges of which are determined by \(\mathbf{K}_{2}-\mathbf{K}_{1}\) and \(\mathbf{K}_{3}-\mathbf{K}_{2}\), are incommensurate with each other as they are rotated by a small angle \(\pm\theta/2\). However, this incommensurability can be remedied by a minuscule uniform compression of the outer graphene layers (and/or dilation of the middle layer) by a factor \(\lambda=\cos(\theta)\approx 0.9997\) for \(\theta=1.5^{\circ}\). The result is that the new \(\mathbf{K}_{l}\) points all lie along a vertical line, satisfying \(\mathbf{K}_{2}-\mathbf{K}_{1}=\mathbf{K}_{3}-\mathbf{K}_{2}\), and therefore resulting in a periodic moire superlattice. 
We define the commensurate mBZ as shown in Fig. 1c, in which the \(\mathbf{K}_{1}\), \(\mathbf{K}_{2}\), and \(\mathbf{K}_{3}\) points fold to the \(\kappa\), \(\gamma\), and \(\kappa^{\prime}\) points, respectively. To verify that this is the correct picture, we obtain the local twist angle \(\theta_{l}(\mathbf{r})=\theta_{l}+\sin^{-1}[\frac{1}{2}\nabla\mathbf{\times}\mathbf{u}_{l}(\mathbf{r})]\) and uniform scaling factor \(\lambda_{l}(\mathbf{r})=1+\frac{1}{2}\nabla\cdot\mathbf{u}_{l}(\mathbf{r})\) of the relaxed HTG structure. We then define the "local moire aperiodicity" via \(A(\mathbf{r})\equiv\sum_{l=1,3}|K_{lx}(\mathbf{r})/K_{2x}(\mathbf{r})-1|\), where \(K_{lx}(\mathbf{r})=K\cos[\theta_{l}(\mathbf{r})]/\lambda_{l}(\mathbf{r})\) is the "local \(K_{x}\)", and \(K=\frac{4\pi}{3a_{0}}\). \(A(\mathbf{r})\) is zero if all three \(K\) points lie on a line and non-zero otherwise. The local moire aperiodicity is plotted in the background of Fig. 1a, which shows that the large triangular domains have indeed relaxed to the locally periodic structure with \(A(\mathbf{r})\approx 0\). Thus, the physics within each domain is indeed described by the periodic moire structure with the mBZ illustrated in Fig. 1c. In the domain wall regions and their intersections, \(A(\mathbf{r})>0\) and is much larger than in the unrelaxed structure, \(A_{0}=2|1-\cos\theta|\approx 0.68\times 10^{-3}\). This implies that these regions (which contain the previously studied \(\mathbf{d}=0\) model [59; 60]) actually relax _away_ from the periodic structure, and therefore appear more locally quasicrystalline [74]. We remark that, although the moire period \(a_{m}\) and the domain size (determined by the unrelaxed supermoire period) \(a_{mm}\) considered thus far are both determined by \(\theta\), they can in principle be tuned independently. This is important as it means that the domain size can be controlled while keeping the local physics within each domain fixed. By applying a small global uniform compression to the outer layers via \(\lambda<1\), which may be possible via heterostrain engineering, the domain size \(a_{mm}=a_{0}\lambda/(2|\lambda-\cos\theta|)\) quickly increases and diverges at \(\lambda=\cos\theta\), at which point the entire system is a single domain. This single domain structure has lower elastic energy density due to the absence of domain walls, so we speculate that some degree of this may already occur naturally in finite systems. Finally, we remark that our conclusions about the relaxed structure are qualitatively insensitive to details such as the precise ratio of intra- and inter-layer elastic energies, which is a potential tuning knob in comparing with experiment [73].

## II Electronic structure

Having established the importance of lattice relaxation for the resulting supermoire structure, we now turn to the electronic structure within a h-HTG domain. Rather than deriving a quantitative electronic model based on the relaxed structure [75; 76], which would contain many detail-dependent terms, we instead take an effective approach that captures the essential physics. The starting point for our analysis is the Bistritzer-MacDonald continuum model generalized to three layers. For more than two layers we must take into account the displacements of the two moire superlattices, \(\mathbf{d}_{t,b}\).
\[H_{K}=\begin{pmatrix}-iv\mathbf{\sigma}_{\theta}\cdot\mathbf{\nabla}&T(\mathbf{r}-\mathbf{d}_{t})&0\\ T^{\dagger}(\mathbf{r}-\mathbf{d}_{t})&-iv\mathbf{\sigma}\cdot\mathbf{\nabla}&T(\mathbf{r}-\mathbf{d}_{b})\\ 0&T^{\dagger}(\mathbf{r}-\mathbf{d}_{b})&-iv\mathbf{\sigma}_{-\theta}\cdot\mathbf{\nabla}\\ \end{pmatrix} \tag{1}\] where \(\mathbf{\sigma}_{\theta}=e^{-i\theta\sigma_{z}}(\sigma_{x},\sigma_{y})\) and \[T(\mathbf{r})=w\begin{pmatrix}\kappa U_{0}(\mathbf{r})&U_{-1}(\mathbf{r})\\ U_{1}(\mathbf{r})&\kappa U_{0}(\mathbf{r})\end{pmatrix} \tag{2}\] is the moire tunneling between layers, with \(U_{l}(\mathbf{r})=\sum_{n=0}^{2}e^{\frac{2\pi i}{3}ln}e^{-i\mathbf{q}_{n}\cdot\mathbf{r}}\). The tunneling wavevectors are such that \(q_{n,x}+iq_{n,y}=-ik_{\theta}e^{\frac{2\pi i}{3}n}\), where \(k_{\theta}=2K\sin\frac{\theta}{2}\). We will use \(v=1.03\times 10^{6}\)m/s and \(w=105\)meV, which we believe models trilayer graphene well at these twist angles, capturing some degree of interaction-induced velocity renormalization [73; 74]. The intra-sublattice tunneling strength is suppressed due to lattice relaxation and renormalization by \(\kappa<1\); while hard to estimate precisely[28], TBG studies [69; 70; 75; 76; 77; 78; 79; 80; 81; 82] suggest \(\kappa\approx 0.5-0.8\), and we therefore take a conservative estimate \(\kappa=0.7\) for now. The Hamiltonian for the \(K^{\prime}\) valley can be obtained by time reversal symmetry, and spin degeneracy is implied. This model has a moire translation symmetry with reciprocal lattice vectors \(\mathbf{b}_{1,2}=\mathbf{q}_{1,2}-\mathbf{q}_{0}\) and lattice vectors \(\mathbf{a}_{1,2}=\frac{4\pi}{3k_{\theta}}(\pm\frac{\sqrt{3}}{2},\frac{1}{2})\). The Bloch periodicity of the \(l\)-th layer is given by \(\psi_{\mathbf{k},l}(\mathbf{r}+\mathbf{a})=e^{i(\mathbf{k}-\mathbf{K}_{l})\cdot\mathbf{a}}\psi_{\mathbf{k},l}(\mathbf{r})\), where \(\mathbf{K}_{1,3}=\mp\mathbf{q}_{0}+\mathbf{K}_{2}\) are the \(\kappa\) and \(\kappa^{\prime}\) points of the mBZ and \(\mathbf{K}_{2}\) is the \(\gamma\) point. Since we may always translate the entire system at the moire scale, only \(\mathbf{d}=\mathbf{d}_{t}-\mathbf{d}_{b}\), the offset between the moire patterns, affects the spectrum of the continuum Hamiltonian \(H_{K}\). While a generic \(\mathbf{d}\) breaks most crystalline symmetries, there is an approximate particle-hole-inversion symmetry \(\mathcal{IC}\) which exchanges the top and bottom layers, multiplies the middle layer by \(-1\), takes \(\mathbf{r}\rightarrow-\mathbf{r}\), and anticommutes with the Hamiltonian. This symmetry is exact if we take \(\sigma_{\pm\theta}\rightarrow\sigma\), which is a very good approximation for the small \(\theta\) of interest here, and is easiest to see if one chooses \(\mathbf{d}_{t}=-\mathbf{d}_{b}\). In Fig. 2a, we show the remote band gap, defined as the minimum of the gap between the second and first conduction or valence bands, as a function of \(\mathbf{d}\) for \(\theta=1.5^{\circ}\). For special shifts such as \(\mathbf{d}=0\), or along high symmetry lines, the remote band gap is forced to be zero [43; 59]. For generic shifts, however, the remote band gap is non-zero and maximized for shifts at the corners of the moire unit cell: \(\mathbf{d}=\pm\mathbf{\delta}=\pm\frac{1}{3}(\mathbf{a}_{2}-\mathbf{a}_{1})\). Computing the total Chern number of the first conduction and valence bands, we find \(C_{\rm tot}=\mp 1\) in the regions smoothly connected to the high symmetry \(\pm\mathbf{\delta}\) points.
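The band structures discussed here can be reproduced by diagonalizing Eq. (1) in a truncated plane-wave basis. The following is a minimal numerical sketch under the conventions above; it is not the authors' code, and the cutoff, k-path, and sign/orientation conventions are illustrative choices. Layer envelope momenta are offset by \(\mp\mathbf{q}_{0}\) so that the three Dirac cones fold onto \(\kappa^{\prime}\), \(\gamma\), \(\kappa\), and the small per-layer rotation of \(\sigma\) is neglected, which the text notes is a very good approximation.

```python
import itertools
import numpy as np

theta = np.deg2rad(1.5)
a0 = 2.46                                # graphene lattice constant (Angstrom)
hv = 6.78e3                              # hbar*v in meV*Angstrom for v = 1.03e6 m/s
k_theta = 2*(4*np.pi/(3*a0))*np.sin(theta/2)
E0 = hv*k_theta                          # kinetic scale hbar*v*k_theta (meV)
w, kappa = 105.0, 0.7                    # tunneling (meV) and chiral ratio

# q_n from q_{n,x} + i q_{n,y} = -i k_theta e^{2*pi*i*n/3}, in units of k_theta
qc = [-1j*np.exp(2j*np.pi*n/3) for n in range(3)]
q = np.array([[z.real, z.imag] for z in qc])
b = [q[1] - q[0], q[2] - q[0]]           # moire reciprocal lattice vectors
a1 = 4*np.pi/3*np.array([np.sqrt(3)/2, 0.5])
a2 = 4*np.pi/3*np.array([-np.sqrt(3)/2, 0.5])
d_t, d_b = -(a2 - a1)/3, (a2 - a1)/3     # h-HTG stacking: d_t = -d_b = -delta

Tn = [w*np.array([[kappa, np.exp(-2j*np.pi*n/3)],
                  [np.exp(2j*np.pi*n/3), kappa]]) for n in range(3)]
hop = [(0, 0), (1, 0), (0, 1)]           # g_j - g_i = q_n - q_0 = 0, b1, b2

N = 4                                    # plane-wave cutoff
grid = list(itertools.product(range(-N, N+1), repeat=2))
idx = {mn: i for i, mn in enumerate(grid)}
npw = len(grid)
off = [-q[0], np.zeros(2), q[0]]         # layer envelope-momentum offsets

def ham(k):
    H = np.zeros((6*npw, 6*npw), complex)
    for l in range(3):                   # intralayer Dirac blocks
        for i, (m, n) in enumerate(grid):
            p = k + m*b[0] + n*b[1] + off[l]
            r = 2*(l*npw + i)
            H[r, r+1] = E0*(p[0] - 1j*p[1])
            H[r+1, r] = E0*(p[0] + 1j*p[1])
    for l, d in [(0, d_t), (1, d_b)]:    # interlayer tunneling blocks
        for i, (m, n) in enumerate(grid):
            for nq, (dm, dn) in enumerate(hop):
                j = idx.get((m + dm, n + dn))
                if j is None:
                    continue             # neighbor outside the cutoff
                blk = np.exp(1j*q[nq] @ d)*Tn[nq]   # moire-shift phase
                ri, rj = 2*(l*npw + i), 2*((l+1)*npw + j)
                H[ri:ri+2, rj:rj+2] += blk
                H[rj:rj+2, ri:ri+2] += blk.conj().T
    return H

gam, kap, kapp = np.zeros(2), -q[0], q[0]     # gamma, kappa, kappa' points
path = np.concatenate([np.linspace(kap, gam, 40),
                       np.linspace(gam, kapp, 40)])
bands = np.array([np.linalg.eigvalsh(ham(k)) for k in path])
mid = bands.shape[1]//2                       # charge neutrality index
print("two-band width ~", np.ptp(bands[:, mid-1:mid+1]), "meV")
print("remote gap     ~", bands[:, mid+1].min() - bands[:, mid-1:mid+1].max(), "meV")
```

Near \(\theta=1.5^{\circ}\) this should show two narrow central bands separated from the remote bands by a gap of order tens of meV, in qualitative agreement with Fig. 2b; the Chern numbers quoted in the text require an additional Berry-phase computation, sketched later in the text.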
The corresponding bands in the \(K^{\prime}\) valley have opposite Chern numbers by time-reversal symmetry. The band structure at \(\mathbf{d}=\mathbf{\delta}\), shown in Fig. 2b, demonstrates the existence of two isolated nearly-flat bands carrying net topology. We now focus on the properties of the h-HTG periodic moire superlattice, obtained by setting \(\mathbf{d}=\mathbf{\delta}\). The \(\overline{\rm h}\)-HTG model, obtained by setting \(\mathbf{d}=-\mathbf{\delta}\), is related by \(C_{2z}\mathcal{T}=\sigma_{x}\mathcal{K}\), where \(\mathcal{K}\) is complex conjugation, which takes \(\mathbf{r}\rightarrow-\mathbf{r}\) and leaves \(\mathbf{k}\) invariant. At this special value of \(\mathbf{d}\) the model has additional symmetries. Because \(\mathbf{\delta}\), as the corner of the unit cell, is a \(C_{3z}\) invariant point, the resulting moire superlattice is \(C_{3z}\) symmetric. Furthermore, because \(\mathbf{\delta}\rightarrow-\mathbf{\delta}\) under \(x\rightarrow-x\), the model is additionally symmetric under \(C_{2y}\), which exchanges the top and bottom layers and also exchanges valleys. The antiunitary mirror symmetry \(C_{2y}\mathcal{T}\) acts within a valley. Fig. 2c shows the density of states (DOS) of h-HTG as a function of \(\theta\). The most prominent feature is the appearance of the advertised magic angle at \(\theta\approx 1.5^{\circ}\), where two topological flat bands appear at the charge neutrality point. At the magic angle, the DOS exhibits a sharp peak (\(\text{DOS}>20\text{eV}^{-1}\text{nm}^{-2}\)), the remote band gap is large \(E_{\text{gap}}\approx 85\)meV, and the dispersion, half the total bandwidth of both bands, is small \(W\approx 15\)meV. These values should be contrasted with a typical interaction scale; this can be obtained by scaling up the \(20-30\)meV estimate[2; 83] of TBG interactions by \(\approx 1.5\), due to the larger angle, yielding the range \(30-45\)meV. This ordering of energy scales is ideal for exploring strongly correlated topology, in which the large \(E_{\text{gap}}\) essentially "locks in" the quantum geometry of the flat band manifold, within which interactions are dominant.

Figure 2: (a) The remote band gap \(E_{\text{gap}}\) multiplied by \(C_{\text{tot}}=\pm 1\) of \(H_{K}\) is shown as a function of \(\mathbf{d}\), the relative offset between the moiré lattices (illustrated in inset), for \(\theta=1.5^{\circ}\) and \(\kappa=0.7\). (b) The moire band structure for h-HTG, corresponding to \(\mathbf{d}=\mathbf{\delta}\). (c) The density of states for h-HTG as a function of \(\theta\).

## III Chiral limit

The origin of these flat bands can be understood from the chiral model[64], obtained by setting \(\kappa=0\), which we now analyze in detail. Chiral models, with exactly flat bands[84; 85; 86; 87; 88; 89; 90; 64; 91; 92], also motivate a "sublattice-Chern
We therefore have, in the basis where \(\sigma_{z}=\text{diag}(1,1,1,-1,-1,-1)\), \[\begin{split} H_{K}&=vk_{\theta}\begin{pmatrix}0&D^{ \dagger}\\ D&0\end{pmatrix},\\ D&=\begin{pmatrix}-2ie^{i\zeta}\overline{\partial}&\alpha U_{-1}(\mathbf{r} )&0\\ \alpha U_{0}(-\mathbf{r})&-2i\overline{\partial}&\alpha U_{0}(\mathbf{r})\\ 0&\alpha U_{-1}(-\mathbf{r})&-2ie^{-i\zeta}\overline{\partial}\end{pmatrix}.\end{split} \tag{3}\] Here we have nondimensionalized the Hamiltonian using \(\mathbf{r}\rightarrow\mathbf{r}k_{\theta}\), \(\overline{\partial}\rightarrow\overline{\partial}/k_{\theta}\) where \(\overline{\partial}=\frac{1}{2}(\partial_{x}+i\partial_{y})\). Nominally \(\zeta=\theta\), but it can be instructive to imagine tuning it independently; none of our conclusions depend on its precise value. As we tune the dimensionless tunneling strength \(\alpha=w/vk_{\theta}\sim 1/\theta\), we find a sequence of magic angles, listed in Table 1, at which we obtain _exactly_ flat bands at zero energy, seen by the vanishing bandwidth in Fig3a. Due to the chiral symmetry of the Hamiltonian \(\{H,\sigma_{z}\}=0\), we may label zero modes by their eigenvalue under \(\sigma_{z}\), such that the flat bands correspond to zero modes of \(D\) polarized on the A sublattice and of \(D^{\dagger}\) on the B sublattice. As shown in Fig3b, odd parity magic angles have two flat bands per spin per valley, while even-parity magic angles have four together with a dispersive Dirac cone at \(\Gamma\) (for \(\zeta=0\)). Interestingly, the even magic angle dispersive cone is gapped out by \(\zeta\neq 0\) but the four exactly flat bands remain. The distinction between even and odd magic angles, together with ratios between magic \(\alpha\) that do not match those of TBG, suggest that the magic angles here do not descend from those of TBG. This is in contrast to the chiral magic angles of twisted chirally stacked multilayers[4; 46], alternating twist multilayers[4], and the \(\mathbf{d}=0\) periodic HTG [60], which can all be related to TBG. A detailed understanding of the mathematical structure of this model is an interesting subject beyond the scope of this work. Let us focus on the first magic angle \(\alpha_{1}\approx 0.377+O(\zeta)\) which is the most experimentally relevant. Here we obtain two exactly flat bands; the A sublattice band has \(C_{A}=1\) and the B sublattice band has \(C_{B}=-2\). To understand the emergence of flat bands and their Chern numbers we begin with the 3 Dirac cones associated with the \(\alpha=0\) decoupled limit. These cones are protected and pinned to zero energy by chiral symmetry, pinned to the \(\kappa,\gamma,\kappa^{\prime}\) points by \(C_{3}\) symmetry, and all have positive chirality. The net chirality of three implies that the flat bands obtained by gapping all three cones with a \(\sigma_{z}\) mass results in bands that differ in Chern number by \(C_{A}-C_{B}=3\)[50]. The low-energy bands must therefore have a net topology \(C_{A}+C_{B}\neq 0\). We now analytically derive the exact flatness and Chern numbers at the first magic angle. Let us focus on the \(A\) sublattice, \(\psi=\psi_{A}\). From the decoupled limit, we find that the \(C_{3z}\) representation of the zero mode \(\gamma\)-point wavefunction is such that \(\psi_{\gamma 1,3}(\mathbf{r}=0)=0\) but \(\psi_{\gamma 2}(0)\) is in general nonzero. 
In a \(C_{2y}\mathcal{T}\) symmetric gauge, \(\psi_{\gamma 2}(0)\) is a signed real number, and it is natural for it to cross through zero [92; 49]; we find that it does at \(\alpha=\alpha_{1}\). Such a crossing point is stable to \(C_{3z}\) and \(C_{2y}\mathcal{T}\) preserving perturbations, and leads to exactly flat bands. At the crossing point, the entire \(\gamma\) point zero-mode wavefunction vanishes at \(\mathbf{r}=0\). We may therefore write[84; 64] \[\psi_{\mathbf{k}}(\mathbf{r})=e^{\frac{i}{2}\mathbf{\overline{k}}z}\frac{\sigma(z+ik)}{\sigma(z)}\psi_{\gamma}(\mathbf{r}) \tag{4}\] as a zero mode wavefunction at wavevector \(\mathbf{k}\), measured from the \(\gamma\) point for concreteness. Here, \(\sigma(z)=\sigma(z|a_{1},a_{2})\) is the (modified [95]) Weierstrass sigma function, which satisfies \(\sigma(-z)=-\sigma(z)\) and \(\sigma(z+a_{1,2})=-e^{\frac{1}{2}\overline{a}_{1,2}\left(z+\frac{a_{1,2}}{2}\right)}\sigma(z)\). The pole associated with the zero of the sigma function is cancelled by the zero of \(\psi_{\gamma}\). Here we have used the complex number notation \(k=k_{x}+ik_{y}\), \(z=x+iy\), and \(a=a_{x}+ia_{y}\). We note that (4) may be interpreted as the wavefunction of a Dirac particle moving in an effective inhomogeneous magnetic field of \(2\pi\) flux per unit cell, where \(B_{\rm eff}(\mathbf{r})=\nabla^{2}\log\frac{|\sigma(z)|}{||\psi_{\gamma}(\mathbf{r})||}\)[24; 48], up to an unimportant \(\mathbf{k}\)-independent normalized layer vector. The fact that \(\psi\) has a single \(k\)-space zero, for each \(\mathbf{r}\), of positive winding implies that the band has \(C_{A}=1\) (there is a winding by \(2\pi\) around the mBZ)[48]. It is also possible to compute the Chern number from the \(k\)-space quasiperiodicity of (4) [48; 49]. Since \(D\) and \(D^{\dagger}\) have the same singular values when there is no external magnetic flux[91], the B sublattice must also have an exact band of zero modes, from which \(C_{A}-C_{B}=3\) implies \(C_{B}=-2\).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \(\alpha_{1}\) & \(\alpha_{2}\) & \(\alpha_{3}\) & \(\alpha_{4}\) & \(\alpha_{5}\) & \(\alpha_{6}\) & \(\alpha_{7}\) & \(\alpha_{8}\) & \(\alpha_{9}\) \\ \hline 0.377 & 1.297 & 1.755 & 2.414 & 2.991 & 3.628 & 4.213 & 4.840 & 5.430 \\ \hline \end{tabular} \end{table} Table 1: List of magic angles for the chiral model with \(\zeta=0\).

Figure 3: (a) The dispersion \(W\) and remote gap \(E_{\text{gap}}\) for the chiral model with \(\zeta=0\), as a function of the dimensionless tunneling parameter \(\alpha\). (b) The band structure at \(\alpha_{1}\) (left) and \(\alpha_{2}\) (right). (c) The charge density \(n(\mathbf{r})\) and Berry curvature \(F(\mathbf{k})\) in the Chern basis for the \(|C|=1,2\) bands, where \(A_{uc}\) and \(A_{BZ}\) are the moiré unit cell and mBZ areas respectively. (d) The evolution of \(W\), Berry curvature deviation \(\delta F\), and trace condition violation \(\overline{T}\), as a function of \(\kappa\) in the Chern basis, and corresponding TBG values for comparison.
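The magic couplings in Table 1 can be located numerically. Below is a crude sketch reusing the hypothetical `ham(k)` builder from the earlier snippet, with `kappa` set to zero and the tunneling blocks rebuilt for each trial coupling \(\alpha=w/vk_{\theta}\); the width of the two central bands collapses at the magic values. Near the even magic angles the extra dispersive cone at \(\gamma\) sits among the central bands, so those minima are less clean.

```python
# chiral limit: rebuild the tunneling blocks for each trial alpha and track
# the total width of the two central bands at a few sample momenta
kappa = 0.0
ks = [np.zeros(2), -q[0], -0.5*q[0], 0.25*(b[0] + b[1])]
for alpha in np.linspace(0.30, 1.40, 45):
    Tn[:] = [alpha*E0*np.array([[kappa, np.exp(-2j*np.pi*n/3)],
                                [np.exp(2j*np.pi*n/3), kappa]])
             for n in range(3)]
    e = np.array([np.linalg.eigvalsh(ham(k)) for k in ks])
    print(f"alpha = {alpha:.3f}  central width = {np.ptp(e[:, mid-1:mid+1]):.3f} meV")
```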
We also find that the Berry curvature distribution for both bands feature a multi-peak structure in momentum space, and are relatively uniform (with the A sublattice \(C=1\) band being extremely uniform). These are both important features reminiscent of the lowest Landau level which persist to larger \(\kappa\), as we discuss later. All the Chern basis bands, labeled by valley, spin, and sublattice, are explicitly summarized in Table 2. ## IV Correlated states We now discuss the correlated physics and band geometry of these bands. At integer filling, generalized quantum Hall ferromagnets, obtained by filling any combination of bands within the Chern basis (Table 2), are very energetically competitive; they are exact eigenstates in the chiral limit, and exact zero energy ground states when bare and Hartree dispersions can be neglected[93; 96]. At filling \(\nu=\pm 3\), measured relative to charge neutrality such that the empty and full flat bands have \(\nu=\mp 4\) respectively, the generalized ferromagnetic state necessarily carries non-zero Chern number, though we will later discuss other states which avoid this restriction. Coherence between bands of differing Chern number is heavily penalized because such order parameters must have vortices in the Brillouin Zone, equal in number to the difference in Chern number[21]. Since no two Chern numbers are the same (including the other valley we have \(C=\pm 1,\mp 2\)), we therefore do not expect intervalley coherence (IVC) in this system. Indeed, while TBG has an approximate U\((4)\times\) U\((4)\) symmetry[93; 97; 98] that can rotate between IVC and valley diagonal orders, we have the U\((2)\times\) U\((2)\times\) U\((2)\times\) U\((2)\) subgroup consisting of spin and charge rotations for each sublattice and each valley. For nonzero bare dispersion, or outside the chiral limit, this symmetry is broken to U\((2)\times\) U\((2)\) consisting of spin and charge rotations in each valley. A rich phase diagram of spontaneous symmetry breaking has been observed in TBG [99; 100; 52], and we expect similar physics to arise here. In this sense, the Chern basis is meaningful and important even outside the chiral limit [101; 49; 93]; to access it one can diagonalize the band projected sublattice operator \(\Gamma_{\alpha\beta}=\bra{u_{\mathbf{k}\beta}}\sigma_{z}\ket{u_{\mathbf{k}\alpha}}\), where \(\ket{u_{\mathbf{k}\alpha}}\) is the Bloch wavefunction at wavevector \(\mathbf{k}\) associated to band \(\alpha\). Zero mode bands of chiral Hamiltonians of the form (3) have "ideal quantum geometry"[24; 28; 45; 46; 47; 48; 49; 50; 51] for fractional Chern insulators in a sense that we now describe. Because \(D\) only depends on antiholomorphic derivatives, the zero mode band of \(D\) maps to itself under multiplication by \(z=x+iy\); we have \(zP=PzP\) where \(P\) is the projector onto the band and \(z=x+iy\) can be thought of as a vortex operator; this condition is referred to as "vortexability"[45]; vortices may be added while remaining within the band of interest. This condition may be iterated to replace \(z\) with any holomorphic function \(f(z)\). In momentum space, vortexability is equivalent to the ability to choose a gauge where the wavefunctions \(u_{k}=e^{-i\mathbf{k}\cdot\mathbf{r}}\psi_{\mathbf{k}}\) are holomorphic in \(k_{x}+ik_{y}\)[45; 102; 103; 104; 105; 106]. 
It is also equivalent to the momentum space "trace condition"[31; 36; 107; 29], the saturation of the inequality \(\overline{T}=\int d^{2}\mathbf{k}({\rm tr}\,g_{\rm FS}(\mathbf{k})-|F(\mathbf{k})|)\geq 0\), where \(g_{\rm FS}\) is the Fubini-Study metric and \(F(\mathbf{k})\) is the Berry curvature. We say a system has ideal quantum geometry if \(\overline{T}=0\). Ideal quantum geometry is intimately related to fractional Chern insulator ground states; we may begin with an ordinary many-body state \(\ket{\Psi_{0}}\), e.g. the fully filled state, and create the state [45; 46; 47] \[\ket{\Psi_{2s}}=\prod_{i<j}(z_{i}-z_{j})^{2s}\ket{\Psi_{0}} \tag{5}\] which lies entirely within the band of interest due to the vortexability condition. This construction generalizes that of the \(\nu=1/(2s+1)\) Laughlin state but also applies to bands with \(C>1\). If the band is flat and the band-projected interaction is sufficiently short-ranged and normal ordered with respect to an empty "vacuum", then \(\ket{\Psi_{2s}}\) is the unique ground state at its filling factor[24; 45; 48; 108; 109]. The previously mentioned charge density and Berry curvature homogeneity further help with stability to long-ranged interactions, which can be motivated by analogy with the lowest Landau level [107; 36; 110]. Additionally, the interaction generated Hartree dispersion obtained by integrating out already-filled bands is much smaller if charge density is peaked at more than one point[50]. This is indeed the case here, especially relative to TBG where an AA-peaked charge density leads to a strong Hartree dispersion that works against FCIs [28].

\begin{table} \begin{tabular}{|c|l|l|} \hline Band & h-HTG & \(\overline{\text{h}}\)-HTG \\ \hline \((K,s,A)\) & \(C=1\) & \(C=2\) \\ \((K,s,B)\) & \(C=-2\) & \(C=-1\) \\ \((K^{\prime},s,A)\) & \(C=-1\) & \(C=-2\) \\ \((K^{\prime},s,B)\) & \(C=2\) & \(C=1\) \\ \hline \end{tabular} \end{table} Table 2: Table of Chern basis bands labeled by (valley, spin, sublattice), showing their Chern numbers, in the h-HTG and \(C_{2z}\)-related \(\overline{\text{h}}\)-HTG structure.

Moving away from the chiral limit, we show the evolution of various geometric indicators in Fig3d. We first observe the dispersion \(W\) increasing with \(\kappa\), and we identify the optimal magic angle \(\alpha_{1}^{\rm opt}(\kappa)\) by the minimum in \(W\). We show the Berry curvature deviation \(\delta F=\left(\int d\mathbf{k}[\frac{1}{2\pi}F(\mathbf{k})-C]^{2}\right)^{\frac{1}{2}}\) and trace condition violation \(\overline{T}\) of the two bands in the Chern basis, at \(\alpha_{1}^{\rm opt}(\kappa)\), as a function of \(\kappa\), along with the corresponding values calculated for TBG. We find that \(\delta F\) shows remarkably weak dependence on \(\kappa\), and is significantly lower than TBG for both bands at realistic values of \(\kappa\). Turning to \(\overline{T}\), we find that the \(|C|=2\) (1) band is uniformly more (less) ideal than TBG, but all are of similar magnitude. Overall, these favorable quantum geometric indicators are highly suggestive of an FCI ground state at fractional filling, and call for a detailed numerical analysis. Many correlated states have been predicted for \(C=2\) bands, from the starting point of ideal geometry [47; 111].
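The geometric indicators above can also be sampled numerically. A sketch, again reusing the hypothetical `ham(k)` builder from the earlier snippet: the plaquette product of link-overlap determinants gives a gauge-invariant lattice approximation to the total Berry curvature \(F(\mathbf{k})\) of the two central bands, with no gauge fixing or BZ-boundary matching needed for interior plaquettes. The overall sign depends on orientation conventions, and resolving the two Chern-basis bands separately would additionally require diagonalizing the projected sublattice operator \(\Gamma\) described earlier.

```python
def central_states(k):
    """Eigenvectors of the two central (flat) bands at momentum k."""
    _, v = np.linalg.eigh(ham(k))
    return v[:, mid-1:mid+1]

def berry_curvature(k, dk=1e-3):
    """Lattice Berry curvature of the central band pair on a small plaquette."""
    corners = [k, k + np.array([dk, 0]), k + np.array([dk, dk]),
               k + np.array([0, dk])]
    u = [central_states(c) for c in corners]
    phase = 1.0 + 0j
    for a in range(4):   # product of U(1) link determinants around the plaquette
        phase *= np.linalg.det(u[a].conj().T @ u[(a + 1) % 4])
    return np.angle(phase)/dk**2   # F(k), up to an overall sign convention

# sample F on a coarse interior grid of the mBZ to eyeball its uniformity
samples = [berry_curvature(x*b[0] + y*b[1])
           for x in np.linspace(0.05, 0.95, 8)
           for y in np.linspace(0.05, 0.95, 8)]
print("mean F = %.3f, std F = %.3f" % (np.mean(samples), np.std(samples)))
```

With \(F(\mathbf{k})\) in hand, \(\delta F\) follows by quadrature over the mBZ. We now return to the correlated possibilities in these higher Chern bands.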
By doubling the unit cell, the \(C=2\) band may be split [112; 113; 114] into two \(C=1\) bands that are each individually vortexable [47; 111] and related by translation symmetry; linear combinations of these \(C=1\) subbands lead to a nearly-degenerate manifold of charge and spin density waves that can occur at half-integer fillings [47; 111]. These states are guaranteed to be stabilized in the limit of short-ranged interactions when the bands have ideal geometry, similar to FCIs, but are numerically present for realistic parameters as well [27]. A \(C=1\) insulator of this nature at half-integer filling was observed in twisted monolayer-bilayer graphene [10]. We expect that this could be the case in h-HTG as well. By filling two of the \(|C|=1\) sub-bands, say in opposite valleys, it is possible to also obtain integer filling states with unexpected properties, such as a \(\nu=3\) insulator with net Chern number zero. At fractional filling, there are a variety of fractional Chern insulating states that have been proposed in higher Chern bands [112; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123]. Many can be constructed from (5) using different parent states \(|\Psi_{0}\rangle\). We refer readers to Refs. [47; 111] for more details. We briefly discuss the incommensurate Kekule spiral state, which was recently found to be important in strained TBG [124; 125; 126; 127]. We expect that such incommensurate orders are less likely in h-HTG since their energetics appear to rely on a large, peaked, Hartree dispersion. The large Hartree dispersion originates from particular features of TBG wavefunctions, such as a single peak of the charge density per unit cell, that are not shared by h-HTG. Instead, we expect the previously discussed topological states to be favored. We have highlighted that the topological nature and nearly ideal quantum geometry of h-HTG lead to a wide variety of potential correlated states, from generalized quantum Hall ferromagnets to topological density waves to fractional Chern insulators. Due to the proximity to the \(\kappa=0\) chiral limit, there is a pathway towards understanding the competition between these various correlated states. The investigation of the detailed energetic competition of such correlated states is an important subject for future works. ## V Discussion Having discussed extensively the rich correlated physics of h-HTG, we now briefly discuss the global physics when the domain size is finite. In this case, HTG separates into large domains of h-HTG and \(\overline{\rm h}\)-HTG, related by the valley-preserving \(C_{2z}\mathcal{T}\) transformation which flips the sign of all Chern numbers. When the domains are tuned to incompressible fillings, the low energy physics is dominated by the network of domain wall states which provide a realization of Chalker-Coddington type network models [57; 58]. At the full or empty band insulating states \(\nu=\pm 4\), stabilized by the \(E_{\rm gap}\sim 100\)meV remote gap, each domain is a quantum valley Hall state with net valley-Chern number that is opposite in adjacent domains: we therefore expect the appearance of gapless edge modes counter-propagating along the network of domain walls. Similar physics of counter-propagating edge networks has been actively explored in marginally twisted \(\theta\ll 1^{\circ}\) TBG [128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143], where a strong vertical displacement field is needed to gap the AB and BA domains. 
In HTG, the large intrinsic band gap eliminates the need for a displacement field. More interesting possibilities also arise here due to the correlated topological states. When the system is valley polarized into QAH domains, adjacent domains carry opposite Chern numbers thus realizing a network of gapless chiral edge modes. Alternatively, adjacent domains may have opposite valley polarizations; the competition between these possibilities depends on detailed domain wall energetics [144] and may be probed by an out-of-plane magnetic field. This situation is similar to the Chern mosaic in hBN-aligned magic-angle TBG [145; 54]. At fractional filling, an even more exotic possibility is a network of FCI edge states. A detailed theory of the emergent gapless networks and their interplay with the correlated topological states in HTG is an important subject for future works. While we have mainly discussed HTG in the context of TBG, flat Chern bands have also been studied in a variety of other \(C_{2z}\)-breaking graphene structures such as twisted chirally-stacked multilayers[146; 147; 148; 149; 150; 151; 152] and periodically strained graphene[153; 154; 155; 156]. Twisted monolayer-bilayer graphene in particular also has \(C=\pm(1,-2)\). All of these systems have natural "chiral limits"[64] where the band-wavefunctions have ideal quantum geometry for fractional Chern insulators [46, 47, 50, 111, 152]. However, for monolayer-bilayer and bilayer-bilayer the exact chiral limit requires ignoring trigonal warping, and in its presence a large displacement field must be added to flatten the bands and reveal correlated states in experiment [10, 11, 12, 13, 14, 15, 16, 17]. In total, these realities likely impair the ideal quantum geometry for fractional Chern insulators in these systems. While strained graphene's chiral-flat \(|C|=1\) band is more robust [50], realizing the strain requires nanorod engineering [157] or buckling over a \(C_{2z}\)-breaking substrate [158]; the latter has only been achieved with the metallic NbSe\({}_{2}\)[158], which precludes tuning the density with a gate. Furthermore, HTG is unique in its supermoire scale domain walls and gapless edge-modes between \(C_{2z}\mathcal{T}\) related topological states. We have therefore demonstrated that HTG is a unique and exciting platform for realizing strongly correlated topological states, without the need for substrate alignment. While unrelaxed HTG forms a moire-of-moire pattern, we have shown that lattice relaxation favors the formation of h-HTG, a periodic \(C_{2z}\)-breaking single-moire superlattice. We identify the relevant continuum model description for h-HTG and identify a magic angle at which a pair of flat topological bands with near-ideal quantum geometry isolated by a large remote band gap emerges. These flat bands can be traced back to a chiral limit, which enables a controlled starting point for a strong coupling approach to correlated topological states. Our work lays the foundation for future theoretical and experimental studies of the strongly correlated topological physics in this platform. ###### Acknowledgements. We thank Ziyan Zhu for useful discussions and for collaboration on a related project. TD thanks Yves Kwan for helpful discussions. PJL thanks Eslam Khalaf, Ashvin Vishwanath, Daniel Parker, Junkai Dong, Grigory Tarnopolsky, and Qiang Gao for collaborations on related projects. This work was supported by the Air Force Office of Scientific Research (AFOSR) under award FA9550-22-1-0432. 
This work was partially supported by the Army Research Office MURI W911NF2120147, the 2DMAGIC MURI FA9550-19-1-0390, the National Science Foundation (DMR-1809802), the STC Center for Integrated Quantum Materials (NSF grant no. DMR-1231319), and the Gordon and Betty Moore Foundation's EPiQS Initiative through grant GBMF9463 to PJH. AU acknowledges support from the MIT Pappalardo Fellowship and from the VATAT Outstanding Postdoctoral Fellowship in Quantum Science and Technology.
2304.03318
Explainable AI And Visual Reasoning: Insights From Radiology
Why do explainable AI (XAI) explanations in radiology, despite their promise of transparency, still fail to gain human trust? Current XAI approaches provide justification for predictions, however, these do not meet practitioners' needs. These XAI explanations lack intuitive coverage of the evidentiary basis for a given classification, posing a significant barrier to adoption. We posit that XAI explanations that mirror human processes of reasoning and justification with evidence may be more useful and trustworthy than traditional visual explanations like heat maps. Using a radiology case study, we demonstrate how radiology practitioners get other practitioners to see a diagnostic conclusion's validity. Machine-learned classifications lack this evidentiary grounding and consequently fail to elicit trust and adoption by potential users. Insights from this study may generalize to guiding principles for human-centered explanation design based on human reasoning and justification of evidence.
Robert Kaufman, David Kirsh
2023-04-06T18:30:27Z
http://arxiv.org/abs/2304.03318v1
# Explainable AI And Visual Reasoning: Insights From Radiology

###### Abstract

Why do explainable AI (XAI) explanations in radiology, despite their promise of transparency, still fail to gain human trust? Current XAI approaches provide justification for predictions; however, these do not meet practitioners' needs. These XAI explanations lack intuitive coverage of the evidentiary basis for a given classification, posing a significant barrier to adoption. We posit that XAI explanations that mirror human processes of reasoning and justification with evidence may be more useful and trustworthy than traditional visual explanations like heat maps. Using a radiology case study, we demonstrate how radiology practitioners get other practitioners to see a diagnostic conclusion's validity. Machine-learned classifications lack this evidentiary grounding and consequently fail to elicit trust and adoption by potential users. Insights from this study may generalize to guiding principles for human-centered explanation design based on human reasoning and justification of evidence.

## 1 Introduction

AI is playing an increasingly important role in healthcare, particularly for medical image classification [13, 30]. Recent AI-based systems demonstrate greater accuracy than human radiologists [14, 39], dermatologists [1], and oncologists [32] at detecting certain pathologies. Despite their promise, few AI image classification systems make it to real-world deployment [2, 26]. A major impediment to adoption is that practitioners want justification for a system's decisions; they don't like to rely on blind faith. Without knowing how an AI arrives at its conclusion, a diagnosis is difficult to trust [11, 12, 21]. This lack of transparency is a well-documented problem, as is a lack of trust in AI assistants across healthcare [3]. In radiology specifically, prior work shows AI-based diagnostic tools lacking transparency increase diagnostic ambiguity and time to diagnosis, factors which are weighed against potential benefits like more accurate diagnoses when making adoption decisions [21]. The purpose of _Explainable_ AI (XAI) is, first and foremost, to engender trust through transparency [11, 12, 22, 7]. Why then do explainable AI explanations in radiology, despite their promise of transparency, still fail to gain human trust? In this work, we present findings derived from an ethnographic study of 13 radiologists and radiology residents explaining their findings and impressions of chest x-rays to other radiology practitioners. We found that XAI explanations of chest x-rays differ from human explanations in the way that visual reasoning and evidence are communicated. That is, they lack intuitive coverage of the evidentiary basis for a given conclusion. Further, these explanations fail to account for the complex contextually-derived needs of their diverse users, including support for goal-oriented tasks like learning [23]. We posit that some XAI inadequacies may be addressed by matching explanations to the reasoning and justification processes of the systems' users, supporting the formation of an accurate diagnostic narrative and allowing them to cross-examine the XAI. This approach conforms the XAI to human cognition and ensures synchrony with the contextual needs of the specific task being automated. If accomplished, reasoning-informed XAI may improve XAI usefulness and _calibrated_ trust.
## Visual Reasoning

Humans get other humans to see the validity of an interpretation by explaining _why_ they see what they see. That is, they are familiar with the process of directing attention to relevant details, providing evidence for claims, and linking what they see to why it matters [37]. Machine-learned classifications lack this evidentiary understanding, with the consequence that popular visualizations such as heat maps do not meet many users' explanatory needs [35, 16]. Visual reasoning refers to the process of analyzing visual information and deriving insights from it [37]. By understanding end users' reasoning and justification procedures, XAIs can support the evidentiary needs of even highly specialized end users like radiologists.

### XAI in Radiology

Current diagnostic AIs for radiology classify images by applying dozens of statistical measures over a full grayscale image. Radiologists forming diagnostic interpretations tend to focus on edges, blobs, areas of contrast, and textures describable in natural language. Machines are sensitive to changes within convolution windows of arbitrary size [31, 20, 34] regardless of whether these correspond to attributes describable in natural language. The implication is that evidence that is statistically informative to the machine may be uninformative to humans. This makes it challenging to explain the evidentiary basis of a machine's classification. The most common way radiology XAI systems attempt to explain an interpretation, such as an x-ray classified as 'COVID-19', is by providing another image - a heat map - paired with a classification and a probability measure for certainty. Figure 1 shows an example from CheXpert [14]. Though CheXpert and similar classification systems [29, 15] are impressive in their interpretative accuracy, we argue their explanations fall short of those given by human radiologists [21]. They fail to draw attention to visual evidence in the radiograph in the way human explainees need in order to understand the basis for an interpretation. Further, they fail to form a set of logical premises that connect this visual evidence to a clinically-meaningful radiological impression using steps of justification. Steps of justification are evident in human-human explanation, where one person calls attention to specific regions, and within those to specific features. As an explainer moves through an image, focusing on what is relevant, they create an argument that constitutes a chain of evidence similar to step-by-step reasoning in language [33]. When temporal ordering of joint attention is successful and paired with enough information to derive meaning, the explainee understands the grounds for a classification. Recent work emphasizes that XAI must be designed from the perspective of human cognition [24], aligning with similar views on theory-driven XAI [38], expert-informed XAI [28], and socially transparent XAI [6]. The most obvious way to improve XAI for image understanding is to match the human reasoning process by calling attention to attributes one assumes the explainee knows and building out a contextually-relevant evidentiary justification from there. Indeed, past efforts have grouped pixels based on perceptual factors and tied these to named shapes [18]. Pattern-recognition of these shapes might then be related to radiological concepts [40, 41]. Connecting visual features to an overall impression could then be assisted while conveying (un)certainty and providing alternatives.
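Heat maps of the kind discussed above (see Figure 1) are typically produced with Grad-CAM, which weights the final convolutional feature maps by the pooled gradient of the class score and upsamples the result onto the image. A minimal sketch of that mechanism follows; it is illustrative only, not CheXpert's actual pipeline, and the DenseNet-121 backbone and hook placement are assumptions made for the example.

```python
import torch
import torch.nn.functional as F
from torchvision.models import densenet121

model = densenet121(weights="DEFAULT").eval()
feats, grads = {}, {}
conv = model.features  # final convolutional block of the backbone

conv.register_forward_hook(lambda m, i, o: feats.update(a=o))
conv.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x, class_idx):
    """x: (1, 3, H, W) normalized image tensor; returns an (H, W) heat map."""
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()            # gradient of the class score
    A, dA = feats["a"], grads["a"]             # activations and their grads
    wts = dA.mean(dim=(2, 3), keepdim=True)    # global-average-pooled grads
    cam = F.relu((wts * A).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze().detach()
```

Applied to a normalized chest radiograph tensor, `grad_cam(x, class_idx)` yields the kind of coarse localization shown in Figure 1: it indicates where the network's evidence is concentrated, but, as argued above, not a chain of justification for what was seen or why it matters clinically.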
Matching XAI explanations to the temporally-grounded reasoning process of end users may improve XAI for image classification and beyond. Provided, of course, the system is calibrated to the features and meanings different users can recognize. Adaptability will be necessary for XAI to achieve common ground with different end users and contexts. Though analyzed as part of our study, we focus on visual reasoning; calibration will be covered elsewhere.

## 2 Method

To build on the theories outlined above and inform XAI design using practitioner explanations, we ethnographically observed how radiologists (n = 7) and radiology residents (n = 6) interpreted and explained 12 radiographs [4] to other radiology practitioners. Explanations were transcribed and broken into segments reflecting different types of information conveyed during the interpretation explanation process. These segments are linguistic units (words or short phrases) which combine to form the full explanation [27]. Segments were determined by identifying the categories of information which explanations progressively cover, from low-level visual features to abstract impressions using domain-specific jargon. Codes were assigned based on how certain types of words and phrases corresponded with each segment, conforming to standard radiological lexicon [10, 25, 19]. Content was analyzed to reflect the information communicated within each segment and how the segments unfold over time.

Figure 1: A heat map visual explanation for chest x-rays produced by CheXpert using GradCAM (bottom) and the original x-ray image (top). Along with the visualization is a classification and probability measure: _Pulmonary edema_, \(p\) = 0.824.

## Results

We elaborate on the content of an explanation in radiology to show the processes by which _humans_ visually reason and justify decisions to other _humans_. Though the case presented here focuses on radiology, similar methods may generalize to other domains where there is a disconnect between how humans process visual information and how XAIs explain it. To assess content, we break human explanations into linguistic segments that can be counted [27] (Table 1 presents a simplified example):

1. A visual reference point, a region of interest (ROI), is established on the image. Identifying a ROI enables joint attention to visual attributes.
2. Visual attributes of the ROI are connected to domain-specific finding terminology that helps the practitioner make sense of what is seen.

## Discussion

By tracing how practitioners build their explanations in segments of increasing abstraction, we build a case for reasoning-sensitive XAI explanations. In visual cases, XAIs must help explainees see what they need to, in the right order, and make sense of each layer of information in order to assist with sensemaking. This will allow a user to cross-examine the XAI system and empower them to make accurate reliance and trust judgments. An opportunity exists for XAIs to move beyond highlighting ROIs and facilitate sensemaking by mirroring the interpretation, justification, and decision-making processes of humans. This will provide intuitive justifications for conclusions as well as facilitate other XAI functions like teaching, data exploration, and prepping a user for action. Radiology examples include temporally directing spatial references, identifying features as findings using domain-specific terminology, visualizing how a constellation of findings contributes to an impression, and suggesting possible next steps given a case.
\begin{table} \begin{tabular}{|p{227.6pt}|} \hline _This blob might be a ground glass opacity, given the hazy shape. This makes me think there's an infection, like COVID, given the pandemic. I'd order a follow-up CT. Or it's an artifact. Next I would check..._ (superscript segment markers not legible in this version) \\ \hline \end{tabular} \end{table} Table 1: A simplified example explanation with segments highlighted. The segments are: 1) identifying a ROI, 2) abstracting the ROI features to radiological finding terms [28, 21], 2a) if needed, assisting with this connection through finding elaborations, 3) inferring impression items from findings, 3a) if needed, assisting with this through impression elaborations, and 4) contextualizing the differentials and next orders as next steps. The segments are further modified to include 5) certainty via hedging terms and 6) alternative conclusions via counterfactuals. 7) The x-ray interpretation process is expanded upon via process elaborations if needed.

Goal-oriented tasks such as learning can be supported by providing didactic information, counterfactuals with their own _why not_ reasoning-based explanations, and prepping a user for action by contextualizing conclusions within a larger clinical agenda. Some explanation elements may even be informed from existing expert-labeled data, which often includes confidence levels, uncertainties, negations, and other observations used to train the system [14]. From a design perspective, progressive disclosure may be used to mitigate the risk that increases to explanation complexity result in increased cognitive load and the potential for redundancy [36]. More generally, we forward the premise that human-centered explanations should take into account the reasoning styles and processes of the users of a system as well as support the specific task being performed. Though this work presented a radiology example, the methods can be expanded beyond radiology. Mapping human reasoning processes to XAIs may lead to more useful and trustworthy explanations in other safety-critical domains like autonomous driving and security.

## Conclusion

XAI systems do not at present explain like humans do.
The most popular forms give no guidance on how to attend to features, make sense of the relations between features, or understand features in the larger clinical context. This XAI approach reflects a failure to accommodate how humans reason and make sense of information. By modeling XAIs based on human reasoning and the communication of evidence, we can inform future XAIs that are trustworthy, useful, and sensitive to user needs and goals.

## Acknowledgments

We would like to thank the radiology practitioners who participated in the study, Michael Pazzani for his support, and Louie Kaufman for his theoretical feedback. Funding was provided through NSF grant #2026809 and the DARPA Explainable AI Program under contract from NRL.
2306.02119
Homology of multiple complexes and Mayer-Vietoris spectral sequences
Similarities are noted in two Mayer-Vietoris spectral sequences that generalize to any number of ideals the Mayer-Vietoris exact sequence in local cohomology for two ideals. One has as first terms \v{C}ech cohomology with respect to sums of the given ideals and converges to cohomology with respect to the product of the ideals; the other has as first terms \v{C}ech cohomology with respect to products of the given ideals and converges to cohomology with respect to the sum of the ideals. The first one was obtained by Lyubeznik in \cite{Lyu}, while the second is constructed in \cite[Chapter 2]{Hol} and could also be deduced from results in \cite{God}. We present results on the cohomology of multiple complexes that enable us to deduce both from two related constructions on multiple complexes. A key ingredient is a fact that seems not to have been noticed before: cohomology with respect to a product of ideals is the one of a subcomplex of the \v{C}ech complex computing cohomology with respect to the sum of the given ideals; this provides a much shorter complex to compute cohomology with respect to the product of ideals.
Marc Chardin, Rafael Holanda, José Naéliton
2023-06-03T14:14:50Z
http://arxiv.org/abs/2306.02119v1
# Homology of multiple complexes and Mayer-Vietoris spectral sequences

###### Abstract.

Similarities are noted in two Mayer-Vietoris spectral sequences that generalize to any number of ideals the Mayer-Vietoris exact sequence in local cohomology for two ideals. One has as first terms Cech cohomology with respect to sums of the given ideals and converges to cohomology with respect to the product of the ideals; the other has as first terms Cech cohomology with respect to products of the given ideals and converges to cohomology with respect to the sum of the ideals. The first one was obtained by Lyubeznik in [8], while the second is constructed in [7, Chapter 2] and could also be deduced from results in [4]. We present results on the cohomology of multiple complexes that enable us to deduce both from two related constructions on multiple complexes. A key ingredient is a fact that seems not to have been noticed before: cohomology with respect to a product of ideals is the one of a subcomplex of the Cech complex computing cohomology with respect to the sum of the given ideals; this provides a much shorter complex to compute cohomology with respect to the product of ideals.

## 1. Introduction

Let \(R\) be a commutative unitary ring, \(I\) a finitely generated ideal of \(R\) and \(M\) an \(R\)-module. Cech cohomology modules \(H^{i}_{I}(M)\) are important objects in commutative algebra and algebraic geometry. Their vanishing is studied through combinatorial properties in [1, 8], and the multigraded pieces of such modules are investigated in [2, 7] (see also the references therein). Spectral sequences play a fundamental role in these works. Similarities in the spectral sequences in [7, Chapter 2] and [8] motivated us to search for a framework where both may take place. Multicomplex theory as presented in [11] is one. We avoid the geometric or combinatorial taste given in [1, 4, 8] by approaching the spectral sequences in a more elementary way. There are two main spectral sequences, each one of these in two variants. They all compare cohomologies of some natural subquotient complexes attached to faces of the cone \(\mathbb{R}^{n}_{\geq 0}\) and their interiors, for an \(n\)-multicomplex. To give an idea of the picture: one spectral sequence has first terms corresponding to complexes obtained from the interior of faces of all dimensions and converges to the cohomology of the total complex, while the other has first terms corresponding to the faces of all dimensions and converges to the homology of the interior of the complex. In Section 2, we develop constructions on multicomplexes and give spectral sequences out of these constructions, in order to prove Theorem 2.3, our main result in the theory of multicomplexes. Section 3 relates the cohomology of multiple Cech complexes and of their interiors to local cohomology with respect to sums and products of the corresponding ideals, see Theorem 3.3. As an application of Theorem 2.3 and Theorem 3.3, we provide four Mayer-Vietoris spectral sequences in Section 4; two of these coincide with the ones mentioned above [7, 8]; they are as follows:

**Theorem 1.1**.: _Let \(R\) be a commutative unitary ring, \(M\) be an \(R\)-module and \(\mathfrak{a}_{i}\), for \(1\leq i\leq n\), be finitely generated ideals. There exist two converging spectral sequences_ 1. \(E_{1}^{n-p,q}=\bigoplus_{i_{1}<\ldots<i_{p}}^{1\leq p\leq n}H_{\mathfrak{a}_{i_{1}}+\cdots+\mathfrak{a}_{i_{p}}}^{q}(M)\Rightarrow_{p}H_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}^{q-(p-1)}(M)\)_,_ 2.
\(E_{1}^{p,q}=\bigoplus_{i_{1}<\ldots<i_{p}}^{1\leq p\leq n}H_{\mathfrak{a}_{i_{1}}\cdots\mathfrak{a}_{i_{p}}}^{q}(M)\Rightarrow_{p}H_{\mathfrak{a}_{1}+\cdots+\mathfrak{a}_{n}}^{q+(p-1)}(M)\)_._

The similarities between these two spectral sequences relating Cech cohomologies were the starting point of our work.

## 2. Multicomplexes and spectral sequences

### Setup

We begin this section by recalling the formal definition of a multicomplex in an abelian category; see [11, Chap. I, SS2] for more details.

**Definition 2.1**.: Let \(n\) be a positive integer. A commutative (resp. anticommutative) \(n\)-multicomplex \(C^{\underline{\bullet}}\) is a family of objects \(C^{\underline{q}}\) for all \(\underline{q}=(q_{1},\ldots,q_{n})\in\mathbb{Z}^{n}=\oplus_{i=1}^{n}\mathbb{Z}e_{i}\) together with a family of homomorphisms \(d^{\underline{q},i}:C^{\underline{q}}\to C^{\underline{q}+e_{i}}\) for all \(i=1,\ldots,n\) such that conditions (a) and (b) (resp. (a) and (b')) are satisfied: (a) \(d^{\underline{q}+e_{i},i}\circ d^{\underline{q},i}=0\) for all \(\underline{q}\in\mathbb{Z}^{n}\) and \(i=1,\ldots,n\); (b) \(d^{\underline{q}+e_{j},i}\circ d^{\underline{q},j}=d^{\underline{q}+e_{i},j}\circ d^{\underline{q},i}\) for all \(\underline{q}\in\mathbb{Z}^{n}\) and \(i,j=1,\ldots,n\); (b') \(d^{\underline{q}+e_{j},i}\circ d^{\underline{q},j}=-d^{\underline{q}+e_{i},j}\circ d^{\underline{q},i}\) for all \(\underline{q}\in\mathbb{Z}^{n}\) and \(i,j=1,\ldots,n\).

There are many ways to transform a commutative multicomplex into an anticommutative one, and vice-versa. The transformation \(\sigma:d^{\underline{q},i}\mapsto(-1)^{q_{1}+\ldots+q_{i-1}}d^{\underline{q},i}\) in [11] is one such option; it satisfies \(\sigma\circ\sigma(C^{\underline{\bullet}})=C^{\underline{\bullet}}\). As in the case of double complexes, the total complex of a multicomplex is defined.

**Definition 2.2**.: Let \(C^{\underline{\bullet}}\) be an \(n\)-multicomplex. The total complex (or totalization) \(T(C^{\underline{\bullet}})^{\bullet}\) of \(C^{\underline{\bullet}}\) is defined by (a) \(T(C^{\underline{\bullet}})^{m}=\bigoplus_{q_{1}+\ldots+q_{n}=m}C^{\underline{q}},\ \forall m\geq 0\); (b) the differential \(T(C^{\underline{\bullet}})^{m}\to T(C^{\underline{\bullet}})^{m+1}\) is given, for \(x\in C^{\underline{q}}\) with \(q_{1}+\ldots+q_{n}=m\), by \(d^{m}(x):=\sum_{i=1}^{n}\sigma(d^{\underline{q},i})(x)\) if \(C^{\underline{\bullet}}\) is commutative, and by \(d^{m}(x):=\sum_{i=1}^{n}d^{\underline{q},i}(x)\) if \(C^{\underline{\bullet}}\) is anticommutative.

It is a routine exercise to verify that \(T(C^{\underline{\bullet}})^{\bullet}\) is indeed a complex. Despite the possibilities of sign changes on the differentials that keep the property of being a multiple complex, it is important to notice that every kernel or cokernel of these maps, as well as objects constructed from these - typically ones obtained at pages of some spectral sequences in our work - are unchanged, up to the sign of some maps involved. In practice, depending on the situation, one may for instance choose a commutative version or an anticommutative version in order to make a proof more transparent by avoiding part of the sign tracking that could obscure the proof. Let \(C^{\underline{\bullet}}\) be an \(n\)-multicomplex with components \(C^{\underline{q}}\) verifying \(C^{\underline{q}}=0\) for \(\underline{q}\not\in\mathbb{N}^{n}=\oplus_{i=1}^{n}\mathbb{N}e_{i}\), with \(e_{i}\) from Definition 2.1.
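Before proceeding, here is the case \(n=2\) of Definition 2.2 spelled out (a worked example we add for the reader; it follows directly from the definition and the sign rule \(\sigma\)). For a commutative double complex, totalization gives the usual total complex:
\[T(C^{\underline{\bullet}})^{m}=\bigoplus_{p+q=m}C^{(p,q)},\qquad d^{m}(x)=d^{(p,q),1}(x)+(-1)^{p}\,d^{(p,q),2}(x)\quad\text{for }x\in C^{(p,q)},\]
since \(\sigma\) leaves the first differential unchanged and twists the second one by \((-1)^{q_{1}}=(-1)^{p}\); the anticommutation this sign introduces is exactly what makes \(d^{m+1}\circ d^{m}=0\).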
For such a multicomplex, we will consider several complexes attached to the faces of the polyhedral cone \(\mathbb{R}^{n}_{\geq 0}\): For \(i_{1}<\cdots<i_{p}\), let - \(\mathrm{F}_{i_{1},\ldots,i_{p}}:=\mathbb{N}e_{i_{1}}\oplus\cdots\oplus\mathbb{N}e_{i_{p}}\) denote a \(p\)-dimensional face, - \(\mathrm{P}_{i_{1},\ldots,i_{p}}:=\mathrm{F}_{i_{1},\ldots,i_{p}}\setminus\underline{0}\), the corresponding punctured \(p\)-dimensional face, - \(\mathrm{I}_{i_{1},\ldots,i_{p}}:=\{\underline{q}\in\mathbb{N}^{n}\ |\ q_{i}=0\Leftrightarrow i\not\in\{i_{1},\ldots,i_{p}\}\}\), the interior of this \(p\)-dimensional face, - \(\mathrm{U}_{i_{1},\ldots,i_{p}}:=\mathbb{N}^{n}\setminus\mathrm{F}_{i_{1},\ldots,i_{p}}\), the complement of this \(p\)-dimensional face. (For \(n=2\), the original illustrates these sets by small diagrams of \(\mathbb{N}^{2}\) in which full dots mark the corresponding lattice points; the diagrams are not reproduced here.) Write \(C^{\underline{\bullet}}_{\mathrm{U}_{i_{1},\ldots,i_{p}}}\) for the subcomplex of \(C^{\underline{\bullet}}\) obtained by replacing \(C^{\underline{q}}\) by the zero module unless \(\underline{q}\in\mathrm{U}_{i_{1},\ldots,i_{p}}\). It is indeed a subcomplex since \(\mathrm{U}_{i_{1},\ldots,i_{p}}+e_{i}\subseteq\mathrm{U}_{i_{1},\ldots,i_{p}}\) for any \(i\geq 1\). Similarly, we define \(C^{\underline{\bullet}}_{\mathrm{F}_{i_{1},\ldots,i_{p}}}\) and notice that \(C^{\underline{\bullet}}_{\mathrm{F}_{i_{1},\ldots,i_{p}}}=C^{\underline{\bullet}}/C^{\underline{\bullet}}_{\mathrm{U}_{i_{1},\ldots,i_{p}}}\) is a quotient of \(C^{\underline{\bullet}}\). We denote by \(C^{\underline{\bullet}}_{\mathrm{P}_{i_{1},\ldots,i_{p}}}\) and \(C^{\underline{\bullet}}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}\) the subcomplexes of \(C^{\underline{\bullet}}_{\mathrm{F}_{i_{1},\ldots,i_{p}}}\) obtained by replacing by zero the modules for \(\underline{q}\not\in\mathrm{P}_{i_{1},\ldots,i_{p}}\) and \(\underline{q}\not\in\mathrm{I}_{i_{1},\ldots,i_{p}}\), respectively. We finally define an augmented version \({}^{+}C^{\underline{\bullet}}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}\) of \(C^{\underline{\bullet}}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}\) for \(p>0\) by adding as module \(C^{\underline{0}}\) in homological degree \(e_{i_{1}}+\cdots+e_{i_{p-1}}\) and adding the map \(C^{\underline{0}}\to C^{e_{i_{1}}+\cdots+e_{i_{p}}}\) obtained by composing the differentials \(d^{\underline{0},i_{1}}\), \(d^{e_{i_{1}},i_{2}}\), ..., \(d^{e_{i_{1}}+\cdots+e_{i_{p-1}},i_{p}}\) (the map \(\psi_{i_{1},\ldots,i_{p}}\) written out before Proposition 2.8). Notice that \(C^{\underline{\bullet}}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}\) starts in total homological degree \(p\) with unique summand \(C^{e_{i_{1}}+\cdots+e_{i_{p}}}\). The totalizations of these complexes will be denoted respectively by \(C^{\bullet}_{\mathrm{F}_{i_{1},\ldots,i_{p}}}\), \(C^{\bullet}_{\mathrm{P}_{i_{1},\ldots,i_{p}}}\), \(C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}\) and \({}^{+}C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}\). These start respectively in homological degree \(0\) or higher, \(1\) or higher, \(p\) or higher and \(p-1\) or higher. As the sets \(\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}\), \(\mathrm{P}^{*}_{i_{1},\ldots,i_{p}}\), \(\mathrm{I}^{*}_{i_{1},\ldots,i_{p}}\) or \(\mathrm{U}^{*}_{i_{1},\ldots,i_{p}}\) are respectively equal to \(\mathrm{F}_{j_{1},\ldots,j_{n-p}}\), \(\mathrm{P}_{j_{1},\ldots,j_{n-p}}\), \(\mathrm{I}_{j_{1},\ldots,j_{n-p}}\) or \(\mathrm{U}_{j_{1},\ldots,j_{n-p}}\) for \(\{j_{1},\ldots,j_{n-p}\}:=\{1,\ldots,n\}\setminus\{i_{1},\ldots,i_{p}\}\), we may alternatively use this other notation. Finally, \(\mathrm{P}:=\mathrm{P}_{1,\ldots,n}\) and \(\mathrm{I}:=\mathrm{I}_{1,\ldots,n}\). In Section 3, we will see that these four types of complexes provide natural cohomologies with respect to sums or products for a multiple complex constructed from Cech complexes with respect to several ideals.
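To fix ideas, here is the case \(n=2\) written out explicitly (our rendering of the omitted diagrams; each description follows directly from the definitions above):
\[\mathrm{F}_{1}=\mathbb{N}e_{1},\qquad\mathrm{P}_{1}=\mathrm{F}_{1}\setminus\underline{0},\qquad\mathrm{I}_{1}=\{(q_{1},0)\mid q_{1}\geq 1\},\qquad\mathrm{U}_{1}=\{\underline{q}\in\mathbb{N}^{2}\mid q_{2}\geq 1\},\]
\[\mathrm{I}=\mathrm{I}_{1,2}=\{\underline{q}\in\mathbb{N}^{2}\mid q_{1}\geq 1\text{ and }q_{2}\geq 1\},\qquad\mathrm{P}=\mathrm{P}_{1,2}=\mathbb{N}^{2}\setminus\underline{0}.\]
Thus, for a double complex, \(C^{\underline{\bullet}}_{\mathrm{F}_{1}}\) is the edge along the first axis, \(C^{\underline{\bullet}}_{\mathrm{I}}\) is its interior, and \(C^{\underline{\bullet}}_{\mathrm{P}}\) is everything except the corner \(C^{\underline{0}}\).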
### Spectral sequences arising from multicomplexes

We will now show the following result by constructing the corresponding four spectral sequences:

**Theorem 2.3**.: _Let \(C^{\underline{\bullet}}\) be an \(n\)-multicomplex satisfying \(C^{\underline{q}}=0\) for \(\underline{q}\not\in\mathbb{N}^{n}=\oplus_{i=1}^{n}\mathbb{N}e_{i}\). Then there exist four convergent spectral sequences as follows:_ 1. \(E_{1}^{p,q}=\oplus_{i_{1}<\cdots<i_{p}}H^{q}(C^{\bullet}_{\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}})\Rightarrow H^{p+q}(C^{\bullet}_{\mathrm{I}})\)_,_ 2. \(E_{1}^{p,q}=\oplus_{i_{1}<\cdots<i_{p}}H^{q}(C^{\bullet}_{\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}})\Rightarrow H^{p+q}({}^{+}C^{\bullet}_{\mathrm{I}})\)_,_ 3. \(E_{1}^{p,q}=\oplus_{i_{1}<\ldots<i_{p}}H^{p+q}(C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}})\Rightarrow H^{p+q}(C^{\bullet}_{\mathrm{P}})\)_,_ 4. \(E_{1}^{p,q}=\oplus_{i_{1}<\ldots<i_{p}}H^{p+q}({}^{+}C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}})\Rightarrow H^{p+q}(C^{\bullet})\)_._

These will follow from natural filtrations on simple explicit constructions from \(C^{\underline{\bullet}}\), or on \(C^{\underline{\bullet}}\) itself. The rest of this section is devoted to detailing these constructions. Our four Mayer-Vietoris spectral sequences will be direct corollaries of these, in view of the results of Section 3. We start with a construction that will be used for the first two spectral sequences. Write \(D^{\bullet,\underline{\bullet}}\) for the multicomplex with \(D^{p,\underline{q}}:=K^{p}(1,\ldots,1;C^{\underline{q}})\), the degree-\(p\) part of the Koszul complex on \(n\) copies of the unit with coefficients in \(C^{\underline{q}}\), and let \(Q^{\bullet,\underline{\bullet}}\) be the multicomplex with \[Q^{p,\underline{\bullet}}:=\bigoplus_{i_{1}<\cdots<i_{p}}C^{\underline{\bullet}}_{\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}}\,e_{i_{1}}\wedge\cdots\wedge e_{i_{p}},\] with differentials induced by those of \(D^{\bullet,\underline{\bullet}}\).

**Proposition 2.4**.: _With the notation above:_ (a) _For any \(\underline{q}\in\mathbb{N}^{n}\), \(H^{p}(Q^{\bullet,\underline{q}})=0\) for \(p\neq 0\) and \(H^{0}(Q^{\bullet,\underline{q}})=C^{\underline{q}}_{\mathrm{I}}\)._ (b) _Let \(Q^{\bullet}\) be the totalization of \(Q^{\bullet,\underline{\bullet}}\), then_ \[H^{i}(Q^{\bullet})\simeq H^{i}(C^{\bullet}_{\mathrm{I}}),\forall i.\] (c) _There is a spectral sequence,_ \[E^{p,q}_{1}=\oplus_{i_{1}<\cdots<i_{p}}H^{q}(C^{\bullet}_{\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}})\Rightarrow H^{p+q}(C^{\bullet}_{\mathrm{I}}).\]

Proof.: Since \(K^{\bullet}(1,\ldots,1;C^{\underline{q}})\) is exact for any \(\underline{q}\) (as \(n\geq 1\)), (a) is equivalent to the exactness of \(Q^{\bullet,\underline{q}}\) for \(\underline{q}\not\in\mathrm{I}\), because if \(\underline{q}\in\mathrm{I}\) then \(Q^{p,\underline{q}}=0\) unless \(p=0\) and \(Q^{0,\underline{q}}=C^{\underline{q}}\). Assume that \(\underline{q}\) has exactly \(t\geq 1\) coordinates equal to zero and \(\underline{q}\in\mathrm{F}^{*}_{j_{1},\ldots,j_{t}}\).
Then \(Q^{\bullet,\underline{q}}\) is the exact subcomplex \[K^{p}(\underbrace{1,\ldots,1}_{\text{$t$ times}};C^{\underline{q}})\subseteq K^{p}(\underbrace{1,\ldots,1}_{\text{$n$ times}};C^{\underline{q}})\] that corresponds to summands indexed by \(e_{i_{1}}\wedge\cdots\wedge e_{i_{p}}\) for \(\{i_{1},\ldots,i_{p}\}\subseteq\{j_{1},\ldots,j_{t}\}\). For (b), denote by \(Q^{\bullet,\bullet}\) the double complex obtained by totalizing along \(\mathbb{N}^{n}\) the complex \(Q^{\bullet,\underline{\bullet}}\). By (a), for any \(q\in\mathbb{N}\), \(H^{p}(Q^{\bullet,q})=0\) for \(p\neq 0\) and \(H^{0}(Q^{\bullet,q})=C^{q}_{\mathrm{I}}\); hence \(Q^{\bullet}\) is quasi-isomorphic to \(C^{\bullet}_{\mathrm{I}}\) and the conclusion follows. For (c), recall that \(Q^{p,\underline{\bullet}}=\bigoplus_{i_{1}<\cdots<i_{p}}C^{\underline{\bullet}}_{\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}}e_{i_{1}}\wedge\cdots\wedge e_{i_{p}}\), hence the second spectral sequence for \(Q^{\bullet,\bullet}\) has first terms \(H^{q}(Q^{p,\bullet})=\oplus_{i_{1}<\cdots<i_{p}}H^{q}(C^{\bullet}_{\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}})\) and abuts to the cohomology of \(Q^{\bullet}\), which is in turn isomorphic to the one of \(C^{\bullet}_{\mathrm{I}}\) by (b).

**Remark 2.5**.: Alternatively, one can consider the double complex \(F^{p,q}\) defined by \(F^{p,q}:=Q^{p,q}\) unless \(p=-1\) and \(F^{-1,q}:=C^{q}_{\mathrm{I}}\) and then obtain a spectral sequence \[E^{p,q}_{1}=H^{q}(F^{p,\bullet})\Rightarrow 0\] with \(H^{q}(F^{p,\bullet})=\oplus_{i_{1}<\cdots<i_{p}}H^{q}(C^{\bullet}_{\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}})\) for \(p\neq-1\) and \(H^{q}(F^{-1,\bullet})=H^{q}(C^{\bullet}_{\mathrm{I}})\).

The variant of the spectral sequence in part (c) of Proposition 2.4 that provides the second spectral sequence in the main theorem is as follows:

**Proposition 2.6**.: _With notations as in Proposition 2.4, there is a spectral sequence_ \[E^{p,q}_{1}=\oplus_{i_{1}<\cdots<i_{p}}H^{q}(C^{\bullet}_{\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}})\Rightarrow H^{p+q}({}^{+}C^{\bullet}_{\mathrm{I}})\] _with \(p\) in the range \(0\leq p<n\) (i.e. \(E^{p,q}_{1}=0\) for any \(p\geq n\) and \(q\))._

Proof.: Consider the double complex \(Q_{-}^{p,q}\) defined by \(Q_{-}^{p,q}:=Q^{p,q}\) unless \(p=n\) and \(Q_{-}^{n,q}:=0\); let \(H^{q}\) denote the \(q\)-th cohomology of its totalization. It gives rise to a spectral sequence \[E_{1}^{p,q}=\oplus_{i_{1}<\cdots<i_{p}}H^{q}(C_{\mathrm{F}^{*}_{i_{1},\ldots,i_{p}}}^{\bullet})\Rightarrow H^{p+q}\] with \(p\) in the range \(0\leq p<n\) as claimed. The other spectral sequence has second terms \({}^{\prime}E_{2}^{n-1,0}\simeq C^{\underline{0}}\), \({}^{\prime}E_{2}^{0,q}=H^{q}(C_{\mathrm{I}}^{\bullet})\) for any \(q\) and \({}^{\prime}E_{2}^{p,q}=0\) otherwise, by Proposition 2.4. As \(C_{\mathrm{I}}^{q}=0\) for \(q<n\), \(H^{q}(C_{\mathrm{I}}^{\bullet})=0\) for \(q<n\) and \(H^{n}(C_{\mathrm{I}}^{\bullet})\subseteq C_{\mathrm{I}}^{n}=C^{\underline{1}}\). It follows that: 1. \(H^{q}=0\) for \(q<n-1\), 2. \(H^{n-1}=\ker({}^{\prime}d_{n}^{n-1,0}:{}^{\prime}E_{n}^{n-1,0}\to{}^{\prime}E_{n}^{0,n})\simeq\ker(C^{\underline{0}}\to H^{n}(C_{\mathrm{I}}^{\bullet}))\), 3. \(H^{n}=\operatorname{coker}({}^{\prime}d_{n}^{n-1,0}:{}^{\prime}E_{n}^{n-1,0}\to{}^{\prime}E_{n}^{0,n})\simeq\operatorname{coker}(C^{\underline{0}}\to H^{n}(C_{\mathrm{I}}^{\bullet}))\), 4. \(H^{q}=H^{q}(C_{\mathrm{I}}^{\bullet})\) for \(q>n\).
To conclude the proof, we show that, after the identifications \({}^{\prime}E_{n}^{n-1,0}\simeq{}^{\prime}E_{1}^{n-1,0}\simeq C^{\underline{0}}\) and \({}^{\prime}E_{n}^{0,n}\simeq{}^{\prime}E_{2}^{0,n}\simeq H^{n}(C_{\mathrm{I}}^{\bullet})\), one has \({}^{\prime}d_{n}^{n-1,0}=\pm d^{e_{1}+\cdots+e_{n-1},n}\circ\ \cdots\circ\ d^{e_{1},2}\ \circ\ d^{\underline{0},1}\). We follow the construction of this spectral sequence in [12, page 133]. Write \(d_{K}\) for the Koszul differential and \(d_{Q}\) for the differential on the totalization of the \(n\)-multicomplex \(Q^{\underline{\bullet}}\). The identification \({}^{\prime}E_{1}^{n-1,0}\simeq C^{\underline{0}}\) sends the class of an element \(x:=\sum_{|I|=n-1}\alpha_{I}\wedge_{j\in I}e_{j}\) to \(\sum_{i}\sum_{|I|=n-1}\alpha_{I}e_{i}\wedge(\wedge_{j\in I}e_{j})=\alpha e_{1}\wedge\cdots\wedge e_{n}\). Modulo a border, \(x\) is equal to \(\alpha e_{2}\wedge\cdots\wedge e_{n}\). To determine the image of the class of \(x\) by \({}^{\prime}d_{n}^{n-1,0}\), set \(\alpha_{0}:=\alpha\), \(x_{0}:=\alpha_{0}e_{2}\wedge\cdots\wedge e_{n}\), \[\alpha_{q}:=d^{e_{1}+\cdots+e_{q-1},q}\circ\cdots\circ d^{\underline{0},1}(\alpha)\in C^{e_{1}+\cdots+e_{q}}\subseteq C^{q}\] for \(1\leq q\leq n\) and if \(1\leq q<n\) \[x_{q}:=\alpha_{q}e_{q+2}\wedge\cdots\wedge e_{n}\in K^{n-q-1}(\underbrace{1,\ldots,1}_{n-q\text{ times}};C^{e_{1}+\cdots+e_{q}}).\] Now \(d_{K}(x_{q})=\varepsilon_{q}\alpha_{q}e_{q+1}\wedge e_{q+2}\wedge\cdots\wedge e_{n}\), with \(\varepsilon_{q}=\pm 1\), since \(e_{j}\wedge e_{q+2}\wedge\cdots\wedge e_{n}=0\) for \(j\geq q+1\), unless \(j=q+1\). On the other hand, since by definition \(\alpha_{q}=d^{e_{1}+\cdots+e_{q-1},q}(\alpha_{q-1})\), it follows that \(d_{Q}(x_{q-1})=\varepsilon_{q}^{\prime}\alpha_{q}e_{q+1}\wedge e_{q+2}\wedge\cdots\wedge e_{n}\), with \(\varepsilon_{q}^{\prime}=\pm 1\). Hence, setting \(\epsilon_{j}:=\prod_{i=1}^{j}-\varepsilon_{i}\varepsilon_{i}^{\prime}\), the element \[x^{\prime}:=x_{0}+\epsilon_{1}x_{1}\oplus\cdots\oplus\epsilon_{n-1}x_{n-1}\in\bigoplus_{q=0}^{n-1}K^{n-q-1}(\underbrace{1,\ldots,1}_{n-q\text{ times}};C^{e_{1}+\cdots+e_{q}})\subseteq\bigoplus_{q=0}^{n-1}C^{n-q-1,q}\] is in \(A_{0}^{n}\) as in the construction of [12, page 133]. The conclusion follows, since then, by definition, \({}^{\prime}d_{n}^{n-1,0}(x)={}^{\prime}d_{n}^{n-1,0}(x^{\prime})\) is the class of \(\pm\alpha_{n}=\pm d_{Q}(x_{n-1})\in C^{e_{1}+\cdots+e_{n}}\subseteq H^{n}(C_{\mathrm{I}}^{\bullet})\).

**Proposition 2.7**.: _Let \(C^{\underline{\bullet}}\) be an \(n\)-multicomplex satisfying \(C^{\underline{q}}=0\) for \(\underline{q}\not\in\mathbb{N}^{n}=\oplus_{i=1}^{n}\mathbb{N}e_{i}\). Then there exists a convergent spectral sequence:_ \[E_{1}^{p,q}=\oplus_{i_{1}<\ldots<i_{p}}H^{p+q}(C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}})\Rightarrow H^{p+q}(C^{\bullet}_{\mathbb{N}^{n}\setminus\{0\}}).\]

Proof.: Given \(p\geq 1\), define \[X_{p}:=\{(q_{1},...,q_{n})\in\mathbb{N}^{n}\ |\text{ at most $n-p$ of the $q_{j}$'s are zero}\}.\] The family of subcomplexes \(F_{p}^{\bullet}\) with \(F_{p}:=\oplus_{\underline{q}\in X_{p}}C^{\underline{q}}\) is a limited descending filtration of \(F_{1}^{\bullet}=C^{\bullet}_{\mathbb{N}^{n}\setminus\{0\}}\) and therefore yields a spectral sequence converging to \(H^{p+q}(C^{\bullet}_{\mathbb{N}^{n}\setminus\{0\}})\).
As, for \(p\geq 1\), \(F_{p}/F_{p+1}\simeq\oplus_{i_{1}<\ldots<i_{p}}C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}\) (exactly \(p\) of the \(q_{i}\)'s are not zero), \[E_{1}^{p,q}=H^{p+q}(F_{p}/F_{p+1})\simeq\bigoplus_{i_{1}<\ldots<i_{p}}H^{p+q}(C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}).\] To derive the fourth spectral sequence, let \(T^{\bullet}(C^{\underline{0}})\) be the trivial hypercube commuting multiple complex on \(C^{\underline{0}}:\,T^{\underline{q}}=C^{\underline{0}}\) if \(\underline{q}\in\{0,1\}^{n}\) and \(0\) otherwise, and differentials are the identity if source and target are in degrees that belong to \(\{0,1\}^{n}\) and \(0\) else. If \(C^{\underline{\bullet}}\) is a commuting multiple \(\mathbb{N}^{n}\) complex, we define a map from \(T^{\bullet}(C^{\underline{0}})\) to \(C^{\underline{\bullet}}\) by \[\psi_{i_{1},\ldots,i_{p}}\ :\ x\in T^{e_{i_{1}}+\cdots+e_{i_{p}}}(C^{\underline{0}})=C^{\underline{0}}\mapsto d^{e_{i_{1}}+\cdots+e_{i_{p-1}},i_{p}}\circ\,\cdots\,\circ\,d^{e_{i_{1}},i_{2}}\,\circ\,d^{0,i_{1}}(x)\in C^{e_{i_{1}}+\cdots+e_{i_{p}}}\] and notice that it provides a commuting \(\mathbb{Z}^{n+1}\)-multicomplex \({}^{\Box}C^{\bullet,\underline{\bullet}}\) sitting in degrees that belong to \(\{-1,0\}\times\mathbb{N}^{n}\), with the hypercube sitting in degrees \(\{-1\}\times\mathbb{Z}^{n}\) and \({}^{\Box}C^{0,\underline{\bullet}}=C^{\underline{\bullet}}\). Denote by \(C^{\bullet}\) the totalization of \(C^{\underline{\bullet}}\).

**Proposition 2.8**.: _Let \(C^{\underline{\bullet}}\) be an \(n\)-multicomplex satisfying \(C^{\underline{q}}=0\) for \(\underline{q}\not\in\mathbb{N}^{n}=\oplus_{i=1}^{n}\mathbb{N}e_{i}\). Then there exists a convergent spectral sequence:_ \[E_{1}^{p,q}=\oplus_{i_{1}<\ldots<i_{p}}H^{p+q}(^{+}C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}})\Rightarrow H^{p+q}(C^{\bullet}).\]

Proof.: First we may, and will, assume that \(C^{\underline{\bullet}}\) is a commutative \(n\)-multicomplex (applying \(\sigma\) as defined in Section 2 if it is anticommutative to go to this case). As \(\mathrm{Tot}(T^{\bullet}(C^{\underline{0}}))\) has trivial total cohomology, the complex \(T^{\bullet}:=\mathrm{Tot}({}^{\Box}C^{\bullet,\underline{\bullet}})\) satisfies \(H^{p+q}(C^{\bullet})\simeq H^{p+q}(T^{\bullet})\). For \(p\geq 0\), define \(X^{\prime}_{p}:=\mathbb{Z}\times X_{p}\), with \(X_{p}\) as in the proof of Proposition 2.7. The family of subcomplexes \(F_{p}^{\bullet}\) with \(F_{p}:=\oplus_{\underline{q}^{\prime}\in X^{\prime}_{p}}{}^{\Box}C^{\underline{q}^{\prime}}\) is a limited descending filtration of \(F_{0}^{\bullet}=T^{\bullet}\) and therefore yields a spectral sequence converging to \(H^{p+q}(T^{\bullet})\). For \(p\geq 1\), \(F_{p}/F_{p+1}\) is the direct sum of the complexes \({}^{+}C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}\), hence \[E_{1}^{p,q}=H^{p+q}(F_{p}/F_{p+1})\simeq\bigoplus_{i_{1}<\ldots<i_{p}}H^{p+q}(^{+}C^{\bullet}_{\mathrm{I}_{i_{1},\ldots,i_{p}}}).\] For \(p=0\), since \(\psi_{\underline{0}}=id\), \(F_{p}/F_{p+1}\) is the exact complex \(0\to C^{\underline{0}}\to C^{\underline{0}}\to 0\), with \(C^{\underline{0}}\) sitting in degrees \(-1\) and \(0\). Hence \(E_{2}^{0,q}=0\) for any \(q\) and the conclusion follows.

## 3. Cohomology in multiple Cech complexes

Let \(R\) be a commutative unitary ring.
For an \(R\)-module \(M\) and \(\mathbf{x}:=(x_{1},\ldots,x_{m})\) a sequence of elements in \(R\), \(\mathcal{C}^{\bullet}_{\mathbf{x}}(M)\) denotes the Cech complex and \(\check{\mathcal{C}}^{\bullet}_{\mathbf{x}}(M)\) the complex obtained by replacing \(\mathcal{C}^{0}_{\mathbf{x}}(M)\) by \(0\) in \(\mathcal{C}^{\bullet}_{\mathbf{x}}(M)\) (truncation). For \(n\) sequences \(\mathbf{a}_{i}:=(a_{i,1},\ldots,a_{i,m_{i}})\) of elements in \(R\), we write \(\mathbf{a}_{1}\cdots\mathbf{a}_{n}\) for the sequence of the \(m_{1}\cdots m_{n}\) elements \(a_{1,i_{1}}\cdots a_{n,i_{n}}\) lexicographically ordered. The ideal generated by the elements in \(\mathbf{a}_{i}\) is written \(\mathfrak{a}_{i}\). The complex \(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M):=\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1}}(R)\otimes_{R}\cdots\otimes_{R}\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{n}}(R)\otimes_{R}M\) has as first possibly non-trivial module \(\oplus M_{a_{1,i_{1}}\cdots a_{n,i_{n}}}=\check{\mathcal{C}}^{1}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)\), sitting in cohomological degree \(n\), and admits a natural augmentation from \(M\) to this first module. The augmented complex is denoted by \(\mathcal{C}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\). The sequences \(0\to\mathcal{C}^{n-1}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\to\mathcal{C}^{n}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\) and \(0\to\mathcal{C}^{0}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)\to\mathcal{C}^{1}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)\) are identical; the modules \(H^{i+n-1}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M):=H^{i+n-1}(\mathcal{C}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))\) and \(H^{i}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M):=H^{i}(\mathcal{C}^{\bullet}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M))\) thus coincide for \(i=0\) and are both equal to \(H^{0}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)\). We write \(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M):=H^{n}(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))=\ker(\phi)\) and \(D_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M):=H^{1}(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M))=\ker(\psi)\), where \(\phi\) and \(\psi\) are the differentials leaving the common first module \(\check{\mathcal{C}}^{n}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)=\check{\mathcal{C}}^{1}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)\) in the two complexes, and remark that \(D_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)\subseteq D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\). The exact sequence of complexes \(0\to\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\to\mathcal{C}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\to M\to 0\), where \(M\) also denotes the complex centered in the \(R\)-module \(M\) at degree \(n-1\), gives rise to an exact sequence \[0\to H^{n-1}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\to M\to D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\to H^{n}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\to 0\tag{1}\] and equalities \[H^{i}(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))=H^{i}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\] for all \(i>n+1\).

**Lemma 3.1**.: _Assume that \(M=H^{0}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)\), then \(\mathcal{C}^{i}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)=\mathcal{C}^{i+n-1}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)=0\) for any \(i>0\)._

Proof.: For \(i>0\), summands of \(\mathcal{C}^{i}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)\) or \(\mathcal{C}^{i+n-1}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\) are localizations \(M_{w}\), where \(w\) is a multiple of an element of the form \(a_{1,i_{1}}\cdots a_{n,i_{n}}\), for some \(n\)-tuple \((i_{1},\ldots,i_{n})\).
Recall that, for any sequence \(\mathbf{x}=(x_{1},\dots,x_{m})\) of elements in \(R\), \(q\in\mathbb{N}\), \(j\in\{1,\dots,m\}\) and \(\alpha\in H^{q}_{\mathbf{x}}(M)\), \(x_{j}^{t}\alpha=0\) for some \(t\) (see, for instance, the identification in [6, Theorem 2.3]). This extends to sums of localizations \(\mathcal{C}^{p}_{\mathbf{a}}(H^{q}_{\mathbf{x}}(M))\), and hence to the subquotients \(H^{p}_{\mathbf{a}}(H^{q}_{\mathbf{x}}(M))\) or the submodules \(D_{\mathbf{a}}(H^{q}_{\mathbf{x}}(M))\), for any sequence \(\mathbf{a}\) of elements in \(R\). Hence \(H^{i}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)=H^{0}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(H^{i}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M))\) for any \(i\geq 0\), but also:

**Lemma 3.2**.: _For any \(R\)-module \(M\) and any \(i\geq 0\), \(H^{i}_{\mathbf{a}_{1},\dots,\mathbf{a}_{n}}(M)=H^{0}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(H^{i}_{\mathbf{a}_{1},\dots,\mathbf{a}_{n}}(M))\)._

Proof.: The case \(n=1\) is the remark before the statement and we induct on \(n\). The result is clear for \(i\leq n-1\) since \(H^{n-1}_{\mathbf{a}_{1},\dots,\mathbf{a}_{n}}(M)=H^{0}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)\) and \(H^{i}_{\mathbf{a}_{1},\dots,\mathbf{a}_{n}}(M)=0\) for \(i<n-1\). For \(i=n\), by recursion hypothesis and the exact sequence (1), given \(a_{j,i_{j}}\in\mathbf{a}_{j}\) for \(1\leq j\leq n\) and \(x\in D_{\mathbf{a}_{1},\dots,\mathbf{a}_{n}}(M)=D_{\mathbf{a}_{1},\dots,\mathbf{a}_{n-1}}(D_{\mathbf{a}_{n}}(M))\), there exists \(N\) such that \((a_{1,i_{1}}\cdots a_{n-1,i_{n-1}})^{N}x\in D_{\mathbf{a}_{n}}(M)\). Hence \((a_{1,i_{1}}\cdots a_{n,i_{n}})^{N^{\prime}}x\in M\) for some \(N^{\prime}\geq N\). In other words, \((a_{1,i_{1}}\cdots a_{n,i_{n}})^{N^{\prime}}x=0\) in \(H^{n}_{\mathbf{a}_{1},\dots,\mathbf{a}_{n}}(M)\). For \(i\geq n+1\), notice that if \(E^{\bullet,\bullet}_{t}\Rightarrow H^{\bullet}\) is a spectral sequence such that \(H^{0}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(E^{p,q}_{t})=E^{p,q}_{t}\), for some \(t\) and all \(p,q\) with \(p+q\geq s\), then \(H^{0}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(H^{i})=H^{i}\) for all \(i\geq s\). The double complex with \(D^{p,q}:=\check{\mathcal{C}}^{p}_{\mathbf{a}_{1},\dots,\mathbf{a}_{n-1}}(R)\otimes_{R}\check{\mathcal{C}}^{q}_{\mathbf{a}_{n}}(R)\otimes_{R}M\) gives rise to two spectral sequences that abut to the cohomology of \(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1},\dots,\mathbf{a}_{n}}(M)\). Taking \(t:=n+1\) and \(s:=n+2\) in any of these two spectral sequences, it then follows by recursion that \(H^{i}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)=H^{0}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(H^{i}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))\) for all \(i\geq n+1\) according to what was noticed just above - also recall that if \(\mathfrak{a}\subseteq\mathfrak{b}\), then for any module \(N\), \(H^{0}_{\mathfrak{b}}(N)=N\) implies \(H^{0}_{\mathfrak{a}}(N)=N\).

**Theorem 3.3**.: _For any \(R\)-module \(M\) and any \(i\),_ \[H^{i+n-1}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\simeq H^{i}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)\] _and \(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\simeq D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)\)._

Proof.: We may assume that \(H^{0}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)=0\) according to Lemma 3.1.
The exact sequences \[0\to M\to D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)\to H^{1}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)\to 0\] and \[0\to M\to D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\to H^{n}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\to 0\] provide two isomorphisms of complexes: \[\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)\simeq\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M))\quad\text{and}\quad\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(M)\simeq\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))\] according to Lemma 3.1, since \(H^{1}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)=H^{0}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(H^{1}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M))\), and \(H^{n}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)=H^{0}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(H^{n}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))\) by Lemma 3.2. In particular, (a) \(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)=D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M))\) and \(D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)=D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))\); (b) \(H^{i+n-1}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)=H^{i+n-1}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M))\) and \(H^{i}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)=H^{i}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))\), for any \(i\geq 2\). Lemma 3.2 assures that the two spectral sequences associated to \(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1}\cdots\mathbf{a}_{n}}(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))\) collapse at step 2; it thus provides isomorphisms \[H^{i}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))\simeq H^{i+n-1}_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)),\quad\forall i\geq 2\] and shows that \(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M))=D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M))\). Thus, by (a), \(D_{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}}(M)=D_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)\) and the conclusion follows from (b).

**Corollary 3.4**.: _For any \(R\)-module \(M\), \(1\leq i_{1}<\cdots<i_{p}\leq n\) with \(p\geq 1\) and any \(i\),_ (a) \(H^{i+p-1}_{\mathbf{a}_{i_{1}},\ldots,\mathbf{a}_{i_{p}}}(M)\simeq H^{i}_{\mathbf{a}_{i_{1}}\cdots\mathbf{a}_{i_{p}}}(M)\)_,_ (b) \(H^{i+p-1}(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{i_{1}},\ldots,\mathbf{a}_{i_{p}}}(M))\simeq H^{i}(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}_{i_{1}}\cdots\mathbf{a}_{i_{p}}}(M))\)_._

## 4. The four Mayer-Vietoris spectral sequences

In this section, we make use of Theorem 2.3 and Theorem 3.3 to construct four spectral sequences; two go from cohomology supported in sums of the given ideals to the cohomology supported in the product of all of them, the others from cohomology supported in products of the given ideals to the cohomology supported in the sum of all of them; in each case, the versions include non-augmented and augmented Cech complexes.
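For orientation, we recall the case \(n=2\) that these spectral sequences generalize (see Remark 4.2 below; the display itself is our addition): for two finitely generated ideals \(\mathfrak{a}\) and \(\mathfrak{b}\), both degenerate to the classical Mayer-Vietoris long exact sequence
\[\cdots\to H^{i}_{\mathfrak{a}+\mathfrak{b}}(M)\to H^{i}_{\mathfrak{a}}(M)\oplus H^{i}_{\mathfrak{b}}(M)\to H^{i}_{\mathfrak{a}\mathfrak{b}}(M)\to H^{i+1}_{\mathfrak{a}+\mathfrak{b}}(M)\to\cdots,\]
where \(H^{i}_{\mathfrak{a}\mathfrak{b}}(M)=H^{i}_{\mathfrak{a}\cap\mathfrak{b}}(M)\) since \(\mathfrak{a}\mathfrak{b}\) and \(\mathfrak{a}\cap\mathfrak{b}\) have the same radical.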
To follow usual notation, we set \(\check{H}^{i}_{\mathfrak{a}}(M):=H^{i+1}(\check{\mathcal{C}}^{\bullet}_{\mathbf{a}}(M))\) if \(\mathfrak{a}\) is generated by the elements in \(\mathbf{a}\).

**Theorem 4.1**.: _Let \(M\) be an \(R\)-module and \(\mathfrak{a}_{i}\) be finitely generated ideals, then there exist four convergent spectral sequences_ (1a) \(E_{1}^{n-p,q}=\bigoplus_{i_{1}<\dots<i_{p}}^{1\leq p\leq n}H^{q}_{\mathfrak{a}_{i_{1}}+\dots+\mathfrak{a}_{i_{p}}}(M)\Rightarrow_{p}H^{q-(p-1)}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)\)_,_ (1b) \(E_{1}^{n-p,q}=\bigoplus_{i_{1}<\dots<i_{p}}^{1\leq p\leq n}\check{H}^{q}_{\mathfrak{a}_{i_{1}}+\dots+\mathfrak{a}_{i_{p}}}(M)\Rightarrow_{p}\check{H}^{q-(p-1)}_{\mathfrak{a}_{1}\cdots\mathfrak{a}_{n}}(M)\)_,_ (2a) \(E_{1}^{p,q}=\bigoplus_{i_{1}<\dots<i_{p}}^{1\leq p\leq n}H^{q}_{\mathfrak{a}_{i_{1}}\cdots\mathfrak{a}_{i_{p}}}(M)\Rightarrow_{p}H^{q+(p-1)}_{\mathfrak{a}_{1}+\dots+\mathfrak{a}_{n}}(M)\)_,_ (2b) \(E_{1}^{p,q}=\bigoplus_{i_{1}<\dots<i_{p}}^{1\leq p\leq n}\check{H}^{q}_{\mathfrak{a}_{i_{1}}\cdots\mathfrak{a}_{i_{p}}}(M)\Rightarrow_{p}\check{H}^{q+(p-1)}_{\mathfrak{a}_{1}+\dots+\mathfrak{a}_{n}}(M)\)_._

Proof.: We will apply Theorem 2.3 to the \(n\)-multicomplex \(C^{\bullet}=\mathcal{C}^{\bullet}_{\mathfrak{a}_{1}}(\mathcal{C}^{\bullet}_{\mathfrak{a}_{2}}(\cdots(\mathcal{C}^{\bullet}_{\mathfrak{a}_{n}}(M))))\) for (1a) and (2a) and to the punctured complex \(C^{\bullet}_{\mathrm{P}}\) for (1b) and (2b). Recall that the \(i\)-th homology of the totalization of \(C^{\bullet}_{\mathrm{F}_{i_{1},\dots,i_{p}}}=\mathcal{C}^{\bullet}_{\mathfrak{a}_{i_{1}}}(\mathcal{C}^{\bullet}_{\mathfrak{a}_{i_{2}}}(\cdots(\mathcal{C}^{\bullet}_{\mathfrak{a}_{i_{p}}}(M))))\) is \(H^{i}_{\mathfrak{a}_{i_{1}}+\dots+\mathfrak{a}_{i_{p}}}(M)\), while the one of \(C^{\bullet}_{\mathrm{P}_{i_{1},\dots,i_{p}}}=(C^{\bullet}_{\mathrm{P}})_{\mathrm{F}_{i_{1},\dots,i_{p}}}=(C^{\bullet}_{\mathrm{F}_{i_{1},\dots,i_{p}}})_{\mathrm{P}}\) is \(\check{H}^{i-1}_{\mathfrak{a}_{i_{1}}+\dots+\mathfrak{a}_{i_{p}}}(M)\). By Corollary 3.4 (a), \(H^{i}({}^{+}C^{\bullet}_{\mathrm{I}_{i_{1},\dots,i_{p}}})=H^{i-p+1}_{\mathfrak{a}_{i_{1}}\cdots\mathfrak{a}_{i_{p}}}(M)\), while \(H^{i}(C^{\bullet}_{\mathrm{I}_{i_{1},\dots,i_{p}}})=\check{H}^{i-p}_{\mathfrak{a}_{i_{1}}\cdots\mathfrak{a}_{i_{p}}}(M)\) by Corollary 3.4 (b). Therefore (1a), (1b), (2a) and (2b) follow respectively from items (2), (1), (4) and (3) in Theorem 2.3.

Items (1b) and (2b) concern local cohomology in the version that equals sheaf cohomology: \(\check{H}^{i}_{\mathfrak{a}}(M)=H^{i}(U,\tilde{M})\) with \(U:=\mathrm{Spec}(R)\setminus V(\mathfrak{a})\) by [5, 1.2.3 & 1.4.3]. In the other two versions (1a) and (2a), Cech cohomology is related to sheaf cohomology by an exact sequence proved in the same reference, and \(H^{i}_{\mathfrak{a}}(M)=\check{H}^{i-1}_{\mathfrak{a}}(M)=H^{i-1}(U,\tilde{M})\) for \(i\geq 2\). Also recall that \(H^{i}_{\mathfrak{a}}(-)\) is the \(i\)-th right derived functor of \(H^{0}_{\mathfrak{a}}(-)\) whenever \(R\) is Noetherian. To give a hint on the behavior of the spectral sequences above, we make a few comments on the spectral sequence (1a); its construction is similar to that of (1b). For concrete examples and applications, which partially inspired this work, see [1, 8, 7]. In general, its differential \(d_{r}\) has degree \((r,1-r)\), i.e. \(d_{r}^{n-p,q}:E_{r}^{n-p,q}\to E_{r}^{n-p+r,q-r+1}\) for all \(p,q\).
For three ideals it has only three columns, so that it degenerates at the third page. A sketch of such a spectral sequence, where the blue arrows represent first-page differentials and the red ones represent the directions of the second-page differentials, appears in the original and is not reproduced here; its rightmost column consists of the modules \(H^{j}_{\mathfrak{a}_{1}+\mathfrak{a}_{2}+\mathfrak{a}_{3}}(M)\), \(H^{j+1}_{\mathfrak{a}_{1}+\mathfrak{a}_{2}+\mathfrak{a}_{3}}(M)\), \(H^{j+2}_{\mathfrak{a}_{1}+\mathfrak{a}_{2}+\mathfrak{a}_{3}}(M)\), ... interleaved with zero columns. The names of the \(E_{1}^{\bullet,\bullet}\) differentials are simplified to improve clarity (e.g. \(\varphi=d_{1}^{0,j+2}\)); these are given by the natural maps coming from the inclusions of ideals (e.g. the inclusion \(\mathfrak{a}_{1}\subseteq\mathfrak{a}_{1}+\mathfrak{a}_{2}\) provides a natural map \(H_{\mathfrak{a}_{1}+\mathfrak{a}_{2}}^{i}(-)\to H_{\mathfrak{a}_{1}}^{i}(-)\) for every \(i\)), each of these pieces taken with an appropriate sign. The dotted line is the diagonal where the filtration \(0\subseteq F^{2}\subseteq F^{1}\subseteq F^{0}=H_{\mathfrak{a}_{1}\mathfrak{a}_{2}\mathfrak{a}_{3}}^{j}(M)\) is given by the infinity terms (here isomorphic to the terms at step 3). This filtration satisfies: \[F^{2}\simeq\operatorname{coker}(\ker(\varphi^{\prime})\underline{\to}\operatorname{coker}(\psi^{\prime\prime})),\quad F^{1}/F^{2}\simeq\frac{\ker(\psi^{\prime})}{\operatorname{im}(\varphi^{\prime})},\quad F^{0}/F^{1}\simeq\ker(\ker(\varphi)\underline{\to}\operatorname{coker}(\psi^{\prime})).\]

**Remark 4.2**.: 1. In Theorem 4.1 (1a) we retrieve the Mayer-Vietoris spectral sequence in [8]. 2. A spectral sequence as (2b) in Theorem 4.1 could be obtained as a Cech spectral sequence, by [4, Theoreme 5.4.1]. Indeed, the first terms in the quoted spectral sequence are described in [4, Chap. II, SS5.3] as sheaf cohomologies on open sets that are intersections of the open complements \(U_{i}\) of \(V(\mathfrak{a}_{i})\) in \(\operatorname{Spec}(R)\): the ones that appear in the Cech covering of \(\cup_{i}U_{i}=\operatorname{Spec}(R)\setminus V(\mathfrak{a}_{1}+\cdots+\mathfrak{a}_{n})\). These sheaf cohomology modules are in turn isomorphic to local cohomologies with support in products of the ideals, by [5, 1.2.3 & 1.4.3], since \(U_{i_{1}}\cap\cdots\cap U_{i_{p}}=\operatorname{Spec}(R)\setminus V(\mathfrak{a}_{i_{1}}\cdots\mathfrak{a}_{i_{p}})\). 3. Such spectral sequences degenerate into the celebrated Mayer-Vietoris long exact sequence when one considers only two ideals. It is a different way from [9, Theorem 9.4.3] and [10] of obtaining this exact sequence.

**Acknowledgements.** The first and third named authors thank the France-Brazil network RFBM for supporting this work. The second named author was supported by a CAPES Doctoral Scholarship.
2310.07787
Using Spark Machine Learning Models to Perform Predictive Analysis on Flight Ticket Pricing Data
This paper discusses predictive performance and processes undertaken on flight pricing data, evaluated with R2 (R-squared) and RMSE, leveraging a large dataset, originally from Expedia.com, consisting of approximately 20 million records or 4.68 gigabytes. The project aims to determine the best models usable in the real world to predict airline ticket fares for non-stop flights across the US. Therefore, good generalization capability and optimized processing times are important measures for the model. We will discover key business insights utilizing feature importance and discuss the process and tools used for our analysis. Four regression machine learning algorithms were utilized: Random Forest, Gradient Boost Tree, Decision Tree, and Factorization Machines, utilizing Cross Validator and Training Validator functions for assessing performance and generalization capability.
Philip Wong, Phue Thant, Pratiksha Yadav, Ruta Antaliya, Jongwook Woo
2023-10-11T18:20:17Z
http://arxiv.org/abs/2310.07787v1
# Using Spark Machine Learning Models to Perform Predictive Analysis on Flight Ticket Pricing Data

###### Abstract

This paper discusses predictive performance and processes undertaken on flight pricing data, evaluated with R2 (R-squared) and RMSE, leveraging a large dataset, originally from Expedia.com, consisting of approximately 20 million records or 4.68 gigabytes. The project aims to determine the best models usable in the real world to predict airline ticket fares for non-stop flights across the US. Therefore, good generalization capability and optimized processing times are important measures for the model. We will discover key business insights utilizing feature importance and discuss the process and tools used for our analysis. Four regression machine learning algorithms were utilized: Random Forest, Gradient Boost Tree, Decision Tree, and Factorization Machines, utilizing Cross Validator and Training Validator functions for assessing performance and generalization capability.

Regression, Random Forest, Gradient Boost Tree, Decision Tree, Factorization Machines, Spark, Hadoop

## 1 Introduction & Business Case

The business case for predicting pricing has two stakeholders: airline operators and customers. Airlines may optimize their pricing strategy by predicting pricing trends over time, setting appropriate prices for specific routes, and comparing future prices against their competitors, creating a competitive advantage. As an example of competitive advantage and pricing optimization, a low-cost carrier such as Southwest Airlines may want to set prices for future flights that are lower than their competitors' but not so low that the carrier leaves revenue on the table it could have generated. Likewise, travelers may find that the analysis provides unique insights into which specific factors (or features) impact airline pricing. This would allow the savvy traveler to gain an advantage over others. Also, the ability to forecast ticket fares enables the consumer to better plan budgets and find opportunities to lower the overall cost of airfare.

## 2 Data Set Used

The dataset used in this paper is taken from an open-source platform, Kaggle.com [5]. The dataset is in CSV format and contains information about Expedia's purchasable tickets between April 2022 and October 2022.

## 3 Technical Specifications

\begin{table} \begin{tabular}{|l|l|} \hline Cluster Version & Hadoop 3.2.1-amzn-3.1 \\ \hline No of CPUs & 8 CPUs \\ \hline Pyspark Version & 3.0 \\ \hline Number of Nodes & 5 \\ \hline CPU speed & 2.20 GHz \\ \hline Total Storage & 481 GB \\ \hline \end{tabular} \end{table} Table 1: H/W Specification

Figure 1: Dataset Specification

Figure 2: Features and Label Overview – TableauPrep

## 4 Related Work

The paper "Airline Fare Prediction Using Machine Learning Algorithms" investigates the variables influencing airfare and aims to forecast travel costs [1]. It analyzes flight schedules, destinations, length, holidays, and vacations as factors affecting ticket prices. The study employs seven machine learning models, including linear quantile mixed regression, Learn++, Bayesian estimation with Kalman filter, and ARMA mixed with random forest algorithms. It also utilizes regression machine learning models like Extreme Learning Machine (ELM), Multilayer Perceptron (MLP), Generalized Regression Neural Network, Random Forest Regression Tree, Regression Tree, Linear Regression (LR), and Regression SVM (Polynomial) for cost prediction. Our research shares the same objective of comparing machine learning models for predicting airline ticket costs; however, it distinguishes itself by incorporating Factorization Machines (FM) and Gradient Boosted Tree (GBT) techniques, which are not used in the mentioned paper.
The paper, "Flight Fare Prediction System Using Machine Learning"; explores the use of machine learning algorithms for predicting airline ticket prices [2]. The study compares various supervised learning algorithms, including Classification Tree (CART), Logistic Regression (LR), Naive Bayes, SoftMax Regression, and Support Vector Machines (SVMs), to classify \begin{table} \begin{tabular}{|l|l|} \hline Cluster Version & Hadoop 3.2.1-amzn-3.1 \\ \hline No of CPUs & 8 CPUs \\ \hline Pyspark Version & 3.0 \\ \hline Number of Nodes & 5 \\ \hline CPU speed & 2.20 GHz \\ \hline Total Storage & 481 GB \\ \hline \end{tabular} \end{table} Table 1: H/W Specification Figure 1: Dataset Specification Figure 2: Features and Label Overview – TableauPrep ticket prices into different bins relative to the average price. The dataset encompasses various attributes, including the source and destination of flights, departure dates, departure times, number of stops, arrival times, and corresponding costs. This article aims to achieve the same objective as our research work which is to compare different machine-learning models for predicting airline ticket costs. In this article, SVM classification is used to categorize costs as "greater" or "lower" than the average, which is not used in our article. The paper, "Regression - Flight Price Prediction," conducts an exploratory analysis of the data and performs similar regression testing using DT, SVR, KNN, and LR in addition to ensemble models [3]. There are 13,000 records in the dataset and 11 fields. Our research shares the same objective of comparing machine learning models for predicting airline ticket costs, and the paper also identified GBT as the best-performing algorithm; however, the difference includes critical differences. Firstly, the dataset difference in size and richness is substantially smaller and less complex compared to our dataset of 20 million records and 15 fields analyzed; our sample size alone of nearly 100k records is already much larger. We also leveraged Hadoop BigData systems to process our larger dataset while the author did not specify any technical information regarding the tools or systems used in their analysis. In terms of method, the author of this paper did not describe or use feature importance or engineering to determine fields or draw business insights similar to our analysis. ## 5 Feature Importance and Engineering Using feature importance generated with the GBT algorithm was used for our analysis of the features as it was one of the best-performing machine learning algorithms for the model. ### Feature Engineering Findings The features _StartingAiport_ and _FlightDuration_ both had the most impact on the model, followed by _DestinationAirport_ and _SeatsRemaining_. This appears to be expected as these parameters will have a more significant impact on pricing. A key finding was the _AirlineEqument_ field, which represents the aircraft used for the specific flight segment in the context of the data. This field had a small but consistent impact on the R2 values across the machine-learning algorithms. This could mean that travelers looking to find a lower-cost alternative may be able to identify specific planes and imply that newer planes have low operating costs, leading to lower flight ticket prices. Concatenating fields that may have more meaning together than separate such as depart and arrival airport (which creates a unique route), had little to no impact on R2. 
Another finding was that, despite the large dataset, the timeframe of the data spanned only 3 to 4 months, which may have limited the predictive pricing performance of our models.

## 6 Workflow Architecture

Figure 4 shows the workflow used to build the machine learning models. The first step was to find the right source of data. For the flight dataset we chose, we defined objectives in terms of measurable goals that we wanted to achieve by the end of the project. We downloaded the dataset from the Kaggle website. Once the data is collected, it needs to be processed and prepared for use by the model. This involves tasks like cleaning the data, scaling and normalizing the features, and transforming the data into a format suitable for the model. We also performed feature engineering to see which features have the highest importance and which are appropriate for the model. Before training the model, it is important to split the data into a training set and a testing set. We split the data into 70% for training and 30% for testing. The training set is used to train the model, while the testing set is used to evaluate the performance of the model and ensure that it generalizes well to new, unseen data. Once the data has been prepared and split, the next step is to train the machine-learning model using an appropriate algorithm. This also involves finding the best parameters or hyperparameters to optimize the model's performance. We then build a model on the training data. Once the model has been trained, it needs to be tested and validated using the testing set. We implement models using regression algorithms so that the accuracy of the models can be measured through Root Mean Square Error (RMSE) and Coefficient of Determination (R2).

Fig 3: Feature Importance of Gradient Boost Tree

Fig 4: Workflow Architecture Diagram

### Machine Learning Algorithms

Regression is a supervised machine learning (ML) technique that is used to predict continuous values. In part one of our project involving flight price prediction, we make use of regression models, as price, the target variable, is a continuous numeric variable. We used four regression algorithms - Random Forest Regression, Gradient Boost Tree Regression, Factorization Machine Regression, and Decision Tree Regression. We implemented these models on the sample dataset to predict the flight price. These four algorithms were implemented in PySpark.

### Random Forest Regression

The parameters we used in paramGrid are used to fine-tune the Random Forest model through hyperparameter tuning. For example, the maxDepth parameter controls the depth of each decision tree, with values like 13 and 16 being explored to strike the right balance between capturing complex patterns and preventing overfitting. We observed that, when training with a train-validation split, tuning the hyperparameters helps find the optimal configuration that maximizes the model's performance. In terms of evaluation, from our results, we found that the R-squared value for the Train-Validation split (TV) is slightly higher than for Cross-Validation (CV), with a difference of only 0.01. However, when comparing these results on the full dataset, the R-squared values for CV and TV are almost identical. The main difference lies in computation time, with CV taking almost double the time compared to TV on the full dataset; a sketch of this tuning setup appears below.
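The following hedged sketch shows such a tuning setup; the maxDepth grid values come from the text, while `prepared_df` (the assembled DataFrame from the previous sketch) and the label column are illustrative assumptions:

```python
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder, TrainValidationSplit

rf = RandomForestRegressor(featuresCol="features", labelCol="TotalFare")

# maxDepth values 13 and 16, as explored in the text.
grid = ParamGridBuilder().addGrid(rf.maxDepth, [13, 16]).build()
r2 = RegressionEvaluator(labelCol="TotalFare", metricName="r2")
rmse = RegressionEvaluator(labelCol="TotalFare", metricName="rmse")

# Train-validation split fits each candidate once (75/25 split), roughly
# half the cost of k-fold cross-validation, which refits once per fold.
tv = TrainValidationSplit(estimator=rf, estimatorParamMaps=grid,
                          evaluator=r2, trainRatio=0.75)
cv = CrossValidator(estimator=rf, estimatorParamMaps=grid,
                    evaluator=r2, numFolds=3)

# prepared_df: the feature-engineered DataFrame (assumed from earlier sketch).
train, test = prepared_df.randomSplit([0.7, 0.3], seed=42)  # 70/30 as in the text
best = tv.fit(train).bestModel
preds = best.transform(test)
print("R2:", r2.evaluate(preds), "RMSE:", rmse.evaluate(preds))
```

Swapping `tv` for `cv` in the fit call reproduces the CV-versus-TV timing comparison discussed above.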
### Gradient Boost Tree Regressor The Gradient Boost Tree algorithm was evaluated with both the TrainingValidator function and the CrossValidator function, with results showing similar R2 values of 0.607 and 0.587, respectively, on the sampled dataset, with reasonably good RMSE values of 96.95 and 97.51. The TrainingValidator was selected to be applied on the full dataset as it provided a consistently higher R2 value than the CrossValidator function at nearly half the processing time. A key finding was that, when GBT was applied with the optimized parameters to the full dataset, performance increased significantly, by 15% (to R2 = 0.706), versus the sample, with a similar decrease in RMSE. An additional finding was that the MaxIter parameter consistently performed best at a value of 5, where the default is 20. ### Decision Tree Regressor The following results have been obtained from testing the R2 and RMSE values on a sample dataset, both for cross-validation and train validation. On the sample dataset, the cross-validation (CV) R2 value is slightly higher, with a mere 0.02 difference. Hence, we can conclude that the difference is negligible. Allowing the decision tree to grow excessively deep during cross-validation can result in overfitting. To overcome this issue, we can employ a train-validation split instead. This approach enables us to control the tree's depth and improve its generalization capability. ### Factorization Machines Hyperparameter tuning was conducted to determine the optimal configuration of the model, resulting in the selection of the best-performing model. Step sizes of 1 and 0.5 worked well with the model, yielding higher accuracy. The R2 and RMSE values were tested on a sample dataset, both for cross-validation and train validation. The results showed that the Train-Validation R2 value is slightly higher, with only a 0.1 difference. On the full dataset, the R-squared values for CV and TV are nearly identical. However, the significant difference between the two lies in the computation time, with cross-validation taking approximately twice as long as train validation. The results and insights obtained by comparing the performance of the four algorithms on the Full dataset using both Cross Validation (CV) and Train Validation (TV) are as follows. The comparison results are presented in tables, and the R2 and RMSE values indicate little difference between CV and TV. However, there is a noticeable difference in the training time of the models. The Random Forest and Gradient Boost Tree algorithms are found to have the highest accuracy, both achieving above 70%. Based on this analysis, the recommended model to use in the Train Validation scenario is the Gradient Boost Tree (GBT), which has a shorter training time compared to the other algorithms while maintaining a similar level of accuracy to the Random Forest. ## 9 Conclusion Our paper's objective was to discover the most efficient models for forecasting flight ticket prices. We conducted a comprehensive analysis to predict prices using various Regression algorithms and compared their performance. Out of the four regression algorithms tested, the Gradient Boost Tree algorithm demonstrated the highest accuracy for price prediction, achieving an RMSE of 83.17 and an R2 value of 0.706. 
Consequently, this algorithm emerged as the most suitable choice for predicting prices compared to the other regression algorithms. Additionally, our study emphasized the significance of feature importance analysis, which enables informed decisions regarding feature selection, model improvement, and future data collection. By understanding the relative importance of different features, we can enhance the accuracy and efficiency of predictive models. We evaluated the performance of the machine learning models using both cross-validation and train-validation split techniques. Cross-validation was valuable in selecting the optimal hyperparameters, while the train-validation split allowed us to assess the overall model performance on new data. Our findings have practical implications for both airlines and customers. Airlines can leverage predictive capabilities to optimize pricing strategies for specific routes and seasons. By analyzing pricing trends, airlines can develop effective strategies tailored to different routes and periods. Similarly, customers can utilize the dataset to forecast future flight prices and plan their journeys accordingly. Applying these insights has the potential to enhance the efficiency and effectiveness of pricing strategies in the airline industry, benefiting both businesses and customers alike.
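As a closing illustration of how the RMSE and R2 figures reported above can be computed in PySpark, the following is a minimal sketch for a GBT model with maxIter=5 (the best-performing value noted earlier); train_df, test_df and the column names are illustrative assumptions, not the study's code.

```python
# Sketch: fit GBT with the maxIter=5 setting noted above and report metrics.
# Column names and DataFrames are hypothetical placeholders.
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import GBTRegressor

gbt = GBTRegressor(featuresCol="features", labelCol="totalFare", maxIter=5)
pred = gbt.fit(train_df).transform(test_df)

rmse = RegressionEvaluator(labelCol="totalFare", metricName="rmse").evaluate(pred)
r2 = RegressionEvaluator(labelCol="totalFare", metricName="r2").evaluate(pred)
print(f"RMSE = {rmse:.2f}, R2 = {r2:.3f}")
```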
2305.03757
MilliKelvin microwave impedance microscopy in a dry dilution refrigerator
Microwave impedance microscopy (MIM) is a near-field imaging technique that has been used to visualize the local conductivity of materials with nanoscale resolution across the GHz regime. In recent years, MIM has shown great promise for the investigation of topological states of matter, correlated electronic states and emergent phenomena in quantum materials. To explore these low-energy phenomena, many of which are only detectable in the milliKelvin regime, we have developed a novel low-temperature MIM incorporated into a dilution refrigerator. This setup, which consists of a tuning-fork-based atomic force microscope with microwave reflectometry capabilities, is capable of reaching temperatures down to 70 mK during imaging and magnetic fields up to 9 T. To test the performance of this microscope, we demonstrate microwave imaging of the conductivity contrast between graphite and silicon dioxide at cryogenic temperatures and discuss the resolution and noise observed in these results. We extend this methodology to visualize edge conduction in Dirac semimetal cadmium arsenide in the quantum Hall regime.
Leonard Weihao Cao, Chen Wu, Rajarshi Bhattacharyya, Ruolun Zhang, Monica T. Allen
2023-05-05T18:00:05Z
http://arxiv.org/abs/2305.03757v2
# Millikelvin microwave impedance microscopy in a dry dilution refrigerator ###### Abstract Microwave impedance microscopy (MIM) is a near-field imaging technique that has been used to visualize the local conductivity of materials with nanoscale resolution across the GHz regime. In recent years, MIM has shown great promise for the investigation of topological states of matter, correlated electronic states and emergent phenomena in quantum materials. To explore these low-energy phenomena, many of which are only detectable in the millikelvin regime, we have developed a novel low-temperature MIM incorporated into a dilution refrigerator. This setup, which consists of a tuning-fork-based atomic force microscope with microwave reflectometry capabilities, is capable of reaching temperatures down to 70 mK during imaging and magnetic fields up to 9 T. To test the performance of this microscope, we demonstrate microwave imaging of the conductivity contrast between graphite and silicon dioxide at cryogenic temperatures and discuss the resolution and noise observed in these results. We extend this methodology to visualize edge conduction in Dirac semimetal cadmium arsenide in the quantum Hall regime. ## I Introduction Microwave impedance microscopy (MIM) has the unique capacity to probe the local conductivity and permittivity of quantum materials with nanoscale spatial resolution [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. This enables direct visualization of the microscopic nature of electronic states, including the real-space disorder landscape, multi-domain behavior, or the presence of topological modes that propagate along the sample boundaries. By coupling microwaves with a wavelength of 1-100 cm to a sharp metallic probe and collecting the reflected signal, MIM characterizes the complex admittance between the tip and the sample without the requirement for the sample to be conductive, which is less restrictive than other electronic imaging techniques [11; 12; 13; 14; 15]. As demonstrated in recent experiments, MIM can provide insight into the real-space nature of correlated states and topological states in two-dimensional heterostructures [16; 17; 18; 19; 20; 9]. However, many of these states are characterized by low energy scales and are therefore most robust at millikelvin temperatures, motivating the development of cryogenic MIM instrumentation. Thus far, most state-of-the-art MIM experiments have been performed in 1.5-2 K [21] or He-3 cryostats, which can reach a minimum temperature of 450 mK [20]. Here we report on the construction of a novel millikelvin MIM, which will support spatially-resolved detection of quantum electronic states at ultra-low temperatures. This setup consists of a scanning probe microscope with tuning-fork-based height feedback integrated into a dry dilution refrigerator. A sharp metallic probe driven by an AC signal at microwave frequency is coupled to the tuning fork and scanned over the sample. Using reflectometry, MIM detects the sample's response to high frequency electromagnetic fields emanating from the probe. To demonstrate the measurement capabilities of this setup, we present MIM images of the conductivity contrast between graphite and SiO\({}_{2}\) down to temperatures of 70 mK. Finally, we also demonstrate microwave imaging of edge states in Cd\({}_{3}\)As\({}_{2}\) thin films in the quantum Hall regime at the base temperature. 
## II Experimental Setup This setup consists of a custom-designed tuning-fork-based atomic force microscope (AFM) integrated into a Leiden Cryogenics CF-CS110 dilution refrigerator. The microscope housing is in thermal equilibrium with the mixing chamber plate on the cold-insertable probe, which is loaded into the dilution refrigerator, as shown schematically in Figure 1(a). Figure 1(b) shows the design of the microscope head, which houses an etched tungsten wire mounted onto the end of one prong of a tuning fork (TF) mechanical resonator (blue box) [22]. The oscillation amplitude of the TF is monitored for continuous height feedback, which enables tapping-mode topographic imaging [23]. Below the tip holder, the sample stage is mounted on a stack of CuBe piezoelectric scanners (Attocube AN-Sxyz100) and positioners (ANPx(z)100), which control fine xyz scanning (up to 40 \(\mu\)m \(\times\) 40 \(\mu\)m below 4 K) and coarse positioning (5 mm \(\times\) 5 mm below 4 K), respectively. On the MIM circuitry side, GHz signals are generated by an analog signal generator, and one split branch of the signal is coupled to the tip via an impedance matching network (IMN) [24], which is responsible for minimizing the reflected signal [inset in Figure 1(c)] [25]. A plot of the reflected microwave power \(S_{11}\) of an example IMN is shown in Figure 1(c), showing the first resonance at 1.8 GHz. The reflected signal from the tip passes through two directional couplers mounted on the probe-still plate (1 K) to cancel out the residual reflected power. The signal from the sample is then amplified by a cryogenic amplifier (Cosmic Microwave Technology CITCRYO1-12) mounted on the 3 K stage, after which the signal propagates out of the probe and gets further amplified and demodulated at room temperature, as shown in Figure 1(a). During the tip approach procedure, active height feedback can be performed by monitoring either the TF oscillation amplitude or the MIM signal. Here we use a Nanonis SC5 controller to excite and track the TF oscillation and to control the fine scanners during imaging [25]. Figure 1(d) displays a measurement of the oscillation amplitude of the tuning fork as a function of excitation frequency, showing the resonance peak near 32.768 kHz. The Q-factor of the resonance is around 500 - 2000 at room temperature (upper panel), while at base temperature it can easily reach 10,000-100,000 (lower panel). The main technical challenge of microwave imaging in a dry dilution fridge is the emergence of new noise sources, which impact both the spatial resolution and the signal-to-noise ratio of the microwave reflectometry measurements. There are two main sources of increased noise: (1) mechanical pulse tube vibrations, which are associated with the cooling mechanism of the dilution fridge, place limits on the lateral spatial resolution and add noise to the measured MIM signal, and (2) the high Q factor of the tuning fork at mK temperatures leads to fluctuations in the tip-sample distance, which also couple to the pulse tube vibration. Our fridge is equipped with a pulse tube cryocooler operating at \(\sim 1.4\) Hz [26; 27], generating vibrations that amplitude modulate the tuning fork oscillation, and consequently also modulate the GHz MIM signal. 
To mitigate these vibrations, we physically decoupled the pulse tube motion from the microscope by unscrewing the rotary valve from the fridge and putting isolation foam in between [28], while the high-pressure helium lines connected to the compressor are wrapped with acoustic pipe lagging. We found that performing AC-mode MIM imaging, described below, largely eliminates background oscillations in the images that arise from pulse tube vibrations. In AC height-modulated imaging mode, a low frequency lock-in amplifier (SR830) is added to the output of the GHz frequency mixer to demodulate the reflected MIM signal at the tuning fork resonance frequency (32 kHz), after which low-pass filters can be used to attenuate noise [25]. We note that because the GHz MIM signal (from the tip) is amplitude-modulated by both the tuning fork oscillation at 32 kHz and the pulse tube vibration, there are multiple side-bands around the measurement frequency. Therefore, band-pass filters between 20-30 kHz are added to the output of the GHz mixer to reduce noise, after which the MIM signal is fed into the SR830 lock-in amplifier for demodulation. During this step, the lock-in amplifier multiplies the MIM input signal with a TF reference signal (provided by the measured piezo current from the tuning fork, after amplification by a commercial low-noise charge amplifier) to extract the in-phase components. Both the filters inside the SR830 and the additional low-pass filter added to the output of the lock-in are chosen to eliminate noise at the pulse tube vibration frequency. Figure 1: **Scanning probe microscopy system with combined AFM and MIM readout, integrated into a dilution refrigerator.** **(a)** Schematic of the scanning MIM readout electronics and hardware integrated into a dilution refrigerator. The shaded regions refer to the different thermal stages inside the fridge. **(b)** _Left panel:_ Photo of the microscope head, scanners and sample stage (corresponding to the red box in the schematic). _Right panels:_ Zoomed-in view of the tip glued onto the tuning fork (blue box) and a scanning electron microscope image of the etched tungsten tip used for combined AFM and MIM imaging. **(c)** Plot of the reflected microwave power \(S_{11}\) of the impedance matching network (IMN), showing the fundamental resonance at 1.8 GHz. _Inset:_ Circuit diagram of the IMN, with a 0.2 pF capacitor and 5 cm of coax connected in series. **(d)** Plots of the oscillation amplitude of the tuning fork as a function of frequency, showing the mechanical resonance used for height feedback. The upper and lower panels show the resonance peak at room temperature and 70 mK, respectively. ## III Results and discussion We characterized the low temperature performance of the AFM on a sample consisting of an array of etched SiO\({}_{2}\) holes patterned on a Si wafer, as depicted in the optical image in Figure 2(a). Cryogenic AFM measurements are used to visualize the topographic profile of a 5 \(\mu\)m x 5 \(\mu\)m scan region at 70 mK, as depicted in Figure 2(b). Figure 2(c) shows a cross-sectional cut of this AFM image, whose position is marked by the black line, revealing a noise level of roughly 3 nm. To more carefully assess the magnitude of the z-noise during AFM scanning, we performed 96 x 96-pixel noise scans over a 1 nm x 1 nm area, such that the spatial profile is irrelevant. Root mean square (RMS) roughness was calculated using Gwyddion after line fitting, which gives z-noise levels in the range of 1.8 - 2.2 nm. 
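As an illustration of the roughness analysis just described, the following is a minimal sketch (not the authors' code) of per-scan-line linear detrending followed by an RMS estimate, mirroring the Gwyddion line-fitting step; the synthetic 96 x 96 image is a stand-in for a real noise scan.

```python
# Minimal sketch: per-row linear detrend (like Gwyddion line fitting), then RMS.
# The synthetic data below stands in for a 96 x 96-pixel AFM noise scan.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(scale=2.0, size=(96, 96))     # heights in nm (synthetic)
z += np.linspace(0, 5, 96)[None, :]          # add a fake per-line tilt

x = np.arange(z.shape[1])
detrended = np.empty_like(z)
for i, row in enumerate(z):
    slope, intercept = np.polyfit(x, row, 1)          # fit a line per row
    detrended[i] = row - (slope * x + intercept)      # remove it

rms = np.sqrt(np.mean(detrended**2))                  # RMS roughness, nm
print(f"RMS roughness: {rms:.2f} nm")
```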
Furthermore, upon careful inspection of Figure 2(b), we noticed that a tilted stripe pattern appears as a background modulation in the AFM image. By taking a Fourier transform of this data, we found that the stripe pattern has a frequency of 1.4 Hz, which coincides with the frequency of the pulse tube. Next, to demonstrate our combined AFM and MIM imaging capabilities at low temperatures, we measured the spatial contrast of the MIM response across the boundary between graphite and SiO\({}_{2}\) at 70 mK. Figure 3(a) shows an optical image of the graphite sample, which has terraces of varying thicknesses: the purple region is \(\sim\) 3 nm and the bright yellow region is 15-20 nm. In Figure 3, panels (c) and (d) display AFM and MIM images of the graphite/SiO\({}_{2}\) interface measured at 4 K and 70 mK, respectively. In both sets of AFM images, the 3/20 nm step height in graphite is clearly visible, while the graphite/SiO\({}_{2}\) boundary only shows a faint contour, as the z-movement of the scanner to compensate for fluctuations in the tip-sample distance dominates over the 3 nm boundary. Meanwhile, we observe a sharp contrast in the MIM signal across the graphite/SiO\({}_{2}\) boundary due to the different conductivities of the two materials, as predicted by the response curves in Figure 3(b). To explain the experimental observations, one can model the tip-sample interaction for this system using finite element analysis, which can be used to calculate the MIM response curves as a function of sample conductivity [10]. At a measurement frequency of 1.8 GHz, the imaginary part of the MIM response should monotonically increase with the sample conductivity, saturating when the resistivity is higher than 10\({}^{-2}\)\(\Omega\cdot m\) (insulating limit) or lower than 10\({}^{-5}\)\(\Omega\cdot m\) (conductive limit), as shown in Figure 3(b). A cross-sectional profile of the penetration of the tip potential into the sample is provided in the inset. We estimate the MIM spatial resolution to be around 200 nm, constrained by the apex geometry of the etched tungsten tip and mechanical noise from pulse tube vibrations. We also apply this methodology to visualize edge states at the boundaries of thin film cadmium arsenide (Cd\({}_{3}\)As\({}_{2}\)), a novel three-dimensional Dirac semimetal, in the quantum Hall regime [29; 30]. A cross-sectional schematic of the epitaxially-grown heterostructure is shown in Figure 4(a), where the film thickness is 20 nm [31; 32]. The Cd\({}_{3}\)As\({}_{2}\) device is lithographically patterned and etched into strips of width 10-15 \(\mu\)m, which are electrically grounded. Transport measurements were performed to characterize the magnetic field behavior of the sample, which reveal dips in the longitudinal resistance at around 4.7 T and 6.5 T, as shown in Figure 4(b). These minima should correspond to the emergence of quantum Hall plateaus [33]. To shed light on the real-space conductivity profile of Cd\({}_{3}\)As\({}_{2}\) in the quantum Hall regime and monitor its evolution across the topological phase transition between plateaus, MIM measurements were performed at a series of magnetic fields at a base temperature of 90 mK. Microwave imaging reveals a sharp enhancement of the reflected MIM signal at the boundaries of the sample in the quantum Hall insulator state, which rapidly decays into the bulk of the sample, as shown in Figure 4(c). 
Meanwhile, we observed a spatially-uniform conductivity at the transition between quantum Hall plateaus, when the longitudinal resistance deviates from zero at B = 5.4 T (Figure 4(d)). The variation of the MIM signal between different lines comes both from the noise in the MIM signal and from spatial inhomogeneities in the sample. To more clearly compare the spatial dependence of the MIM signal in these two regimes, in Figure 4(e-f) we plot the cross-sectional profiles of the MIM response across the sample extracted from panels (c) and (d), respectively. These low temperature microwave images reveal sharply enhanced edge conduction that encloses an insulating interior in the quantum Hall regime, which is consistent with the results of transport measurements performed on this system in prior experimental studies. We note that one way to improve signal quality is to use "floating" AC-mode MIM, where imaging is performed with the tip retracted a fixed distance (60-100 nm) above the sample surface. At this distance, the AFM channel will not be modulated due to the topography feedback, but the MIM tip can still interact with the sample via the electromagnetic fields in the vicinity of the tip (when operated in the near-field regime). Because periodic oscillations in the tip-sample distance at the tuning fork resonance are decoupled from the surface roughness of the sample, noise in the MIM response can be dramatically reduced in floating mode. Figure 2: **Topographic imaging of a micropatterned dielectric film at mK temperatures using tuning-fork-based atomic force microscopy.** **(a)** Optical image of an etched array of holes in SiO\({}_{2}\). The diameter and the spacing of the holes are 1 \(\mu m\). The hole depth is 20 nm. **(b)** AFM spatial scan at 70 mK. The scan covers 4 \(\times\) 4 \(\mu m\) and the scan speed is 400 \(nm/s\). **(c)** Cross-sectional line cut corresponding to the black line in (b). Figure 4: **Microwave imaging of edge modes in a cadmium arsenide film in the quantum Hall regime.** **(a)** Cross-sectional schematic of an epitaxially-grown Cd\({}_{3}\)As\({}_{2}\) heterostructure. **(b)** Transport measurement of the longitudinal resistance \(R_{xx}\) as a function of magnetic field at 90 mK. The minima correspond to the emergence of quantum Hall plateaus. **(c)** MIM image at 6.5 T, revealing a sharp enhancement of the reflected signal at the boundaries of a quantum Hall insulator state. **(d)** MIM image at 5.4 T, showing spatially uniform conductivity at the transition between quantum Hall plateaus. **(e-f)** Cross-sectional line cuts of the MIM response across the sample extracted from (c) and (d), respectively. Figure 3: **Microwave impedance microscopy of graphite at millikelvin temperatures.** **(a)** Optical image of a graphite flake exfoliated onto a SiO\({}_{2}\)/Si substrate. The dark purple region has a thickness of \(\sim 3\) nm, and the light yellow region has a thickness of \(\sim 20\) nm. The blue box marks the imaging window for (c) and (d). **(b)** Theoretical MIM response curves simulated at 1.8 GHz, illustrating the evolution of the MIM contrast with the sample conductivity. _Inset:_ vertical cut of the potential distribution for the tip-sample interaction, calculated using finite-element analysis. **(c)** AFM and MIM imaging of the graphite flake at 4 K, with the scan window covering the 20 nm region (lower left), the \(\sim 3\) nm region (middle), and the SiO\({}_{2}\) region (upper right). The scan speed is 0.5 \(\mu\)m/s. **(d)** AFM and MIM images of the same location at 70 mK. The scan speed is 0.2 \(\mu\)m/s. Figure 5 shows the results of a floating mode MIM scan performed at 3 GHz and T = 70 mK, with the tip lifted 100 nm above an hBN-covered graphite layer. The tip apex is around 0.8 \(\mu\)m, which is reflected in the spatial profile of the MIM signal change across the boundary between the graphite flake and hBN. In this case, the signal-to-noise ratio is even better than that observed in tapping mode MIM images (Figure 3(c-d)), which is especially useful for fixed-location MIM measurements. However, this advantage comes at the expense of signal size, as the tip is further away from the sample than for tapping mode. The choice of tip-sample distance for floating-mode measurements is a compromise between maximizing signal sensitivity and minimizing the risk of a tip crash due to vertical fluctuations in the tip-sample distance, which arise from pulse tube vibrations and are aggravated by the large Q factor of the tuning fork at mK temperatures. For larger scan windows or rougher sample surfaces, the tip may need to be retracted further. We expect the sensitivity of floating mode to be around \(0.01-0.1\) \(\mathrm{aF}/\sqrt{\mathrm{Hz}}\) at 0.1 \(\mu\)W input power, and in our case the noise is mostly due to vertical modulations of the tip-sample distance [16]. ## IV Conclusion and Outlook In summary, we report on the development of a microwave impedance microscope that operates at temperatures down to 70 mK. This is achieved by incorporating a TF-based AFM with near-field GHz imaging capabilities into a dry dilution refrigerator. Pushing the limits of MIM into new low temperature regimes should enable local sensing of quantum phenomena that only exist at low energy scales, including certain topological states of matter, domain wall physics at phase transitions, quantum states arising from geometric confinement in mesoscopic devices, and correlated states in two-dimensional materials and van der Waals heterostructures. Because this instrumentation is equipped with combined transport and imaging capabilities, it can also illuminate the correspondence between macroscopic transport behavior and the underlying microscopic nature of electronic states, including the real-space disorder landscape or presence of edge modes. During the preparation of this manuscript, we became aware of a pre-print on a related topic [34]. ###### Acknowledgements. We thank Alex Lygo and Susanne Stemmer for providing cadmium arsenide devices for these experiments, Yongtao Cui for inspiring discussions, and Evan Cobb for helping develop some of the MIM circuitry. We gratefully acknowledge funding support from the UC Office of the President, specifically the UC Laboratory Fees Research Program (award LFR-20-653926), the AFOSR Young Investigator Program (award FA9550-20-1-0035) and the AFOSR/ARO MURI Program (award FA9550-22-1-0270). This work was performed, in part, at the San Diego Nanotechnology Infrastructure (SDNI) of UCSD, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (Grant ECCS-2025752). ## Data Availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
2305.12882
On the screening condition in the core of neutron stars
Earlier, the screening condition in neutron star core has been formulated as equality of velocities of superconducting protons and the electrons $\mathbf{v}_p=\mathbf{u}_e$ at wavenumbers $q\ll\lambda^{-1}$ ($\lambda$ is the London penetration depth) and has been used to derive the force exerted by the electrons on a moving flux tube. By calculating the current-current response, I find that $\mathbf{v}_p\neq\mathbf{u}_e$ for $l^{-1}<q\ll\lambda^{-1}$ ($l$ is the electron mean free path). I show that at typical realistic parameters the electric field induced by a moving (relative to the electrons) flux tube is not screened by the electron currents. The implication is that the existing picture of the momentum exchange between the electrons and the flux tubes must be reassessed.
Dmitry Kobyakov
2023-05-22T10:03:49Z
http://arxiv.org/abs/2305.12882v2
# On the screening condition in the core of neutron stars ###### Abstract The screening condition in the neutron star core has been formulated as the equality of the velocities of the superconducting protons and the electrons, \(\mathbf{v}_{p}=\mathbf{u}_{e}\), at wavenumbers \(q\ll\lambda^{-1}\) (\(\lambda\) is the London penetration depth) and has been used to derive the force exerted by the electron flow on a flux tube, which has astrophysical applications. By calculating the current-current response, I find that \(\mathbf{v}_{p}\neq\mathbf{u}_{e}\) for \(l^{-1}<q\ll\lambda^{-1}\) (\(l\) is the electron mean free path) at typical realistic parameters. Therefore, the momentum exchange between the electrons and the flux tubes in the core of neutron stars remains an open question. _Introduction._ In the core of neutron stars, the protons are superconducting and the electrons are normal. Moreover, observable neutron stars host the magnetic field in their interior, which according to theoretical expectations induces a flux tube lattice in the superconducting protons. Scattering of the electrons by the flux tubes effectively couples the electrons and the protons, and this coupling plays an important role in modelling of observable phenomena in neutron stars. Scattering of the electrons by a single flux tube in superconducting protons (or by a single superfluid neutron vortex with magnetization) has been a long-standing problem. Alpar, Langer and Sauls [1] were the first to consider this problem and discussed some astrophysical implications. Their work has given rise to many publications where this problem was addressed. A review of the literature on this subject goes beyond the aims of this paper and is not essential for the further discussion below. A comprehensive literature review may be found in a recent work by Gusakov [2], who also carried out detailed calculations of the scattering of the electrons by the proton flux tube in order to resolve some discrepancies between different approaches to the physical problem, which have emerged since the appearance of the work by Alpar, Langer and Sauls [1]. In the calculations of Gusakov [2], an essential step was made in equation (16) of [2], when the screening condition \(\mathbf{v}_{p}=\mathbf{u}_{e}\) was used. Lately, this condition was also used by Sourie and Chamel [3] in their studies of superfluid neutron vortices in the core of neutron stars. The condition \(\mathbf{v}_{p}=\mathbf{u}_{e}\) is expected to hold in the hydrodynamic regime [4], which is defined, in the present context, for wavenumbers \(q\) much smaller than the electron inverse mean free path \(l^{-1}\) [5]. In other related problems, one may define the hydrodynamic regime as length scales much longer than the superfluid vortex separation [6], but here we will be interested in the electron properties. However, the flux tube core length scale \(\lambda\) at typical temperature \(T=10^{8}\) K is about 80 fm, and therefore it is not clear whether the hydrodynamic approximation applies for the problem of extraction of the momentum exchange in the electron-flux tube scattering, because \(l\gg\lambda\). Actually, we will see that \(l\) is typically somewhere between \(10^{5}\) fm and \(10^{9}\) fm, while the flux tube separation is somewhere between 144 fm (at the magnetic field \(H=10^{15}\) Oe) and 455 fm (at \(H=10^{14}\) Oe). 
In this paper, I will show that when the flux tube size is resolved by the electron dynamics, the latter are in the particle-hole regime and consequently the screening condition does not always hold. _Screening of charge current by electrons._ We will do calculations with the material parameters corresponding to the nuclear saturation density. The baryon number density is \(n=0.16\) fm\({}^{-3}\), the proton fraction is \(x_{p}=0.05\), the proton number density is \(n_{p}=x_{p}n\), the electron number density is \(n_{e}=n_{p}\) and the electron Fermi wavenumber is \(k_{e}=(3\pi^{2}n_{e})^{1/3}\approx 0.6187\) fm\({}^{-1}\). The electron velocity \(\mathbf{u}_{e}\) is the microscopic velocity averaged on a length scale much longer than \(k_{e}^{-1}\). Likewise, the proton superflow velocity \(\mathbf{v}_{p}\) is the quantity averaged on a length scale much longer than the proton coherence length \(\xi(T)\). At \(T\ll T_{c}\), the superconducting density is equal to \(n_{p}\), where \(T_{c}=\gamma\Delta_{p}/k_{B}\pi\), with \(\ln\gamma=0.577\) the Euler constant and \(\Delta_{p}\) the proton s-wave superconducting energy gap, and we have \[\xi(0)=\frac{\hbar v_{F}}{2\gamma\Delta_{p}}\approx 7.2\left(\frac{1\,\text{MeV}}{\Delta_{p}}\right)\,\text{fm}. \tag{1}\] Here, \(v_{F}=\hbar k_{p}/m_{p}\) is the proton Fermi velocity, \(k_{p}=k_{e}\) and \(m_{p}\) is the proton mass. The number currents are defined as \(\mathbf{j}_{e}=n_{e}\mathbf{u}_{e}\) and \(\mathbf{j}_{p}=n_{p}\mathbf{v}_{p}\) and the electric currents are \(\mathbf{J}_{e}=-en_{e}\mathbf{u}_{e}\) and \(\mathbf{J}_{p}=en_{p}\mathbf{v}_{p}\), where \(e\) is the proton electric charge. Noting that \(m_{e}\ll m_{p}\), where \(m_{e}\) is the electron mass, remains a reasonable lowest-order approximation even for the relativistic electrons, we may neglect the inertia of the electrons [4]. This implies that in order to study the transverse electromagnetic response of the system it is sufficient to assume that the proton velocity is given and then to calculate the equilibrium electron velocity. The physical meaning of the screening condition \(\mathbf{v}_{p}=\mathbf{u}_{e}\) is that for a given proton velocity, the electron velocity is the same and the total electrical current \(\mathbf{J}_{e}+\mathbf{J}_{p}\) is zero, or \(\mathbf{J}_{e}=-\mathbf{J}_{p}\). Thus, we are free to choose the proton phase as a constant, and we obtain \[\mathbf{J}_{p}=en_{p}\left(-\frac{e\mathbf{A}}{m_{p}c}\right), \tag{2}\] where \(\mathbf{A}\) is the electromagnetic vector potential [7]. It will be convenient to work in the Fourier representation, \(\mathbf{X}(\mathbf{q},\omega)=\int d^{3}\mathbf{x}\,dt\,e^{i\mathbf{q}\cdot\mathbf{x}-i\omega t}\mathbf{X}(\mathbf{x},t)\), where \(\mathbf{X}(\mathbf{x},t)\) is any function of space and time. From the relation \(\mathbf{E}=-c^{-1}\partial\mathbf{A}/\partial t\) we observe that to calculate the electron current for a given proton velocity, in the lowest approximation we need to calculate the quantity \[\sigma(\mathbf{q},\omega)=\frac{\delta\mathbf{J}_{e}(\mathbf{q},\omega)}{\delta\mathbf{E}(\mathbf{q},\omega)}=\frac{ec}{\mathrm{i}\omega}\frac{\delta\mathbf{j}_{e}(\mathbf{q},\omega)}{\delta\mathbf{A}(\mathbf{q},\omega)}, \tag{3}\] which is the electrical conductivity of the electrons. On the other hand, the electrical conductivity may be expressed through the dielectric function \(\varepsilon_{t}\), for which one has at least two equivalent choices. 
We shall work with the transverse fields in this paper; for convenience, the subscript \(t\) is added to stress that \(\varepsilon_{t}\) is the transverse dielectric function. Choosing the definition of the dielectric function following Lindhard [8] from the electric induction \(\mathbf{D}\), or \[\mathbf{D}=\varepsilon_{t}^{\mathrm{(L)}}\mathbf{E}, \tag{4}\] we have the Maxwell equation \[\left(q^{2}-\frac{\omega^{2}}{c^{2}}\varepsilon_{t}^{\mathrm{(L)}}(q,\omega)\right)\mathbf{A}(q,\omega)=\frac{4\pi}{c}\mathbf{J}_{\mathrm{ext}}(q,\omega). \tag{5}\] Alternatively, one can follow Jancovici [9] and define the dielectric function from the equation for the total self-consistent vector potential \(\mathbf{A}=\mathbf{A}_{\mathrm{ext}}+\mathbf{A}_{\mathrm{ind}}\), which is a sum of the external part \(\mathbf{A}_{\mathrm{ext}}\) due to a source and the induced part \(\mathbf{A}_{\mathrm{ind}}\) due to the electrons: \[\mathbf{A}(q,\omega)=\frac{1}{\varepsilon_{t}(q,\omega)}\mathbf{A}_{\mathrm{ext}}(q,\omega). \tag{6}\] Then we have the Maxwell equation \[\left(q^{2}-\frac{\omega^{2}}{c^{2}}\right)\varepsilon_{t}(q,\omega)\mathbf{A}(q,\omega)=\frac{4\pi}{c}\mathbf{J}_{\mathrm{ext}}(q,\omega), \tag{7}\] and we obtain \[\varepsilon_{t}^{\mathrm{(L)}}(\mathbf{q},\omega)-1=\frac{c^{2}q^{2}-\omega^{2}}{\omega^{2}}\left(1-\varepsilon_{t}(q,\omega)\right). \tag{8}\] From the Maxwell equations, \(\sigma\) is related to \(\varepsilon_{t}^{\mathrm{(L)}}\) as follows [10]: \[\sigma(\mathbf{q},\omega)=\frac{\omega}{\mathrm{i}4\pi}\left(\varepsilon_{t}^{\mathrm{(L)}}\left(\mathbf{q},\omega\right)-1\right). \tag{9}\] It is convenient to introduce the response function \(\tilde{\chi}_{t}(\mathbf{q},\omega)\) according to the definition \[\varepsilon_{t}(\mathbf{q},\omega)=1+\frac{4\pi e^{2}}{q^{2}}\tilde{\chi}_{t}(\mathbf{q},\omega). \tag{10}\] Combining the above equations we can write \[\tilde{\chi}_{t}(\mathbf{q},\omega)=\frac{cq^{2}}{e(\omega^{2}-c^{2}q^{2})}\frac{\delta\mathbf{j}_{e}(\mathbf{q},\omega)}{\delta\mathbf{A}(\mathbf{q},\omega)}. \tag{11}\] Using the approximation discussed above, we associate the external current \(\mathbf{J}_{\mathrm{ext}}\) with \(\mathbf{J}_{p}\) and the total current with \(\mathbf{J}_{p}+\mathbf{J}_{e}\); thus, we associate the electronic current \(\mathbf{J}_{e}\) with the induced current \(\mathbf{J}_{\mathrm{ind}}\). With this definition, we have \[\mathbf{J}_{e}(\mathbf{q},\omega)=\frac{1-\varepsilon_{t}(\mathbf{q},\omega)}{\varepsilon_{t}(\mathbf{q},\omega)}\mathbf{J}_{p}(\mathbf{q},\omega). \tag{12}\] In the present system, the electrons are relativistic and quantum degenerate. It is important to distinguish between the regimes of the electron dynamics [10]: the hydrodynamic regime is realized when \(\omega\ll\nu\) and \(q\ll l^{-1}\); the particle-hole regime results when \(\omega>\nu\) and/or \(q>l^{-1}\). There are two options in the particle-hole regime: with \(q>l^{-1}\) and \(\omega>\nu\) the collisions are unimportant; with \(q>l^{-1}\) and \(\omega<\nu\) the collisions are important. Here, \(\nu=c/l\) is the electron collision rate, which is defined as a sum of the collision rates with all kinds of impurities that scatter the electrons. In the present system, \[\nu=\nu_{1}+\nu_{2}, \tag{13}\] where \(\nu_{1}\) (or \(\nu_{2}\)) is the rate of collisions of the electrons with the normal protons within the magnetic flux tubes (or with the magnetic field lines within the magnetic flux tubes and the superfluid neutron vortices). 
_Calculation of_ \(\sigma(\mathbf{q},\omega)\). As the first step, we address the time and length scales involved in the problem of calculation of \(\sigma\) (or, equivalently, \(\varepsilon_{t}\)). In the present system, we are interested in short length scales (\(q\geq l^{-1}\)) and slow frequencies (\(\omega\ll\nu\)). We notice that Jancovici has already calculated [9] the dielectric function \(\varepsilon_{t}(\mathbf{q},\omega)\) for \(q\geq l^{-1}\), but completely neglecting the electron collisions (for \(\omega\gg\nu\)). Here, \(\omega\) is rather small and may take values from \(\omega\sim 0\) for phenomena related to smooth evolution of the magnetic field in neutron stars, to \(\omega\sim 10^{4}\) s\({}^{-1}\) for phenomena related to quasi-periodic oscillations in magnetars. At temperatures \(T\ll T_{c}\) the normal protons in the bulk of the superconductor may be neglected and collisions of the electrons with the normal protons occur only within the flux tubes, therefore \[\nu_{1}\approx\tau_{\mathrm{tr}}^{-1}\frac{H_{c2}(0)}{H}, \tag{14}\] where \(H_{c2}(0)=\Phi_{0}/2\pi\xi(0)^{2}\), with \(\Phi_{0}=\pi\hbar c/e\), is the upper critical magnetic field of the superconductor at \(T=0\), \(H\) is the stellar magnetic field and \(\tau_{\mathrm{tr}}\approx 2\times 10^{-14}\) s is the electron transport relaxation time on normal protons, as evaluated by Baym, Pethick and Pines [11]. With typical \(H=10^{15}\) Oe and \(\Delta_{p}=1\) MeV, we find \(\nu_{1}\approx 3.166\times 10^{15}\) s\({}^{-1}\). For scattering of the electrons by the magnetic field in the flux tube, we use the order of magnitude estimate \[\nu_{2}=\frac{c}{l_{\mathrm{et}}}\approx c\sigma_{\mathrm{et}}n_{t}, \tag{15}\] where \(l_{\mathrm{et}}\) is the electron mean free path between consecutive collisions of the electron with the flux tube, \(\sigma_{\mathrm{et}}\) is the cross-section for scattering of an electron by a flux tube and \(n_{t}=H/\Phi_{0}\) is the number of flux tubes per unit area. Note that if \(H\) is smaller than the lower critical magnetic field, the neutron vortices would provide the dominant impurity scattering for the electrons. The cross section is given by \(\sigma_{\mathrm{et}}=\alpha k_{e}^{-1}\), where \(\alpha=\alpha(k_{e}\xi,\lambda/\xi)\). From equations (37) and (40) of Gusakov [2] we infer that \(\alpha\sim 10^{-2}\), depending on the dimensionless parameters \(k_{e}\xi\) and \(\lambda/\xi\). Thus, with typical \(H=10^{15}\) Oe and \(\Delta_{p}=1\) MeV, we find \(\nu_{2}\approx\alpha\times 2.343\times 10^{19}\) s\({}^{-1}\) and \(\nu\approx\nu_{2}\). This coincides with the standard theoretical expectation that the main source of the electron-proton coupling in superconducting matter of the neutron star core is the electron interaction with the magnetic flux tubes. We have found, as expected, that \(\omega\ll\nu\) for typical processes in neutron stars. Thus, for the further calculations, \(\varepsilon_{t}(\mathbf{q},\omega)\) calculated by Jancovici [9] must be modified in order to include the electron collisions. We will use the following notation: \[\tilde{\chi}_{t}(\mathbf{q},\omega)=\left\{\begin{array}{l}\chi_{t}(\mathbf{q},\omega)\text{ for }\omega\gg\nu,\\ \chi_{t}^{\nu}(\mathbf{q},\omega)\text{ for }\omega\ll\nu.\end{array}\right. 
\tag{16}\] As the second step, we turn to the kinetic equation, from which the functional derivative \(\delta\mathbf{j}_{e}(\mathbf{q},\omega)/\delta\mathbf{A}(\mathbf{q},\omega)\) can be calculated in both cases, when either \(\omega\ll\nu\) (the collision integral \(I[n_{1p}]\) is nonzero) or \(\omega\gg\nu\) (\(I[n_{1p}]=0\)). Here, \(n_{1p}\) is the departure of the distribution function from true equilibrium [10, 12]. In the relaxation time approximation (RTA), the collision integral is written in the form \[I[n_{1p}]=-\nu\left(n_{1p}-n_{1p}^{R}\right), \tag{17}\] where \(n_{1p}^{R}\) is the so-called locally relaxed equilibrium distribution function [12]. As Conti and Vignale have shown [12], in RTA the response function _with_ collisions (\(\chi_{t}^{\nu}(\mathbf{q},\omega)\)) can be obtained from the response function _without_ collisions with the frequency \(\omega\) replaced by \(\omega+\mathrm{i}\nu\) (\(\chi_{t}(\mathbf{q},\omega+\mathrm{i}\nu)\)): \[\chi_{t}^{\nu}(\mathbf{q},\omega)=\frac{\omega}{\omega+\mathrm{i}\nu}\chi_{t}(\mathbf{q},\omega+\mathrm{i}\nu). \tag{18}\] Notice that Conti and Vignale [12] worked with the quantity \(\chi_{t}^{\tau}\) (the superscript 1999 refers to the corresponding quantities in [12]), \[\chi_{t}^{\tau}(\mathbf{q},\omega)\equiv\frac{\delta\mathbf{j}_{e}^{1999}(\mathbf{q},\omega)}{\delta\mathbf{A}^{1999}(\mathbf{q},\omega)}=-\frac{c}{e}\frac{\delta\mathbf{j}_{e}(\mathbf{q},\omega)}{\delta\mathbf{A}(\mathbf{q},\omega)}, \tag{19}\] which, as can be easily seen from Eqs. (11), (16) and (19), is proportional to \(\chi_{t}^{\nu}\): \[\chi_{t}^{\nu}(\mathbf{q},\omega)=\frac{q^{2}}{c^{2}q^{2}-\omega^{2}}\chi_{t}^{\tau}(\mathbf{q},\omega). \tag{20}\] By virtue of the proportionality between \(\chi_{t}^{\nu}\) and \(\chi_{t}^{\tau}\) seen in Eq. (20), the result obtained in RTA for the relation between \(\chi_{t}^{\tau}\) and its collisionless counterpart by Conti and Vignale [12] is applicable to the relation between \(\chi_{t}^{\nu}\) and \(\chi_{t}\); this validates the formula in Eq. (18). This conclusion is useful because it allows us to find \(\chi_{t}^{\nu}(\mathbf{q},\omega)\) from \(\chi_{t}(\mathbf{q},\omega)\), which has been obtained by Jancovici. Equation (66) from Jancovici [9] gives: \[\chi_{t}(\mathbf{q},\omega)=\frac{s}{2}\frac{\partial n_{e}}{\partial\mu_{e}}\left(\frac{s}{1-s^{2}}+\frac{1}{2}\log\frac{s+1}{s-1}\right), \tag{21}\] where \(\mu_{e}\) is the electron Fermi energy, \(\partial n_{e}/\partial\mu_{e}=k_{e}^{2}/\pi^{2}\hbar c\) and \(s\equiv s(q,\omega)=\frac{\omega}{cq}\). Collecting the results we obtain the main formula of this paper: \[\sigma(\mathbf{q},\omega)=\frac{\omega^{2}-c^{2}q^{2}}{\mathrm{i}\omega}\frac{e^{2}}{q^{2}}\frac{\omega}{\omega+\mathrm{i}\nu}\chi_{t}(\mathbf{q},\omega+\mathrm{i}\nu). \tag{22}\] From Eqs. (10), (16), (18) and (21) we will calculate the quantity \[\zeta\equiv\frac{1-\varepsilon_{t}(\mathbf{q},\omega)}{\varepsilon_{t}(\mathbf{q},\omega)}, \tag{23}\] which indicates the effectiveness of the current-current screening, see Eq. (12). In one limiting case, when \(\zeta=-1\), we infer that the screening condition holds, and \(\mathbf{v}_{p}=\mathbf{u}_{e}\). In the other limiting case, when \(\zeta=0\), we infer that the proton supercurrent is not screened by the electron current, and the screening condition does not hold. From the curves in Figs. 
1 and 3 we observe that at fixed \(\nu\), decreasing \(q\) leads to an improvement of the screening; this implies that the smaller \(q\) is, the closer \(\zeta\) is to minus unity, as expected. We can say that decreasing \(q\) at fixed \(\omega\) moves the curve \(\zeta\) in Fig. 1 to the right. If we have \(\nu\) fixed, for instance, \(\nu=3\times 10^{18}\) Hz, then for any \(\omega\) between \(0\) and \(10^{4}\) Hz, the electrons do not screen the proton supercurrent at any length scale shorter than the electron mean free path (\(q>\nu/c\approx 10^{8}\) cm\({}^{-1}\)). A somewhat nontrivial result is that increasing \(\omega\) improves the screening effectiveness; for example, with the collision rate \(\nu=10^{17}\) Hz, we would have complete screening (\(\zeta=-1\)) at \(q=\nu/c\approx 3\times 10^{6}\) cm\({}^{-1}\) only for \(\omega\geq 10^{8}\) Hz (this numerical result is not shown explicitly in the Figures but could be seen in Fig. 1 as the ultimate shift of the curve \(\zeta\) to the right, leaving only the "tail" with \(\zeta=-1\) in the plot). We can say that increasing \(\omega\) at fixed \(q\) moves the curve \(\zeta\) in Fig. 1 to the right. The imaginary part of \(\zeta\) is shown in Figs. 2 and 4. We see that the imaginary part is zero at either complete screening (\(\zeta=-1\)) or in the absence of screening (\(\zeta=0\)), while in the intermediate case, which can be called incomplete screening, the induced current may have a component phase-shifted by \(\pi/2\) with magnitude equal to that of the in-phase component of the induced current. _Conclusions._ Based on microscopic physics, we have developed a framework that can be used to estimate the effectiveness of the electrical current response of the electrons to a given proton supercurrent in the presence of a lattice of flux tubes. We have used typical parameters corresponding to the core of neutron stars and studied the screening condition for various values of the electron momentum-nonconserving collision frequency. We have found that for typical frequencies of change of the relative velocity between the electrons and the superconducting protons (between about \(0\) and \(10^{4}\) Hz), the electrons are unable to screen the proton supercurrent. The implication is that one of the basic conditions used in the earlier literature for calculation of the effective force acting between the electrons and a localized magnetic flux tube associated with a proton (or neutron) quantized magnetized vortex, namely the screening condition, which assumes that the electron velocity is equal to the superflow velocity at distance an order of magnitude larger than the linear size of the flux tube cross-section, does not hold. As a result, the rate of momentum exchange between the electrons and the flux tube lattice in the superconducting and/or superfluid nuclear matter in the core of neutron stars remains an open question.
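As a numerical companion to these conclusions, the following is a minimal sketch (not from the paper) that evaluates the screening indicator \(\zeta\) of Eq. (23) from Eqs. (10), (18) and (21) in Gaussian units; the parameter sweep is illustrative, and the principal branch of the logarithm is adequate here since \(\omega,\nu>0\) keeps the argument off the branch cut.

```python
# Minimal sketch: zeta = (1 - eps_t)/eps_t, built from Eqs. (10), (18), (21).
# Gaussian units; lengths in fm, energies in MeV, rates in s^-1.
import numpy as np

HBAR_C = 197.327                  # MeV fm
E2 = HBAR_C / 137.036             # e^2 = alpha * hbar * c, MeV fm
C = 2.998e23                      # speed of light, fm/s
K_E = 0.6187                      # electron Fermi wavenumber, fm^-1 (from text)
DN_DMU = K_E**2 / (np.pi**2 * HBAR_C)    # dn_e/dmu_e, fm^-3 MeV^-1

def chi_t(s):
    """Collisionless transverse response, Eq. (21); s = omega/(c q), complex."""
    return 0.5 * s * DN_DMU * (s / (1 - s**2) + 0.5 * np.log((s + 1) / (s - 1)))

def zeta(q, omega, nu):
    """Screening indicator, Eq. (23); q in fm^-1, omega and nu in s^-1."""
    s = (omega + 1j * nu) / (C * q)              # collisional shift of Eq. (18)
    chi_nu = omega / (omega + 1j * nu) * chi_t(s)
    eps_t = 1 + 4 * np.pi * E2 / q**2 * chi_nu   # Eq. (10)
    return (1 - eps_t) / eps_t

nu = 1e17                  # collision rate from the text's example, s^-1
q = nu / C                 # wavenumber at the inverse mean free path
for omega in [1e4, 1e6, 1e8, 1e10]:
    print(f"omega = {omega:.0e} s^-1: zeta = {complex(zeta(q, omega, nu)):.3f}")
# zeta stays near 0 (no screening) at omega ~ 1e4 s^-1 and approaches -1 only
# as omega grows, consistent with the discussion above.
```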
2302.07596
Clustering-Based Inter-Regional Correlation Estimation
A novel non-parametric estimator of the correlation between grouped measurements of a quantity is proposed in the presence of noise. This work is primarily motivated by functional brain network construction from fMRI data, where brain regions correspond to groups of spatial units, and correlation between region pairs defines the network. The challenge resides in the fact that both noise and intra-regional correlation lead to inconsistent inter-regional correlation estimation using classical approaches. While some existing methods handle either one of these issues, no non-parametric approaches tackle both simultaneously. To address this problem, we propose a trade-off between two procedures: correlating regional averages, which is not robust to intra-regional correlation; and averaging pairwise inter-regional correlations, which is not robust to noise. To that end, we project the data onto a space where Euclidean distance is used as a proxy for sample correlation. We then propose to leverage hierarchical clustering to gather together highly correlated variables within each region prior to inter-regional correlation estimation. We provide consistency results, and empirically show our approach surpasses several other popular methods in terms of quality. We also provide illustrations on real-world datasets that further demonstrate its effectiveness.
Hanâ Lbath, Alexander Petersen, Wendy Meiring, Sophie Achard
2023-02-15T11:29:37Z
http://arxiv.org/abs/2302.07596v1
# Clustering-Based Inter-Regional Correlation Estimation ###### Abstract A novel non-parametric estimator of the correlation between grouped measurements of a quantity is proposed in the presence of noise. This work is primarily motivated by functional brain network construction from fMRI data, where brain regions correspond to groups of spatial units, and correlation between region pairs defines the network. The challenge resides in the fact that both noise and intra-regional correlation lead to inconsistent inter-regional correlation estimation using classical approaches. While some existing methods handle either one of these issues, no non-parametric approaches tackle both simultaneously. To address this problem, we propose a trade-off between two procedures: correlating regional averages, which is not robust to intra-regional correlation; and averaging pairwise inter-regional correlations, which is not robust to noise. To that end, we project the data onto a space where Euclidean distance is used as a proxy for sample correlation. We then propose to leverage hierarchical clustering to gather together highly correlated variables within each region prior to inter-regional correlation estimation. We provide consistency results, and empirically show our approach surpasses several other popular methods in terms of quality. We also provide illustrations on real-world datasets that further demonstrate its effectiveness. _Keywords:_ correlation estimation, hierarchical clustering, Ward's linkage, spatio-temporal data, brain functional connectivity ## 1 Introduction Correlation estimation is integral to a wide range of applications, and is often the starting point of further analyses. However, data are often contaminated by noise. If the data are additionally divided into separate, study-relevant groups, inter-group correlation estimation becomes all the more challenging. Such datasets are often encountered in spatio-temporal studies, such as single-subject brain functional connectivity network estimation, where voxel-level signals acquired via functional Magnetic Resonance Imaging (fMRI) are grouped into predefined spatial brain regions (De Vico Fallani et al., 2014). This work is relevant as well to other fields, such as organizational studies, where individuals are grouped by organization (Ostroff, 1993). As such, we will be using the words group, region, and parcellation interchangeably. In these contexts, measurement replicates of each individual element, most often collected across time, are available and used to compute the sample correlation between different regions. These elements are grouped according to a parcellation, which is fixed and corresponds to a practical reality, like anatomical brain regions in fMRI studies. As a result, regions could themselves be inhomogeneous. This work hence aims to estimate inter-regional correlation, later shortened to inter-correlation, no matter the quality of the parcellation. However, both noise and arbitrary within-region correlation, later called intra-correlation, lead to inconsistent inter-correlation estimation by Pearson's correlation coefficient (Ostroff, 1993; Saccenti et al., 2020). Indeed, it has been established in various contexts that correlation is underestimated in the presence of noise (Ostroff, 1993; Matzke et al., 2017; Saccenti et al., 2020). Furthermore, data are often high dimensional, which presents a challenge of its own. 
In practice, including in many fMRI studies, variables are hence commonly spatially averaged by region prior to inter-correlation estimation (Achard et al., 2006; De Vico Fallani et al., 2014). Yet, intra-correlation may be weak, which would lead to overestimation of inter-correlations (Wigley et al., 1984). This phenomenon may also be compounded by unequal region sizes (Achard et al., 2011). Thus, standard correlation estimators are not well-suited for the setting of grouped variables under noise contamination. Nonetheless, simultaneously tackling noise and intra-group dependence structures can be quite difficult, especially in a non-parametric setting. Failing to do so can be especially problematic for downstream analyses. For instance, in functional connectivity network estimation, a threshold is often applied to sample inter-correlation coefficients in order to identify edges between brain regions. Under- or over-estimation of the inter-correlation would then lead to missing or falsely detecting edges. To address these problems, we present a data-driven, and non-parametric, approach with an astute intermediate aggregation. First, we propose to gather together highly correlated variables within each region. To this end, variables are projected onto a space where Euclidean distance can serve as a substitute for sample correlation, with lower values of the former corresponding to higher correlations. Hierarchical clustering with Ward's linkage (Ward, 1963; Murtagh and Legendre, 2014) is then applied to the projected variables within each region, resulting in intra-regional clusters of highly correlated variables. Within each intra-regional cluster, these variables are next spatially averaged. For each pair of regions, a sample correlation is then computed for each pair of cluster-averages from different regions. Our approach hence provides a distribution of the sample inter-correlations between each pair of regions, containing as many sample correlations as there are pairs of clusters from the two regions. For a point estimate of the inter-regional correlation for a given pair of regions, the average of the sample inter-correlation coefficients can then be considered. We summarize our main contributions as follows: * We propose a novel non-parametric estimator of inter-regional correlation that offsets the combined effect of noise and arbitrary intra-correlation by leveraging hierarchical clustering. * Based on the properties of hierarchical clustering with Ward's linkage, we prove our estimator is consistent for an appropriate choice of the cut-off height of the dendrograms thus obtained. * We then empirically corroborate our results about the impact of the cut-off height on the quality of the estimation. We also show our proposed inter-correlation estimator outperforms popular estimators in terms of quality, and illustrate its effectiveness on real brain imaging datasets. ## 2 Related Work In the context of functional connectivity, the vast majority of papers that build correlation networks first average signals within each brain region for each time point, before computing Pearson's correlation across time, possibly after wavelet or other filtering, e.g., (Achard et al., 2006; Bolt et al., 2017; Ogawa, 2021; Zhang et al., 2016). Nevertheless, and as mentioned in the previous section, the correlation of averages overestimates the true correlation when intra-regional correlations are weak, while high noise may lead to underestimation. 
It was also empirically observed in fMRI data that the application of spatial smoothing, which is a common preprocessing step to reduce the effect of noise, causes the inter-regional correlations to be overestimated (Liu et al., 2017). Several methods tackling the impact of intra-correlation on the estimation of inter-correlation have been proposed in the familial data literature, e.g., (Elston, 1975; Rosner et al., 1977; Srivastava and Keen, 1988; Wilson, 2010). These approaches nonetheless do not address the impact of noise. Moreover, they require normality assumptions on the samples, while we provide consistency guarantees for our proposed estimator that do not require parametric assumptions on the signal distribution. Bayesian inference methods have been proposed to offset the effect of measurement errors (Matzke et al., 2017). However, they require a careful choice of priors, in addition to only handling pairs of variables, as opposed to the groups of variables we are interested in. Robust correlation estimation has also been extensively investigated, but mostly for specific distributions, such as contaminated normal distributions (Shevlyakov and Smirnov, 2016) or distributions with heavy tails (Lindskog, 2000), whereas we are interested in robustness to noise and weak intra-group dependence. Furthermore, groups of variables are not considered either. Cluster-robust inference in the presence of both noise and within-group correlation has been studied in the econometric literature (Cameron and Miller, 2015). However, inter-correlation, which is the quantity we aim to estimate in this work, is assumed there to be zero. To the best of our knowledge, we are the first to propose a method to simultaneously tackle the impact of noise and within-group inhomogeneity to estimate inter-correlation in a non-parametric fashion. ## 3 Preliminaries From this point forward, and without loss of generality, we will focus on spatio-temporal contexts. In particular, we are motivated by an application to brain fMRI data where individual observed variables correspond to blood-oxygen-level-dependent (BOLD) signals that are assigned to _voxels_, and are grouped by _regions_. Nonetheless, the following results can be applied to any dataset of grouped measurements of a quantity. In this section we define our notation and model, together with the inter- and intra-correlation coefficients. Throughout this paper we consider two regions, generically denoted \(A\) and \(B\). In reality, datasets will involve a potentially large number of regions but, for the purpose of correlation network construction, the correlations can be estimated in a pairwise fashion at the regional level. Let \(X_{1}^{A},\ldots,X_{i}^{A},\ldots,X_{N_{A}}^{A}\) denote \(N_{A}\) spatially dependent latent (unobserved) random variables in region \(A\), each variable corresponding to an individual voxel in that region. Let \(\epsilon_{1}^{A},\ldots,\epsilon_{i}^{A},\ldots,\epsilon_{N_{A}}^{A}\) represent random noise variables. We assume that the latent process \(X_{i}^{A}\) at each voxel \(i\) is contaminated by noise \(\epsilon_{i}^{A}\), so that the observed variables \(Y_{i}^{A}\) in region \(A\) are \[Y_{i}^{A}=X_{i}^{A}+\epsilon_{i}^{A},\quad i=1,\ldots,N_{A}. 
We assume within-region homoscedasticity of both signal and noise, i.e.,

\[\sigma_{A}^{2}=\mbox{Var}\left(X_{i}^{A}\right),\ \gamma_{A}^{2}=\mbox{Var}\left(\epsilon_{i}^{A}\right),\ \ \ \ \ i=1,\ldots,N_{A}.\]

Analogously we define \(N_{B}\), \(X_{j}^{B}\), \(\epsilon_{j}^{B}\), \(Y_{j}^{B}\), \(\sigma_{B}^{2}\) and \(\gamma_{B}^{2}\), for region \(B\) and voxels \(j=1,\ldots,N_{B}\). We assume the noise variables are spatially uncorrelated both within and across regions, and that they are also uncorrelated with the latent state both within and between regions.

A critical feature of the observed data is the _intra-correlation_, or Pearson's correlation between any pair of random variables _within_ a given region \(A\). We denote by \(\eta_{i,i^{\prime}}^{A}\) the intra-correlation of the latent variables \(X_{i}^{A},X_{i^{\prime}}^{A}\). We place no further constraints on the intra-correlation structure. Similarly, we define the _inter-correlation_ as Pearson's correlation between any pair of random variables from two _distinct_ regions. For a given pair of distinct regions \(A,B\), the inter-correlation between any pair of latent random variables \(X_{i}^{A},X_{j}^{B}\) is assumed to be constant across voxels, and is denoted \(\rho^{A,B}\).

Consider now \(n\) temporally independent and identically distributed (i.i.d.) samples of all observed signals. That is, for each region \(A\) and voxel \(i=1,\ldots,N_{A}\), we have \(n\) i.i.d. observations \(Y_{i}^{A}(t)\), \(t=1,\ldots,n\), each distributed as in (1) with the same intra- and inter-correlation properties as those outlined previously. In particular, for any time point \(t=1,\ldots,n\), and voxels \(i\) and \(j\) from distinct regions \(A\) and \(B\), respectively, \(Cov(Y_{i}^{A}(t),Y_{j}^{B}(t))=\rho^{A,B}\sigma_{A}\sigma_{B}\). Denote by \(\mathbf{Y}_{i}^{A}=[Y_{i}^{A}(1),\ldots,Y_{i}^{A}(t),\ldots,Y_{i}^{A}(n)]\) the vector of observations for the \(i\)-th voxel of region \(A\).

## 4 Proposed Inter-Correlation Estimator

After defining the sample correlation coefficient in Section 4.1, we highlight in Section 4.2 the impact of the combined presence of noise and intra-correlation when using popular estimators of inter-correlation. In Section 4.3 we then propose an inter-correlation estimator that limits these effects. Consistency of our estimator is proved in Section 4.4.

### 4.1 Computing Sample Correlations

We denote by \(\widehat{Cor}(\cdot,\cdot)\) the sample (Pearson's) correlation between any two equal-length vectors of samples. This corresponds to the zero-lag empirical cross-correlation in spatio-temporal studies. To be specific, suppose \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{n}\) are any vectors of the same length, and let \(\overline{a}=n^{-1}\sum_{t=1}^{n}a_{t}\) and \(\overline{b}=n^{-1}\sum_{t=1}^{n}b_{t}\) be the averages of their elements, respectively. Let \(\mathbf{1}_{n}\) be the \(n\)-vector of ones, and \(\mathbf{a}^{c}=\mathbf{a}-\overline{a}\mathbf{1}_{n}\) and \(\mathbf{b}^{c}=\mathbf{b}-\overline{b}\mathbf{1}_{n}\) their centered versions. With \(\langle\cdot,\cdot\rangle\) and \(\left\|\cdot\right\|\) being the Euclidean inner product and norm, respectively, we define

\[\widehat{Cov}(\mathbf{a},\mathbf{b})=n^{-1}\langle\mathbf{a}^{c},\mathbf{b}^{c}\rangle,\ \widehat{Var}(\mathbf{a})=n^{-1}\left\|\mathbf{a}^{c}\right\|^{2},\ \widehat{Cor}(\mathbf{a},\mathbf{b})=\frac{\widehat{Cov}(\mathbf{a},\mathbf{b})}{\sqrt{\widehat{Var}(\mathbf{a})\widehat{Var}(\mathbf{b})}}. \tag{2}\]
Using this notation, the sample correlation between any two voxels \(i\) and \(j\) in regions \(A\) and \(B\) is

\[r_{i,j}^{A,B}=\widehat{Cor}(\mathbf{Y}_{i}^{A},\mathbf{Y}_{j}^{B}). \tag{3}\]

Observe that this definition applies equally to sample inter-correlations (\(A\neq B\)) and intra-correlations (\(A=B\)).

### 4.2 Impact of Noise and Intra-Correlation

Previously, Matzke et al. (2017) showed that the presence of noise attenuates the observed correlation. Indeed, this phenomenon is captured in the following result: under model (1), and from Achard et al. (2020), \(r^{A,B}_{i,j}\) converges almost surely to

\[\frac{Cov(Y^{A}_{i},Y^{B}_{j})}{\sqrt{(\sigma^{2}_{A}+\gamma^{2}_{A})\cdot(\sigma^{2}_{B}+\gamma^{2}_{B})}}=\frac{Cov(X^{A}_{i},X^{B}_{j})}{\sqrt{(\sigma^{2}_{A}+\gamma^{2}_{A})\cdot(\sigma^{2}_{B}+\gamma^{2}_{B})}}. \tag{4}\]

Therefore, for distinct regions \(A,B\) whose latent signals are observed under noise contamination, \(r^{A,B}_{i,j}\) is not a consistent estimator of the true inter-correlation \(\rho^{A,B}\), due to the presence of the noise variances in the denominator of (4). Furthermore, in settings where a single point estimate of the inter-correlation of the unobserved latent signal between two regions is needed, the corresponding pairwise sample inter-correlation coefficients can be averaged to provide an estimator. Denoted \(r^{AC}_{A,B}\), it corresponds to the ensemble estimator in the familial data literature (Rosner et al., 1977):

\[r^{AC}_{A,B}=\frac{1}{N_{A}\cdot N_{B}}\sum_{i=1}^{N_{A}}\sum_{j=1}^{N_{B}}r^{A,B}_{i,j}. \tag{5}\]

However, the latter is similarly impacted by noise.

As mentioned in Section 2, one of the most popular estimators in neuroimaging studies consists of spatially averaging the observed random variables within each distinct region for each time \(t\), before computing the sample correlation between these averages. Specifically, define the regional (spatial) averages \(\overline{\mathbf{Y}}^{A}=N_{A}^{-1}\sum_{i=1}^{N_{A}}\mathbf{Y}^{A}_{i}\) and \(\overline{\mathbf{Y}}^{B}=N_{B}^{-1}\sum_{j=1}^{N_{B}}\mathbf{Y}^{B}_{j}\). Then this estimator is

\[r^{CA}_{A,B}=\widehat{Cor}(\,\overline{\mathbf{Y}}^{A},\overline{\mathbf{Y}}^{B}\,). \tag{6}\]

Under model (1), and according to results from (Achard et al., 2020), together with intra-regional uncorrelatedness between latent and noise random variables, as well as inter-regional uncorrelatedness of noise, \(r^{CA}_{A,B}\) converges almost surely to

\[\frac{\rho^{A,B}}{\sqrt{\left[\frac{1}{N_{A}^{2}}\cdot\sum\limits_{i,i^{\prime}=1}^{N_{A}}\eta^{A}_{i,i^{\prime}}+\frac{\gamma^{2}_{A}}{N_{A}\cdot\sigma^{2}_{A}}\right]\left[\frac{1}{N_{B}^{2}}\cdot\sum\limits_{j,j^{\prime}=1}^{N_{B}}\eta^{B}_{j,j^{\prime}}+\frac{\gamma^{2}_{B}}{N_{B}\cdot\sigma^{2}_{B}}\right]}}, \tag{7}\]

where \(N_{A}^{-2}\cdot\sum_{i,i^{\prime}=1}^{N_{A}}\eta_{i,i^{\prime}}^{A}\) is the spatial average of the pairwise latent intra-correlation coefficients within region \(A\). It follows from (7) that intra-correlation and noise both contribute to the inconsistency of the inter-correlation estimator (6). Indeed, both quantities appear in the denominator. It is then apparent that the smaller the regions (smaller \(N_{A}\)), the higher the impact of noise on the correlation estimation. Additionally, the weaker the spatial intra-regional dependence, the larger the overestimation of the inter-correlation. This effect may also be compounded when regions are large, as was observed by Achard et al. (2011).
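To make the effect concrete, the following minimal Python sketch (the helper names are ours, not from the paper's released code; regions are assumed stored as `n_voxels x n_time` NumPy arrays) computes the two estimators of equations (5) and (6), together with the almost-sure limit (7) of the correlation of averages:

```python
import numpy as np

def r_ac(YA, YB):
    """Average of the voxelwise sample correlations (eq. 5)."""
    C = np.corrcoef(YA, YB)[: len(YA), len(YA):]   # cross-correlation block
    return C.mean()

def r_ca(YA, YB):
    """Sample correlation of the regional spatial averages (eq. 6)."""
    return np.corrcoef(YA.mean(axis=0), YB.mean(axis=0))[0, 1]

def ca_limit(rho, eta_bar_A, eta_bar_B, nsr_A, nsr_B, NA, NB):
    """Almost-sure limit (7) of r_ca, with nsr = gamma^2 / sigma^2 the
    noise-to-signal variance ratio and eta_bar the spatial average of
    the pairwise latent intra-correlations of a region."""
    return rho / np.sqrt((eta_bar_A + nsr_A / NA) * (eta_bar_B + nsr_B / NB))

# Weak intra-correlation inflates the limit well above the true rho:
print(ca_limit(rho=0.1, eta_bar_A=0.3, eta_bar_B=0.3,
               nsr_A=0.5, nsr_B=0.5, NA=60, NB=60))  # ~0.32, a 3x overestimation
```

Conversely, setting \(N_A=N_B=1\) and the \(\eta\)-terms to 1 recovers the single-voxel attenuation of (4), where noise drives the limit below \(\rho\).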
One would then need regions as large as possible, while having an average intra-correlation as close to 1 as possible, in order to offset these biases. However, large regions tend to be inhomogeneous in practical scenarios, and thus tend to have low intra-correlation.

### 4.3 A Clustering-Based Inter-Correlation Estimator

Based on these findings, we propose an inter-correlation estimator specifically designed to limit the combined effects of noise and intra-correlation. Instead of aggregating over entire regions, we propose to aggregate over small groups of highly intra-correlated variables (cf. Steps 1 and 2), before computing the correlation of the corresponding local averages (cf. Step 3).

#### 4.3.1 Step 1: U-Scores Computation

To facilitate the grouping of the variables within each region, we can leverage U-scores to project the sample vectors \(\mathbf{Y}_{i}^{A}\) onto a space where the Euclidean distance can be used as a proxy for the sample correlations. We could then apply any clustering algorithm in the U-score space. _U-scores_ are an orthogonal projection of the Z-scores of random variables onto a unit \((n-2)\)-sphere centered around 0. The U-score \(\mathbf{U}_{i}^{A}\) of \(\mathbf{Y}_{i}^{A}\) is defined by \(\mathbf{U}_{i}^{A}=\mathbf{H}_{2:n}^{T}\mathbf{Z}_{i}^{A}\), where \(\mathbf{H}_{2:n}^{T}\) is a \((n-1)\times(n-1)\) matrix obtained by Gram-Schmidt orthogonalization, and \(\mathbf{Z}_{i}^{A}\) the Z-score of \(\mathbf{Y}_{i}^{A}\). We refer to (Hero and Rajaratnam, 2011) for a full definition. Sample correlations can then be expressed as an inner product of U-scores: \(r_{i,j}^{A,B}=(\mathbf{U}_{i}^{A})^{T}\mathbf{U}_{j}^{B}=1-\|\mathbf{U}_{i}^{A}-\mathbf{U}_{j}^{B}\|^{2}/2\), where \(\mathbf{U}_{i}^{A}\), \(\mathbf{U}_{j}^{B}\) are the U-scores of the \(i\)th and \(j\)th voxels in regions \(A\) and \(B\), respectively, and \(\|\cdot\|^{2}\) is the squared Euclidean distance.

#### 4.3.2 Step 2: Clustering

Once the U-scores are calculated, any standard clustering algorithm can be applied to obtain homogeneous groups of variables within each region. Agglomerative hierarchical clustering with Ward's linkage (Ward, 1963; Murtagh and Legendre, 2014), which is closely related to the k-means algorithm (Hartigan and Wong, 1979), aims to minimize the intra-cluster variance, which implies a maximization of the intra-cluster correlation. A comparison of different clustering methods, which empirically validates the use of Ward's linkage in our context, is presented in Section 5.3.

In practice, the number of clusters generally needs to be specified. However, such a strategy, while often satisfactory in common clustering tasks such as exploratory analyses, does not provide any obvious theoretical guarantees on the homogeneity of the clusters, which is what we are interested in. Nevertheless, hierarchical clustering outputs a dendrogram that can then be cut off at a designated height to produce a clustering. Therefore, instead of setting a number of clusters, we propose to specify a cut-off height through which cluster radii, and by proxy intra-correlations, can be controlled to a certain extent (cf. Theorem 1). Proofs can be found in the appendix.
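As a minimal sketch of Steps 1 and 2 (assuming SciPy; the helper name and the `n_voxels x n_time` layout are our own conventions), the intra-regional clusters can be obtained without forming the U-scores explicitly, since only their pairwise distances \(\|\mathbf{U}_i-\mathbf{U}_{i'}\|=\sqrt{2(1-r_{i,i'}^{A,A})}\) are needed:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_region(Y, h=None):
    """Steps 1-2 for one region; Y has shape (n_voxels, n_time)."""
    r = np.corrcoef(Y)                                # sample intra-correlations
    d = np.sqrt(np.clip(2.0 * (1.0 - r), 0.0, None))  # U-score distances
    np.fill_diagonal(d, 0.0)
    if h is None:
        h = d.max()        # the h_max heuristic discussed in the next subsection
    Z = linkage(squareform(d, checks=False), method="ward")
    labels = fcluster(Z, t=h, criterion="distance")   # cut the dendrogram at h
    return labels, h
```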
**Theorem 1**: _For a region \(A\), a fixed cut-off height \(h_{A}\), and all clusters \(\nu_{A}\) thus obtained, the spatial average of the sample intra-cluster correlation is bounded as follows:_

\[1-\frac{h_{A}^{2}}{2}\ \leq\ \frac{1}{|\nu_{A}|^{2}}\sum_{i,i^{\prime}=1}^{|\nu_{A}|}r_{i,i^{\prime}}^{A,A}\ \leq 1, \tag{8}\]

_where \(|\nu_{A}|\) is the size of cluster \(\nu_{A}\)._

Theorem 1 shows that through careful choice of the cut-off heights, clusters of highly correlated variables can be selected within each region. This choice can be guided by the ensuing observations about the maximum distance between U-scores within a given region, denoted by \(h_{A}^{\max}\), which follow immediately from Theorem 1 and the fact that \(1-(h_{A}^{\max})^{2}/2\ =\min_{i,i^{\prime}=1,\ldots,N_{A}}r_{i,i^{\prime}}^{A,A}\):

* if \(h_{A}\geq h_{A}^{\rm max}\), \[1-\frac{h_{A}^{2}}{2}\ \leq\ \min_{i,i^{\prime}=1,\ldots,N_{A}}r_{i,i^{\prime}}^{A,A}\ \leq\ \frac{1}{|\nu_{A}|^{2}}\sum_{i,i^{\prime}=1}^{|\nu_{A}|}r_{i,i^{\prime}}^{A,A}\] (9)
* and if \(h_{A}\leq h_{A}^{\rm max}\), \[\min_{i,i^{\prime}=1,\ldots,N_{A}}r_{i,i^{\prime}}^{A,A}\ \leq\ 1-\frac{h_{A}^{2}}{2}\ \leq\ \frac{1}{|\nu_{A}|^{2}}\sum_{i,i^{\prime}=1}^{|\nu_{A}|}r_{i,i^{\prime}}^{A,A}.\] (10)

Therefore, to ensure all clusters contain more than one voxel, the maximum distance between any two clusters of the region (i.e., the cut-off height) would need to be larger than the maximum distance between any two voxels within the region (i.e., \(h_{A}^{\rm max}\)). Thus, setting the cut-off height to \(h_{A}^{\rm max}\) ensures that we obtain the smallest possible clusters guaranteed to contain at least two variables. Moreover, computing \(h_{A}^{\rm max}\) is computationally inexpensive. It also does not depend on any ground truth, which remains unknown in practice. Empirical explorations of an optimal choice are presented in Section 5.2, and demonstrate the practical effectiveness of setting the cut-off height to \(h_{A}^{\rm max}\).

#### 4.3.3 Step 3: Clustered Correlation Estimation

Once clusters are obtained within each region, the inter-correlation is estimated as follows. For two distinct regions \(A\) and \(B\), for fixed cut-off heights \(h_{A},h_{B}\), and any pair of clusters \(\nu_{A},\nu_{B}\) within these regions, we define the following cluster-level inter-correlation estimator:

\[r_{\nu_{A},\nu_{B}}^{CLA}=\widehat{Cor}(\ \overline{\mathbf{Y}}^{\nu_{A}},\ \overline{\mathbf{Y}}^{\nu_{B}}\ ), \tag{11}\]

where \(\overline{\mathbf{Y}}^{\nu_{A}}=|\nu_{A}|^{-1}\sum_{i\in\nu_{A}}\mathbf{Y}_{i}^{A}\), and \(\overline{\mathbf{Y}}^{\nu_{B}}\) is defined similarly. A distribution of sample inter-correlation coefficients is hence obtained for this pair of regions, as seen in Figure 1. As mentioned earlier, if a point estimate is needed, one can then simply average the cluster-level estimates to derive the following regional-level estimator:

\[r_{A,B}^{CLA}=\frac{1}{N_{A}^{clust}\cdot N_{B}^{clust}}\sum_{\nu_{A},\nu_{B}}r_{\nu_{A},\nu_{B}}^{CLA}, \tag{12}\]

where \(N_{A}^{clust}\) is the number of clusters within region \(A\). We refer to Algorithm 1 for a detailed description of our proposed clustering-based correlation estimation procedure for \(J\) regions.
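Continuing the sketch above, Step 3 and the point estimate (12) can be written as follows (reusing the hypothetical `cluster_region` helper):

```python
import numpy as np

def r_cla(YA, YB, hA=None, hB=None):
    """Cluster-level estimates (eq. 11) and the regional point estimate (eq. 12)."""
    labA, _ = cluster_region(YA, hA)
    labB, _ = cluster_region(YB, hB)
    # Spatially average the observed signals within each intra-regional cluster.
    mA = np.stack([YA[labA == k].mean(axis=0) for k in np.unique(labA)])
    mB = np.stack([YB[labB == k].mean(axis=0) for k in np.unique(labB)])
    # Sample correlations between all pairs of cluster averages (eq. 11) ...
    C = np.corrcoef(mA, mB)[: len(mA), len(mA):]
    # ... and their mean over all cluster pairs as the point estimate (eq. 12).
    return C, C.mean()
```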
### 4.4 Consistency of the Proposed Estimator

The clusters derived in Algorithm 1 are data-driven, and thus random from a probabilistic perspective. To simplify analysis and allow us to demonstrate the expected behavior of the proposed estimator as the number of time points \(n\) grows, let us assume that clusters \(\nu_{A}\) and \(\nu_{B}\) are fixed. Then define the following quantity, which will be used in several of the subsequent results:

\[\rho^{CLA}_{\nu_{A},\nu_{B}}=\frac{\rho^{A,B}}{\sqrt{\left[\frac{1}{|\nu_{A}|^{2}}\cdot\sum\limits_{i,i^{\prime}=1}^{|\nu_{A}|}\eta^{A}_{i,i^{\prime}}+\frac{\gamma^{2}_{A}}{|\nu_{A}|\sigma^{2}_{A}}\right]\cdot\left[\frac{1}{|\nu_{B}|^{2}}\cdot\sum\limits_{j,j^{\prime}=1}^{|\nu_{B}|}\eta^{B}_{j,j^{\prime}}+\frac{\gamma^{2}_{B}}{|\nu_{B}|\sigma^{2}_{B}}\right]}}. \tag{13}\]

**Theorem 2**: _Under the assumptions of model (1), for a fixed pair of clusters \(\nu_{A},\nu_{B}\), as \(n\) tends towards infinity,_

\[r^{CLA}_{\nu_{A},\nu_{B}}\ \overset{a.s.}{\rightarrow}\ \rho^{CLA}_{\nu_{A},\nu_{B}}. \tag{14}\]

The proof is detailed in the appendix. We obtain similar results for the regional-level point estimate \(r^{CLA}_{A,B}\).

**Corollary 1**: _Under the same assumptions as Theorem 2, for two regions \(A,B\), as \(n\) tends towards infinity,_

\[r^{CLA}_{A,B}\ \stackrel{{ a.s.}}{{\rightarrow}}\ \frac{1}{N_{A}^{clust}N_{B}^{clust}}\sum_{\nu_{A},\nu_{B}}\rho^{CLA}_{\nu_{A},\nu_{B}}. \tag{15}\]

Corollary 1 is a direct consequence of Theorem 2. Theorem 2 and Corollary 1 emphasize the fact that controlling the denominator of \(\rho^{CLA}_{\nu_{A},\nu_{B}}\) is key to obtaining a consistent estimator of \(\rho^{A,B}\). This brings to light the influence of the cut-off height, and thereby of the cluster size and intra-cluster correlation, on the consistency of the inter-correlation estimate, both at the cluster and regional level. For a pair of regions \(A,B\), as the cut-off heights \(h_{A},h_{B}\) become larger, the impact of noise diminishes. Moreover, the clusters increase in size until there is only a single cluster left that corresponds to the entire region. Thus, for \(h_{A},h_{B}\) sufficiently large, our proposed estimator \(r^{CLA}_{\nu_{A},\nu_{B}}\) and the corresponding point estimate \(r^{CLA}_{A,B}\) are equal to the correlation of averages \(r^{CA}_{A,B}\) mentioned earlier. Conversely, as \(h_{A},h_{B}\) become smaller, the maximum distance between U-scores within a cluster decreases, hence the minimal intra-cluster correlation increases (cf. Theorem 1). There are also gradually fewer variables within each cluster, until the clusters eventually contain only a single variable. It follows that when \(h_{A},h_{B}=0\), \(r^{CLA}_{A,B}\) corresponds to the correlation estimate with no aggregation, \(r^{AC}_{A,B}\).

Figure 1: Illustration of the inter-correlation estimation of a pair of regions for different cut-off heights. The top panel shows the dendrograms of the hierarchical clustering applied to each region. The horizontal line over each dendrogram indicates the cut-off heights \(h_{A},h_{B}\). The grey crosses in the middle panel correspond to the random variables inside each region, and are grouped into the resulting clusters (orange ellipses). The arrows represent the sample inter-correlation between the averages of the variables inside each cluster (some arrows were left out to improve readability). The bottom panel displays the distribution of the pairwise sample inter-correlations. The true inter-correlation \(\rho_{A,B}\) (solid line) is best approximated by the sample inter-correlation \(r_{A,B}^{CLA}\) (dotted line) when the cut-off heights are neither too small nor too large.
This can be visualized in Figure 1, where sample correlation distributions are depicted for different cut-off heights. Therefore, to simultaneously lessen the impact of noise and intra-correlation, a trade-off is necessary between a sufficiently high cut-off height (to decrease the impact of noise) and a low enough height (to decrease the impact of intra-cluster correlation). In such cases, both \(r^{CLA}_{\nu_{A},\nu_{B}}\) and \(r^{CLA}_{A,B}\) are consistent estimators of the population inter-correlation.

## 5 Experimental Results

In this section we empirically determine the optimal cut-off height, evaluate our proposed inter-correlation estimator on synthetic data, and illustrate our approach on real-world datasets.

### 5.1 Datasets

We first present the different datasets used in this paper.

#### 5.1.1 Real-World Datasets

**Rat Brain fMRI Dataset.** We apply our estimator to fMRI data acquired on both dead and anesthetized rats (Becq et al., 2020). In this paper we consider the following anesthetics: Etomidate (EtoL), Isoflurane (IsoW) and Urethane (UreL). The dataset is freely available at [https://dx.doi.org/10.5281/zenodo.7254133](https://dx.doi.org/10.5281/zenodo.7254133). The scanning duration is 30 min with a time repetition of 0.5 s. After preprocessing (Becq et al., 2020), 25 groups of voxels, each associated with its BOLD signal with a number of time points in the order of thousands, were extracted for each rat. They correspond to rat brain regions defined by an anatomical atlas obtained from a fusion of the Tohoku and Waxholm atlases (Becq et al., 2020). Region sizes vary from about 40 up to approximately 200 voxels.

**Human Connectome Project.** We also consider 35 subjects from the Human Connectome Project (HCP), WU-Minn Consortium pre-processed (Glasser et al., 2013). Subjects were pseudonymized. Two fMRI acquisitions on different days are available for each subject. The scanning duration is 14 min and 24 s with a time repetition of 720 ms. A modified AAL template is used to parcellate the brain into 89 regions. The details of the pre-processing are available in (Termenon et al., 2016). Region sizes are in the order of thousands of voxels, and numbers of time points are in the order of thousands.

#### 5.1.2 Synthetic Datasets

We consider several synthetic datasets to evaluate our estimator. For each simulation, we simultaneously generate 800 independent samples of a pair of inter-correlated regions, each containing 60 intra-correlated variables that follow a multivariate normal distribution with a predefined covariance structure, contaminated by Gaussian noise. The inter-correlation is constant across all pairs of voxels. The different parameters are chosen to ensure the population covariance matrix of the two regions is positive semidefinite. For instance, one cannot generate a covariance matrix where both intra- and inter-correlation values are low.

**Toeplitz Covariance Structure.** We first generate 1-dimensional data with a Toeplitz intra-regional covariance structure (later denoted 1D Toeplitz). For each region, the intra-correlation is defined such that it decreases as the distance between two variables increases: for any voxels \(i,i^{\prime}\) in region \(A\), \(Cor(X_{i}^{A},X_{i^{\prime}}^{A})=\max(1-|i^{\prime}-i|/30,\eta_{A}^{-})\), where \(|i^{\prime}-i|\) is the uniform norm between voxels \(i\) and \(i^{\prime}\), and \(\eta_{A}^{-}\) the minimal population intra-correlation of region \(A\).
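A possible way to generate such a pair of regions is sketched below (the function name and seed handling are ours; the default parameter values mirror one of the scenarios above and must keep the joint covariance matrix positive semidefinite):

```python
import numpy as np
from scipy.linalg import toeplitz

def simulate_pair(N=60, n=800, eta_min=0.2, rho=0.3, noise_var=0.5, seed=0):
    """Two regions with 1D Toeplitz intra-correlation and constant rho."""
    rng = np.random.default_rng(seed)
    C = toeplitz(np.maximum(1.0 - np.arange(N) / 30.0, eta_min))
    Sigma = np.block([[C, rho * np.ones((N, N))],
                      [rho * np.ones((N, N)), C]])   # joint latent covariance
    X = rng.multivariate_normal(np.zeros(2 * N), Sigma, size=n).T   # (2N, n)
    Y = X + rng.normal(0.0, np.sqrt(noise_var), size=X.shape)       # add noise
    return Y[:N], Y[N:]   # observed signals of regions A and B
```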
In this paper we consider several experimental settings by varying the population intra-correlation, the inter-correlation, and the variance of the noise. The sample pairwise correlation matrices of the observed signals are represented in Figure 2 for a low intra-correlation and a high intra-correlation setting with high noise.

**Spherical Covariance Structure.** We then generate 3-dimensional data with a spherical intra-regional covariance structure that also depends on the Euclidean distance between voxels (later denoted 3D Spherical) (Ribeiro and Diggle, 2001). We vary the range parameters \(\phi_{A}\), \(\phi_{B}\) and the variance of the noise. The lower the range parameter, the lower the mean intra-correlation.

### 5.2 Choice of the Cut-off Heights

In this section we empirically evaluate, on the 1D Toeplitz dataset, the impact of the cut-off heights \(h_{A},h_{B}\) on the proposed clustering-based correlation estimator. We also propose a heuristic to choose optimal cut-off heights. We consider different scenarios, including one that loosely matches live rat data settings, where the noise is high and the intra-correlation low. For each simulated pair of regions, and for various cut-off heights \(h_{A},h_{B}\), the squared errors of the cluster-level estimators are computed and then averaged across the different clusters:

\[\text{ERROR}=\frac{1}{N_{clust}^{A}N_{clust}^{B}}\sum_{\nu_{A},\nu_{B}}(r_{\nu_{A},\nu_{B}}^{CLA}-\rho^{A,B})^{2}. \tag{16}\]

The resulting surfaces are displayed in Figure 3. The lower the error, the better the quality of the estimator. As expected from Theorems 1 and 2, the error is lowest (refer to the orange points in Figure 3) for cut-off heights that are neither too small nor too large. Moreover, when both the intra-correlation and the variance of the noise are low, the error is low even for low cut-off heights, as there is no need to aggregate the data to obtain a consistent estimator. However, the error is high for large cut-off heights regardless of the scenario. Indeed, even in the high noise settings, intra-correlation still influences the inter-correlation, and this effect is compounded by that of the cluster size.

In Section 4.3.2, we proposed a computationally cheap heuristic to determine a suitable cut-off height. Empirically, it seems the maximum distance between U-scores within a given region \(A\), \(h_{A}^{\max}\), could indeed be an optimal cut-off height. It is represented by a yellow diamond in Figure 3. In fact, it seems to be located at the bottom of a valley and quite close to the minimal error for all settings.

Figure 3: Error as a function of the cut-off heights \(h_{A},h_{B}\) for a pair of simulated regions for four simulation scenarios, with a true inter-correlation \(\rho^{A,B}=0.3\). The yellow diamond represents the error for cut-off heights equal to the maximum distance between U-scores within each region. The orange point corresponds to the minimal error.

We then compare our proposed optimal cut-off height, in terms of Mean Squared Error (MSE), to that obtained using a more standard criterion from the clustering literature: the maximum silhouette score. The Squared Error (SE) of a simulation-specific correlation estimate \(r^{CLA}_{A,B}\) can be defined as

\[\text{SE}=(r^{CLA}_{A,B}-\rho^{A,B})^{2}. \tag{17}\]

In this section, the MSE is computed by averaging the SEs across 50 replicates.
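The grid evaluation behind Figure 3 can be sketched as follows, reusing the hypothetical `r_cla` and `simulate_pair` helpers from the earlier sketches:

```python
import numpy as np

def error_surface(YA, YB, rho_true, heights):
    """ERROR of eq. (16) over a grid of cut-off heights (h_A, h_B)."""
    err = np.empty((len(heights), len(heights)))
    for a, hA in enumerate(heights):
        for b, hB in enumerate(heights):
            C, _ = r_cla(YA, YB, hA, hB)      # cluster-level estimates
            err[a, b] = np.mean((C - rho_true) ** 2)
    return err

# e.g.: YA, YB = simulate_pair(eta_min=0.2, rho=0.3, noise_var=0.5)
#       err = error_surface(YA, YB, 0.3, np.linspace(0.2, 2.0, 10))
```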
The MSEs for varying intra- and inter-correlation values and a fixed high noise variance are depicted in Figures 4 and 5. The MSE is lower when using our proposed cut-off heights in all the considered scenarios. From now on, and unless stated otherwise, we will hence estimate the inter-correlation using this optimal cut-off height.

### 5.3 Comparison With Other Methods

We then empirically evaluate our choice of clustering method and compare our proposed approach with other estimators in terms of MSE. We first compare the performance of hierarchical clustering with Ward's linkage (our proposed choice, later denoted WardMaxU) with that of k-means (Hartigan and Wong, 1979) and ClustOfVar (CoV) (Chavent et al., 2012). ClustOfVar is a hierarchical clustering method based on a principal component analysis approach, and is closely related to works from Dhillon et al. (2003) and Vigneau et al. (2015). DBSCAN (Ester et al., 1996), which allows direct control of the cluster radii, was also considered. However, it fails to produce any clustering on the type of data we handle, which is high-dimensional. We also compare these clustering methods with a random assignment of the voxels into clusters (Random). We choose the cut-off heights required by Ward's method according to the heuristic validated in the previous section (that is, the maximum distance between U-scores). ClustOfVar, k-means and Random all require a choice of the number of clusters (and not of the cut-off heights). We hence set their number of clusters to that obtained with our proposed method. We also evaluate ClustOfVar with the number of clusters chosen according to the maximum rand index (randCoV), which is the criterion proposed in (Chavent et al., 2012).

Figure 4: MSE (\(\times 10\)), averaged over 50 replicates, for varying intra-correlation values for regions \(A\) and \(B\). The true inter-correlation \(\rho_{A,B}\) is 0.3 and the noise variance \(\sigma^{2}_{\boldsymbol{\epsilon}^{A}}=\sigma^{2}_{\boldsymbol{\epsilon}^{B}}=0.5\).

Figure 5: MSE (\(\times 100\)), averaged over 50 replicates, for varying intra-correlation values for regions \(A\) and \(B\). The true inter-correlation \(\rho_{A,B}\) is 0.1 and the noise variance \(\sigma^{2}_{\mathbf{\epsilon}^{A}}=\sigma^{2}_{\mathbf{\epsilon}^{B}}=0.5\).

Results are presented in Table 1. All methods with the same number of clusters perform similarly, with the exception of the random assignment. As expected, the latter displays MSEs an order of magnitude higher than that of the other clustering techniques, except when both minimal intra-correlations are high. Indeed, in such cases, the intra-correlation is high enough that the intra-cluster correlation will be high regardless of the choice of clusters. This demonstrates the importance of constructing clusters with high intra-cluster correlation to correctly estimate the inter-correlation. The method randCoV showcases the second highest MSE in all scenarios, except when both intra-correlation and noise are high, in which case its MSE is similar to that of k-means and CoV. Moreover, the computation of the rand index requires a bootstrapping step and is thus very computationally expensive. Indeed, the average CPU time for clustering two regions using randCoV is in the order of 10 min, while the average CPU time is approximately 5 s when using CoV, 300 ms using k-means, and 30 ms using WardMaxU. Additionally, neither k-means nor CoV provide any obvious theoretical guarantees on the intra-correlation values within each cluster.
Furthermore, unlike our method, they require explicitly computing the U-scores. Indeed, our approach only depends on the distances between U-scores, which can be obtained directly from the sample voxel-to-voxel correlation coefficients, without transforming the signals into U-scores. The U-score transformation has a CPU time of about 15 s per region. These methods are thus much more computationally heavy. This confirms the choice of hierarchical clustering with Ward's linkage for our purposes, which will be used in all subsequent results.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{3}{c}{Scenarios} & \multicolumn{5}{c}{Clustering Methods} \\ \cline{4-8} \(\eta_{A}^{-}\) & \(\eta_{B}^{-}\) & \(\gamma_{A}^{2}=\gamma_{B}^{2}\) & K-means & CoV & randCoV & Random & **WardMaxU** \\ \hline 0.2 & 0.2 & 0.5 & 2.0 (1.4) & 2.0 (1.4) & 4.8 (7.8) & 15 (5.2) & 2.0 (1.4) \\ 0.8 & 0.8 & 0.5 & 1.2 (1.5) & 1.2 (1.5) & 1.1 (1.3) & 1.0 (1.0) & 1.2 (1.5) \\ 0.2 & 0.8 & 0.5 & 1.1 (1.2) & 1.1 (1.2) & 2.9 (4.2) & 5.0 (3.1) & 1.1 (1.2) \\ 0.2 & 0.2 & 0.1 & 1.0 (0.9) & 1.0 (0.9) & 4.6 (10) & 26 (8.1) & 1.0 (0.9) \\ 0.8 & 0.8 & 0.1 & 0.6 (1.0) & 0.6 (1.1) & 1.0 (1.4) & 1.4 (1.6) & 0.6 (1.1) \\ 0.2 & 0.8 & 0.1 & 0.4 (0.6) & 0.4 (0.5) & 2.7 (4.4) & 10 (4.5) & 0.4 (0.5) \\ \hline \hline \end{tabular} \end{table}

Table 1: Mean (\(\times 10^{-3}\)) and standard deviation in parentheses (\(\times 10^{-3}\)) of the squared errors over 50 replicates for different clustering methods and different simulation scenarios from the 1D Toeplitz model. The inter-correlation \(\rho^{A,B}\) is set to 0.3.

We then compare our proposed estimator with the standard correlation of averages estimator \(r^{CA}_{A,B}\) and the average of correlations \(r^{AC}_{A,B}\) (Rosner et al., 1977). We also conduct comparisons with another inter-correlation estimator from the familial data literature, which is specifically designed for groups of dependent variables but fails to take noise into account (Elston, 1975). Its quality is similar to that of \(r^{AC}_{A,B}\), and these results are hence included in the supplementary materials. Comparison with other correlation estimators from the literature would not be fair, as they either only consider pairs of variables or do not handle arbitrary inter-correlation. To proceed, we compute the regional-level point estimator \(r^{CLA}_{A,B}\). We then calculate the MSE across 50 simulations. The results obtained for several simulation scenarios are recorded in Table 2.

As expected from Theorem 2 and its corollary, our proposed estimator \(r^{CLA}_{A,B}\) outperforms the other estimators for all settings, except the low noise scenarios with 3D Spherical intra-correlation, where the MSE for \(r^{AC}_{A,B}\) is slightly lower. Even in this case, the MSEs for \(r^{AC}_{A,B}\) and \(r^{CLA}_{A,B}\) are of the same order of magnitude. More generally, we can note that in all scenarios where the intra-correlation is quite high and the noise variance is low, the MSEs for these two estimators are also of the same order of magnitude. Indeed, according to equation (4), Theorem 1, and Corollary 1, \(r^{AC}_{A,B}\) and \(r^{CLA}_{A,B}\) would then be very similar. Therefore, not only is the quality of the estimation greatly improved in the presence of noise and low intra-correlation, but it is also not deteriorated when intra-correlation is high and the noise is low. Furthermore, in practice, data are expected to be quite noisy with a low intra-correlation.
We can remark here that we did not include in Table 2 scenarios where the intra-correlation is close to zero. Indeed, in such cases no clusters of highly correlated variables can be found. In practical situations, this could be due to either high regional inhomogeneity or high noise, and could indicate an issue with the parcellation or data acquisition. Our clustering approach can hence help identify problematic datasets and thus provide information on the quality of the data.

### 5.4 Illustration on Real-world Data

We now apply our proposed estimator to real-world fMRI datasets, with the goal of estimating functional connectivity. At first, the sample cluster-level inter-correlations and voxel-level intra-correlations of different subjects can be visually inspected. The correlation estimates of three rats, including a dead one, are displayed in Figure 6, and those of three healthy human subjects (from the HCP dataset) are shown in Figure 8.

\begin{table} \begin{tabular}{c|c c c|c|c|c} \hline \multicolumn{4}{c|}{Scenarios} & \multicolumn{3}{c}{Estimators} \\ \hline & \(\eta_{A}^{-}\) & \(\eta_{B}^{-}\) & \(\gamma_{A}^{2},\gamma_{B}^{2}\) & \(r_{A,B}^{AC}\) & \(r_{A,B}^{CLA}\) & \(r_{A,B}^{CA}\) \\ \cline{2-7} & 0.2 & 0.2 & 0.5 & \(1.8\times 10^{-2}\) (\(2.8\times 10^{-3}\)) & \(\mathbf{2.0\times 10^{-3}}\) (\(1.4\times 10^{-3}\)) & \(1.5\times 10^{-1}\) (\(1.8\times 10^{-1}\)) \\ & 0.8 & 0.8 & 0.5 & \(1.2\times 10^{-2}\) (\(3.7\times 10^{-3}\)) & \(\mathbf{1.2\times 10^{-3}}\) (\(1.5\times 10^{-3}\)) & \(1.0\times 10^{-1}\) (\(1.0\times 10^{-1}\)) \\ & 0.2 & 0.8 & 0.5 & \(1.4\times 10^{-2}\) (\(3.0\times 10^{-3}\)) & \(\mathbf{1.1\times 10^{-3}}\) (\(1.2\times 10^{-3}\)) & \(1.0\times 10^{-1}\) (\(1.0\times 10^{-1}\)) \\ & 0.2 & 0.2 & 0.1 & \(5.4\times 10^{-3}\) (\(2.0\times 10^{-3}\)) & \(\mathbf{1.0\times 10^{-3}}\) (\(9.1\times 10^{-4}\)) & \(2.3\times 10^{-1}\) (\(2.7\times 10^{-1}\)) \\ & 0.8 & 0.8 & 0.1 & \(1.9\times 10^{-3}\) (\(2.0\times 10^{-3}\)) & \(\mathbf{6.4\times 10^{-4}}\) (\(1.0\times 10^{-3}\)) & \(1.2\times 10^{-1}\) (\(1.2\times 10^{-1}\)) \\ & 0.2 & 0.8 & 0.1 & \(2.7\times 10^{-3}\) (\(1.7\times 10^{-3}\)) & \(\mathbf{4.3\times 10^{-4}}\) (\(5.5\times 10^{-4}\)) & \(1.4\times 10^{-1}\) (\(1.6\times 10^{-1}\)) \\ \hline \hline & \(\phi_{A,A}\) & \(\phi_{B,B}\) & \(\gamma_{A}^{2},\gamma_{B}^{2}\) & \(r_{A,B}^{AC}\) & \(r_{A,B}^{CLA}\) & \(r_{A,B}^{CA}\) \\ \cline{2-7} & 0.6 & 0.6 & 0.5 & \(1.0\times 10^{-2}\) (\(3.8\times 10^{-3}\)) & \(\mathbf{7.0\times 10^{-4}}\) (\(1.1\times 10^{-3}\)) & \(1.6\times 10^{-3}\) (\(1.9\times 10^{-3}\)) \\ & 0.8 & 0.8 & 0.5 & \(1.0\times 10^{-2}\) (\(4.0\times 10^{-3}\)) & \(\mathbf{7.9\times 10^{-4}}\) (\(1.2\times 10^{-3}\)) & \(1.0\times 10^{-3}\) (\(1.4\times 10^{-3}\)) \\ & 0.6 & 0.8 & 0.5 & \(1.0\times 10^{-2}\) (\(3.9\times 10^{-3}\)) & \(\mathbf{7.2\times 10^{-4}}\) (\(1.1\times 10^{-3}\)) & \(1.0\times 10^{-3}\) (\(1.6\times 10^{-3}\)) \\ & 0.6 & 0.6 & 0.1 & \(1.3\times 10^{-3}\) (\(1.5\times 10^{-3}\)) & \(\mathbf{7.7\times 10^{-4}}\) (\(1.0\times 10^{-3}\)) & \(1.7\times 10^{-3}\) (\(2.0\times 10^{-3}\)) \\ & 0.8 & 0.8 & 0.1 & \(1.4\times 10^{-3}\) (\(1.6\times 10^{-3}\)) & \(\mathbf{7.5\times 10^{-4}}\) (\(1.0\times 10^{-3}\)) & \(1.1\times 10^{-3}\) (\(1.4\times 10^{-3}\)) \\ & 0.6 & 0.8 & 0.1 & \(1.3\times 10^{-3}\) (\(1.6\times 10^{-3}\)) & \(\mathbf{7.7\times 10^{-4}}\) (\(1.0\times 10^{-3}\)) & \(1.3\times 10^{-3}\) (\(1.7\times 10^{-3}\)) \\ \hline \hline & \(\phi_{A,A}\) & \(\phi_{B,B}\) & \(\gamma_{A}^{2},\gamma_{B}^{2}\) & \(r_{A,B}^{AC}\) & \(r_{A,B}^{CLA}\) & \(r_{A,B}^{CA}\) \\ \cline{2-7}
& 8 & 8 & 0.5 & \(1.0\times 10^{-2}\) (\(2.3\times 10^{-3}\)) & \(\mathbf{4.6\times 10^{-3}}\) (\(2.4\times 10^{-3}\)) & \(8.8\times 10^{-2}\) (\(1.4\times 10^{-2}\)) \\ & 12 & 12 & 0.5 & \(1.0\times 10^{-2}\) (\(2.8\times 10^{-3}\)) & \(\mathbf{2.4\times 10^{-3}}\) (\(1.9\times 10^{-3}\)) & \(2.5\times 10^{-2}\) (\(8.2\times 10^{-3}\)) \\ & 8 & 12 & 0.5 & \(9.4\times 10^{-3}\) (\(2.5\times 10^{-3}\)) & \(\mathbf{4.2\times 10^{-3}}\) (\(2.3\times 10^{-3}\)) & \(5.3\times 10^{-2}\) (\(1.1\times 10^{-2}\)) \\ & 8 & 8 & 0.1 & \(\mathbf{9.1\times 10^{-4}}\) (\(7.9\times 10^{-4}\)) & \(8.9\times 10^{-3}\) (\(3.8\times 10^{-3}\)) & \(9.3\times 10^{-2}\) (\(1.3\times 10^{-2}\)) \\ & 12 & 12 & 0.1 & \(\mathbf{1.0\times 10^{-3}}\) (\(1.0\times 10^{-3}\)) & \(4.5\times 10^{-3}\) (\(2.8\times 10^{-3}\)) & \(2.6\times 10^{-2}\) (\(8.4\times 10^{-3}\)) \\ & 8 & 12 & 0.1 & \(\mathbf{7.3\times 10^{-4}}\) (\(7.8\times 10^{-4}\)) & \(7.7\times 10^{-3}\) (\(3.3\times 10^{-3}\)) & \(5.6\times 10^{-2}\) (\(1.1\times 10^{-2}\)) \\ \hline \end{tabular} \end{table}

Table 2: Mean and standard deviation (in parentheses) of the squared error over 50 replicates for different simulation scenarios and different estimators. The inter-correlation \(\rho^{A,B}\) is set to 0.3.

In brain functional connectivity studies, point estimates for each pair of regions are needed to construct a correlation matrix. A thresholding step is then applied to obtain a binary connectivity network where only the edges corresponding to the highest correlation values remain. In this section, we will therefore mostly focus on evaluating the regional-level entries of these correlation matrices.

#### 5.4.1 Rat Data

Figure 6: Sample pairwise correlation matrices for different rats and brain region pairs. Voxels are ordered by clusters. The diagonal blocks correspond to the voxel-to-voxel sample intra-correlation \(r_{i,i^{\prime}}^{A,A}\), while the off-diagonal blocks correspond to the sample inter-correlation between clusters \(r_{\nu_{A},\nu_{B}}^{CLA}\).

**Dead Rats.** No functional activity should be detected in dead rats, unlike in live rats. Dead rats hence provide experimental data where the ground-truth inter-correlation is zero. We can therefore compute the MSE across all pairs of regions (each region pair is a replicate). We also expect the intra-correlation to be zero within all regions. In fact, no discernible structure of the dead rats' intra-correlation can be noted in Figure 6, where motor (M1_l, M1_r) and sensory (S1_l, S1_r) regions are represented. We find the MSE of \(r_{A,B}^{CLA}\) is slightly higher than that of \(r_{A,B}^{AC}\) (cf. Table 3). Nonetheless, they are both very low and several orders of magnitude lower than the MSE of \(r_{A,B}^{CA}\). This indicates that for dead rat data, \(r_{A,B}^{CLA}\) displays similar quality to \(r_{A,B}^{AC}\), and a considerable improvement over the standard \(r_{A,B}^{CA}\).

**Live Rats.** To further illustrate the advantages of our proposed approach, we consider three live rats under different anesthetics. Unlike for dead rats, no ground-truth inter-correlation is available. We thus directly inspect the values of the estimated inter-correlations. We can first remark that correlation values are visually very different in live and dead rats. Indeed, both intra- and inter-correlations are higher, in addition to displaying an apparent structure (cf. Figure 6).
While we could not clearly demarcate \(r_{A,B}^{AC}\) from \(r_{A,B}^{CLA}\) using solely the dead rat data, we can note in Figure 7 that for any pair of regions, \(r_{A,B}^{CLA}\) is both larger than \(r_{A,B}^{AC}\) and further away from zero, the value corresponding to dead rat connectivity. In the context of functional connectivity, this implies that, when applying a thresholding step, \(r_{A,B}^{CLA}\) may allow us to increase the number of rightfully detected edges in the corresponding connectivity network.

\begin{table} \begin{tabular}{c c c c} \hline \hline Dead Rat ID & \(r_{A,B}^{AC}\) & \(r_{A,B}^{CLA}\) & \(r_{A,B}^{CA}\) \\ \hline 16 & \(5.2\times 10^{-6}\) & \(5.6\times 10^{-5}\) & \(1.3\times 10^{-2}\) \\ 18 & \(4.7\times 10^{-6}\) & \(5.4\times 10^{-5}\) & \(1.3\times 10^{-2}\) \\ 9 & \(5.7\times 10^{-6}\) & \(6.0\times 10^{-5}\) & \(1.3\times 10^{-2}\) \\ \hline \hline \end{tabular} \end{table}

Table 3: MSE across all pairs of regions for different dead rats and different estimators.

#### 5.4.2 HCP Data

We then illustrate our proposed approach on human data from healthy live subjects. No ground-truth is available. Figure 8 showcases sample correlations of the Precentral regions (Pr_l, Pr_r), which are large regions containing about 1700 voxels, and Heschl's gyri (H_l, H_r), which are ten times smaller. We can first note that the intra-correlation displays some structure, as in the live rats. Nonetheless, overall, subject 2 seems to have both lower sample intra- and inter-correlation values compared to most other subjects (including subjects 1 and 3). Subject 2 in fact has a benign anatomical brain anomaly. Our proposed approach hence allowed us to single out an unusual subject just by visually inspecting its sample intra- and inter-correlation values.

We can then compare the sample distribution of our proposed estimator \(r^{CLA}_{A,B}\) with that of the standard estimator \(r^{CA}_{A,B}\) (cf. Figure 9) and of \(r^{AC}_{A,B}\) (cf. Figure 10). Overall, and as expected from equations (4) and (7) and Corollary 1, the values of the correlation of averages \(r^{CA}_{A,B}\) are higher than those of \(r^{CLA}_{A,B}\), while the sample values of the average of correlations estimator \(r^{AC}_{A,B}\) are lower. In terms of functional connectivity, this means using the \(r^{CA}_{A,B}\) estimator could lead to falsely detecting edges, while using \(r^{AC}_{A,B}\) could lead to missing edges. These results are in accordance with what was observed in the rat data.

Figure 7: Sample inter-correlation coefficients estimated using \(r^{AC}_{A,B}\) against our proposed estimator \(r^{CLA}_{A,B}\) for three live rats under different anesthetics. Each point represents a pair of brain regions.

Figure 8: Sample pairwise correlation matrices for different HCP subjects and brain region pairs. Voxels are ordered by clusters. The diagonal blocks correspond to the voxel-to-voxel sample intra-correlation \(r^{A,A}_{i,i^{\prime}}\), while the off-diagonal blocks correspond to the sample inter-correlation between clusters \(r^{CLA}_{\nu_{A},\nu_{B}}\).

Figure 9: Inter-correlation coefficients estimated using \(r^{CA}_{A,B}\) against our proposed estimator \(r^{CLA}_{A,B}\) for three HCP subjects. Each point represents a pair of brain regions.

Since we have access to two separate sessions for each subject, we then evaluate the reproducibility of our estimator.
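For reference, Lin's CCC used in the next paragraph can be computed as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def ccc(x, y):
    """Lin's Concordance Correlation Coefficient between two vectors of
    inter-correlation estimates (e.g., the two sessions of one subject)."""
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()
    return 2.0 * sxy / (x.var() + y.var() + (mx - my) ** 2)
```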
To do so, for each subject, we calculate the Concordance Correlation Coefficient (CCC) (Lin, 1989) between the inter-correlation estimates from their two sessions. The CCC is scaled between \(-1\) and \(1\), with \(1\) corresponding to perfect concordance. This means that the higher the CCC, the more reproducible the estimator. The estimator \(r^{CLA}_{A,B}\) exhibits the highest CCC, with an average (variance) across the \(35\) subjects of \(0.69\) (\(0.03\)), while that of \(r^{CA}_{A,B}\) is \(0.63\) (\(0.02\)) and that of \(r^{AC}_{A,B}\) is \(0.67\) (\(0.04\)). Our proposed estimator hence improves reproducibility over existing estimators.

Figure 10: Inter-correlation coefficients estimated using \(r^{AC}_{A,B}\) against our proposed estimator \(r^{CLA}_{A,B}\) for three HCP subjects. Each point represents a pair of brain regions.

## 6 Conclusion

In this paper, we proposed a novel and non-parametric estimator of the correlation between groups of arbitrarily dependent variables in the presence of noise. We devised a clustering-based approach that simultaneously reduces the impact of noise and intra-correlation through judicious aggregation. We then proved that, for an appropriate choice of cut-off heights of the dendrograms thus generated, our proposed estimator is a consistent estimator of the population inter-correlation. Moreover, our method yields both point estimates and a corresponding sample distribution that could be used, for instance, for uncertainty quantification. We conducted experiments on synthetic data that showed our proposed estimator surpasses popular existing methods in terms of quality, and demonstrated the effectiveness and reproducibility of our approach on real-world datasets.

**Supplementary Materials.** Proofs of the Theorems are available in the appendix. A discussion about the relaxation of assumptions on the noise, as well as additional details and results on the synthetic datasets, are available in the supplementary materials. Source code, including a notebook detailing how to reproduce the figures of this paper, is available at: [https://gitlab.inria.fr/q-func/clustcorr](https://gitlab.inria.fr/q-func/clustcorr).

**Funding.** This work was supported by the project Q-FunC from Agence Nationale de la Recherche under grant number ANR-20-NEUC-0003-02 and the NSF grant IIS-2135859.

## Appendix A Proof of Theorem 1

The proof follows from the properties of hierarchical clustering. In the context of Ward's linkage, the distance between two clusters \(\nu_{1}\) and \(\nu_{2}\) is defined according to Kaufman and Rousseeuw (2005, p. 230) as:

\[D(\nu_{1},\nu_{2})=\sqrt{\frac{2\cdot|\nu_{1}|\cdot|\nu_{2}|}{|\nu_{1}|+|\nu_{2}|}\cdot\big{\|}\,\overline{\mathbf{U}}^{\nu_{1}}-\overline{\mathbf{U}}^{\nu_{2}}\big{\|}^{2}}, \tag{18}\]

where \(\overline{\mathbf{U}}^{\nu_{1}}\) is the centroid and \(|\nu_{1}|\) the cardinality of cluster \(\nu_{1}\). Consider a region \(A\) and fix a cut-off height \(h_{A}\). Then, from properties of agglomerative clustering, for any cluster \(\nu_{A}\), and for all pairs of U-scores \(\mathbf{U}_{i}^{A},\mathbf{U}_{i^{\prime}}^{A}\) inside \(\nu_{A}\), \(D(\{\mathbf{U}_{i}^{A}\},\{\mathbf{U}_{i^{\prime}}^{A}\})\leq h_{A}\).
Therefore, by combining this inequality with properties of the U-scores (Hero and Rajaratnam, 2011), the sample intra-correlation can be lower-bounded by a function of \(h_{A}\):

\[1-\frac{h_{A}^{2}}{2}\ \leq\ 1-\frac{\|\mathbf{U}_{i}^{A}-\mathbf{U}_{i^{\prime}}^{A}\|^{2}}{2}\ =r_{i,i^{\prime}}^{A,A}, \tag{19}\]

which implies the left-hand side of (8). The right-hand side follows from properties of correlation coefficients. This concludes the proof.

## Appendix B Proof of Theorem 2

For two clusters \(\nu_{A},\nu_{B}\) in regions \(A,B\), from (11),

\[r^{CLA}_{\nu_{A},\nu_{B}}=\frac{\widehat{Cov}(\,\overline{\mathbf{Y}}^{\nu_{A}},\overline{\mathbf{Y}}^{\nu_{B}})}{\sqrt{\widehat{Var}(\,\overline{\mathbf{Y}}^{\nu_{A}})\cdot\widehat{Var}(\,\overline{\mathbf{Y}}^{\nu_{B}})}}. \tag{20}\]

Since we have assumed the variables are temporally i.i.d., and according to the model definition (cf. Section 3), as \(n\) tends towards infinity,

\[\widehat{Cov}(\,\overline{\mathbf{Y}}^{\nu_{A}},\overline{\mathbf{Y}}^{\nu_{B}})\stackrel{{ a.s.}}{{\rightarrow}}Cov(\,\overline{Y}^{\nu_{A}}(t),\overline{Y}^{\nu_{B}}(t)), \tag{21}\]

for any time point \(t\), where

\[Cov(\,\overline{Y}^{\nu_{A}}(t),\overline{Y}^{\nu_{B}}(t))=\frac{1}{|\nu_{A}|\cdot|\nu_{B}|}\sum_{i=1}^{|\nu_{A}|}\sum_{j=1}^{|\nu_{B}|}Cov(Y_{i}^{A}(t),Y_{j}^{B}(t))=\frac{1}{|\nu_{A}|\cdot|\nu_{B}|}\sum_{i=1}^{|\nu_{A}|}\sum_{j=1}^{|\nu_{B}|}\sigma_{A}\sigma_{B}\rho^{A,B}=\sigma_{A}\sigma_{B}\rho^{A,B}, \tag{22}\]

and, from equation (1),

\[\widehat{Var}(\,\overline{\mathbf{Y}}^{\nu_{A}})\stackrel{{ a.s.}}{{\rightarrow}}Var(\,\overline{Y}^{\nu_{A}}(t))\ =\ \sigma_{A}^{2}\cdot\frac{1}{|\nu_{A}|^{2}}\cdot\sum_{i,i^{\prime}=1}^{|\nu_{A}|}\eta_{i,i^{\prime}}^{A}+\frac{\gamma_{A}^{2}}{|\nu_{A}|}, \tag{23}\]

which gives (14) and concludes the proof.
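As an illustrative numerical sanity check of the bound (8), using the hypothetical `cluster_region` helper sketched in Section 4.3:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal((60, 800))       # one synthetic region, for illustration
labels, h = cluster_region(Y)            # Ward clustering cut at h = h_max
r = np.corrcoef(Y)
for k in np.unique(labels):
    idx = np.flatnonzero(labels == k)
    avg = r[np.ix_(idx, idx)].mean()     # |nu|^{-2} sum_{i,i'} r_{i,i'}
    assert avg >= 1.0 - h ** 2 / 2.0 - 1e-12   # left-hand side of (8)
```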
2307.02230
Statistical properties and lensing effect on the repeating fast radio burst FRB 180916.J0158+65
FRB 180916.J0158+65 is a well-known repeating fast radio burst with a period ($16.35~\rm days$) and an active window ($5.0~\rm days$). We present the statistical results of the dispersion measures and waiting times of the bursts of FRB 180916.J0158+65. We find that the dispersion measures at different frequencies show a bimodal distribution. The peaking dispersion measures of the left mode of the bimodal distributions decrease with frequency, while those of the right mode increase. The waiting times also present a bimodal distribution, peaking at 0.05622 s and 1612.91266 s. The peak waiting times are irrelevant to the properties of the bursts, either for the preceding or the subsequent burst. By comparing the statistical results with possible theoretical models, we suggest that FRB 180916.J0158+65 suffered from plasma lensing effects along the propagation path. Moreover, this source may originate from a highly magnetized neutron star in a high-mass X-ray binary.
Yu-Bin Wang, Abdusattar Kurban, Xia Zhou, Yun-Wei Yu, Na Wang
2023-07-05T12:21:59Z
http://arxiv.org/abs/2307.02230v1
# Statistical properties and lensing effect on the repeating fast radio burst FRB 180916.J0158+65

###### Abstract

FRB 180916.J0158+65 is a well-known repeating fast radio burst with a period (16.35 days) and an active window (5.0 days). We present the statistical results of the dispersion measures and waiting times of the bursts of FRB 180916.J0158+65. We find that the dispersion measures at different frequencies show a bimodal distribution. The peaking dispersion measures of the left mode of the bimodal distributions decrease with frequency, while those of the right mode increase. The waiting times also present a bimodal distribution, peaking at 0.05622 s and 1612.91266 s. The peak waiting times are irrelevant to the properties of the bursts, either for the preceding or the subsequent burst. By comparing the statistical results with possible theoretical models, we suggest that FRB 180916.J0158+65 suffered from plasma lensing effects along the propagation path. Moreover, this source may originate from a highly magnetized neutron star in a high-mass X-ray binary.

keywords: pulsars: general \(-\) stars: individual (FRB 180916.J0158+65) \(-\) transients: fast radio bursts \(-\) interstellar medium \(-\) scattering

## 1 Introduction

Fast radio bursts (FRBs) are mysterious, bright (emitting \(\sim\) Jy) radio sources that can release extremely high energy within a millisecond time scale. Up to now, more than 600 FRBs have been detected, and 24 of them are repeaters (Petroff et al., 2016; Luo et al., 2020; Amiri et al., 2021). Their unusually high dispersion measures (DMs) indicate that they are of extragalactic or cosmological origin rather than Galactic (Thornton et al., 2013). Recently, FRB 20200428 from the magnetar SGR 1935+2154 was detected with properties similar to those of the repeaters (Andersen et al., 2019), and the bursts of FRB 180301 show polarization angle swings analogous to those of pulsars (Luo et al., 2020). These suggest that the repeating sources could be powered by luminous coherent emission processes around pulsars or magnetars (Kumar et al., 2017; Andersen et al., 2019, 2020; Li et al., 2021). However, FRBs remain topical events whose physical nature is still mysterious.

The repeating FRBs may have two kinds of origins, i.e., isolated neutron stars (NSs) and NS binary systems (Platts et al., 2019; Kurban et al., 2022). An isolated NS requires the accumulation of enough energy to release the next burst randomly, which results in the property of aperiodicity. Thus, the investigation of the waiting time (\(\Delta t\)), the time interval between two adjacent bursts within an observational campaign, is particularly important for understanding the physical nature of FRBs. For example, waiting times with a unimodal distribution have been found in the burst activities of the magnetars in the Milky Way (Cheng et al., 2020; Younes et al., 2020) and in the giant pulses from isolated pulsars (Abbate et al., 2020; Kuiack et al., 2020; Geyer et al., 2021). However, the waiting times of FRB 121102 presented a bimodal distribution, which is uncorrelated with the fluences, peak flux densities, pulse widths, and high energy components of the bursts (Li et al., 2019; Li et al., 2021). The waiting times of FRB 20201124A also have a bimodal distribution (Xu et al., 2022). These indicate that some repeaters, like FRB 121102 and FRB 20201124A, may originate from activity in binary systems (Du et al., 2021; Geng et al., 2021; Wang et al., 2022) rather than from an isolated NS.
FRB 180916.J0158+65 exhibits similar observational properties to FRB 121102 and FRB 20201124A. For example, some bursts among the three repeaters present shorter delay times at low frequency than at high frequency (Chamma et al., 2021; Platts et al., 2021; Kumar et al., 2022). In addition, they are located behind a clump of plasma (Tendulkar et al., 2017; Marcote et al., 2020; Xu et al., 2022); they have an optical counterpart (Tendulkar et al., 2017; Li et al., 2022; Ravi et al., 2022); they have a periodicity and an active window (Amiri et al., 2020; Rajwade et al., 2020; Mao et al., 2022); and they produce delay times of about tens of milliseconds between bursts (Amiri et al., 2020; Li et al., 2021; Xu et al., 2022). These indicate that FRB 180916.J0158+65 may have similar statistical properties to FRB 121102. Thus, we present statistics of the DMs and waiting times of FRB 180916.J0158+65, which has been detected at frequencies from 110 MHz to 5.4 GHz. Coupled with the statistical results for the repeater, the effects of the propagation path (gravitational lensing and plasma lensing) will be discussed, which may help to reveal the physical nature of this source.

This paper is organized as follows. In Section 2, we present the statistical properties of FRB 180916.J0158+65, especially the waiting times and DMs. The lensing effects of the propagation path are discussed in Section 3. Finally, a summary and discussion are given in Section 4.

## 2 Statistical properties of FRB 180916.J0158+65

FRB 180916.J0158+65, with Galactic longitude \(l=129.7^{\circ}\) and latitude \(b=3.7^{\circ}\), is reported to have a possible \(P_{\rm orb}=16.35\) day periodicity and an approximately 5.0-day phase window. The Milky Way DM contribution is \(200\,{\rm pc\,cm^{-3}}\) (NE2001, Cordes & Lazio, 2002) or \(325.23\,{\rm pc\,cm^{-3}}\) (YMW16, Yao et al., 2017; Amiri et al., 2020). It is located in a nearby spiral galaxy with redshift \(z_{d}=0.0337\pm 0.0002\) and luminosity distance \(d_{\rm os}=149.0\pm 0.9\) Mpc (Marcote et al., 2020). The projected distance of the repeating source (roughly 4.7 kpc) is far from the core of the host galaxy. There is no other comparably large and bright galaxy in its observing field of view, but there is a young stellar clump with size \(\sim 380\,{\rm pc}\) around the repeater (the source environment is \(\sim 30-60\,{\rm pc}\)) (Marcote et al., 2020; Tendulkar et al., 2021).

Over 195 bursts have been detected from this source by different instruments at frequencies ranging from 110 MHz to 5.4 GHz (Amiri et al., 2020; Chawla et al., 2020; Marcote et al., 2020; Marthi et al., 2020; Pearlman et al., 2020; Pilia et al., 2020; Pastor-Marazuela et al., 2021; Pleunis et al., 2021; Bethapudi et al., 2022; Mckinven et al., 2023). These instruments include the 100-m Effelsberg Radio Telescope (Effelsberg; 4\(-\)5.4 GHz), the European VLBI Network (EVN; 1636\(-\)1764 MHz), the Westerbork Synthesis Radio Telescope (WSRT) with the Apertif Radio Transient System (Apertif; 1220\(-\)1520 MHz), the upgraded Giant Metrewave Radio Telescope (uGMRT; 550\(-\)750 MHz and 250\(-\)500 MHz), the Canadian Hydrogen Intensity Mapping Experiment Fast Radio Burst Project (CHIME/FRB; 400\(-\)800 MHz), the Green Bank Telescope (GBT; 300\(-\)400 MHz), the Sardinia Radio Telescope (SRT; 296\(-\)360 MHz), and the Low-Frequency Array (LOFAR; 110\(-\)188 MHz). The pulse widths of these bursts range from a few milliseconds to about one hundred milliseconds (Amiri et al., 2020; Marthi et al., 2020; Pleunis et al., 2021).
### 2.1 The statistical properties of DM

In this subsection, we present the statistical properties of the DMs of FRB 180916.J0158+65 observed by different telescopes. Their arrival times, telescope names, DMs, and references are listed in Table 1. Since all bursts observed by LOFAR with given DMs are around 160 MHz (Pastor-Marazuela et al., 2021; Pleunis et al., 2021), we take 160 MHz as the center frequency of the bursts observed by LOFAR in the following discussions. The DM variation of the burst (\(\Delta{\rm DM}\sim 21\,{\rm pc\,cm^{-3}}\)) at MJD 58883.02020163 is significantly higher than that of the other bursts detected by CHIME/FRB (Mckinven et al., 2023). However, the DM variations of other repeaters are within the range of \(20\,{\rm pc\,cm^{-3}}\) (Li et al., 2021; Xu et al., 2022). This means that the abnormal bursts reported by Pearlman et al. 2020 may have a distinctly different origin or de-dispersion algorithm, so we ignore these bursts here.

Figure 1 shows the MJD-DM relationship as well as a histogram of the DMs. We also performed a linear fit to the DMs; the slope is \(0.028(0.12)\,{\rm pc\,cm^{-3}\,yr^{-1}}\), suggesting that the DM increases over time. The Kolmogorov-Smirnov (K-S) test from scipy.stats.kstest is used to examine the Gaussianity of the DMs; the p-value (0.0012) is less than the significance level of 0.05. To perform a K-S test on randomly sampled DMs of all the data, we take the error bar of each DM observation as the standard deviation and sample 1000 points using a Gaussian random distribution. We find that this p-value is also less than the 0.05 significance level. The total DMs therefore do not follow a single Gaussian distribution but may have multiple components.

Figure 1: MJD-DM relation (upper) and histogram distribution of DM (bottom) with the 0.16 bin size. In the upper panel, the black, cyan, blue, green, red, brown and purple dots are the observational data from CHIME, EVN, Apertif, GBT, SRT, uGMRT (650 MHz) and LOFAR, respectively. The red dashed line in the upper panel is the linear fitting result.

To reveal the real distributions, the distribution density of DM with weighted errors at different frequencies has been used to estimate the distributions of DMs (Yusifov & Kucuk, 2004). Using this method, we find a relatively large difference between the estimated and actual DMs, which may be due to the small number of DM samples. As a result, the upper panel of Figure 2 depicts the kernel density estimation (KDE) of randomly sampled DMs at different frequencies. The sub-plot demonstrates the KDEs of DMs between \(350.3\,{\rm pc\,cm^{-3}}\) and \(350.9\,{\rm pc\,cm^{-3}}\). According to Figure 2, the DMs at three frequencies have two main peaks, but the correct distribution for 600 MHz may have multiple peaks. The histogram of DMs for 600 MHz is shown in the bottom panel of Figure 2. It also shows that the DMs have two main peaks; thus we take the peak DM around 349.734 pc cm\({}^{-3}\) at this frequency as the right peak. We use the KDE approach to evaluate the two DM peaks, repeating it 1000 times for each frequency. The mean value of the peaking DMs is computed for each peak, and their standard deviation is taken as the error. Table 2 shows the resulting peaking DMs at the four frequencies. The right peaking DM at 1370 MHz appears in only somewhat more than 250 of the 1000 KDE runs, causing a significantly larger error in Table 2, whereas the other peaking DMs appear in all 1000 runs. Thus, the DMs of the left peaks decrease as frequency increases, while the reverse holds for the right peaks.
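For illustration, the sampling, K-S test, and KDE steps described above can be sketched in Python as follows (the DM values and errors below are placeholders, not the measurements of Table 1):

```python
import numpy as np
from scipy import stats

dm = np.array([348.82, 349.50, 348.86, 349.74])     # placeholder DMs (pc cm^-3)
dm_err = np.array([0.10, 0.08, 0.16, 0.12])         # placeholder 1-sigma errors

rng = np.random.default_rng(0)
# Sample 1000 points per burst, taking each error bar as a Gaussian std.
samples = rng.normal(dm[:, None], dm_err[:, None], size=(dm.size, 1000)).ravel()

# K-S test against a Gaussian with the sample mean and standard deviation.
p = stats.kstest(samples, "norm", args=(samples.mean(), samples.std())).pvalue
print(f"K-S p-value: {p:.4g}")          # p < 0.05 rejects a single Gaussian

# Kernel density estimate; local maxima are candidate peaking DMs.
kde = stats.gaussian_kde(samples)
grid = np.linspace(samples.min(), samples.max(), 2000)
dens = kde(grid)
peaks = grid[1:-1][(dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])]
print("candidate peaking DMs:", peaks)
```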
### The properties of waiting time

The waiting time is calculated from two adjacent bursts within an observational campaign; thus bursts are taken into account from EVN (Marcote et al., 2020), Apertif (Pastor-Marazuela et al., 2021), GBT (Chawla et al., 2020), SRT (Pilia et al., 2020), uGMRT (Marthi et al., 2020; Pleunis et al., 2021), LOFAR (Pastor-Marazuela et al., 2021; Pleunis et al., 2021), and CHIME. Considering the lack of pulse width, fluence, and peak flux density in Mckinven et al. 2023, we will use the data given by Amiri et al. 2020 to maintain consistency in the following discussions. Since the bursts of the repeater appear at random within the observable frequency bandwidth and at occasional times in the phase window, the histogram of waiting times in logarithmic coordinates is given in Figure 3, which shows a typical bimodal distribution.

[Table 1: arrival time (MJD), telescope, DM (\({\rm pc\,cm^{-3}}\)), and reference for each burst, laid out in three column groups; the body of the table was lost in extraction.]

The waiting times of most bursts are in the range of 31.352 s to 11397.10608 s. In the right part, 89 waiting times are well fitted by a Gaussian function with a reduced Chi-Square of \(\sim 2.737\) and an Adjusted R-Square of \(\sim 0.918\); the peak value is at \(\Delta t=1612.91266\) s. We used the K-S test to examine the right part of the waiting times and found that the p-value (0.974) is greater than the 0.05 significance level. Thus a Gaussian distribution cannot be rejected for the waiting times in the right part. We are also aware that CHIME/FRB only has an exposure of 12 min or \(\sim\)40 min each time (Amiri et al., 2020; Chawla et al., 2020), and other telescopes usually have few-hour exposures each time, so it is difficult to see pairs separated by more than the CHIME/FRB observation length, which may lead to the decrease in counts above 10\({}^{3}\) s (17 min). However, our result is consistent with the burst rate (\(\sim\)1.8 bursts hr\({}^{-1}\)) of the repeater (Amiri et al., 2020; Chawla et al., 2020).

In the left part, seven waiting times vary around tens of milliseconds, appearing in the 300\(-\)800 MHz range and showing frequency dependence. The mean waiting times at different frequencies are 37.4 ms (350 MHz), 52.7 ms (600 MHz), and 86.4 ms (650 MHz), respectively. We also use a Gaussian function to fit the left part and obtain the peak value at \(\Delta t=0.05622\) s. The Anderson-Darling (A-D) test gives a statistic value (0.421) smaller than the critical value (0.742). These results suggest that the waiting time follows a log-normal distribution. We suspect that the decrease in waiting times below 10 ms may be related to the width of the bursts and to the definition of individual bursts, which may be caused by the different physical properties discussed in the following.
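The two-component Gaussian fit to the log waiting times can be sketched as follows, with synthetic samples standing in for the adjacent-burst pairs of the campaigns listed above.

```python
# Sketch of the bimodal fit of Figure 3: histogram of log10 waiting times with
# a 0.3 bin size, fitted by two Gaussians; the samples below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
wt = np.concatenate([rng.lognormal(np.log(0.05622), 0.5, 7),
                     rng.lognormal(np.log(1612.91), 1.0, 89)])  # seconds

x = np.log10(wt)
counts, edges = np.histogram(x, bins=np.arange(x.min(), x.max() + 0.3, 0.3))
centers = 0.5 * (edges[:-1] + edges[1:])

def two_gauss(t, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-(t - m1)**2 / (2 * s1**2))
            + a2 * np.exp(-(t - m2)**2 / (2 * s2**2)))

p0 = [3, np.log10(0.056), 0.2, 25, np.log10(1612.9), 0.4]
popt, _ = curve_fit(two_gauss, centers, counts, p0=p0, maxfev=10000)
print("fitted peaks (s):", 10**popt[1], 10**popt[4])
```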
The energy accumulated by the source for the next burst depends on the interval between two consecutive bursts. A possible correlation between the waiting time and the burst intensity could appear in the preceding or subsequent bursts, which is presented in Figure 4. The DMs of subsequent bursts have a higher mean value than those of the preceding bursts. The mean fluence and peak flux density of the subsequent bursts have relatively lower values, while the mean pulse width of the preceding bursts is narrower; this suggests that the width may behave oppositely to the mean fluence and peak flux density. All panels show scattered data points, so these parameters have no obvious correlation with the waiting time. We also take the observational data in Figure 4 to examine the correlations between these parameters and the frequency. In Figure 5, the pulse width, fluence, and peak flux density are anti-correlated with the frequency in log-log coordinates.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Center frequency & 160 MHz & 350 MHz & 600 MHz & 1370 MHz \\ \hline DM\({}_{\rm obs,L}\) (pc cm\({}^{-3}\)) & 348.988(1.9) & 348.854(10.2) & 348.820(6.9) & 348.755(12.6) \\ DM\({}_{\rm obs,R}\) (pc cm\({}^{-3}\)) & 349.410(2.0) & 349.453(12.3) & 349.742(8.1) & 350.436(110.5) \\ \(\Delta\)DM\({}_{\rm obs}\) (pc cm\({}^{-3}\)) & 0.422(2.8) & 0.599(16.0) & 0.992(10.6) & 1.774(111.2) \\ \hline \hline \end{tabular} \end{table} Table 2: Two peaking DMs at different frequencies, based on one thousand KDE realizations. The subscript ‘L’ denotes the DMs at the left peaks, ‘R’ the right ones.

Figure 3: Histogram of the distribution of the waiting time in units of seconds. The bin size is 0.3, and the red solid and black dashed lines are the Gaussian fits to the right and left parts, respectively.

Figure 2: (Upper panel) KDEs of the DMs at different frequencies. Each DM is sampled 1000 times using a Gaussian random distribution, with the error bar of each DM observation as the standard deviation. (Bottom panel) Histogram of DMs at 600 MHz. The red solid line is a two-component Gaussian fit with an Adjusted R-Square of 0.93; the peaks of the Gaussians are at 348.826 pc cm\({}^{-3}\) and 349.734 pc cm\({}^{-3}\), respectively.

The variations of the mean values of the three parameters for the preceding and subsequent bursts are independent of the frequency. Coupled with the results of the previous statistical analysis of the DMs and waiting times, external mechanisms and effects in the propagation path are a more probable contribution to the repeater.
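The statement that the burst parameters show no obvious correlation with the waiting time can be quantified with a simple rank-correlation check; the sketch below uses placeholder arrays in place of the extracted observational values.

```python
# Sketch of the correlation check behind Figure 4: Spearman rank correlation
# between waiting time and each burst parameter; arrays are placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
waiting = rng.lognormal(7.4, 0.9, 89)  # seconds, placeholder values
params = {"width": rng.lognormal(0.3, 0.4, 89),
          "fluence": rng.lognormal(0.8, 0.6, 89),
          "peak flux": rng.lognormal(-0.2, 0.5, 89),
          "DM": rng.normal(349.1, 0.4, 89)}

for name, vals in params.items():
    rho, p = spearmanr(waiting, vals)
    print(f"{name:>9s}: rho = {rho:+.2f}, p = {p:.2f}")  # |rho| ~ 0 expected
```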
## 3 The lensing effects on FRB 180916.J0158+65

In this section, we discuss multiple images caused by the lens. Furthermore, the delay time difference of multiple images within the lens model is calculated from the statistical results of the DMs. For axially symmetric gravitational lensing in vacuum, light rays propagate along null geodesics and are converged by the gravitational field. Since light rays from the two sides of the lens can have a propagation path difference, an observer may detect two frequency-independent images with different arrival times and magnifications; see, e.g., the review by Bisnovatyi-Kogan and Tsupko 2017. The delay time difference between two images should be smaller than \(15\,\mathrm{ms}\) for primordial black holes (Munoz et al., 2016). The two images with different propagation paths could also pass through different non-magnetized cold plasma. Thus the dispersive delay difference depends on the contribution of their DM difference (\(\Delta\mathrm{DM}(\nu)\)) and is simplified as \[\Delta t_{\mathrm{dis}}=4.15\,\mathrm{ms}\,\frac{\Delta\mathrm{DM}(\nu)}{\nu_{\mathrm{GHz}}^{2}}, \tag{1}\] where \(\nu_{\mathrm{GHz}}=\nu/\mathrm{GHz}\) is the frequency of the image in units of GHz. From Table 2, \(\Delta t_{\mathrm{dis}}\) at \(600\,\mathrm{MHz}\), \(350\,\mathrm{MHz}\) and \(160\,\mathrm{MHz}\) is \(11.44\,\mathrm{ms}\), \(20.29\,\mathrm{ms}\) and \(68.41\,\mathrm{ms}\), respectively. Even coupled with the gravitational lensing effects, these values are still incompatible with the results of Figure 3. Some other mechanisms may exist in the propagation path of the repeater.

The radio signals passing through a plasma lens can diverge into multiple images with different propagation paths. The DMs of the images are frequency-dependent and can produce multi-component DM distributions in the observations (our previous paper, Wang et al. 2022). Moreover, the delay time difference between two images depends on the variations of their DMs and propagation paths. Thus, the higher-order effects of DM variations should be introduced into the theoretical prediction of the delay time differences, which are roughly expressed as \[\Delta t_{\mathrm{PL}}=\chi+4.15\,\mathrm{ms}\,\frac{\Delta\mathrm{DM}}{\nu_{\mathrm{GHz}}^{2}}-b\,\frac{\Delta\mathrm{DM}^{2}}{\nu_{\mathrm{GHz}}^{4}}, \tag{2}\] where \(\chi\) is the geometric delay time difference contributed by the repeater itself (Wang et al., 2019) and by some plasma clouds with the relation \(\sum\limits_{i=1}^{n}\Delta\mathrm{DM}_{i}\propto\nu^{2}\) (Tuntsov et al., 2021); the second term on the right-hand side is the classical dispersion relation; the third term is the delay time difference due to geometric effects; and \(b\) is a free parameter. From the results at \(350\,\mathrm{MHz}\) and \(600\,\mathrm{MHz}\) in Section 2, we obtain \(\chi=52.51\,\mathrm{ms}\) and \(b=1.48\,\mathrm{ms}\) through Equation (2). Consequently, the delay time differences at \(160\,\mathrm{MHz}\) and \(1370\,\mathrm{MHz}\) are \(-281.38\,\mathrm{ms}\) and \(55.11\,\mathrm{ms}\), respectively.
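These numbers follow directly from Equations (1) and (2); the short sketch below reproduces them, assuming the left-peak mean waiting times (37.4 ms at 350 MHz and 52.7 ms at 600 MHz) are the \(\Delta t\) values used to fix \(\chi\) and \(b\), an assumption consistent with the quoted results.

```python
# Reproducing the delay-time numbers from Equations (1) and (2), with
# Delta DM from Table 2 (pc cm^-3) and frequencies in GHz.
import numpy as np

ddm = {0.160: 0.422, 0.350: 0.599, 0.600: 0.992, 1.370: 1.774}

def dt_dis(dDM, nu):                       # Equation (1), result in ms
    return 4.15 * dDM / nu**2

for nu in (0.600, 0.350, 0.160):
    print(f"{nu*1e3:.0f} MHz: dt_dis = {dt_dis(ddm[nu], nu):.2f} ms")

# Equation (2): two measurements give a linear system in (chi, b);
# we assume Delta t = 37.4 ms (350 MHz) and 52.7 ms (600 MHz).
A = np.array([[1.0, -ddm[0.350]**2 / 0.350**4],
              [1.0, -ddm[0.600]**2 / 0.600**4]])
y = np.array([37.4 - dt_dis(ddm[0.350], 0.350),
              52.7 - dt_dis(ddm[0.600], 0.600)])
chi, b = np.linalg.solve(A, y)
print(f"chi = {chi:.2f} ms, b = {b:.2f} ms")   # ~52.51 ms and ~1.48 ms

def dt_pl(dDM, nu):                        # Equation (2), result in ms
    return chi + 4.15 * dDM / nu**2 - b * dDM**2 / nu**4

for nu in (0.160, 1.370):
    print(f"{nu*1e3:.0f} MHz: dt_PL = {dt_pl(ddm[nu], nu):.2f} ms")  # -281.4, 55.1
```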
Due to the physical properties of the plasma lens, the frequencies of the corresponding DMs should be taken as positive values. Bursts produced by a single source cannot occupy two peaking DMs as \(\nu\rightarrow\infty\). We found that a two-component Gaussian function gives the best fit to the peaking DMs in Table 2 (an Adjusted R-square of \(\sim 0.967\)). For the DM distribution, the two peaking DMs at five frequencies, the DM differences (\(\Delta\mathrm{DM}_{\mathrm{G}}\)) for each frequency, and the delay time differences are given in Table 3. The delay time differences decrease with decreasing frequency, except for the value at \(160\,\mathrm{MHz}\). We found that \(\mathrm{DM}_{\mathrm{G,R}}\) at \(160\,\mathrm{MHz}\) is relatively lower than \(\mathrm{DM}_{\mathrm{obs,R}}\). The DMs listed in Table 1, except for the burst at MJD 58950.54130169, are around \(348.98\,\mathrm{pc\,cm^{-3}}\) at \(160\,\mathrm{MHz}\), and their scattering timescales are around \(46\,\mathrm{ms}\) at \(150\,\mathrm{MHz}\) (Pastor-Marazuela et al., 2021). All these results suggest that the scattering effect may contribute \(\sim 0.249\,\mathrm{pc\,cm^{-3}}\) to the DM of the burst at MJD 58950.54130169. Furthermore, this scattering effect can cause a more significant delay time difference than a single burst of width \(\gtrsim 40\,\mathrm{ms}\). Therefore, the delay time differences and the frequency-dependent DMs are consistent with the plasma lensing model (Wang et al., 2022). FRB 180916.J0158+65 may be affected by a plasma lens, with the two-component Gaussian model describing the DM distributions.

## 4 Summary and discussions

We investigate the repeating features of FRB 180916.J0158+65 in terms of statistics, especially the waiting times and DMs. The DMs have a distinct bimodal distribution at different frequencies. The peaking DM values increase as the frequency drops for the left distribution but decrease as the frequency drops for the right distribution, demonstrating that repeating bursts at different frequencies could come from different propagation paths (Wang et al., 2022). This viewpoint is supported by the frequency-dependent degree of linear polarization in repeating FRBs (Feng et al., 2022). We fit these DM peaks and find that a two-component Gaussian function is more appropriate for the DM peaks at different frequencies. The waiting times also show a discontinuous bimodal distribution peaking at \(56.22\,\mathrm{ms}\) and \(1612.91266\,\mathrm{s}\), and the mean waiting time in the left distribution decreases as the frequency decreases. The parameters of the preceding or subsequent bursts, including pulse width, fluence, peak flux density and DM, do not correlate with the waiting times. Considering the preceding and subsequent burst parameters, external effects may contribute to the repeater.

Based on the statistics of the observed bursts of FRB 180916.J0158+65, we have examined whether the repeater suffers from lensing effects in the propagation path. The delay time difference between two images produced by gravitational lensing is inconsistent with the waiting times. The higher-order term from the plasma lensing effects can be consistent with the left peak distribution of the waiting times and the variations of DMs, except at \(160\,\mathrm{MHz}\). Since the line-of-sight path is relatively clean at \(160\,\mathrm{MHz}\) (Pleunis et al., 2021), the scattering effect may contribute strongly to the burst in the right distribution of DMs. The frequency-dependent DM may reflect the properties of the near-source plasma and the intervening galaxy, such as supernova remnants around the source, pulsar wind nebulae produced by the source, HII regions, and the galactic halo (Yang and Zhang, 2017; Prochaska et al., 2019; Er and Mao, 2022). Multi-frequency observations of repeaters are an important way to reveal their propagation effects and to test the plasma lensing model.

FRB 180916.J0158+65 may originate from a binary system. For the NS\(-\)white dwarf (WD) scenario (Gu et al., 2016, 2020), the waiting times vary from \(100\,\mathrm{\,to\,}1.59392\times 10^{4}\,\mathrm{s}\) when the Eddington limit is considered (\(\dot{M}\lesssim 10^{18}\,\mathrm{g\,s^{-1}}\)). But the energy of a burst drawn from the gravitational potential energy of the accreted material is incompatible with the observations (Frank et al., 2002).
In the NS-asteroid belt collision scenario (Geng and Huang, 2015; Dai et al., 2016; Dai and Zhong, 2020), the typical energy of a burst (\(10^{36}\) - \(10^{38}\) erg) depends on the gravitational energy of the asteroid (Dai et al., 2016; Smallwood et al., 2019), but the long time to the next collision (\(\sim 0.8\) h) (Dai et al., 2016) is also inconsistent with the observations of the repeater. For the "cosmic comb" model (Zhang, 2017), the stellar wind in the NS-NS binary scenario or the NS-black hole binary scenario cannot be strong enough to explain the properties of the repeater FRB 180916.J0158+65 (Du et al., 2021). According to the maximal energy (\(\sim 6\times 10^{38}\) erg) of the repeater, the dipole magnetic field of the NS at 1 GHz requires \(B_{\rm d}\gtrsim 4.18\times 10^{13}\) G (Kumar et al., 2017; Metzger et al., 2019). For a high-mass X-ray binary (HMXB) system, the model requires a pulsar with a spin period of \(P\gtrsim 1.09\,{\rm s}\,B_{\rm d,12}^{1/3}\) (Walter et al., 2015). That means the highly magnetized NS in an HMXB may be the origin of FRB 180916.J0158+65. In addition, the multiple bursts in this model can be produced in the pulsar's magnetosphere (Wang et al., 2019; Levkov et al., 2022). The interval between two bursts is \(\lesssim 18.70\) ms when taking \(P=23.5\) s and a radial distance within the light cylinder radius, which suggests that the zero-order term of FRB 180916.J0158+65 given by Equation (2) may be mainly contributed by other effects in the propagation path (Tuntsov et al., 2021). To date, 3 of 24 repeaters, including FRB 180916.J0158+65, FRB 121102 and FRB 20201124A, have some similar statistical properties (Li et al., 2021b; Xu et al., 2022); FRB 20201124A may reside in a magnetar/Be star binary (Wang et al., 2022a). This type of repeater may be related to a highly magnetized NS in an HMXB.

Figure 4: Waiting time vs. pulse width (the upper left panel), fluence (the upper right panel), peak flux density (the lower left panel) and DM (the lower right panel) of FRB bursts. The black dots represent the data of the preceding bursts, the red dots those of the subsequent bursts, and the dashed lines are the mean values.

## Acknowledgements

This work is supported in part by the Opening Foundation of Xinjiang Key Laboratory (No. 2021D04016), the National Natural Science Foundation of China (Nos. 12033001, 12288102, 11833003, 12273028, 12041304), the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01A363), the Major Science and Technology Program of Xinjiang Uygur Autonomous Region (Nos. 2022A03013-1, 2022A03013-3), the Chinese Academy of Sciences (CAS) "Light of West China" Program (No. 2018-XBQNXZ-B-025), and the special research assistance project of the CAS. X. Z. would like to thank Qing-min Li for her fruitful suggestions.

## Data availability

Observational data used in this paper are quoted from the cited works. Additional data generated from computations can be made available upon reasonable request.
2306.05676
Improving quantum dot based single-photon source with continuous measurements
We propose a technique to improve the probability of single-photon emission with an electrically pumped quantum dot in an optical microcavity, by continuously monitoring the energy state of the dot and using feedback to control when to stop pumping. The goal is to boost the probability of single-photon emission while bounding the probability of two or more photons. We model the system by a stochastic master equation that includes post-measurement operations. Ideally, feedback should be based on the entire continuous measurement record, but in practice, it may be difficult to do such processing in real-time. We show that even a simple threshold-based feedback scheme using measurements at a single time can improve performance over deterministic (open-loop) pumping. This technique is particularly useful for strong dot-cavity coupling with lower rates of pumping, as can be the case for electrical pumping. It is also numerically tractable since we can perform ensemble averaging with a single master equation rather than averaging over a large number of quantum trajectories.
Anirudh Lanka, Todd Brun
2023-06-09T05:21:39Z
http://arxiv.org/abs/2306.05676v2
# Improving quantum dot based single-photon source with continuous measurements ###### Abstract We propose a technique to improve the probability of single-photon emission with an electrically pumped quantum dot in an optical microcavity, by continuously monitoring the dot's energy state and using feedback to control when to stop pumping. The goal is to boost the probability of single-photon emission while bounding the probability of two or more photons. We model the system by a stochastic master equation that includes post-measurement operations. Ideally, feedback should be based on the entire continuous measurement record, but in practice, it may be difficult to do such processing in real-time. We show that even a simple threshold-based feedback scheme using measurements at a single time can improve performance over deterministic (open-loop) pumping. This technique is particularly useful for strong dot-cavity coupling with lower rates of pumping, as can be the case for electrical pumping. It is also numerically tractable since we can perform ensemble averaging with a single master equation rather than averaging over a large number of quantum trajectories. ## I Introduction Single-photon sources that produce high-quality photons on demand would be a technological development of great importance [1; 2]. Such sources have a myriad of applications spanning quantum information processing and computing, efficient quantum key distribution (QKD) [3; 4; 5], metrology, and quantum imaging, among others. In this work, we are particularly interested in linear optical quantum computation (LOQC) for which a key requirement is the availability of high-quality single photons. Quantum processors utilizing photonic integrated circuitry (PIC) could be operated at room temperature and would not require expensive cryogenic cooling due to their weak coupling to the external environment [6]. In addition, photons propagate at the speed of light and offer large bandwidth for data transmission, making them efficient data carriers. Hence, realizing a perfect single-photon source with ideal emission characteristics is very much desired. While there exist many methods to produce single photons, a quantum-dot based approach is of particular interest due to their high excitation and collection efficiency. For useful quantum information processing, an ideal photonic quantum computer should have single photon sources that satisfy three main properties. First, the probability to generate a single photon should be close to one, while keeping the multi-photon emission rate below a maximum tolerable level (two goals that may be somewhat in conflict). Second, the single photons emitted should be indistinguishable, which means that dephasing should be suppressed [7; 8; 9; 10]. Indistinguishability requires that each photon emitted be absolutely identical to every other: they must have the same polarization, spatial mode and temporal profile. This property is particularly crucial for applications such as boson sampling [11]. A method to improve indistinguishability using continuous weak measurements was presented in [12]. Third, the emitted photons must be collected with near-unit efficiency in the preferred quantum channel. There are various processes that affect this: pumping the quantum dot, emission into the cavity, cavity leakage to non-waveguide modes, and photon loss, among others[13; 14]. 
Although most of these processes depend on fabrication techniques and the quality of materials used, the duration of pumping can be controlled and optimized, which is the main focus here. Photons are generated from a quantum dot by pumping it (either optically or electrically) under favorable bias conditions for a finite duration. The number of photons released depends on the duration of the pumping. Optical pumping is more straightforward experimentally; electrical pumping is better suited for large scale integration. Also, as electrical pumping does not directly insert photons into the cavity, it opens up the possibility of pumping directly into an energy level resonant with the cavity mode, which can reduce timing uncertainties and improve purity in photon emission [12]. In electrical pumping, a bias voltage is applied on a \(p\)-\(i\)-\(n\) diode in which the insulator region contains a quantum dot embedded in an optical microcavity. This allows for an electron to tunnel through from the \(n\)-region to the dot. Further pumping will then enable a hole to tunnel through from the \(p\)-region to the dot and recombine with the electron. Ideally, this is when the pumping should be stopped so that no new electrons or holes tunnel. After a successful recombination event occurs, the dot will release a photon into the microcavity where it ultimately leaks into an external waveguide. In the case of strong pumping--as optical pumping often is--the duration of pumping can be short compared to the emission time of the dot. This simplifies the pumping process, but to avoid scattering extra photons into the microcavity one must generally pump the dot to a nonresonant state and rely on incoherent processes to drop into the resonant state. These incoherent processes increase timing uncertainty, dephasing, and the likelihood of photon loss. Electrical pumping can avoid this by pumping directly into the resonant state. However, weakly pumped systems can complicate the process of exciting the quantum dot. For a successful recombination, the system might need to be pumped for times comparable to the emission time of the dot. This increases the probability either of multi-photon emission, on the one hand, or emitting no photons at all, on the other. For many applications, having multiple photons is more harmful than having a slightly lower single-photon probability. Thus, in this work, we always impose a constraint that the multi-photon emission rate is upper bounded by a small threshold. With this in place, we improve the single-photon probability by making continuous quantum measurements on the system. The idea is to monitor the dot's energy state continuously and determine when to stop pumping based on the information obtained. We show numerically that this mechanism substantially improves the single-photon probability while limiting the multi-photon emission rate in the weak pumping regime. This improvement exists even if we condition only on immediate measurement results, without the difficult task of processing an entire measurement record in real time. This method is also numerically tractable, since we can perform ensemble averaging with a single master equation rather than averaging over a large number of quantum trajectories. The emitter considered in this paper is a \(p\)-\(n\) diode operated as a single-photon LED, though these techniques are applicable to other systems. We discuss the device setup and the corresponding physical processes and interactions in Sec. II. 
We then discuss the evolution of the system in Sec. III; we present a stochastic master equation and discuss the relevant parameter regime of operation. We briefly describe the evolution in the deterministic pumping case in Sec. III.1 before introducing the threshold-based switching technique in Sec. III.2. We present our numerical results in Sec. IV: we first give the result in the deterministic case, and then analyze the numerical performance of the threshold-based switching technique. We conclude in Sec. V.

## II System model

The system we are considering is a quantum dot embedded in a microcavity, which is contained within an insulator sandwiched within a \(pn\)-junction diode. Fig. 1 illustrates this. We assume that the diode is in the Coulomb blockade regime. When it is forward biased, an electron from the \(n\)-region tunnels through to the quantum dot and raises its energy level. The energy level is reduced again when a hole tunnels from the \(p\)-region to the dot and recombines with the electron. This leads to the emission of a photon into the cavity, which subsequently leaks out into the waveguide.

While the photon generation process described above is intuitively straightforward, it is challenging to generate exactly one photon with high probability. Let a pumping cycle define the average tunneling duration that produces a single recombination event (and hence a single photon). The number of photons emitted then follows a Poisson counting process with the pumping cycle as the interarrival time. In the absence of specific information about tunneling times, the best that can be done is to time the pumping a priori, either to maximize the probability of a single photon, or to keep the multi-photon probability below a given threshold. (These are not the same, in general.)

### Formalism

Our model comprises four subsystems: a quantum dot, an optical microcavity, an external waveguide (which we model as a bath with a single oscillator mode), and a control switch. Thus, the Hilbert space is \[\mathcal{H}=\mathcal{H}_{dot}\otimes\mathcal{H}_{cavity}\otimes\mathcal{H}_{bath}\otimes\mathcal{H}_{control}. \tag{1}\] Our model of the dot includes two energy levels: the ground state \(|G\rangle\) and the excited state \(|X\rangle\). The cavity and the bath contain photons and are represented in the photon-number notation: \(|0\rangle\), \(|1\rangle\), \(|2\rangle\), etc. For computational purposes, we truncate this space to 3 dimensions, with \(|2\rangle\) representing the emission of 2 or more photons with probability \(p(2+)\). The control switch also has two energy levels, with \(|1\rangle\) (\(|0\rangle\)) representing the ON (OFF) state of the pumping.

Figure 1: Quantum dot in a microcavity embedded within a \(pn\) diode. The device comprises an insulator sandwiched between \(p\)- and \(n\)-type silicon. A quantum dot is fabricated inside the insulator, and this is contained within an optical microcavity. The diode is biased in the forward direction, such that a single electron tunnels through from the \(n\)-side to the dot. The electron remains in the dot until a hole tunnels through to the dot from the \(p\)-side. The electron-hole pair in the dot recombines to emit a photon into the cavity, which subsequently leaks out to the bath.

The photon emission probabilities \(p(0)\), \(p(1)\), and \(p(2+)\) can be calculated from the state of the bath subsystem. They quantitatively measure how well the system is performing.
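As a bookkeeping aid, the truncated space of Eq. (1) and the bath marginal can be encoded as follows; the tensor ordering convention and helper names are our own, not taken from the paper.

```python
# A small bookkeeping sketch for the truncated Hilbert space of Eq. (1),
# assuming the tensor ordering dot (x) cavity (x) bath (x) control.
import numpy as np

DIMS = (2, 3, 3, 2)   # dot {G,X}; cavity, bath {0,1,2+}; control {OFF,ON}

def index(dot, cav, bath, ctrl):
    """Flat basis index of |dot, cav, bath, ctrl>."""
    return int(np.ravel_multi_index((dot, cav, bath, ctrl), DIMS))

start = index(0, 0, 0, 1)    # |G,0,0,1>: pumping ON, everything empty
target = index(0, 0, 1, 0)   # |G,0,1,0>: one photon in the bath, pumping OFF

def photon_probs(rho):
    """p(0), p(1), p(2+) from the bath marginal of a density matrix on DIMS."""
    return np.diag(rho).real.reshape(DIMS).sum(axis=(0, 1, 3))
```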
Ideally, we want the single-photon probability \(p(1)\) to be close to one. Unfortunately, that is not possible for many practical parameter regimes. Moreover, certain applications are intolerant of multi-photon emission. Thus, we desire a parameter regime that maximizes \(p(1)\) subject to the constraint that \(p(2+)<\epsilon\), for a small \(\epsilon>0\).

### Interactions

The total Hamiltonian can be written as \(\hat{H}=\hat{H}_{S}+\hat{H}_{I}\), where we model the dot-cavity interaction by the Jaynes-Cummings Hamiltonian: \[\hat{H}_{I}=i\hbar g(\hat{a}^{\dagger}\hat{\sigma}^{-}-\hat{a}\hat{\sigma}^{+}), \tag{2}\] where \(g\) is the interaction strength, \(\hat{a}^{\dagger}\) (\(\hat{a}\)) is the creation (annihilation) operator acting on the cavity mode, \(\hat{\sigma}^{-}=\ket{G}\bra{X}\) and \(\hat{\sigma}^{+}=\ket{X}\bra{G}\). In the interaction picture, the entire Hamiltonian is just \(\hat{H}_{I}\). The system is initially decoupled with the pumping ON, in the state \(\ket{G,0,0,1}\). Starting from \(\ket{G,0,0,1}\), we wish to transition into the state \(\ket{G,0,1,0}\), representing the dot in the ground state along with a single photon in the bath while the pumping is turned OFF. We model the system dynamics by a set of coherent and incoherent processes as follows:

* **Electrical pumping**. This is the process of electron tunneling from the \(n\)-region to the dot under favourable bias conditions. Since this is stochastic, it is modeled as an incoherent \(G\to X\) transition at a rate \(\Omega\).
* **Dot-cavity coupling**. This is a coherent process described by the Hamiltonian (2) with coupling strength \(g\). After the electron recombines with the hole, the dot transitions back to the ground state while emitting a photon into the cavity.
* **Spontaneous emission**. Instead of emission into the cavity, it is possible that the excitation will be lost by emission into a different mode, or by photon absorption. This incoherent process is the \(X\to G\) transition at rate \(\Gamma\).
* **Dephasing**. Once the dot is excited to \(\ket{X}\), there exist Rabi oscillations between the states \(\ket{X,0}\) and \(\ket{G,1}\) (with the two modes corresponding to the dot and the cavity) due to the interaction Hamiltonian. Superpositions of these states can evolve into mixtures by two processes: coupling to an external environment (whether or not the system is being monitored), or the effect of a continuous measurement (in the monitored case). These can both be modeled as dephasing at a rate \(\gamma\).
* **Photon leakage**. This is the process of a photon transitioning from the cavity mode to the bath mode (waveguide) at a rate \(\kappa\). Treating this as an incoherent process allows us to use a simple one-mode model for the bath.

## III System and evolution

Fig. 2 illustrates the energy ladder and allowed transitions for the system, in which solid lines depict coherent evolution and dashed lines represent incoherent processes. The dot will be pumped as long as the control mode is in the \(\ket{1}\) state, during which the system moves up the energy ladder. Depending on the dot's energy state, the control mode has different rates to make a transition from \(\ket{1}\rightarrow\ket{0}\). (The rate is faster when the dot is in the state \(\ket{X}\).) Both the cavity and the bath can in principle hold infinitely many photons, but we are only interested in producing single photons with high probability.
We assume that the cavity-bath coupling is stronger than the dot-cavity coupling, and hence we truncate the higher energy levels in Fig. 2. This means that the state \(\ket{G,0,2,1}\) represents the emission of 2 or more photons, and the corresponding probability is the multi-photon probability \(p(2+)\). Note that this approximation neglects some effects that could contribute in principle to the single photon probability, such as sequences of re-absorption and spontaneous emission. However, since we start with the dot in the ground state, and the spontaneous emission rate is assumed to be low, the neglected effects should have little impact on the single photon probability estimates.

In general, quantum dynamics are described by quantum dynamical maps, \[\rho(t)=\Phi_{t}[\rho(0)], \tag{3}\] where \(\Phi_{t}\) is a completely positive trace-preserving (CPTP), time-dependent map with \(\Phi_{0}=I\). In the case of a Markovian quantum master equation, the dynamical map superoperator \(\Phi_{t}\) can be written as \[\Phi_{t}=T\exp\left[\int_{0}^{t}\mathcal{L}_{\tau}d\tau\right], \tag{4}\] where \(T\) denotes Dyson time ordering and \(\mathcal{L}_{\tau}\) is the time-dependent GKSL generator [15], \[\mathcal{L}_{\tau}\rho=-i[H,\rho]+\sum_{k}\alpha_{k}\left(L_{k}\rho L_{k}^{\dagger}-\frac{1}{2}\left\{L_{k}^{\dagger}L_{k},\rho\right\}\right), \tag{5}\] where \(\{L_{k}\}\) are the Lindblad operators with corresponding rates \(\alpha_{k}\geq 0\). Therefore, Eq. (3) becomes \[\frac{d}{dt}\rho(t)=\mathcal{L}\rho(t). \tag{6}\] In our case, the quantum map does not change with time. Hence, we have a time-independent GKSL generator and Eq. (4) becomes \(\Phi_{t}=e^{\mathcal{L}t}\). We can then evolve the state \(\rho_{0}\) by solving the vector differential equation, \[\rho(t)=e^{\mathcal{L}t}\rho(0)=\sum_{j}c_{j}e^{\lambda_{j}t}\mathbf{v}_{j}, \tag{7}\] where \(\mathbf{v}_{j}\) is an eigenvector of \(\mathcal{L}\) with the corresponding eigenvalue \(\lambda_{j}\) and coefficient \(c_{j}\). As \(t\rightarrow\infty\), all the components in Eq. (7) with \(\mathrm{Re}\{\lambda_{j}\}<0\) decay to \(0\), and only the stable components (with \(\mathrm{Re}\{\lambda_{j}\}=0\)) remain. These states represent the asymptotic states that result from the quantum evolution.

In our case, we wish to pump the quantum dot (which can be thought of as an evolution in accordance with GKSL generator \(\mathcal{A}\)) for a certain amount of time (say, \(T_{s}\)), then turn off the pumping and continue the evolution (now with different evolution dynamics, in accordance with GKSL generator \(\mathcal{B}\)). The quantum state during the transition between the two evolution procedures can be expressed as \[\rho(T_{s})=\sum_{k}\beta_{k}\mathbf{u}_{k}=\sum_{j}\alpha_{j}e^{\lambda_{j}T_{s}}\mathbf{v}_{j}, \tag{8}\] where \(\mathbf{v}\) (respectively, \(\mathbf{u}\)) are the eigenvectors of \(\mathcal{A}\) (respectively, \(\mathcal{B}\)), with eigenvalues \(\lambda\) (\(\lambda^{\prime}\)) and vector coefficients \(\alpha\) (\(\beta\)). At times before \(T_{s}\) the system evolves using the first set of eigenvectors and eigenvalues; after \(T_{s}\), the evolution continues using the new eigenvectors and eigenvalues. (One must therefore change basis at time \(T_{s}\).) In the next section, we explain the evolution under the deterministic pumping scenario; in the section after, we focus on the main proposal of this paper: the threshold-based switching technique with the aid of continuous measurements.
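As an illustration of Eqs. (7)-(8), the sketch below builds the matrix of a GKSL generator acting on vectorized density matrices, evolves by eigendecomposition, and switches generators at \(T_{s}\); the two-level toy system and its rates are placeholders, not the full dot-cavity-bath-control model.

```python
# A numerical sketch of Eqs. (7)-(8) using column-stacking vectorization:
# vec(A X B) = (B^T kron A) vec(X).
import numpy as np

def liouvillian(H, jumps):
    """Matrix of L acting on vec(rho); jumps is a list of (L_k, rate) pairs."""
    d = H.shape[0]
    I = np.eye(d)
    M = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for L, r in jumps:
        LdL = L.conj().T @ L
        M += r * (np.kron(L.conj(), L)
                  - 0.5 * np.kron(I, LdL) - 0.5 * np.kron(LdL.T, I))
    return M

def evolve(M, rho_vec, t):
    lam, V = np.linalg.eig(M)          # assumes M is diagonalizable
    c = np.linalg.solve(V, rho_vec)    # the coefficients c_j of Eq. (7)
    return V @ (c * np.exp(lam * t))

sp = np.array([[0, 0], [1, 0]], dtype=complex)       # sigma^+ (G -> X), |G> = e0
H = np.zeros((2, 2), dtype=complex)
A = liouvillian(H, [(sp, 0.1), (sp.conj().T, 0.001)])  # generator with pumping
B = liouvillian(H, [(sp.conj().T, 0.001)])             # pumping switched off

rho0 = np.zeros((2, 2), dtype=complex)
rho0[0, 0] = 1.0                                   # start in |G><G|
rho_Ts = evolve(A, rho0.reshape(-1, order="F"), 10.0)  # evolve with A to T_s
rho_end = evolve(B, rho_Ts, 5e3)                   # re-expand in B's eigenbasis
print(np.round(rho_end.reshape(2, 2, order="F").real, 3))
```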
### Deterministic pumping

We introduce the system evolution in the simplest case by giving a brief description of the deterministic pumping scenario. This is the case with no information about the pumping duration. Since there are no measurements, the system evolves deterministically, which corresponds to zero dephasing, \(\gamma=0\) (though in general, of course, there could be dephasing due to environmental interactions; for the moment we assume this is negligible on these time scales). Also, since there is no way to know the dot's energy, the control subsystem is trivial and can be dropped from the system. Then the evolution of the system can be described by the following Lindblad master equation: \[d\rho=-\frac{i}{\hbar}[\hat{H},\rho]dt+\left(\Omega\mathcal{H}[\hat{\sigma}^{+}]+\Gamma\mathcal{H}[\hat{\sigma}^{-}]+\kappa\mathcal{H}[\hat{a}\hat{b}^{\dagger}]\right)\rho dt, \tag{9}\] where \(a^{\dagger}\) (respectively, \(b^{\dagger}\)) represents the creation operator of the cavity (bath), and \(\mathcal{H}\) is a superoperator (a Lindblad term) defined as: \[\mathcal{H}[\hat{A}]\rho=\hat{A}\rho\hat{A}^{\dagger}-\frac{1}{2}\{\hat{A}^{\dagger}\hat{A},\rho\}. \tag{10}\]

Figure 2: STATE DIAGRAM. A state of the system is described by four quantum numbers: the dot’s energy level, the cavity’s photon number, the external waveguide’s photon number, and the ON/OFF control signal for the pumping. This diagram only includes states with at most two quanta of energy, and shows the dynamical processes that connect them. Broken arrows indicate incoherent processes, and solid arrows indicate coherent evolution. The system is initialized in the state \(|G,0,0,1\rangle\), and the goal is to obtain the state \(|G,0,1,0\rangle\) (corresponding to the green box) with finite pumping. Other possible final states that are undesired are \(|G,0,0,0\rangle\), \(|X,1,0,1\rangle\), and \(|X,0,1,1\rangle\) (corresponding to the red boxes). States with higher energy have been omitted, but could be reached via the \(|X,1,0,1\rangle\) and \(|X,0,1,1\rangle\) states.

The first Lindblad term represents pumping of the quantum dot, with rate \(\Omega\); the second represents spontaneous emission, with rate \(\Gamma\); and the third represents leakage from the cavity to the external waveguide (bath) with rate \(\kappa\). The coupling between the quantum dot and the cavity is contained in the Hamiltonian of form (2), and has coupling strength \(g\). In deterministic evolution, the system evolves according to this master equation up to a fixed time \(T_{s}\), at which point the pumping is turned off, and the evolution continues with \(\Omega=0\). The changeover from one evolution equation to another is done using the basis change in Eq. (8) at time \(T_{s}\).

### Threshold-based switching

As mentioned earlier, deterministic pumping may not lead to efficient generation of single photons due to the stochastic nature of the pumping cycle. We thus consider a technique that weakly measures the energy of the quantum dot at each instant and uses that as feedback to control the pumping. To obtain the energy of the dot, we measure the projection operator onto the excited state of the dot. Hence the measurement outcome should be a noisy estimate of the population in the excited state. This measurement outcome is then used to control the pumping by simply "flagging" whether the energy exceeds a given threshold \(\tau\).
To guard against a false-negative flag, the pumping is turned off after a maximum pumping time \(T_{s}\) even if the threshold was never exceeded. Just as in the deterministic case, the changeover from one evolution equation to another is done using the basis change in Eq. (8) at time \(T_{s}\). The threshold \(\tau\) and maximum time \(T_{s}\) are chosen to optimize the performance of this protocol.

This threshold-based approach is not optimal; by switching using only the measurement output at one time, we neglect the possibility of extracting information from the entire measurement record. However, there are reasons why we might prefer such a threshold-based protocol. It is much simpler, with lower latency, than storing and processing an entire measurement record, which may not be practical in real time. Moreover, solving a single master equation is sufficient to estimate the performance of this method for a given set of parameters, rather than averaging over many stochastic quantum trajectories.

The output signal obtained from the continuous weak measurement (in rescaled units) is given by \[I(t)=\langle\hat{\mathcal{P}}_{X}\rangle(t)+\beta\frac{dW_{t}}{dt}, \tag{11}\] where \(\beta=(\eta\gamma)^{-1/2}\). Here, \(\gamma\) is the measurement rate, \(\eta\) is the measurement efficiency and \(dW_{t}\) is the Wiener process with variance \(dt\). So \(\mathbb{E}[dW_{t}]=0\) and \(\mathbb{E}[dW_{t}dW_{s}]=\delta(t-s)dsdt\). \(\hat{\mathcal{P}}_{X}=\left|X\right\rangle_{\text{dot}}\left\langle X\right|\) is the projection operator onto the dot's excited state, and \(\langle\cdot\rangle=\text{Tr}[\cdot\rho(t)]\) is the quantum expectation.

As described in Sec. II, the pumping of the dot is regulated by the control mode. In the approach we are considering, the control mode evolves based only on the current, weakly-measured state of the quantum dot; it does not have access to the measurement record at earlier times. The goal is to stop pumping as soon as the dot is excited to a higher energy level, which involves switching the control mode from \(\left|1\right\rangle\rightarrow\left|0\right\rangle\). This switching is based on whether the measurement output in Eq. (11) exceeds a threshold \(\tau\). It can be modeled in the master equation as an incoherent process in which the switching of the control mode occurs at a rate that depends on the dot's energy state: if the dot is in the ground state, the rate is lower than when the dot is in a higher energy level.

To calculate these rates and their dependence on \(\tau\), first consider the output signal averaged over a succession of intervals of size \(\Delta t\). Since it is a Gaussian random variable, the probability of the dot's energy exceeding a threshold \(\tau\) is given by \[\begin{split} p(\bar{I}>\tau)&=\int_{\tau}^{\infty}p(\bar{I})\,d\bar{I}\\ &\approx\frac{\exp\left(-\eta\gamma\Delta t\left(\frac{\tau}{\Delta t}-\mu\right)^{2}/2\right)}{\sqrt{2\pi\eta\gamma\Delta t}(\frac{\tau}{\Delta t}-\mu)},\end{split} \tag{12}\] where \(\mu=\langle\hat{\mathcal{P}}_{X}\rangle\). A detailed derivation of the above approximation is given in Appendix A. Then the rate at which we stop pumping is equal to \[\nu_{\mu}=\frac{p(\bar{I}>\tau)}{\Delta t}. \tag{13}\] Clearly, this quantity depends on the quantum dot's energy state.
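To make the measurement record concrete, the sketch below generates the signal of Eq. (11) for a dot held in the excited state and applies a windowed threshold test; all parameter values are illustrative choices.

```python
# A sketch of the record of Eq. (11): white noise of strength
# beta = (eta*gamma)^(-1/2) on top of <P_X>(t), averaged over windows
# of Delta t and compared to the threshold tau.
import numpy as np

rng = np.random.default_rng(42)
eta, gamma, dt = 1.0, 10.0, 1e-3
beta = 1.0 / np.sqrt(eta * gamma)

steps = 2000
mu = np.ones(steps)                        # dot held in |X>, so <P_X> = 1
dW = rng.normal(0.0, np.sqrt(dt), steps)   # Wiener increments, variance dt
I = mu + beta * dW / dt                    # raw signal, Eq. (11)

window = 100                               # Delta t = window * dt = 0.1
I_bar = I.reshape(-1, window).mean(axis=1)
tau = 0.5
print("fraction of windows exceeding tau:", (I_bar > tau).mean())
```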
Moreover, the rates to stop pumping corresponding to the quantum dot being completely in the ground state and in the excited state are related to each other (as derived in Appendix B): \[\nu_{0}\approx\nu_{1}\exp\left[-\sqrt{(2\eta\gamma\Delta t)\ln\left(\frac{1}{\nu_{1}\Delta t\sqrt{2\pi\eta\gamma\Delta t}}\right)}\right]. \tag{14}\] Finally, the evolution of the system under the threshold-based switching approach is described by the Lindblad master equation (15): \[d\rho=-\frac{i}{\hbar}[\hat{H},\rho]dt+\left(\Omega\mathcal{H}[\hat{\sigma}^{+}\hat{\xi}]+\Gamma\mathcal{H}[\hat{\sigma}^{-}]+\kappa\mathcal{H}[\hat{a}\hat{b}^{\dagger}]+\gamma\mathcal{H}[\hat{\mathcal{P}}_{X}]+\nu_{1}\mathcal{H}[\hat{c}\,\hat{\mathcal{P}}_{X}]+\nu_{0}\mathcal{H}[\hat{c}\,(I-\hat{\mathcal{P}}_{X})]\right)\rho dt, \tag{15}\] where \(\hat{\xi}=|1\rangle_{\text{control}}\langle 1|\) is non-zero only when the control is not \(|0\rangle\), and \(\hat{c}\) is the annihilation operator for the control signal.

### Parameter regime

The spontaneous emission rate \(\Gamma\) should be much smaller than the pumping rate \(\Omega\), which translates to a favourable forward bias of the diode. Electrical pumping typically corresponds to weak coupling between the dot and the cavity (small \(g\)). We also assume that the photon leakage rate \(\kappa\) is dominant, so that photons in the cavity are promptly transferred to the external waveguide. (In practice, there are some trade-offs between these parameters: high \(\kappa\) limits the quality factor \(Q\) of the cavity, which reduces \(g\) and may prevent us from making \(\Gamma\) as small as we might like. We do not explicitly model these trade-offs in this work.) Since this technique involves continuous measurements of the dot that are generally slower than the coherent evolution, we assume that the dot-cavity coupling strength \(g\) is comparable to the pumping rate, to ensure that there is sufficient time to make a decision to turn off the pumping. However, in the case where \(g\ll\Omega\), this approach still works by turning on the pumping for a time much less than the emission time scale. In this regime, feedback gives little advantage. Considering the above constraints, the following parameter regime is where we expect this technique to give a significant advantage in performance: \[\Gamma\ll\nu_{0}<\Omega,g<\nu_{1}\leq\gamma,\kappa. \tag{16}\]

## IV Numerical simulations

Above, we presented an approach to improve the probability of emitting single photons using threshold-based switching. It is a simple method, since it only requires comparing the instantaneous measurement output to a fixed threshold, and numerically tractable, since we can perform ensemble averaging with a single master equation. The decision to turn off the pumping is implicit within the master equation. Moreover, the controllable parameters \(\nu_{1}\) (or, equivalently, \(\tau\)), \(\gamma\) and stopping time \(T_{s}\) can be set such that we never perform worse than the deterministic (open-loop) case for the same fixed parameter values \(\Omega\), \(g\) and \(\kappa\) (assuming \(\Gamma\) is very small). As we will see, in some parameter ranges we do significantly better. We first benchmark the results obtained by the deterministic approach, and then compare them with those obtained by the threshold-based switching technique. We optimize the parameters to find the maximum attainable single-photon probability while also limiting the multi-photon probability.
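A small numerical illustration of the stopping rates, with illustrative (not optimized) values of \(\eta\), \(\gamma\), \(\Delta t\) and \(\tau\), is:

```python
# nu_mu from the Gaussian tail of Eqs. (12)-(13), compared with the
# approximate relation of Eq. (14); parameter values are illustrative.
import numpy as np

eta, gamma, Dt = 1.0, 10.0, 0.01

def nu(mu, tau):
    """Stopping rate when <P_X> = mu, per Eqs. (12)-(13)."""
    x = tau / Dt - mu               # the tail form requires tau/Dt > mu
    p_exceed = (np.exp(-eta * gamma * Dt * x**2 / 2)
                / (np.sqrt(2 * np.pi * eta * gamma * Dt) * x))
    return p_exceed / Dt

tau = 0.06
nu1, nu0 = nu(1.0, tau), nu(0.0, tau)     # excited vs. ground state
nu0_eq14 = nu1 * np.exp(-np.sqrt(
    2 * eta * gamma * Dt
    * np.log(1 / (nu1 * Dt * np.sqrt(2 * np.pi * eta * gamma * Dt)))))
print(f"nu1 = {nu1:.2f}, nu0 = {nu0:.2f}, Eq.(14) estimate = {nu0_eq14:.2f}")
```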
We expect that this method will be particularly useful with relatively high measurement strength. Although that translates to a high dephasing rate, we show that it is possible to choose a parameter regime that still outperforms the deterministic case.

### Time scale

The dynamics of our system depend on a multitude of Lindblad rates: \(\Omega,g,\Gamma,\gamma\), and \(\kappa\). We rescale all the parameters with respect to the rate of emission into the waveguide, \(\kappa\). This establishes the dimensionless units for the simulation. In these units, the parameter value ranges we are particularly interested in are: \[\Omega,g \in[0.01,0.1], \tag{17}\] \[\gamma \in\{0.1,1.0,10.0\}, \tag{18}\] \[\kappa =1.0, \tag{19}\] \[\Gamma \ll 0.1. \tag{20}\]

### Parameter optimization

Here, we consider evolution with deterministic pumping for a fixed time \(T_{s}\). Our goal is to maximize the single-photon probability while keeping the multi-photon probability below a certain limit \(\epsilon\). (This optimization approach can be generalized to the threshold-based switching technique with some additions, as we will see.) To evolve the system deterministically, we numerically integrate \(d\rho\) as defined in Eq. (9) using the \(4^{\text{th}}\)-order Runge-Kutta method.

Figure 3: DETERMINISTIC EVOLUTION. We plot the asymptotic photon probabilities \(p_{0}\), \(p_{1}\) and \(p_{2+}\) as a function of the stopping time \(T_{s}\). The parameter values for this example are \(\Omega=0.1\), \(g=0.1\), \(\Gamma=0.001\) and \(\kappa=1.0\). The system begins in the state \(|G,0,0\rangle\). When \(T_{s}=0\), we do not pump at all, hence \(p_{0}(0)=1\) asymptotically. As we increase \(T_{s}\), the system undergoes non-trivial evolution according to Eq. (9); \(p_{1}\) reaches its maximum value and then starts to decrease, while \(p_{2+}\) increases monotonically with \(T_{s}\) as the system keeps emitting more photons into the waveguide.

Fig. 3 shows the asymptotic probabilities (i.e., at long times after all emissions have occurred) of emitting photons as a function of the stopping time, \(p_{i}(T_{s})\) where \(i\in\{0,1,2+\}\), corresponding to the zero-photon, single-photon and multi-photon probabilities. The figure shows a particular set of Lindblad rates, but the qualitative behavior is similar for all parameter values. Let us define two times: \(t^{\prime}\), the time when \(p_{2}(t^{\prime})=\epsilon\), and \(t^{\prime\prime}=\arg\max_{t}p_{1}(t)\). We observe that, depending on the parameter regime, we are in one of two cases: \(t^{\prime}<t^{\prime\prime}\) or \(t^{\prime}\geq t^{\prime\prime}\). In the former case, we have not reached the maximum attainable \(p_{1}\), but we have already reached the maximum tolerable \(p_{2}\). In the latter case, we have already passed the maximum attainable \(p_{1}\). Hence, the optimal time to stop pumping is the minimum of \(t^{\prime}\) and \(t^{\prime\prime}\). Suppose we wish to keep \(p_{2}(T_{s})\leq\epsilon\). Then for each set of parameters \(\Omega,g\), we calculate the following: \[t_{\epsilon} \equiv t^{\prime}\;\;\text{where}\;\;p_{2}(t^{\prime})=\epsilon, \tag{21}\] \[t_{m} \equiv\operatorname*{arg\,max}_{t}p_{1}(t), \tag{22}\] \[t_{\text{opt}} =\min(t_{\epsilon},t_{m}). \tag{23}\] Then the maximum attainable single-photon probability while keeping the multi-photon probability below \(\epsilon\) is \(p_{1}(t_{\text{opt}})\). In the deterministic case, there is an unavoidable trade-off between \(p_{2}\) and \(p_{1}\); if we want to limit the former, we cannot maximize the latter.
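A minimal version of this sweep, integrating Eq. (9) with RK4 on the truncated dot-cavity-bath space and then applying Eqs. (21)-(23) on a coarse grid of stopping times, might look as follows; the cutoffs, grids, and \(\epsilon\) are illustrative choices.

```python
# Deterministic sweep: integrate Eq. (9) with 4th-order Runge-Kutta, then
# extract t_eps, t_m, t_opt per Eqs. (21)-(23) on a grid of stopping times.
import numpy as np

n = 3                                           # photon cutoff: 0, 1, 2+
a = np.diag(np.sqrt(np.arange(1, n)), 1)
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma^-, basis (G, X)
kron3 = lambda A, B, C: np.kron(np.kron(A, B), C)
I2, I3 = np.eye(2), np.eye(n)

SP = kron3(sm.conj().T, I3, I3)                 # pumping operator sigma^+
SM = SP.conj().T
AB = kron3(I2, a, a.conj().T)                   # cavity -> bath leakage a b^dag
ADSM = kron3(sm, a.conj().T, I3)
H = 1j * 0.1 * (ADSM - ADSM.conj().T)           # Eq. (2) with g = 0.1, hbar = 1
Omega, Gamma, kappa = 0.1, 0.001, 1.0

def lind(L, rho):                               # Lindblad term, Eq. (10)
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def drho(rho, Om):                              # Eq. (9)
    return (-1j * (H @ rho - rho @ H) + Om * lind(SP, rho)
            + Gamma * lind(SM, rho) + kappa * lind(AB, rho))

def rk4(rho, Om, T, dt=0.02):
    for _ in range(int(round(T / dt))):
        k1 = drho(rho, Om); k2 = drho(rho + 0.5*dt*k1, Om)
        k3 = drho(rho + 0.5*dt*k2, Om); k4 = drho(rho + dt*k3, Om)
        rho = rho + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return rho

Ts_grid = np.arange(0.0, 40.0, 2.0)
p1, p2 = [], []
for Ts in Ts_grid:
    rho = np.zeros((18, 18), dtype=complex); rho[0, 0] = 1.0   # |G,0,0>
    rho = rk4(rk4(rho, Omega, Ts), 0.0, 150.0)  # pump, then let emissions finish
    pops = np.diag(rho).real.reshape(2, n, n).sum(axis=(0, 1))
    p1.append(pops[1]); p2.append(pops[2])      # pops[2] is the 2+ probability
p1, p2 = np.array(p1), np.array(p2)

eps = 0.05
ok = p2 <= eps
t_eps = Ts_grid[ok][-1] if ok.any() else Ts_grid[0]   # Eq. (21), on this grid
t_m = Ts_grid[np.argmax(p1)]                          # Eq. (22)
t_opt = min(t_eps, t_m)                               # Eq. (23)
print(f"t_opt = {t_opt}, p1(t_opt) = {p1[Ts_grid == t_opt][0]:.3f}")
```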
Continuous measurements and feedback can ease this trade-off. Optimization in the threshold-based switching technique imposes additional constraints on \(\nu_{1}\) and \(\gamma\). In this paper, we only considered a discrete set of values of \(\gamma\), thus easing the optimization procedure, but numerically optimizing over a full range of values could be done. We analytically find the asymptotic probabilities \(p_{i}(\nu_{1},T_{s})\), for \(i\in\{0,1,2+\}\), as functions of \(\nu_{1}\) and \(T_{s}\). Then, following a similar approach as above, for each set of parameters \(\Omega,g,\gamma\), we calculate the following: \[t_{\epsilon}(\nu_{1}) \equiv t^{\prime}\;\;\text{where}\;\;p_{2}(\nu_{1},t^{\prime})=\epsilon, \tag{24}\] \[p_{\epsilon}^{(1)} \equiv\max_{\nu_{1}}p_{1}(\nu_{1},t_{\epsilon}(\nu_{1})), \tag{25}\] \[\nu_{1}^{\prime},t^{\prime} =\operatorname*{arg\,max}_{\nu_{1},t}p_{1}(\nu_{1},t), \tag{26}\] \[p_{1}^{\prime} =p_{1}(\nu_{1}^{\prime},t^{\prime}), \tag{27}\] \[p_{2}^{\prime} =p_{2}(\nu_{1}^{\prime},t^{\prime}). \tag{28}\] Then the maximum attainable single-photon probability while keeping the multi-photon probability below \(\epsilon\) is \[p_{1}^{*}=\begin{cases}p_{1}^{\prime},&\text{if}\;p_{2}^{\prime}\leq\epsilon\\ p_{\epsilon}^{(1)},&\text{otherwise}\end{cases} \tag{29}\]

### Effect of pumping rate

It is straightforward to see that the single photon probability increases as we increase the pumping rate. But for a weakly pumped system, such as the electrical pumping considered in this work, we wish to have a good \(p_{1}\) even in the low pumping rate regime. The threshold-based switching technique addresses this issue by "flagging" the control signal as soon as the dot is excited to a higher energy level. This in turn allows the system to promptly switch off the pumping. In Fig. 4, we plot the maximum attainable single photon probability as a function of the pumping rate \(\Omega\). There are 4 cases corresponding to different measurement strengths \(\gamma\): (\(i\)) deterministic pumping (no measurement), (\(ii\)) low measurement rate (\(\gamma=0.1\)), (\(iii\)) intermediate measurement rate (\(\gamma=1\)), and (\(iv\)) high measurement rate (\(\gamma=10\)); the high measurement rate (\(iv\)) leads to the best performance, while the low measurement rate (\(ii\)) performs the worst among the measurement-based schemes, although still marginally better than the deterministic case. A lower pumping rate means that the electron tunneling events are spread out in time, thus giving the threshold-based switching technique a long time to make the right decision. For higher rates, this time window is diminished, leading to a smaller performance improvement.

Figure 4: Maximum attainable single photon probability as a function of pumping rate \(\Omega\) for \(g=0.1\). There are 4 cases corresponding to different measurement strengths \(\gamma\): (\(i\)) deterministic pumping (no measurement), (\(ii\)) low measurement rate (\(\gamma=0.1\)), (\(iii\)) intermediate measurement rate (\(\gamma=1\)), (\(iv\)) high measurement rate (\(\gamma=10\)). The high measurement rate (\(iv\)) leads to the best performance, while the low measurement rate (\(ii\)) performs the worst among the measurement-based schemes, although still marginally better than the deterministic case.

### Effect of coupling strength

At low coupling strengths between the dot and the cavity, the threshold-based switching method is not particularly useful. It does not perform appreciably better than the deterministic case.
However, as the coupling strength is increased, a significant fraction of the population in the \(|X,0,0\rangle\) state is transferred to the \(|G,1,0\rangle\) state very quickly. If the pumping is still turned on, more photons are added to the cavity, reducing the single photon probability. However, if the pumping is turned off just as the dot reaches the excited state, the coupling rate will not strongly affect the single photon probability. This is illustrated in Fig. 5. In the low-coupling regime, both methods produce single photons with about \(85\%\) probability. As we increase the coupling strength, the single photon rate drops rapidly for the deterministic method, while the threshold-based approach keeps it at \(85\%\).

## V Conclusions

Threshold-based switching is a simple yet powerful technique that can significantly improve the single-photon probability without increasing the multi-photon probability, by continuously measuring a quantum dot's energy and using feedback to control when to stop pumping. We modeled the system by a stochastic master equation that includes post-measurement operations. This technique is particularly useful at low pumping rates and high coupling rates, where it outperforms the deterministic case. However, in regimes of strong pumping or low coupling, the improvement over deterministic pumping (no monitoring at all) is only marginal. Numerical simulations showed that even a simple threshold-based feedback scheme using continuous measurements can improve performance over deterministic (open-loop) pumping in certain parameter regimes, and always performs at least as well.

###### Acknowledgements.

The authors would like to thank Yi-Hsieng Chen, Namit Anand, Christopher Sutherland and Prithviraj Prabhu for useful discussions. This work is supported by the National Science Foundation under Award No. 1911089.
2302.06850
A new boundary of the mapping class group
Based on the action of the mapping class group on the space of measured foliations, we construct a new boundary of the mapping class group and study the structure of this boundary. As an application, for any point in Teichmuller space, we consider the orbit of this point under the action of the mapping class group and describe the closure of this orbit in the Thurston compactification and the Gardiner-Masur compactification of Teichmuller space. We also construct some new points in the Gardiner-Masur boundary of Teichmuller space.
Lixin Liu, Yaozhong Shi
2023-02-14T05:57:34Z
http://arxiv.org/abs/2302.06850v1
# A new boundary of the mapping class group

###### Abstract

Based on the action of the mapping class group on the space of measured foliations, we construct a new boundary of the mapping class group and study the structure of this boundary. As an application, for any point in Teichmuller space, we consider the orbit of this point under the action of the mapping class group and describe the closure of this orbit in the Thurston compactification and the Gardiner-Masur compactification of Teichmuller space. We also construct some new points in the Gardiner-Masur boundary of Teichmuller space.

_Keywords_: Mapping class group, Measured foliation, Teichmuller space

_MSC (2010)_: 30F60, 32G15, 57M99

## 1 Introduction

In order to study the structure of a group \(G\), it is natural to equip \(G\) with a boundary. For example, considering the Cayley graph of a finitely generated group and rescaling the lengths of the edges by a summable function, we obtain a compact completion of the graph under this new metric. The boundary of this completion is the Floyd boundary of \(G\) (see [5]). Besides, a probability measure \(\nu\) on \(G\) determines a random walk on \(G\). There is also a boundary determined by this random walk, which is called the Poisson boundary. The Poisson boundary is strongly related to the harmonic functions corresponding to the random walk (see [10]).

Let \(S\) be an oriented surface of genus \(g\) with \(n\) punctures. We assume that \(N=3g-3+n\geq 1\). In this paper, we study a special group: the mapping class group \(Mod(S)\) of \(S\). \(Mod(S)\) acts on two spaces: \(\mathcal{T}(S)\) and \(\mathcal{MF}\), where \(\mathcal{T}(S)\) is the Teichmuller space of \(S\) and \(\mathcal{MF}\) is the space of measured foliations on \(S\). Let \(\mathcal{PMF}=\mathcal{MF}-\{0\}/R_{+}\) be the space of projective measured foliations. Based on the action of \(Mod(S)\) on \(\mathcal{T}(S)\), \(\mathcal{MF}\) and \(\mathcal{PMF}\), we can study the structure of \(Mod(S)\). The most important result in the study of the mapping class group may be the Nielsen-Thurston classification theorem, which states that every \(f\in Mod(S)\) is one of three special types: periodic, reducible, or pseudo-Anosov. The structure of subgroups of \(Mod(S)\) was also studied via the action of \(Mod(S)\) on \(\mathcal{T}(S)\) and \(\mathcal{MF}\) (see [14], [9], etc.). Different boundaries of \(Mod(S)\) were studied by various people (see [13], [11], [8], etc.). In particular, Kaimanovich and Masur (see [11]) studied the Poisson boundary of \(Mod(S)\). They proved that under some natural conditions, the Poisson boundary of \(Mod(S)\) is \(\mathcal{PMF}\) equipped with a unique measure. In order to obtain their main result, they considered the action of \(Mod(S)\) on \(\mathcal{PMF}\) and analyzed the asymptotic behaviour of the action of an infinite sequence \(\{f_{n}\}_{n=1}^{\infty}\) on \(\mathcal{PMF}\) (see Subsection 1.5 in [11]). Inspired by their idea, we study the asymptotic behaviour of \(\{f_{n}\}_{n=1}^{\infty}\) on \(\mathcal{MF}\).
In the special case that \(f_{n}=f^{n}\) for a fixed \(f\in Mod(S)\), we know that (1) when \(f\) is a Dehn Twist determined by a simple closed curve \(\alpha\), \(\lim_{n\rightarrow\infty}\frac{1}{n}f^{n}(F)=i(\alpha,F)\alpha\) for any \(F\in\mathcal{MF}\); (2) when \(f\) is a pseudo-Anosov map with \(\lambda>1\), \(f(F^{u})=\lambda F^{u}\), \(f(F^{s})=\lambda^{-1}F^{s}\) and \(i(F^{u},F^{s})=1\), \(\lim_{n\rightarrow\infty}\lambda^{-n}f^{n}(F)=i(F^{s},F)F^{u}\) for any \(F\in\mathcal{MF}\). A natural generalization of these two classical results is **Problem 1.1**.: _Is the action of \(Mod(S)\) on \(\mathcal{MF}\) "projectively precompact"? That is, for any sequence \(f_{n}\in Mod(S)\), are there a subsequence \(f_{n_{k}}\) and a sequence of positive numbers \(t_{k}\) such that \(t_{k}f_{n_{k}}:\mathcal{MF}\rightarrow\mathcal{MF}\) converges to some map \(f_{0}:\mathcal{MF}\rightarrow\mathcal{MF}\)?_ Note that it is necessary to take a subsequence \(f_{n_{k}}\) and a positive scalar \(t_{k}\) in Problem 1.1, since without these two operations, a generic sequence \(f_{n}\) is not convergent. We settle Problem 1.1 by embedding \(Mod(S)\) into an appropriate space and constructing a new boundary of \(Mod(S)\). For this, we need some notations (see Section 3). Let \(\Omega(\mathcal{MF})\) be the set of all homogeneous measurable functions from \(\mathcal{MF}\) to \(\mathcal{MF}\). Note that \(R_{+}\) acts on \(\Omega(\mathcal{MF})\) by multiplication. Let \(P\Omega(\mathcal{MF})=\Omega(\mathcal{MF})-\{0\}/R_{+}\) be the projective space of \(\Omega(\mathcal{MF})\). Endow \(\Omega(\mathcal{MF})\) with the topology of pointwise convergence and \(P\Omega(\mathcal{MF})\) with the quotient topology. Considering the action of \(Mod(S)\) on \(\mathcal{MF}\), there is a natural map \(I:Mod(S)\to P\Omega(\mathcal{MF})\) (see Section 3). Up to a finite normal subgroup \(ker(I)\), we can identify \(Mod(S)\) with its image \(I(Mod(S))\), which is denoted by \(E\) for simplicity. Thus \(Mod(S)\) is nearly embedded into \(P\Omega(\mathcal{MF})\) and \(P\Omega(\mathcal{MF})\) is the appropriate space for settling Problem 1.1. With these notations, we prove that the closure \(Cl(E)\) of \(E=I(Mod(S))\) in \(P\Omega(\mathcal{MF})\) is metrizable and compact (Theorem 3.1). Note that this result answers Problem 1.1 completely: by the definition of \(P\Omega(\mathcal{MF})\), identifying \(Mod(S)\) with \(E=I(Mod(S))\), the compactness of \(Cl(E)\) means that for any sequence \(f_{n}\in Mod(S)\), there are a subsequence \(f_{n_{k}}\) and a sequence of positive numbers \(t_{k}\) such that \(t_{k}f_{n_{k}}:\mathcal{MF}\rightarrow\mathcal{MF}\) converges to a map \(f_{0}:\mathcal{MF}\rightarrow\mathcal{MF}\). Identifying \(Mod(S)\) with \(E=I(Mod(S))\), \(\partial E=Cl(E)-E\) is a boundary of \(Mod(S)\). For the structure of \(\partial E\), we prove (see Section 4) * In \(Cl(E)\), \(E\) is discrete and \(\partial E\) is closed (Proposition 4.1). * The operations of multiplication and inverse on \(Mod(S)\) extend continuously to \(Cl(E)\) (Propositions 4.3, 4.4, 4.5). But \(Cl(E)\) is not itself a group (Remark 4.6). * Any point \(p\in\partial E\) can be represented as \(\left[\sum_{i=1}^{k}i(E_{i},\cdot)F_{i}\right]\), where \(\{F_{i}\}\) (resp. \(\{E_{i}\}\)) are pairwise disjoint measured foliations (Theorem 4.8). * Some special points of \(\partial E\) are constructed (Propositions 4.13, 4.15, 4.16). In particular, \(\partial E\neq\emptyset\).
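As a quick illustration connecting the classical limits above with the boundary description just listed (this merely combines case (1) with Proposition 4.13(1) and Theorem 4.8 below): for a single Dehn Twist \(T_{\alpha}\), the limit \(\lim_{n\rightarrow\infty}\frac{1}{n}T_{\alpha}^{n}(F)=i(\alpha,F)\alpha\) says exactly that \[\lim_{n\rightarrow\infty}[T_{\alpha}^{n}]=[i(\alpha,\cdot)\alpha]\in\partial E,\] which is the representation \(\left[\sum_{i=1}^{k}i(E_{i},\cdot)F_{i}\right]\) of Theorem 4.8 with \(k=1\) and \(E_{1}=F_{1}=\alpha\).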
We also consider the actions of \(Mod(S)\) on the Thurston compactification \(\mathcal{T}^{Th}(S)\) and the Gardiner-Masur compactification \(\mathcal{T}^{GM}(S)\) of \(\mathcal{T}(S)\). The Thurston boundary is precisely \(\mathcal{PMF}\), while the structure of the Gardiner-Masur boundary \(GM\) is more complex. See [6], [15], [16] and [18] for more details on the Gardiner-Masur boundary. \(Mod(S)\) acts on \(\mathcal{T}^{Th}(S)\) and \(\mathcal{T}^{GM}(S)\) naturally. Considering the actions of \(Mod(S)\) on \(\mathcal{T}^{Th}(S)\) and \(\mathcal{T}^{GM}(S)\), we have two maps \[\Pi_{Th}:Mod(S)\times\mathcal{T}^{Th}(S)\rightarrow\mathcal{T}^{Th}(S),\,(f,p)\mapsto f(p)\] and \[\Pi_{GM}:Mod(S)\times\mathcal{T}^{GM}(S)\rightarrow\mathcal{T}^{GM}(S),\,(f,p)\mapsto f(p).\] If we endow \(Mod(S)\) with the discrete topology, then \(\Pi_{Th}\) and \(\Pi_{GM}\) are both continuous. Since \(Cl(E)=E\bigcup\partial E\) is a completion of \(Mod(S)\) in some sense, we extend the domains of \(\Pi_{Th}\) and \(\Pi_{GM}\) to \(Cl(E)\times\mathcal{T}^{Th}(S)\) and \(Cl(E)\times\mathcal{T}^{GM}(S)\), respectively (see Theorem 5.1 and Remark 5.2). As an application, we prove Theorem 5.4, which answers the following problem: **Problem 1.2**.: _For any \(x_{0}\) in \(\mathcal{T}(S)\), considering the orbit \(\Gamma(x_{0})\) of \(x_{0}\) under the action of \(Mod(S)\), how to describe the closure of \(\Gamma(x_{0})\) in \(\mathcal{T}^{Th}(S)\) or \(\mathcal{T}^{GM}(S)\)?_ Besides, using the new boundary \(\partial E\), we construct some new points in the Gardiner-Masur boundary of \(\mathcal{T}(S)\) (see Remark 5.7). The new boundary \(\partial E\) is related to a special boundary of \(Mod(S)\). For a base point \(x\in\mathcal{T}(S)\), identifying \(Mod(S)\) with the orbit \(\Gamma(x)\) of \(x\) by the map \(Mod(S)\ni f\mapsto f(x)\in\Gamma(x)\) and then taking the boundary in \(\mathcal{T}^{Th}(S)\), we get a boundary of \(Mod(S)\) depending upon the base point \(x\). Note that this boundary is indeed the whole \(\mathcal{PMF}\) (see Theorem 5.4). Thus we may call it the "Thurston boundary" of \(Mod(S)\) with base point \(x\). The new boundary \(\partial E\) then covers each "Thurston boundary" of \(Mod(S)\) (see Remark 5.6). It may be interesting to study the relations between the new boundary \(\partial E\) of \(Mod(S)\) and some known boundaries of \(Mod(S)\), such as the Floyd boundary, the Poisson boundary, etc. We will study these relations in future work. Besides, Hamenstadt introduced a new boundary of \(Mod(S)\) (see Section 8 in [7]). It is also interesting to compare our new boundary with Hamenstadt's boundary. We may consider a more general space than \(\mathcal{MF}\), that is, the space of geodesic currents (see [1]). Note that the space of geodesic currents includes \(\mathcal{MF}\) and \(\mathcal{T}(S)\). Since the construction of the new boundary \(\partial E\) is based on the action of \(Mod(S)\) on \(\mathcal{MF}\) and \(Mod(S)\) also acts continuously on the space of geodesic currents, it is natural to ask the following interesting problem: **Problem 1.3**.: _If we replace the space of measured foliations by the space of geodesic currents, does the construction of the new boundary work?_ This paper is organized as follows. In Section 2, we introduce background materials on measured foliations, Teichmuller space and the action of the mapping class group. In Section 3, we construct the new boundary of \(Mod(S)\). In Section 4, we study the structure of the new boundary.
In Section 5, we give some applications of our new boundary. ## 2 Preliminaries ### Measured foliations Let \(\mathcal{S}=\mathcal{S}(S)\) be the set of isotopy classes of essential simple closed curves on \(S\). For any \(\alpha,\beta\) in \(\mathcal{S}\), denote by \(i(\alpha,\beta)\) the geometric intersection number between \(\alpha\) and \(\beta\). Let \(R_{\geq 0}=\{x\in R:x\geq 0\}\) and \(R_{+}=\{x\in R:x>0\}\). Let \(R_{\geq 0}^{\mathcal{S}}\) be the space of non-negative functions on \(\mathcal{S}\) endowed with the topology of pointwise convergence. Denote the set of weighted simple closed curves by \(R_{+}\times\mathcal{S}=\{t\cdot\alpha:t>0,\alpha\in\mathcal{S}\}\). It is known that \[i_{*}:R_{+}\times\mathcal{S}\to R_{\geq 0}^{\mathcal{S}},\] \[t\cdot\alpha\mapsto t\cdot i(\alpha,\cdot)\] is injective and induces a topology on \(R_{+}\times\mathcal{S}\). With this topology, \(i_{*}\) is an embedding. The closure of \(i_{*}(R_{+}\times\mathcal{S})\) in \(R_{\geq 0}^{\mathcal{S}}\) is called the space of measured foliations on \(S\), which is denoted by \(\mathcal{MF}\). \(R_{+}\) acts on \(R_{\geq 0}^{\mathcal{S}}\) by multiplication. Denote \(R_{\geq 0}^{\mathcal{S}}-\{0\}/R_{+}\) by \(PR_{\geq 0}^{\mathcal{S}}\) and \(\mathcal{MF}-\{0\}/R_{+}\) by \(\mathcal{PMF}\). \(\mathcal{PMF}\) is called the space of projective measured foliations. For \(F\in\mathcal{MF}-\{0\}\), denote by \([F]\in\mathcal{PMF}\) the projective class of \(F\). Note that \(\mathcal{S}\) is embedded in \(PR_{\geq 0}^{\mathcal{S}}\), and the closure of \(\mathcal{S}\) in \(PR_{\geq 0}^{\mathcal{S}}\) is \(\mathcal{PMF}\). It is well known that \(\mathcal{MF}\) is homeomorphic to \(R^{6g-6+2n}\) and \(\mathcal{PMF}\) is homeomorphic to \(S^{6g-7+2n}\) (see [4]). For two weighted simple closed curves \(t\alpha,s\beta\in R_{+}\times\mathcal{S}\), define their intersection number by the homogeneous equation \(i(t\alpha,s\beta)=tsi(\alpha,\beta)\). Then the intersection number function \(i\) extends continuously to \(i:\mathcal{MF}\times\mathcal{MF}\to R_{\geq 0}\). Any \(F\in\mathcal{MF}-\{0\}\) is represented by a singular foliation with a transverse measure \(\mu\) in the sense that for any simple closed curve \(\alpha\), \[i(F,\alpha)=\inf_{\alpha^{\prime}}\int_{\alpha^{\prime}}d\mu,\] where the infimum is over all simple closed curves \(\alpha^{\prime}\) homotopic to \(\alpha\). Besides, we need the definition of the ergodic decomposition of a measured foliation. A saddle connection of a foliation is a leaf connecting two singularities (not necessarily distinct). The critical graph of a foliation is defined to be the union of all saddle connections. The complement of the critical graph contains finitely many connected components. Each connected component is either a cylinder swept out by closed leaves or a so-called minimal component (in which each leaf is dense). On every minimal component \(D\), there exists a finite set of ergodic transverse measures \(\mu_{1},...,\mu_{n}\) such that any transverse measure \(\mu\) on \(D\) can be written as \(\mu=\sum_{j=1}^{n}f_{j}\mu_{j}\) for some non-negative coefficients \(\{f_{j}\}\). An indecomposable component of a measured foliation is either a cylinder with a positive weight or a minimal component \(D\) with an ergodic measure \(\mu_{j}\). A measured foliation is called indecomposable if it contains only one indecomposable component.
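For a concrete example of these notions: if \(\gamma_{1},\gamma_{2}\) are disjoint, non-isotopic essential simple closed curves, then each weighted curve \(t\gamma_{i}\) (\(t>0\)) is an indecomposable measured foliation consisting of a single cylinder component, while \[F=2\gamma_{1}+3\gamma_{2},\qquad i(2\gamma_{1},3\gamma_{2})=0,\qquad[2\gamma_{1}]\neq[3\gamma_{2}],\] has two indecomposable components and is therefore not indecomposable; this sum is exactly the ergodic decomposition described in the next paragraph.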
With these notations, any measured foliation \(F\) can be uniquely represented as \[F=\sum_{i=1}^{k}F_{i},\] where each \(F_{i}\) is an indecomposable measured foliation such that \(i(F_{i},F_{j})=0\) and \([F_{i}]\neq[F_{j}]\) for \(i\neq j\). We call this the ergodic decomposition of \(F\). We say that finitely many simple closed curves \(\alpha_{1},\alpha_{2},...,\alpha_{k}\) fill up \(S\) if for any \(F\in\mathcal{MF}-\{0\}\), \[\sum_{i=1}^{k}i(\alpha_{i},F)>0.\] We need the following result (Lemma 6.3, [17]). **Lemma 2.1**.: _Let \(\{F_{i}:i=0,1,2,...,k\}\) be some projectively distinct indecomposable measured foliations such that \(i(F_{i},F_{j})=0\,(i\neq j)\). Then for any \(\epsilon>0\), there exists a simple closed curve \(\alpha\) such that_ \[i(F_{i},\alpha)<\epsilon i(F_{0},\alpha),\,i=1,2,...,k.\] ### Teichmuller space and its compactifications Let \(\mathcal{T}(S)\) be the Teichmuller space of \(S\). There are two equivalent definitions of \(\mathcal{T}(S)\): the set of isotopy classes of hyperbolic metrics on \(S\) and the set of isotopy classes of conformal structures on \(S\). For two hyperbolic metrics \(m_{1},m_{2}\) of finite area on \(S\), \(m_{1}\) is equivalent to \(m_{2}\) if there exists an orientation-preserving homeomorphism \(f:S\to S\) isotopic to the identity map such that \(f_{*}m_{1}=m_{2}\), where \(f_{*}m_{1}\) is the push-forward of \(m_{1}\) by \(f\). \(\mathcal{T}(S)\) is defined to be the set of equivalence classes of hyperbolic metrics of finite area on \(S\). For two conformal structures \(\mu_{1},\mu_{2}\) on \(S\), \(\mu_{1}\) is equivalent to \(\mu_{2}\) if there exists an orientation-preserving homeomorphism \(f:S\to S\) isotopic to the identity map such that \(f_{*}\mu_{1}=\mu_{2}\), where \(f_{*}\mu_{1}\) is the push-forward of \(\mu_{1}\) by \(f\). \(\mathcal{T}(S)\) is also defined to be the set of equivalence classes of conformal structures on \(S\). By the uniformization theorem, these two definitions are consistent. For the definition corresponding to hyperbolic metrics, we consider the hyperbolic length function on \(\mathcal{T}(S)\). For any \(x\in\mathcal{T}(S)\) and \(\alpha\in\mathcal{S}\), let \(l(x,\alpha)\) be the hyperbolic length of the geodesic isotopic to \(\alpha\) in the hyperbolic metric corresponding to \(x\). The hyperbolic length of a simple closed curve extends continuously to the hyperbolic length of a measured foliation. The map \[l(\cdot,\cdot):\mathcal{T}(S)\times\mathcal{MF}\to R,\] \[(x,F)\mapsto l(x,F)\] is continuous. Thurston constructed a compactification of Teichmuller space by the hyperbolic lengths of simple closed curves. Define a map \(\widetilde{\varphi}_{Th}\) by \[\widetilde{\varphi}_{Th}:\mathcal{T}(S)\to R_{\geq 0}^{\mathcal{S}},\] \[x\mapsto(l(x,\alpha))_{\alpha\in\mathcal{S}}.\] Let \(pr:R_{\geq 0}^{\mathcal{S}}-\{0\}\to PR_{\geq 0}^{\mathcal{S}}\) be the projective map. Then the map \(\varphi_{Th}=pr\circ\widetilde{\varphi}_{Th}\) is an embedding and the closure of the image is compact. Moreover, \(Cl(\mathcal{T}(S))-\mathcal{T}(S)=\mathcal{PMF}\). Thus we have a compactification of \(\mathcal{T}(S)\) denoted by \(\mathcal{T}^{Th}(S)=\mathcal{T}(S)\bigcup\mathcal{PMF}\). \(\mathcal{T}^{Th}(S)\) is the Thurston compactification and \(\mathcal{PMF}\) is the Thurston boundary.
A sequence \(\{x_{n}\}_{n=1}^{\infty}\) in \(\mathcal{T}(S)\) converges to a boundary point \([F]\) in \(\mathcal{PMF}\) if and only if there exists a positive sequence \(\{t_{n}\}_{n=1}^{\infty}\) such that \(\lim_{n\to\infty}t_{n}=0\) and \(\lim_{n\to\infty}t_{n}l(x_{n},\alpha)=i(F,\alpha)\) for any \(\alpha\in\mathcal{S}\). For the definition corresponding to conformal structures, we consider the extremal length function on \(\mathcal{T}(S)\). For any \(x\in\mathcal{T}(S)\) and \(\alpha\in\mathcal{S}\), let \(Ext(x,\alpha)\) be the extremal length of \(\alpha\) in the conformal structure corresponding to \(x\). The extremal length of a simple closed curve extends continuously to the extremal length of a measured foliation. For more details on extremal length, see [12]. The map \[Ext(\cdot,\cdot):\mathcal{T}(S)\times\mathcal{MF}\to R,\] \[(x,F)\mapsto Ext(x,F)\] is continuous. Gardiner and Masur constructed a compactification of Teichmuller space by the extremal lengths of simple closed curves in [6]. Define a map \(\widetilde{\varphi}_{GM}\) by \[\widetilde{\varphi}_{GM}:\mathcal{T}(S)\to R_{\geq 0}^{\mathcal{S}},\] \[x\mapsto(Ext^{\frac{1}{2}}(x,\alpha))_{\alpha\in\mathcal{S}}.\] The map \(\varphi_{GM}=pr\circ\widetilde{\varphi}_{GM}\) is an embedding and the closure of the image is compact. Thus we have a compactification of \(\mathcal{T}(S)\) denoted by \(\mathcal{T}^{GM}(S)=\mathcal{T}(S)\bigcup GM\). \(\mathcal{T}^{GM}(S)\) is the Gardiner-Masur compactification and \(GM\) is the Gardiner-Masur boundary. Different from the Thurston boundary \(\mathcal{PMF}\), the structure of the Gardiner-Masur boundary \(GM\) is much more complex. For more details on its structure, see [15], [16], [18]. ### The action of the mapping class group Let \(Mod(S)\) be the mapping class group of the surface \(S\), which is the set of isotopy classes of orientation-preserving homeomorphisms of \(S\). \(Mod(S)\) acts on \(\mathcal{MF}\) and \(\mathcal{T}(S)\) by push-forward. Precisely, for \(f\in Mod(S)\) and \(x\in\mathcal{T}(S)\), if \(m\) and \(\mu\) are a hyperbolic metric and a conformal structure in the equivalence class \(x\), respectively, define \(f(x)\) to be the equivalence class of \(f_{*}m\) or \(f_{*}\mu\). For a measured foliation \((F,\nu)\), define \(f(F,\nu)\) to be \((f(F),f_{*}\nu)\). Its action on \(\mathcal{T}(S)\) extends naturally to the Thurston compactification and the Gardiner-Masur compactification of \(\mathcal{T}(S)\). For more details on \(Mod(S)\), see [3]. In this paper, we use the following convention: for any \(x\in\mathcal{T}(S)\), \(f\in Mod(S)\), \(F\in\mathcal{MF}\), \[l(f(x),F)=l(x,f^{-1}(F)),\,Ext(f(x),F)=Ext(x,f^{-1}(F)).\] ## 3 Construction of the new boundary Based on the action of \(Mod(S)\) on the measured foliation space \(\mathcal{MF}=\mathcal{MF}(S)\), we construct a new boundary of \(Mod(S)\) in this section. For any \(f\in Mod(S)\), \(f\) acts on \(\mathcal{MF}\) as a homogeneous continuous map \(f:\mathcal{MF}\to\mathcal{MF}\), which is measurable in particular. Recall that \(f\) is homogeneous if \(f(kF)=kf(F)\) for any \(F\in\mathcal{MF}\) and \(k\geq 0\). Denote the set of all homogeneous measurable maps from \(\mathcal{MF}\) to \(\mathcal{MF}\) by \(\Omega(\mathcal{MF})\). We endow \(\Omega(\mathcal{MF})\) with the topology of pointwise convergence. Since \(R_{+}\) acts on \(\mathcal{MF}\) by multiplication, multiplying any \(f\in\Omega(\mathcal{MF})\) by a positive number \(k\), we get a homogeneous measurable map \(kf:\mathcal{MF}\to\mathcal{MF},\,F\mapsto kf(F)\). Thus \(R_{+}\) also acts on \(\Omega(\mathcal{MF})\) by multiplication.
Then we have the projective space \(P\Omega(\mathcal{MF})=\Omega(\mathcal{MF})-\{0\}/R_{+}\), where \(0\) is the zero element in \(\Omega(\mathcal{MF})\). Let \(\pi:\Omega(\mathcal{MF})-\{0\}\to P\Omega(\mathcal{MF})\) be the projective map. Denote by \([f]=\pi(f)\) the projective class of \(f\in\Omega(\mathcal{MF})\). We endow \(P\Omega(\mathcal{MF})\) with the quotient topology induced by \(\pi\). Precisely, for a sequence \(\{[f_{n}]\}_{n=0}^{\infty}\) in \(P\Omega(\mathcal{MF})\), \(\lim_{n\to\infty}[f_{n}]=[f_{0}]\) if and only if there exists a positive sequence \(\{t_{n}\}_{n=1}^{\infty}\) such that \(t_{n}f_{n}\) converges to \(f_{0}\) in the topology of pointwise convergence. Sending \(f\in Mod(S)\) to its action \(f:\mathcal{MF}\to\mathcal{MF}\), we have a natural map \[\widetilde{I}:Mod(S)\to\Omega(\mathcal{MF}).\] Composing it with \(\pi\), we have another map \[I=\pi\circ\widetilde{I}:Mod(S)\to P\Omega(\mathcal{MF}).\] The kernel \(ker(I)=\{f\in Mod(S):[f:\mathcal{MF}\to\mathcal{MF}]=[id_{\mathcal{MF}}]\}\) is finite. In fact, if \(f\in ker(I)\), then there exists a positive number \(k\) such that \(f(F)=kF\) for any \(F\in\mathcal{MF}\). Since \(R_{+}\times\mathcal{S}\) is dense in \(\mathcal{MF}\), this is equivalent to the statement that \(f(\alpha)=k\alpha\) for any \(\alpha\in\mathcal{S}\). Since \(f\) sends a simple closed curve to a simple closed curve, we have \(k=1\). Thus \(f\in ker(I)\) if and only if \(f\) fixes the isotopy class of each essential simple closed curve. By the result on page 344 of [3], we know that when the topological type \((g,n)\) of \(S\) is \((2,0),(1,1),(1,2)\) or \((0,4)\), \(ker(I)\) is a subgroup of order \(2\) or \(4\); in the other cases, \(ker(I)\) is trivial. So up to the finite normal subgroup \(ker(I)\) (with order \(1,2\) or \(4\)), we can identify \(Mod(S)\) with its image \(I(Mod(S))\). For simplicity, denote the image \(I(Mod(S))\) by \(E\). Let \(Cl(E)\) be the closure of \(E\) in \(P\Omega(\mathcal{MF})\). The main result of this section is **Theorem 3.1**.: \(Cl(E)\) _is metrizable and compact._ Thus \(Cl(E)\) is a completion of \(Mod(S)\) and \(\partial E=Cl(E)-E\) is a boundary of \(Mod(S)\) in some sense. We need some preparations to prove Theorem 3.1. In order to give a clear description of the topology of \(\mathcal{MF}\), we choose \(N\) simple closed curves \(\{\alpha_{1},\alpha_{2},...,\alpha_{N}\}\) filling up the surface \(S\) such that the map \[\Phi:\mathcal{MF}\to R^{N},\] \[F\mapsto\big(i(\alpha_{1},F),i(\alpha_{2},F),...,i(\alpha_{N},F)\big)\] is an embedding (see [4]). As a result, we identify \(\mathcal{MF}\) with the image \(\Phi(\mathcal{MF})\), which is endowed with the Euclidean metric on \(R^{N}\). Let \(l(\cdot)=\sum_{i=1}^{N}i(\alpha_{i},\cdot):\mathcal{MF}\to R_{\geq 0}\) be the length function on \(\mathcal{MF}\) corresponding to \(\{\alpha_{1},\alpha_{2},...,\alpha_{N}\}\). Since \(\{\alpha_{1},\alpha_{2},...,\alpha_{N}\}\) fill up the surface, \(l(F)=0\) if and only if \(F=0\). Recall a result from [1]: **Lemma 3.2**.: _For any \(M>0\), \(\{F\in\mathcal{MF}:l(F)\leq M\}\) is compact in \(\mathcal{MF}\)._ From Lemma 3.2, we have **Lemma 3.3**.: \(\Phi(\mathcal{MF})\) _is closed in \(R^{N}\)._ Proof.: Suppose \(\lim_{n\to\infty}\Phi(F_{n})=f_{0}\) for some sequence \(\{F_{n}\}_{n=1}^{\infty}\subseteq\mathcal{MF}\) and \(f_{0}=(a_{1},a_{2},...,a_{N})\in R^{N}\). Then \(\lim_{n\to\infty}i(\alpha_{i},F_{n})=a_{i}\) for \(i=1,2,...,N\). So \(l(F_{n})=\sum_{i=1}^{N}i(\alpha_{i},F_{n})\leq M\) for some \(M>0\).
From Lemma 3.2, there exists a subsequence \(\{F_{n_{k}}\}_{k=1}^{\infty}\) such that \(\lim_{k\to\infty}F_{n_{k}}=F_{0}\) for some \(F_{0}\in\mathcal{MF}\). Since \(\Phi\) is continuous, we have \(\Phi(F_{0})=f_{0}\). Thus \(f_{0}\in\Phi(\mathcal{MF})\), which completes the proof. Let \(\Omega^{\prime}(\mathcal{MF})\subseteq\Omega(\mathcal{MF})\) be the set of homogeneous continuous maps from \(\mathcal{MF}\) to \(\mathcal{MF}\). Let \(P\Omega^{\prime}(\mathcal{MF})\subseteq P\Omega(\mathcal{MF})\) be the projective space of \(\Omega^{\prime}(\mathcal{MF})\). Since \(Mod(S)\) acts continuously on \(\mathcal{MF}\), \(E\subseteq P\Omega^{\prime}(\mathcal{MF})\). Now we proceed to construct a metric on \(P\Omega^{\prime}(\mathcal{MF})\). Set \(\mathcal{MF}_{1}=\{F\in\mathcal{MF}:l(F)\leq 1\}\). For any \([f]\) in \(P\Omega^{\prime}(\mathcal{MF})\), we define the normalized lift of \([f]\) to \(\Omega^{\prime}(\mathcal{MF})\) by \[\widehat{f}(\cdot)=\frac{f(\cdot)}{L(f)}:\mathcal{MF}\to\mathcal{MF},\] where \(L(f)=\sup_{\mathcal{MF}_{1}}l(f(\cdot))\). Note that \(L(f)\) is finite because of the compactness of \(\mathcal{MF}_{1}\), and that \(\widehat{f}\) depends only on the projective class \([f]\), since \(L(kf)=kL(f)\) for any \(k>0\). Let \(d\) be the Euclidean metric on \(\mathcal{MF}\) induced by \(\Phi\): for any \(F,G\in\mathcal{MF}\), \(d(F,G)=|\Phi(F)-\Phi(G)|\), where \(|\cdot|\) is the Euclidean norm on \(R^{N}\). We define a map \(\widehat{d}:P\Omega^{\prime}(\mathcal{MF})\times P\Omega^{\prime}(\mathcal{MF})\to R\) as follows: for any \([f],[g]\in P\Omega^{\prime}(\mathcal{MF})\), \[\widehat{d}([f],[g])=\sup_{F\in\mathcal{MF}_{1}}d(\widehat{f}(F),\widehat{g}(F)).\] Note that \(\widehat{d}\) is a metric on \(P\Omega^{\prime}(\mathcal{MF})\). In fact, the symmetry and the triangle inequality come from these two properties of the metric \(d\); the positive definiteness comes from the definition of the normalized lift \(\widehat{f}\). For the metric \(\widehat{d}\), we have **Lemma 3.4**.: _For any \(\{[f_{n}]\}_{n=0}^{\infty}\subseteq P\Omega^{\prime}(\mathcal{MF})\), \(\lim_{n\to\infty}\widehat{d}([f_{n}],[f_{0}])=0\) if and only if there exists a positive sequence \(\{t_{n}\}_{n=1}^{\infty}\) such that \(t_{n}f_{n}\) converges uniformly to \(f_{0}\) on any compact subset of \(\mathcal{MF}\)._ Proof.: Suppose that \(\lim_{n\to\infty}\widehat{d}([f_{n}],[f_{0}])=0\). Then from the definition of the metric \(\widehat{d}\), we know that \(f_{n}(\cdot)/L(f_{n})\) converges uniformly to \(f_{0}(\cdot)/L(f_{0})\) on \(\mathcal{MF}_{1}\). For any compact subset \(A\) of \(\mathcal{MF}\), set \(l(A)=\sup_{F\in A}l(F)\). Note that for any \(F\in A\), \(F/l(A)\in\mathcal{MF}_{1}\). Thus \(f_{n}(\cdot)/l(A)L(f_{n})\) converges uniformly to \(f_{0}(\cdot)/l(A)L(f_{0})\) on \(A\), which also implies that \(L(f_{0})f_{n}(\cdot)/L(f_{n})\) converges uniformly to \(f_{0}(\cdot)\) on \(A\). Conversely, suppose that there exists a sequence \(\{t_{n}\}_{n=1}^{\infty}\subseteq R_{+}\) such that \(t_{n}f_{n}\) converges uniformly to \(f_{0}\) on any compact subset of \(\mathcal{MF}\). In particular, \(t_{n}f_{n}\) converges uniformly to \(f_{0}\) on \(\mathcal{MF}_{1}\), which implies that \(\frac{t_{n}f_{n}(\cdot)}{L(t_{n}f_{n})}\) converges uniformly to \(\frac{f_{0}(\cdot)}{L(f_{0})}\) on \(\mathcal{MF}_{1}\).
Thus \[\lim_{n\to\infty}\widehat{d}([f_{n}],[f_{0}])=\lim_{n\to\infty}\sup_{\mathcal{MF}_{1}}d(\frac{f_{n}(\cdot)}{L(f_{n})},\frac{f_{0}(\cdot)}{L(f_{0})})=\lim_{n\to\infty}\sup_{\mathcal{MF}_{1}}d(\frac{t_{n}f_{n}(\cdot)}{L(t_{n}f_{n})},\frac{f_{0}(\cdot)}{L(f_{0})})=0.\] In the metric space \((P\Omega^{\prime}(\mathcal{MF}),\widehat{d})\), \(E\) is precompact: **Lemma 3.5**.: _For any sequence \(\{[f_{n}]\}_{n=1}^{\infty}\subseteq E\), there exists a subsequence \(\{[f_{n_{k}}]\}_{k=1}^{\infty}\) such that_ \[\lim_{k\to\infty}\widehat{d}([f_{n_{k}}],[f_{0}])=0\] _for some \([f_{0}]\in P\Omega^{\prime}(\mathcal{MF})\)._ Proof.: From the definition of \(E\), we assume that \(f_{n}\in Mod(S)\) (\(n=1,2,...\)). Take a point \(x_{0}\in\mathcal{T}(S)\). Note that the action of \(Mod(S)\) on \(\mathcal{T}(S)\) is properly discontinuous. Thus by the definition of the Thurston compactification of \(\mathcal{T}(S)\), one of the following holds: (1) \(f_{n_{k}}\equiv f_{0}\) for some subsequence \(\{f_{n_{k}}\}_{k=1}^{\infty}\) and \(f_{0}\in Mod(S)\); (2) \(\lim_{k\to\infty}f_{n_{k}}(x_{0})=[F_{0}]\) for some subsequence \(\{f_{n_{k}}\}_{k=1}^{\infty}\) and \([F_{0}]\in\mathcal{PMF}\). For case (1), \(\widehat{d}([f_{n_{k}}],[f_{0}])\equiv 0\). For case (2), there exists a sequence of positive numbers \(\{t_{k}\}_{k=1}^{\infty}\) such that for any \(F\in\mathcal{MF}\), \[\lim_{k\to\infty}t_{k}l(x_{0},f_{n_{k}}^{-1}(F))=\lim_{k\to\infty}t_{k}l(f_{n_{k}}(x_{0}),F)=i(F_{0},F).\] In particular, we have \(\lim_{k\to\infty}l(x_{0},t_{k}f_{n_{k}}^{-1}(\alpha_{i}))=i(F_{0},\alpha_{i})\) for \(i=1,2,...,N\). Since \(\{\alpha_{i}\}_{i=1}^{N}\) fill up the surface, we have \(l(F_{0})=\sum_{i=1}^{N}i(\alpha_{i},F_{0})>0\), which implies that \(m\leq\sum_{i=1}^{N}l(x_{0},t_{k}f_{n_{k}}^{-1}(\alpha_{i}))\leq M\) for some \(m,M>0\). Note that \(\{F\in\mathcal{MF}:l(x_{0},F)\leq M\}\) is compact. Thus passing to a subsequence again, we assume that \(\lim_{k\to\infty}t_{k}f_{n_{k}}^{-1}(\alpha_{i})=F_{i}\) for some \(F_{i}\in\mathcal{MF}\) and \(F_{i_{0}}\neq 0\) for some \(i_{0}\). By the definition of \(\Phi\), we have \[t_{k}\Phi\circ f_{n_{k}}(\cdot)=\big(i(\alpha_{i},t_{k}f_{n_{k}}\cdot)\big)_{i=1}^{N}=\big(i(t_{k}f_{n_{k}}^{-1}(\alpha_{i}),\cdot)\big)_{i=1}^{N}.\] Since \(i(\cdot,\cdot):\mathcal{MF}\times\mathcal{MF}\to R_{\geq 0}\) is continuous and \(\lim_{k\to\infty}t_{k}f_{n_{k}}^{-1}(\alpha_{i})=F_{i}\), we know that \(t_{k}\Phi\circ f_{n_{k}}(\cdot)=\big(i(t_{k}f_{n_{k}}^{-1}(\alpha_{i}),\cdot)\big)_{i=1}^{N}\) converges uniformly to \(\big(i(F_{i},\cdot)\big)_{i=1}^{N}\neq 0\) on any compact subset of \(\mathcal{MF}\). By Lemma 3.3, for any \(F\in\mathcal{MF}\), \(\big(i(F_{i},F)\big)_{i=1}^{N}=\lim_{k\to\infty}\big(i(\alpha_{i},t_{k}f_{n_{k}}(F))\big)_{i=1}^{N}\in\Phi(\mathcal{MF})\), which implies that \(f_{0}=\Phi^{-1}\big(i(F_{i},\cdot)\big)_{i=1}^{N}\) is a homogeneous continuous map from \(\mathcal{MF}\) to \(\mathcal{MF}\). Since \(\Phi\) is a homeomorphism, \(t_{k}f_{n_{k}}\) converges uniformly to \(f_{0}\) on any compact subset of \(\mathcal{MF}\). By Lemma 3.4, \(\lim_{k\to\infty}\widehat{d}([f_{n_{k}}],[f_{0}])=0\). Using Lemma 3.4 and Lemma 3.5, we now prove Theorem 3.1. **Proof of Theorem 3.1.** Firstly, we prove that \(Cl(E)\subseteq P\Omega^{\prime}(\mathcal{MF})\). Naturally, \(E\subseteq P\Omega^{\prime}(\mathcal{MF})\). Suppose that a sequence \([f_{n}]\in E\) converges to \([f_{0}]\in Cl(E)\) in \(P\Omega(\mathcal{MF})\).
By Lemma 3.5, there exists a subsequence \(\{[f_{n_{k}}]\}_{k=1}^{\infty}\) such that \(\lim_{k\to\infty}\widehat{d}([f_{n_{k}}],[f_{0}^{\prime}])=0\) for some \([f_{0}^{\prime}]\in P\Omega^{\prime}(\mathcal{MF})\). By Lemma 3.4, there exists a sequence of positive numbers \(\{t_{k}\}_{k=1}^{\infty}\) such that \(t_{k}f_{n_{k}}\) converges uniformly to \(f_{0}^{\prime}\) on any compact subset of \(\mathcal{MF}\). Since the uniform convergence on compact sets is stronger than the pointwise convergence, \(t_{k}f_{n_{k}}\) converges to \(f_{0}^{\prime}\) in the topology of pointwise convergence. By the topology of \(P\Omega(\mathcal{MF})\), \([f_{n_{k}}]\) converges to \([f_{0}^{\prime}]\) in \(P\Omega(\mathcal{MF})\), which implies that \([f_{0}]=[f_{0}^{\prime}]\in P\Omega^{\prime}(\mathcal{MF})\). Thus \(Cl(E)\subseteq P\Omega^{\prime}(\mathcal{MF})\). Secondly, we prove that the topology on \(Cl(E)\) coincides with that induced by the metric \(\widehat{d}\), that is, \(Cl(E)\) is metrizable. From Lemma 3.4 and the fact that the uniform convergence on compact sets is stronger than the pointwise convergence, we know that for any sequence \(\{[f_{n}]\}_{n=0}^{\infty}\) in \(Cl(E)\), \(\lim_{n\to\infty}\widehat{d}([f_{n}],[f_{0}])=0\) implies that \([f_{n}]\) converges to \([f_{0}]\) in the topology of \(Cl(E)\). For the inverse direction, suppose that \([f_{n}]\) converges to \([f_{0}]\) in the topology of \(Cl(E)\). We wish to prove that \(\lim_{n\to\infty}\widehat{d}([f_{n}],[f_{0}])=0\). We prove this by contradiction. Suppose that there exists a subsequence \(\{[f_{n_{k}}]\}_{k=1}^{\infty}\) such that \(\widehat{d}([f_{n_{k}}],[f_{0}])>\varepsilon\) for some \(\varepsilon>0\). Then by Lemma 3.5, passing to a subsequence again, we can assume that \(\lim_{k\to\infty}\widehat{d}([f_{n_{k}}],[f_{0}^{\prime}])=0\) for some \([f_{0}^{\prime}]\in P\Omega^{\prime}(\mathcal{MF})\), which implies that \([f_{0}^{\prime}]\in Cl(E)\) and \([f_{n_{k}}]\) converges to \([f_{0}^{\prime}]\) in the topology of \(Cl(E)\). Thus \([f_{0}^{\prime}]=[f_{0}]\). By \[0=\lim_{k\to\infty}\widehat{d}([f_{n_{k}}],[f_{0}^{\prime}])=\lim_{k\to\infty}\widehat{d}([f_{n_{k}}],[f_{0}])\geq\varepsilon,\] we get a contradiction. Finally, by Lemma 3.5, as a dense subset of the metric space \((Cl(E),\widehat{d})\), \(E\) is precompact. Thus \(Cl(E)\) is compact. By the proof of Theorem 3.1, we have two useful corollaries. **Corollary 3.6**.: _The boundary point set \(\partial E=Cl(E)-E\) is included in \(P\Omega^{\prime}(\mathcal{MF})\), that is, any boundary point \(p\) can be represented by \(p=[f_{p}]\), where \(f_{p}\) is a homogeneous continuous map from \(\mathcal{MF}\) to \(\mathcal{MF}\)._ **Corollary 3.7**.: _For any sequence \(\{[f_{n}]\}_{n=0}^{\infty}\) in \(Cl(E)\), the following are equivalent: (1) \(\lim_{n\to\infty}[f_{n}]=[f_{0}]\); (2) there exists a sequence of positive numbers \(\{t_{n}\}_{n=1}^{\infty}\) such that \(t_{n}f_{n}\) converges to \(f_{0}\) in the topology of pointwise convergence; (3) there exists a sequence of positive numbers \(\{t_{n}\}_{n=1}^{\infty}\) such that \(t_{n}f_{n}\) converges uniformly to \(f_{0}\) on any compact subset of \(\mathcal{MF}\)._ Corollary 3.7 means that in \(Cl(E)\), the pointwise convergence and the uniform convergence on compact sets are equivalent, which is not true in general.
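For contrast, a standard example (stated outside the present setting) shows that this equivalence genuinely fails in general: on the compact interval \([0,1]\), the functions \(h_{n}(x)=x^{n}\) converge pointwise to the limit \[h(x)=\begin{cases}0,&0\leq x<1,\\ 1,&x=1,\end{cases}\qquad\text{while}\qquad\sup_{x\in[0,1]}|h_{n}(x)-h(x)|=1\ \text{for all}\ n,\] so the convergence is not uniform on \([0,1]\). Corollary 3.7 rules out such behaviour inside \(Cl(E)\).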
## 4 The structure of the boundary In this section, we study the structure of the boundary \(\partial E=Cl(E)-E\) in detail. Recall that we endow \(P\Omega(\mathcal{MF})\) with the quotient topology from the pointwise convergence on \(\Omega(\mathcal{MF})\), and \(Cl(E)\) is the closure of \(E\) in this topology. **Proposition 4.1**.: _In \(Cl(E)\), \(E\) is discrete and \(\partial E\) is closed._ Proof.: For any \([f]\) in \(E\), if \([f]\) is not an isolated point in \(Cl(E)\), then there exists a sequence \(\{[f_{n}]\}_{n=1}^{\infty}\) in \(E\) such that \(f_{n}\neq f_{m}\) for \(n\neq m\) and \(\lim_{n\to\infty}[f_{n}]=[f]\). So there exists a sequence of positive numbers \(\{t_{n}\}_{n=1}^{\infty}\) such that \(\lim_{n\to\infty}t_{n}f_{n}=f\). Choosing a point \(x_{0}\) in \(\mathcal{T}(S)\), we have \[\lim_{n\to\infty}t_{n}l(f_{n}^{-1}(x_{0}),\cdot)=\lim_{n\to\infty}l(x_{0},t_{n}f_{n}(\cdot))=l(x_{0},f(\cdot))=l(f^{-1}(x_{0}),\cdot).\] Thus \(f_{n}^{-1}(x_{0})\) converges to \(f^{-1}(x_{0})\) in \(\mathcal{T}(S)\), which contradicts the proper discontinuity of the action of \(Mod(S)\) on \(\mathcal{T}(S)\). Thus any point of \(E\) is isolated in \(Cl(E)\), which implies that \(E\) is discrete in \(Cl(E)\). Since \(E\) is discrete in \(Cl(E)\), \(E\) is open in \(Cl(E)\). Thus \(\partial E\) is closed in \(Cl(E)\). The operations of multiplication and inverse on \(Mod(S)\) extend continuously to \(Cl(E)\). For this, we need some notations. Let \(\widetilde{E}=\pi^{-1}(E)\) be the inverse image of \(E\) in \(\Omega(\mathcal{MF})\) and \(Cl(\widetilde{E})\) be the closure of \(\widetilde{E}\) in \(\Omega(\mathcal{MF})\). By Corollary 3.6, \(Cl(\widetilde{E})\subseteq\Omega^{\prime}(\mathcal{MF})\). Similar to Corollary 3.7, we have **Corollary 4.2**.: _For any sequence \(\{f_{n}\}_{n=0}^{\infty}\) in \(Cl(\widetilde{E})-\{0\}\), the following are equivalent: (1) \(\lim_{n\to\infty}f_{n}=f_{0}\), that is, \(f_{n}\) converges to \(f_{0}\) in the topology of pointwise convergence; (2) \(f_{n}\) converges uniformly to \(f_{0}\) on any compact subset of \(\mathcal{MF}\)._ Proof.: Since the uniform convergence on compact sets is stronger than the pointwise convergence, (2) implies (1). For the inverse direction, suppose that \(f_{n}\) converges to \(f_{0}\) in the topology of pointwise convergence. Then by Corollary 3.7, there exists a sequence of positive numbers \(\{t_{n}\}_{n=1}^{\infty}\) such that \(t_{n}f_{n}\) converges uniformly to \(f_{0}\) on any compact subset of \(\mathcal{MF}\). In particular, \(t_{n}f_{n}\) converges to \(f_{0}\) in the topology of pointwise convergence, which implies that \(\lim_{n\to\infty}t_{n}=1\). Therefore, \(f_{n}=\frac{t_{n}f_{n}}{t_{n}}\) converges uniformly to \(f_{0}\) on any compact subset of \(\mathcal{MF}\). By Corollary 4.2, we have **Proposition 4.3**.: _The map \(M:Cl(\widetilde{E})-\{0\}\times Cl(\widetilde{E})-\{0\}\to\Omega^{\prime}(\mathcal{MF})\) defined by \((f,g)\mapsto f\circ g\) is continuous. And for any \(f,g\in Cl(\widetilde{E})-\{0\}\), \(f\circ g\in Cl(\widetilde{E})\)._ Proof.: Firstly, we prove the continuity of \(M\). Suppose that \(\lim_{n\to\infty}f_{n}=f\) and \(\lim_{n\to\infty}g_{n}=g\) in \(Cl(\widetilde{E})-\{0\}\). By Corollary 4.2, \(f_{n}\) and \(g_{n}\) converge uniformly to \(f\) and \(g\) on any compact subset of \(\mathcal{MF}\), respectively. We need to prove that \(f_{n}\circ g_{n}\) converges to \(f\circ g\) in the topology of pointwise convergence.
Let \(d\) be the Euclidean metric on \(\mathcal{MF}\) induced by \(\Phi\). For any \(F\in\mathcal{MF}\), since \(\lim_{n\to\infty}g_{n}(F)=g(F)\), we know that for any \(\epsilon>0\), there is \(N_{1}>0\) such that for any \(n>N_{1}\), \[d(f\circ g_{n}(F),f\circ g(F))<\frac{\epsilon}{2}\] and \(\{g_{n}(F)\}_{n=1}^{\infty}\subseteq B\) for some compact neighbourhood \(B\) of \(g(F)\). Since \(f_{n}\) converges uniformly to \(f\) on \(B\), there exists \(N_{2}>0\) such that for any \(n>N_{2}\) and any \(G\in B\), \(d\big(f_{n}(G),f(G)\big)<\frac{\epsilon}{2}\). Thus for any \(n>\max\{N_{1},N_{2}\}\), \[d(f_{n}\circ g_{n}(F),f\circ g(F))\leq d(f_{n}\circ g_{n}(F),f\circ g_{n}(F))+d(f\circ g_{n}(F),f\circ g(F))<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon,\] which implies that \(f_{n}\circ g_{n}(F)\) converges to \(f\circ g(F)\). Thus \(M\) is continuous. Secondly, we prove that \(f,g\in Cl(\widetilde{E})-\{0\}\) implies \(f\circ g\in Cl(\widetilde{E})\). Take \(f_{n},g_{n}\in Mod(S)\) and \(t_{n},k_{n}>0\) such that \(\lim_{n\to\infty}t_{n}f_{n}=f\) and \(\lim_{n\to\infty}k_{n}g_{n}=g\). Since \(M\) is continuous, \(\lim_{n\to\infty}t_{n}k_{n}f_{n}\circ g_{n}=f\circ g\), which implies that \(f\circ g\in Cl(\widetilde{E})\). By Proposition 4.3, for any two elements \([f],[g]\) in \(Cl(E)\), if \(f\circ g\neq 0\), we can define the product of \([f]\) and \([g]\) by \([f\circ g]\in Cl(E)\). In particular, restricting to \(Mod(S)\), it coincides with the multiplication operation on \(Mod(S)\). Thus the multiplication operation on \(Mod(S)\) extends continuously to \(Cl(E)\) except in some degenerate cases (\(f\circ g=0\)). For the inverse operation, we have **Proposition 4.4**.: _For any \(f\) in \(Cl(\widetilde{E})\), there exists a unique element \(\overline{f}\) in \(Cl(\widetilde{E})\) such that \(i\big(f(F),G\big)=i\big(F,\overline{f}(G)\big)\) for any \(F,G\) in \(\mathcal{MF}\). Moreover, the map \(\varphi:Cl(\widetilde{E})\to Cl(\widetilde{E}),\,f\mapsto\overline{f}\) is a homeomorphism._ Proof.: Firstly, we prove the existence of \(\overline{f}\). For any \(f\) in \(Cl(\widetilde{E})\), if \(f=Kf_{0}\) for some \(K\geq 0\) and \(f_{0}\in Mod(S)\), we set \(\overline{f}=Kf_{0}^{-1}\). For other cases, we assume that \(\lim_{n\to\infty}t_{n}f_{n}=f\) for some \(t_{n}>0\), \(f_{n}\in Mod(S)\,(n=1,2,...)\). Then we have \[\lim_{n\to\infty}\Phi(t_{n}f_{n}^{-1}(\cdot))=\lim_{n\to\infty}\big(i(\alpha_{i},t_{n}f_{n}^{-1}(\cdot))\big)_{i=1}^{N}=\lim_{n\to\infty}\big(i(t_{n}f_{n}(\alpha_{i}),\cdot)\big)_{i=1}^{N}=\big(i(f(\alpha_{i}),\cdot)\big)_{i=1}^{N}.\] From Lemma 3.3, we set \(f_{0}=\Phi^{-1}\big(\big(i(f(\alpha_{i}),\cdot)\big)_{i=1}^{N}\big)\) and then \[\lim_{n\to\infty}t_{n}f_{n}^{-1}=f_{0}.\] Thus for any \(F,G\) in \(\mathcal{MF}\), we have \[i(f(F),G)=\lim_{n\to\infty}i\big(t_{n}f_{n}(F),G\big)=\lim_{n\to\infty}i\big(F,t_{n}f_{n}^{-1}(G)\big)=i\big(F,f_{0}(G)\big).\] So we set \(\overline{f}=f_{0}\). Secondly, we prove the uniqueness of \(\overline{f}\). For any \(f\in Cl(\widetilde{E})\), suppose there are two elements \(f_{1},f_{2}\) such that for any \(F,G\in\mathcal{MF}\), \[i\big(f(F),G\big)=i\big(F,f_{1}(G)\big)=i\big(F,f_{2}(G)\big).\] Then \[\Phi\big(f_{1}(\cdot)\big)=\Phi\big(f_{2}(\cdot)\big)=\big(i(f(\alpha_{i}),\cdot)\big)_{i=1}^{N}.\] Since \(\Phi\) is an embedding, we know that \(f_{1}=f_{2}\). Now we prove that \(\varphi\) is a homeomorphism.
Obviously, we have \(\overline{\overline{f}}=f\) for any \(f\) in \(Cl(\widetilde{E})\), which implies that \(\varphi^{2}=id:Cl(\widetilde{E})\to Cl(\widetilde{E})\). Thus we only need to prove that \(\varphi\) is continuous. Suppose \(\lim_{n\to\infty}f_{n}=f_{0}\) for \(\{f_{n}\}_{n=0}^{\infty}\) in \(Cl(\widetilde{E})\). Then we have \[\lim_{n\to\infty}\Phi(\overline{f_{n}}(\cdot))=\lim_{n\to\infty}\big(i(\alpha_{i},\overline{f_{n}}(\cdot))\big)_{i=1}^{N}=\lim_{n\to\infty}\big(i(f_{n}(\alpha_{i}),\cdot)\big)_{i=1}^{N}\] \[=\big(i(f_{0}(\alpha_{i}),\cdot)\big)_{i=1}^{N}=\big(i(\alpha_{i},\overline{f_{0}}(\cdot))\big)_{i=1}^{N}=\Phi(\overline{f_{0}}(\cdot)).\] Since \(\Phi\) is an embedding, \(\lim_{n\to\infty}\overline{f_{n}}=\overline{f_{0}}\). Thus \(\varphi\) is continuous. We call \(\overline{f}\) defined in Proposition 4.4 the conjugate of \(f\). For any \([f]\in Cl(E)\), define the conjugate of \([f]\) by \([\overline{f}]\). In particular, for any \(f\in Mod(S)\), the conjugate of \(f\) is exactly the inverse of \(f\) in \(Mod(S)\). Thus the inverse operation on \(Mod(S)\) extends continuously to \(Cl(E)\). There is a natural relation between the operations of multiplication and conjugation on \(Cl(\widetilde{E})\). **Proposition 4.5**.: _For any \(f,g\in Cl(\widetilde{E})\), \(\overline{f\circ g}=\overline{g}\circ\overline{f}\)._ Proof.: From the definition of the conjugate operation, we know that for any \(F,G\in\mathcal{MF}\), \[i(f\circ g(F),G)=i(g(F),\overline{f}(G))=i(F,\overline{g}\circ\overline{f}(G)),\] which implies that \(\overline{f\circ g}=\overline{g}\circ\overline{f}\). **Remark 4.6**.: _By Propositions 4.3, 4.4 and 4.5, the natural group structure of \(Mod(S)\) extends continuously to \(Cl(E)\). But \(Cl(E)\) is not a group, since there are degenerate cases in which the multiplication is not defined, and the conjugate operation on \(Cl(E)\) is not a genuine inverse operation for a group._ Now we prove a lemma: **Lemma 4.7**.: _Suppose \(\lim_{n\to\infty}[f_{n}]=[f_{0}]\) for some \(\{f_{n}\}_{n=1}^{\infty}\) in \(Mod(S)\) and \([f_{0}]\in\partial E\). Then there exists a sequence of positive numbers \(\{t_{n}\}_{n=1}^{\infty}\) such that \(\lim_{n\to\infty}t_{n}=0\) and \(\lim_{n\to\infty}t_{n}f_{n}=f_{0}\)._ Proof.: Since \(\lim_{n\to\infty}[f_{n}]=[f_{0}]\), there exists a sequence of positive numbers \(\{t_{n}\}_{n=1}^{\infty}\) such that \(\lim_{n\to\infty}t_{n}f_{n}=f_{0}\). Now we need to prove that \(\lim_{n\to\infty}t_{n}=0\). Choosing a point \(x_{0}\) in \(\mathcal{T}(S)\), we have \[\lim_{n\to\infty}t_{n}l(f_{n}^{-1}(x_{0}),\cdot)=\lim_{n\to\infty}l(x_{0},t_{n}f_{n}(\cdot))=l(x_{0},f_{0}(\cdot)).\] From the proper discontinuity of the action of \(Mod(S)\) on \(\mathcal{T}(S)\), we know that \(f_{n}^{-1}(x_{0})\to\infty\) in \(\mathcal{T}(S)\). By the definition of the Thurston compactification, we know that \(\lim_{n\to\infty}t_{n}l(f_{n}^{-1}(x_{0}),\cdot)=l(x_{0},f_{0}(\cdot))=i(F,\cdot)\) for some \(F\) in \(\mathcal{MF}\). Then we have \(\lim_{n\to\infty}t_{n}=0\). Now we give a description of the points in \(\partial E\). **Theorem 4.8**.: _For any \([f]\) in \(\partial E\), we have_ \[f(\cdot)=\sum_{i=1}^{m}i(E_{i},\cdot)F_{i},\] _where \(E_{i},F_{i}\) are some measured foliations with \(i(E_{i},E_{j})=0\) and \(i(F_{i},F_{j})=0\) for \(i,j=1,2,...,m\)._ Proof.: Since \([f]\in\partial E\), there exists a sequence \(\{f_{n}\}_{n=1}^{\infty}\) in \(Mod(S)\) such that \(\lim_{n\to\infty}[f_{n}]=[f]\).
By Lemma 4.7, there exists a sequence of positive numbers \(\{t_{n}\}_{n=1}^{\infty}\) such that \(\lim_{n\to\infty}t_{n}=0\) and \(\lim_{n\to\infty}t_{n}f_{n}=f\) in \(\Omega(\mathcal{MF})\). As \(f\) is a map from \(\mathcal{MF}\) to \(\mathcal{MF}\), let \(Imf=\{f(F):F\in\mathcal{MF}\}\subseteq\mathcal{MF}\) be the image of \(f\). We claim that for any \(F,G\in Imf\), \(i(F,G)=0\). For any \(F,G\) in \(Imf\), there are \(F_{1},G_{1}\in\mathcal{MF}\) such that \(f(F_{1})=F\) and \(f(G_{1})=G\). Since \(\lim_{n\to\infty}t_{n}f_{n}=f\) and \(\lim_{n\to\infty}t_{n}=0\), we have \[i(F,G)=i(f(F_{1}),f(G_{1}))=\lim_{n\to\infty}i(t_{n}f_{n}(F_{1}),t_{n}f_{n}(G_{1}))=\lim_{n\to\infty}t_{n}^{2}i(F_{1},G_{1})=0.\] From this claim, there are \(m\) pairwise disjoint indecomposable measured foliations \(\{F_{i}\}_{i=1}^{m}\) and nonnegative functions \(f_{i}(\cdot)\) on \(\mathcal{MF}\) such that \[f(\cdot)=\sum_{i=1}^{m}f_{i}(\cdot)F_{i}.\] For \(1\leq i\leq m\), from Lemma 2.1, there exists a sequence of simple closed curves \(\{\gamma_{n}\}_{n=1}^{\infty}\) such that \[i(\gamma_{n},F_{i})>0,\,\frac{i(\gamma_{n},F_{j})}{i(\gamma_{n},F_{i})}<\frac{1}{n}\,(j\neq i).\] Then \[\frac{i(\gamma_{n},f(\cdot))}{i(\gamma_{n},F_{i})}=f_{i}(\cdot)+\sum_{j\neq i}^{m}\frac{i(\gamma_{n},F_{j})}{i(\gamma_{n},F_{i})}f_{j}(\cdot)\leq f_{i}(\cdot)+\frac{1}{n}\sum_{j\neq i}^{m}f_{j}(\cdot).\] Set \(G_{n}=\frac{1}{i(\gamma_{n},F_{i})}\gamma_{n}\). Then \[\lim_{n\to\infty}i(\overline{f}(G_{n}),\cdot)=\lim_{n\to\infty}\frac{i(\gamma_{n},f(\cdot))}{i(\gamma_{n},F_{i})}=f_{i}(\cdot).\] In particular, for the filling curves \(\{\alpha_{j}\}_{j=1}^{N}\) defined in Section 3, \[\lim_{n\to\infty}i(\overline{f}(G_{n}),\alpha_{j})=f_{i}(\alpha_{j})\,\,(j=1,2,...,N).\] Thus there exists a constant \(M>0\) such that \[l(\overline{f}(G_{n}))=\sum_{j=1}^{N}i(\overline{f}(G_{n}),\alpha_{j})\leq M.\] From Lemma 3.2, there exists a subsequence \(\{G_{n_{k}}\}_{k=1}^{\infty}\) such that \[\lim_{k\to\infty}\overline{f}(G_{n_{k}})=E_{i}\] for some \(E_{i}\) in \(\mathcal{MF}\). Then we have \[f_{i}(\cdot)=\lim_{k\to\infty}i(\overline{f}(G_{n_{k}}),\cdot)=i(E_{i},\cdot).\] For \(1\leq i\leq m\), we construct a measured foliation \(E_{i}\) as above. Since for any \(F,G\in Im\overline{f}\), \(i(F,G)=0\), we have \(i(E_{i},E_{j})=0\) for \(i\neq j\). Thus we have \[f=\sum_{i=1}^{m}i(E_{i},\cdot)F_{i},\] and \(E_{i},F_{i}\) are measured foliations with \(i(E_{i},E_{j})=0\) and \(i(F_{i},F_{j})=0\) for \(i,j=1,2,...,m\), which completes the proof. **Remark 4.9**.: _If \(f(\cdot)=\sum_{i=1}^{m}i(E_{i},\cdot)F_{i}\) as in Theorem 4.8, then \(\overline{f}(\cdot)=\sum_{i=1}^{m}i(F_{i},\cdot)E_{i}\)._ **Problem 4.10**.: _Does the converse of Theorem 4.8 hold: for any \(\{E_{i}\}_{i=1}^{m},\{F_{i}\}_{i=1}^{m}\) in \(\mathcal{MF}\) with \(i(E_{i},E_{j})=0\) and \(i(F_{i},F_{j})=0\) (\(i,j=1,2,...,m\)), \([\sum_{i=1}^{m}i(E_{i},\cdot)F_{i}]\in\partial E\)?_ Now we construct some special points in \(\partial E\). Firstly, we consider the limit of the sequence \(\{f^{n}\}_{n=1}^{\infty}\) for some \(f\) in \(Mod(S)\). We need two results (see [9]). **Proposition 4.11**.: _Let \(f=T_{\alpha_{1}}^{n_{1}}\circ T_{\alpha_{2}}^{n_{2}}\circ\cdots\circ T_{\alpha_{k}}^{n_{k}}\), where \(\alpha_{1},...,\alpha_{k}\) are pairwise disjoint simple closed curves, \(T_{\alpha_{i}}\) is the Dehn Twist of \(\alpha_{i}\) and \(n_{i}\in Z\)\((i=1,2,...,k)\).
Then for any \(F\in\mathcal{MF}\), we have_ \[\lim_{n\rightarrow\pm\infty}\frac{f^{n}(F)}{|n|}=\sum_{i=1}^{k}|n_{i}|i(\alpha_{i},F)\alpha_{i}.\] **Proposition 4.12**.: _Let \(f\in Mod(S)\) be a pseudo-Anosov element such that \(f(F^{s})=\lambda^{-1}F^{s},\)\(f(F^{u})=\lambda F^{u}\) with \(\lambda>1,\)\(F^{s},F^{u}\in\mathcal{MF}\) and \(i(F^{s},F^{u})=1\). Then for any \(F\in\mathcal{MF}\), we have_ \[\lim_{n\rightarrow\infty}\frac{f^{n}(F)}{\lambda^{n}}=i(F^{s},F)F^{u},\ \lim_{n\rightarrow\infty}\frac{f^{-n}(F)}{\lambda^{n}}=i(F^{u},F)F^{s}.\] From Proposition 4.11 and Proposition 4.12, we have **Proposition 4.13**.: _(1) With the assumption of Proposition 4.11, we have_ \[\lim_{n\rightarrow\pm\infty}[f^{n}(\cdot)]=[\sum_{i=1}^{k}|n_{i}|i(\alpha_{i},\cdot)\alpha_{i}]\in\partial E.\] _(2) With the assumption of Proposition 4.12, we have_ \[\lim_{n\rightarrow\infty}[f^{n}(\cdot)]=[i(F^{s},\cdot)F^{u}]\in\partial E\ \text{and}\ \lim_{n\rightarrow\infty}[f^{-n}(\cdot)]=[i(F^{u},\cdot)F^{s}]\in\partial E.\] It is well known that the action of \(Mod(S)\) on \(\mathcal{PMF}\) is minimal, that is, the orbit of any element of \(\mathcal{PMF}\) under the action of \(Mod(S)\) is dense in \(\mathcal{PMF}\) (see [4]). We extend this result a little: **Lemma 4.14**.: _Let \(Mod^{\prime}(S)\subseteq Mod(S)\) be the set of all mapping classes preserving the punctures of \(S\) pointwise. Then the action of \(Mod^{\prime}(S)\) on \(\mathcal{PMF}\) is minimal, that is, the orbit of any element of \(\mathcal{PMF}\) under the action of \(Mod^{\prime}(S)\) is dense in \(\mathcal{PMF}\)._ Proof.: Since \(\mathcal{S}\) is dense in \(\mathcal{PMF}\), we only need to prove that for any \(\alpha,\beta\in\mathcal{S}\), \(\beta\in Cl(Mod^{\prime}(S)(\alpha))\), where \(Cl(Mod^{\prime}(S)(\alpha))\subseteq\mathcal{PMF}\) is the closure of the orbit of \(\alpha\) under the action of \(Mod^{\prime}(S)\). Take \(\gamma\in\mathcal{S}\) such that \(i(\alpha,\gamma)\neq 0\) and \(i(\beta,\gamma)\neq 0\). Let \(T_{\gamma}\) and \(T_{\beta}\) be the Dehn Twists of \(\gamma\) and \(\beta\), respectively. Note that \(T_{\gamma}\) and \(T_{\beta}\) preserve each puncture of \(S\). Thus \(T_{\gamma},T_{\beta}\in Mod^{\prime}(S)\). By Proposition 4.11, \(\lim_{n\rightarrow\infty}\frac{T_{\gamma}^{n}(\alpha)}{n}=i(\alpha,\gamma)\gamma\), which implies that \(\gamma\in Cl(Mod^{\prime}(S)(\alpha))\). Using Proposition 4.11 again, we have \(\lim_{n\rightarrow\infty}\frac{T_{\beta}^{n}(\gamma)}{n}=i(\gamma,\beta)\beta\), which implies that \(\beta\in Cl(Mod^{\prime}(S)(\gamma))\). Thus we have \(\beta\in Cl(Mod^{\prime}(S)(\alpha))\). From Lemma 4.14, we have **Proposition 4.15**.: _For any \(F,G\in\mathcal{MF}\), \([i(F,\cdot)G]\in\partial E\)._ Proof.: From Lemma 4.14, for a simple closed curve \(\alpha\) in \(S\), there are two sequences \(\{f_{n}\}_{n=1}^{\infty},\{g_{n}\}_{n=1}^{\infty}\) in \(Mod^{\prime}(S)\) such that \(\lim_{n\rightarrow\infty}[f_{n}(\alpha)]=[F]\) and \(\lim_{n\rightarrow\infty}[g_{n}(\alpha)]=[G]\) in \(\mathcal{PMF}\). Set \(f_{\alpha}(\cdot)=i(\alpha,\cdot)\alpha\); then \([f_{\alpha}]\in\partial E\) by Proposition 4.13(1). Then \(\lim_{n\rightarrow\infty}[g_{n}\circ f_{\alpha}\circ f_{n}^{-1}(\cdot)]=\lim_{n\rightarrow\infty}[i(f_{n}(\alpha),\cdot)g_{n}(\alpha)]=[i(F,\cdot)G]\in\partial E\). We extend the result of Proposition 4.15 by operating on subsurfaces. Let \(\gamma_{1},...,\gamma_{p}\) be disjoint essential simple closed curves in \(S\).
After cutting along these curves, we have some connected subsurfaces \(S_{1},...,S_{k}\). Then we have **Proposition 4.16**.: _For any \(a_{i}\geq 0\)\((i=1,2,...,p)\), \(F_{j},G_{j}\) in \(\mathcal{MF}(S_{j})\)\((j=1,2,...,k)\),_ \[[\sum_{i=1}^{p}a_{i}i(\gamma_{i},\cdot)\gamma_{i}+\sum_{j=1}^{k}i(F_{j},\cdot)G_{j}]\in\partial E.\] Proof.: For \(j=1,2,...,k\), take a simple closed curve \(\beta_{j}\) in \(S_{j}\). Using Lemma 4.14 in each subsurface \(S_{j}\) (seen as a punctured surface), we find two sequences \(\{f_{n}\}_{n=1}^{\infty},\{g_{n}\}_{n=1}^{\infty}\) in \(Mod(S)\) such that \(\lim_{n\rightarrow\infty}[f_{n}(\beta_{j})]=[F_{j}]\), \(\lim_{n\rightarrow\infty}[g_{n}(\beta_{j})]=[G_{j}]\) for \(j=1,2,...,k\) and \(f_{n}(\gamma_{i})=\gamma_{i},g_{n}(\gamma_{i})=\gamma_{i}\) for \(i=1,2,...,p\). Thus for \(j=1,2,...,k\), there are two sequences of positive numbers \(\{t_{n}^{j}\}_{n=1}^{\infty}\), \(\{s_{n}^{j}\}_{n=1}^{\infty}\) such that \[\lim_{n\rightarrow\infty}t_{n}^{j}f_{n}(\beta_{j})=F_{j},\ \lim_{n\rightarrow\infty}s_{n}^{j}g_{n}(\beta_{j})=G_{j}\,(j=1,2,...,k).\] From Proposition 4.13(1), the density of the rational numbers in \(R\) and the closedness of \(\partial E\) (Proposition 4.1), we have \[[h_{n}(\cdot)]=[\sum_{i=1}^{p}a_{i}i(\gamma_{i},\cdot)\gamma_{i}+\sum_{j=1}^{k}t_{n}^{j}s_{n}^{j}i(\beta_{j},\cdot)\beta_{j}]\in\partial E\,(n=1,2,...).\] Thus \[[g_{n}\circ h_{n}\circ f_{n}^{-1}(\cdot)]=[\sum_{i=1}^{p}a_{i}i(\gamma_{i},\cdot)\gamma_{i}+\sum_{j=1}^{k}i(t_{n}^{j}f_{n}(\beta_{j}),\cdot)s_{n}^{j}g_{n}(\beta_{j})]\in\partial E\,(n=1,2,...).\] Note that \[\lim_{n\rightarrow\infty}[g_{n}\circ h_{n}\circ f_{n}^{-1}(\cdot)]=[\sum_{i=1}^{p}a_{i}i(\gamma_{i},\cdot)\gamma_{i}+\sum_{j=1}^{k}i(F_{j},\cdot)G_{j}],\] which implies that \([\sum_{i=1}^{p}a_{i}i(\gamma_{i},\cdot)\gamma_{i}+\sum_{j=1}^{k}i(F_{j},\cdot)G_{j}]\in\partial E\). ## 5 Some applications Since \(Mod(S)\) acts continuously on the Thurston compactification \(\mathcal{T}^{Th}(S)=\mathcal{T}(S)\bigcup\mathcal{PMF}\) and the Gardiner-Masur compactification \(\mathcal{T}^{GM}(S)=\mathcal{T}(S)\bigcup GM\) of \(\mathcal{T}(S)\), we have two maps \[\Pi_{Th}:Mod(S)\times\mathcal{T}^{Th}(S)\rightarrow\mathcal{T}^{Th}(S),\,(f,p)\mapsto f(p)\] and \[\Pi_{GM}:Mod(S)\times\mathcal{T}^{GM}(S)\rightarrow\mathcal{T}^{GM}(S),\,(f,p)\mapsto f(p).\] If we endow \(Mod(S)\) with the discrete topology, then \(\Pi_{Th}\) and \(\Pi_{GM}\) are both continuous. Since \(Cl(E)=E\bigcup\partial E\) is a completion of \(Mod(S)\) in some sense, it may be natural to extend the domains of \(\Pi_{Th}\) and \(\Pi_{GM}\) to \(Cl(E)\times\mathcal{T}^{Th}(S)\) and \(Cl(E)\times\mathcal{T}^{GM}(S)\), respectively. For this, we need equivalent models of \(\mathcal{T}^{Th}(S)\) and \(\mathcal{T}^{GM}(S)\). From the definitions of \(\mathcal{T}^{Th}(S)\) and \(\mathcal{T}^{GM}(S)\), points in \(\mathcal{T}^{Th}(S)\) and \(\mathcal{T}^{GM}(S)\) are represented by \([p_{1}:\mathcal{S}\to R_{\geq 0}]\) and \([p_{2}:\mathcal{S}\to R_{\geq 0}]\), respectively, where \([p_{i}]\in PR_{\geq 0}^{\mathcal{S}}\) is the projective class of \(p_{i}\in R_{\geq 0}^{\mathcal{S}}\). Since \(R_{+}\times\mathcal{S}\) is dense in \(\mathcal{MF}\), \(p_{1}\) and \(p_{2}\) extend to homogeneous continuous functions on \(\mathcal{MF}\) (see [1] and [15]). Thus a point in \(\mathcal{T}^{Th}(S)\) or \(\mathcal{T}^{GM}(S)\) can be represented by the projective class of a homogeneous continuous function on \(\mathcal{MF}\).
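For instance, spelling out a special case of this representation: an interior point \(x\in\mathcal{T}(S)\) corresponds to the projective classes of its length and extremal-length functions, \[x\longleftrightarrow[l(x,\cdot)]\in\mathcal{T}^{Th}(S),\qquad x\longleftrightarrow[Ext^{\frac{1}{2}}(x,\cdot)]\in\mathcal{T}^{GM}(S),\] both of which are homogeneous and continuous on \(\mathcal{MF}\); here \(Ext(x,tF)=t^{2}Ext(x,F)\) for \(t>0\), which is why the square root appears in \(\widetilde{\varphi}_{GM}\).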
Using these notations, the actions of \(Mod(S)\) on \(\mathcal{T}^{Th}(S)\) and \(\mathcal{T}^{GM}(S)\) are defined as follows: for any \(f\in Mod(S)\), \(p_{1}=[p_{1}:\mathcal{MF}\to R_{\geq 0}]\in\mathcal{T}^{Th}(S)\) and \(p_{2}=[p_{2}:\mathcal{MF}\to R_{\geq 0}]\in\mathcal{T}^{GM}(S)\), \(f(p_{1})=[p_{1}\circ f^{-1}]\) and \(f(p_{2})=[p_{2}\circ f^{-1}]\). Since the inverse operation \((\cdot)^{-1}\) on \(Mod(S)\) extends to the conjugate operation \(\overline{(\cdot)}\) on \(Cl(E)\), we define the extensions of \(\Pi_{Th}\) and \(\Pi_{GM}\) as **Theorem 5.1**.: _Let \(\Delta_{1}=\{([f],[p])\in Cl(E)\times\mathcal{T}^{Th}(S):p\circ\overline{f}(\cdot)\neq 0\}\) and \(\Delta_{2}=\{([f],[p])\in Cl(E)\times\mathcal{T}^{GM}(S):p\circ\overline{f}(\cdot)\neq 0\}\). The two maps \(\Psi_{Th}:\Delta_{1}\rightarrow\mathcal{T}^{Th}(S)\) and \(\Psi_{GM}:\Delta_{2}\rightarrow\mathcal{T}^{GM}(S)\) defined by \(\Psi_{Th}([f],[p])=[p\circ\overline{f}(\cdot)]\) and \(\Psi_{GM}([f],[p])=[p\circ\overline{f}(\cdot)]\), respectively, are continuous._ Proof.: We only prove the continuity of \(\Psi_{GM}\). The continuity of \(\Psi_{Th}\) can be proved by a similar argument. Suppose \(\{[p_{n}]\}_{n=0}^{\infty}\subseteq\mathcal{T}^{GM}(S),\{[f_{n}]\}_{n=0}^{\infty}\subseteq Cl(E)\) and \(\lim_{n\to\infty}[p_{n}]=[p_{0}]\), \(\lim_{n\to\infty}[f_{n}]=[f_{0}]\). After multiplying by suitable constants, we assume that \(\lim_{n\to\infty}f_{n}=f_{0}\) in \(\Omega(\mathcal{MF})\) and \(p_{n}:\mathcal{MF}\to R_{\geq 0}\) converges uniformly to \(p_{0}:\mathcal{MF}\to R_{\geq 0}\) on any compact subset of \(\mathcal{MF}\). By Proposition 4.4, \(\lim_{n\to\infty}\overline{f_{n}}=\overline{f_{0}}\). Observe that for any \(F\) in \(\mathcal{MF}\), \[|p_{0}\circ\overline{f_{0}}(F)-p_{n}\circ\overline{f_{n}}(F)|\leq|p_{0}\circ\overline{f_{0}}(F)-p_{0}\circ\overline{f_{n}}(F)|+|p_{0}\circ\overline{f_{n}}(F)-p_{n}\circ\overline{f_{n}}(F)|.\] Since \(\lim_{n\to\infty}\overline{f_{n}}(F)=\overline{f_{0}}(F)\), we know that for any \(\epsilon>0\), there exists \(N_{1}>0\) such that for any \(n>N_{1}\), \[|p_{0}\circ\overline{f_{0}}(F)-p_{0}\circ\overline{f_{n}}(F)|<\frac{\epsilon}{2}\] and \(\{\overline{f_{n}}(F)\}_{n=1}^{\infty}\subseteq M\) for some compact subset \(M\) of \(\mathcal{MF}\). Since \(p_{n}(\cdot)\) converges uniformly to \(p_{0}(\cdot)\) on the compact set \(M\), there exists \(N_{2}>0\) such that for any \(n>N_{2}\), \[|p_{0}\circ\overline{f_{n}}(F)-p_{n}\circ\overline{f_{n}}(F)|<\frac{\epsilon}{2}.\] Thus for any \(n>\max\{N_{1},N_{2}\}\), \[|p_{0}\circ\overline{f_{0}}(F)-p_{n}\circ\overline{f_{n}}(F)|<\epsilon,\] which implies that for any \(F\in\mathcal{MF}\), \[\lim_{n\to\infty}p_{n}\circ\overline{f_{n}}(F)=p_{0}\circ\overline{f_{0}}(F).\] By the definition of \(\mathcal{T}^{GM}(S)\), \[\Psi_{GM}([f_{0}],[p_{0}])=[p_{0}\circ\overline{f_{0}}(\cdot)]=\lim_{n\to\infty}[p_{n}\circ\overline{f_{n}}(\cdot)]=\lim_{n\to\infty}\Psi_{GM}([f_{n}],[p_{n}]),\] which completes the proof. **Remark 5.2**.: _(1) For \([f]\in Cl(E)\), \([p_{1}]\in\mathcal{T}^{Th}(S)\) and \([p_{2}]\in\mathcal{T}^{GM}(S)\), it may occur that \(p_{1}\circ\overline{f}(\cdot)=0\) and \(p_{2}\circ\overline{f}(\cdot)=0\), that is, the values of \(p_{1}\), \(p_{2}\) on the image of \(\overline{f}\) are \(0\). In these cases, \(\Psi_{Th}\) and \(\Psi_{GM}\) are degenerate at \(([f],[p_{1}])\) and \(([f],[p_{2}])\), respectively. Thus we restrict the definitions of \(\Psi_{Th}\) and \(\Psi_{GM}\) to \(\Delta_{1}\) and \(\Delta_{2}\), respectively.
(2) For \(f_{0}\in Mod(S)\), \(\Psi_{Th}([f_{0}],\cdot)\) and \(\Psi_{GM}([f_{0}],\cdot)\) are defined on the whole \(\mathcal{T}^{Th}(S)\) and \(\mathcal{T}^{GM}(S)\), respectively, and they are consistent with the actions of \(f_{0}\) on \(\mathcal{T}^{Th}(S)\) and \(\mathcal{T}^{GM}(S)\), respectively. (3) For \(x_{0}\in\mathcal{T}(S)\), \(\Psi_{Th}(\cdot,x_{0})\) and \(\Psi_{GM}(\cdot,x_{0})\) are both defined on the whole \(Cl(E)\)._ By Theorem 5.1, we have **Corollary 5.3**.: _For any \(x\in\mathcal{T}(S)\) and any sequence \(\{f_{n}\}_{n=1}^{\infty}\subseteq Mod(S)\), if \(\lim_{n\to\infty}[f_{n}]=[f_{0}]\) in \(Cl(E)\) for some \([f_{0}]\in\partial E\), then \(\lim_{n\to\infty}f_{n}(x)=[l(x,\overline{f_{0}}(\cdot))]\in\mathcal{PMF}\) in \(\mathcal{T}^{Th}(S)\) and \(\lim_{n\to\infty}f_{n}(x)=[Ext^{\frac{1}{2}}(x,\overline{f_{0}}(\cdot))]\in GM\) in \(\mathcal{T}^{GM}(S)\)._ By Theorem 5.1 and Corollary 5.3, we answer Problem 1.2: for any \(x_{0}\) in \(\mathcal{T}(S)\), considering the orbit \(\Gamma(x_{0})\) of \(x_{0}\) under the action of \(Mod(S)\), how to describe the closure of \(\Gamma(x_{0})\) in \(\mathcal{T}^{Th}(S)\) or \(\mathcal{T}^{GM}(S)\)? For this, we set \[\partial E^{Th}(x_{0})=\{\Psi_{Th}([f],x_{0}):[f]\in\partial E\}\subseteq\mathcal{PMF};\ \partial E^{GM}(x_{0})=\{\Psi_{GM}([f],x_{0}):[f]\in\partial E\}\subseteq GM.\] Then we have **Theorem 5.4**.: _In \(\mathcal{T}^{Th}(S)\), the closure of \(\Gamma(x_{0})\) is \(\Gamma(x_{0})\cup\partial E^{Th}(x_{0})\). In \(\mathcal{T}^{GM}(S)\), the closure of \(\Gamma(x_{0})\) is \(\Gamma(x_{0})\cup\partial E^{GM}(x_{0})\). Moreover, \(\partial E^{Th}(x_{0})=\mathcal{PMF}\)._ Proof.: By Corollary 5.3, \(\partial E^{Th}(x_{0})\) is included in the closure of \(\Gamma(x_{0})\) in \(\mathcal{T}^{Th}(S)\). Conversely, suppose \(p\in\mathcal{T}^{Th}(S)\) is an element of the closure of \(\Gamma(x_{0})\) in \(\mathcal{T}^{Th}(S)\) and \(p\notin\Gamma(x_{0})\). Then there exists a sequence \(\{f_{n}\}_{n=1}^{\infty}\subseteq Mod(S)\) such that \(\lim_{n\to\infty}f_{n}(x_{0})=p\) in \(\mathcal{T}^{Th}(S)\). Since \(Mod(S)\) acts properly discontinuously on \(\mathcal{T}(S)\), we know that \(p\in\mathcal{PMF}\). By Theorem 3.1 and Proposition 4.1, there exists a subsequence \(\{f_{n_{k}}\}_{k=1}^{\infty}\) such that \(\lim_{k\to\infty}[f_{n_{k}}]=[f_{0}]\) in \(Cl(E)\) for some \([f_{0}]\in\partial E\). By Corollary 5.3, we have \[p=\lim_{k\to\infty}f_{n_{k}}(x_{0})=\Psi_{Th}([f_{0}],x_{0})\in\partial E^{Th}(x_{0}).\] Thus the closure of \(\Gamma(x_{0})\) in \(\mathcal{T}^{Th}(S)\) is \(\Gamma(x_{0})\cup\partial E^{Th}(x_{0})\). Using a similar argument, we know that the closure of \(\Gamma(x_{0})\) in \(\mathcal{T}^{GM}(S)\) is \(\Gamma(x_{0})\cup\partial E^{GM}(x_{0})\). Now we prove \(\partial E^{Th}(x_{0})=\mathcal{PMF}\). From Proposition 4.13(1), we know that for any simple closed curve \(\alpha\) in \(S\), \([i(\alpha,\cdot)\alpha]\in\partial E\). Thus \([i(\alpha,\cdot)]=[l(x_{0},i(\alpha,\cdot)\alpha)]\in\partial E^{Th}(x_{0})\). Since the set of simple closed curves is dense in \(\mathcal{PMF}\), we have \(\partial E^{Th}(x_{0})=\mathcal{PMF}\). **Remark 5.5**.: _It is well-known that the action of \(Mod(S)\) on \(\mathcal{PMF}\) is minimal (see [4]). This fact also implies \(\partial E^{Th}(x_{0})=\mathcal{PMF}\): since \(Mod(S)\) acts properly discontinuously on \(\mathcal{T}(S)\), \(\partial E^{Th}(x_{0})\cap\mathcal{PMF}\neq\emptyset\).
By the minimal action of \(Mod(S)\) on \(\mathcal{PMF}\), this implies \(\partial E^{Th}(x_{0})=\mathcal{PMF}\)._ **Remark 5.6**.: _The new boundary \(\partial E\) is related to a special boundary of \(Mod(S)\). Precisely, fixed a base point \(x\in\mathcal{T}(S)\), sending \(f\in Mod(S)\) to \(f(x)\in\Gamma(x)\subseteq\mathcal{T}(S)\), we can identify \(Mod(S)\) with the orbit \(\Gamma(x)\) naturally. By Theorem 5.4, the boundary of \(\Gamma(x)\) in \(\mathcal{T}^{Th}(S)\) is \(\partial E^{Th}(x)=\mathcal{PMF}\). Thus \(\partial E^{Th}(x)=\mathcal{PMF}\) can be seen as a boundary of \(Mod(S)\). As a boundary of \(Mod(S)\), \(\partial E^{Th}(x)\) is homeomorphic to \(\mathcal{PMF}\) but depends upon the base point \(x\) heavily. Thus we get a family of boundaries \(\{\partial E^{Th}(x):x\in\mathcal{T}(S)\}\) of \(Mod(S)\) in which each boundary is isomorphic to \(\mathcal{PMF}\). We may call each boundary \(\partial E^{Th}(x)\) the Thurston boundary with base point \(x\). By Theorem 5.1, the new boundary \(\partial E\) covers each boundary \(\partial E^{Th}(x)\) in this family by a surjective continuous map \(\Psi_{x}:\partial E\to\partial E^{Th}(x),\;[f]\mapsto\Psi_{Th}([f],x)\)._ **Remark 5.7**.: _Different from the case of Thurston compactification, \(\partial E^{GM}(x_{0})\) may be not the whole boundary \(GM\). From the compactness of \(\partial E\), we only know that \(\partial E^{GM}(x_{0})\) is a compact subset of \(GM\). And \(\partial E^{GM}(x_{0})\) contains some new points different from those known points in \(GM\). A special kind of boundary point was constructed in [2]:_ \[[Ext^{\frac{1}{2}}\big{(}x_{0},\sum_{i=1}^{k}n_{i}i(\alpha_{i},\cdot)\alpha_{i }\big{)}],\] _where \(\alpha_{i}\) are pairwise disjoint simple closed curves and \(n_{i}>0\). By Proposition 4.13(1), \(\partial E^{GM}(x_{0})\) contains these points._ Let \(\mathcal{T}_{\epsilon}(S)=\{x\in\mathcal{T}(S):\underline{l}(x)>\epsilon\}\) be the \(\epsilon-\)Thick part of \(\mathcal{T}(S)\), where \(\underline{l}(x)=\min_{\alpha\in S}l(x,\alpha)\). The following result characterizes the points of \(\partial E^{GM}(x_{0})\). **Theorem 5.8**.: _For any \(p\in GM\), \(p\in\partial E^{GM}(x_{0})\) for some \(x_{0}\in\mathcal{T}(S)\) if and only if there exists a sequence \(\{p_{n}\}_{n=1}^{\infty}\subseteq\mathcal{T}_{\epsilon}(S)\) for some \(\epsilon>0\) such that \(\lim_{n\to\infty}p_{n}=p\)._ Proof.: Suppose that \(p\in\partial E^{GM}(x_{0})\) for some \(x_{0}\in\mathcal{T}(S)\). Then \(p=\lim_{n\to\infty}f_{n}(x_{0})\) for some sequence \(\{f_{n}\}_{n=1}^{\infty}\subseteq Mod(S)\). Note that \(\underline{l}\big{(}f_{n}(x_{0})\big{)}\equiv\underline{l}(x_{0})\). Set \(\epsilon=\frac{1}{2}\underline{l}(x_{0})\). Then \(f_{n}(x_{0})\in\mathcal{T}_{\epsilon}(S)\). Suppose that \(\{p_{n}\}_{n=1}^{\infty}\subseteq\mathcal{T}_{\epsilon}(S)\) for some \(\epsilon>0\) and \(\lim_{n\to\infty}p_{n}=p\). Then from the Mumford's compactness criterion, we know that after projecting to the moduli space \(\mathcal{M}(S)=\mathcal{T}(S)/Mod(S)\), the sequence \(\{p_{n}\}_{n=1}^{\infty}\) lies in a precompact set \(\mathcal{M}_{\epsilon}(S)=\mathcal{T}_{\epsilon}(S)/Mod(S)\), which is the \(\epsilon\)-thick part of \(\mathcal{M}(S)\). Thus passing to a subsequence, we assume that there exists a sequence \(\{f_{n}\}_{n=1}^{\infty}\) in \(Mod(S)\) and \(x_{0}\) in \(\mathcal{T}(S)\) such that \[\lim_{n\to\infty}f_{n}(p_{n})=x_{0}.\] Set \(x_{n}=f_{n}(p_{n})\). Then \(p_{n}=f_{n}^{-1}(x_{n})\) and \(\lim_{n\to\infty}x_{n}=x_{0}\). 
By the compactness of \(Cl(E)\), passing to a subsequence again, we assume that \([f_{n}^{-1}]\) converges to some \([f]\) in \(Cl(E)\). Thus by Theorem 5.1, \[p=\lim_{n\to\infty}p_{n}=\lim_{n\to\infty}f_{n}^{-1}(x_{n})=\lim_{n\to\infty} \Psi_{GM}([f_{n}^{-1}],x_{n})=\Psi_{GM}([f],x_{0})=[Ext^{\frac{1}{2}}(x_{0}, \overline{f}(\cdot))].\] Since \([Ext^{\frac{1}{2}}(x_{0},\overline{f}(\cdot))]=p\in GM\), we have \([f]\in\partial E\). So \(p\in\partial E^{GM}(x_{0})\).
2306.13014
The binomial random graph is a bad inducer
For a finite graph $F$ and a value $p \in [0,1]$, let $I(F,p)$ denote the largest $y$ for which there is a sequence of graphs of edge density approaching $p$ so that the induced $F$-density of the sequence approaches $y$. We show that for all $F$ on at least three vertices and all $p \in (0,1)$, the binomial random graph $G(n,p)$ has induced $F$-density strictly less than $I(F,p).$ This provides a negative answer to a problem posed by Liu, Mubayi and Reiher. Our approach is in the limiting setting of graphons, and we in fact show a stronger result: the binomial random graph is never a \emph{local} maximum in the space of graphons of edge density $p$. This is done by finding a sequence of balanced perturbations of arbitrarily small norm that increase the $F$-density.
Vishesh Jain, Marcus Michelen, Fan Wei
2023-06-22T16:23:21Z
http://arxiv.org/abs/2306.13014v2
# The binomial random graph is a bad inducer ###### Abstract. For a finite graph \(F\) and a value \(p\in[0,1]\), let \(I(F,p)\) denote the largest \(y\) for which there is a sequence of graphs of edge density approaching \(p\) so that the induced \(F\)-density of the sequence approaches \(y\). In this short note, we show that for all \(F\) on at least three vertices and \(p\in(0,1)\), the binomial random graph \(G(n,p)\) has induced \(F\)-density strictly less than \(I(F,p).\) This provides a negative answer to a problem posed by Liu, Mubayi and Reiher. ## 1. Introduction For a finite labeled graph \(G\) with vertex set \(V(G)\) and edge set \(E(G)\), recall that the edge density of \(G\) is given by \(\rho(G)=|E(G)|/\binom{|V(G)|}{2}.\) Given another finite labeled graph \(F\), let \[N(F,G):=|\{\varphi:V(F)\hookrightarrow V(G):(a,b)\in E(F)\iff(\varphi(a),\varphi(b))\in E(G)\}|\] be the number of induced copies of \(F\) in \(G\) and define the induced \(F\)-density of \(G\) to be \[\rho(F,G):=\frac{N(F,G)}{(|V(G)|)_{|V(F)|}}\] where we write \((x)_{k}:=x(x-1)\cdots(x-(k-1))\) for the falling factorial. Finally, define the maximum induced \(F\)-density at edge density \(p\in[0,1]\) via \[I(F,p):=\sup\left\{y:\exists\ \{G_{n}\}_{n\geqslant 1},\lim_{n\to\infty}|V(G_{n})|=\infty,\lim_{n\to\infty}\rho(G_{n})=p,\lim_{n\to\infty}\rho(F,G_{n})=y\right\}\,.\] Informally, \(I(F,p)\) is the largest induced \(F\)-density among large graphs of edge density approaching \(p\). The maximum value of \(I(F,p)\) over \(p\in[0,1]\) is exactly the inducibility of \(F\), introduced by Pippenger and Golumbic [4]. Linearity of expectation shows that the expected induced \(F\)-density in the binomial random graph \(G(n,p)\) is precisely \[\operatorname{rand}(F,p):=p^{|E(F)|}(1-p)^{\binom{|V(F)|}{2}-|E(F)|}\,.\] By basic concentration estimates, if we set \(G_{n}\) to be an instance of \(G(n,p)\) for each \(n\), then we almost surely have \(\rho(G_{n})\to p\) and \(\rho(F,G_{n})\to\operatorname{rand}(F,p)\). As such, we always have \(\operatorname{rand}(F,p)\leqslant I(F,p).\) Even-Zohar and Linial [1] suggested exploring the performance of random constructions in maximizing the inducibility of \(F\). In particular, they left open whether for \(F\) given by the disjoint union of a path of length \(3\) and an isolated vertex, the inducibility is achieved (in the limit) by \(G(n,3/10)\). Perhaps suggesting that binomial random graphs can be optimal inducers in some examples, Liu, Mubayi and Reiher asked "an easier question" [2, Problem 1.6] whether there is a graph \(F\) and \(p\in(0,1)\) so that \(I(F,p)=\operatorname{rand}(F,p)\)1. In this short note we provide a negative answer to this question. Footnote 1: We note that Liu, Mubayi and Reiher work with unlabeled graphs rather than labeled graphs, but this only changes the quantities \(N(F,G)\) by a factor depending only on \(F\). **Theorem 1.1**.: _For each finite labeled graph \(F\) with \(|V(F)|\geqslant 3\) and for all \(p\in(0,1)\),_ \[I(F,p)>\operatorname{rand}(F,p).\] We observe that if \(|V(F)|\leqslant 2\) then for all \(G\), \(\rho(F,G)\) is a function solely of the edge density \(\rho(G)\). Our approach is to work in the limiting setting of _graphons_ rather than sequences of finite graphs. 
Once we define the appropriate notions such as edge density and induced subgraph density for graphons, we algorithmically construct a perturbation to the graphon corresponding to \(G(n,p)\) such that the perturbed graphon achieves a higher induced \(F\)-density than \(G(n,p)\). Perhaps surprisingly, our perturbation (Proposition 2.2) completely ignores all information about \(F\) except for \(|V(F)|\) and the parity of \(|E(F)|\); in particular, this shows that one can "beat" \(G(n,p)\) for counts of induced subgraphs for all graphs on a fixed number of vertices and parity of number of edges simultaneously. Consider a finite labeled graph \(F\). Identify its vertex set \(V(F)\) with \([m]\) and write \(E\) for its edge set. Let \(\overline{E}:=\binom{[m]}{2}\setminus E\) be the set of non-edges of \(F\). Recall that a _graphon_ is a symmetric measurable function \(W:[0,1]^{2}\to[0,1]\). For a graphon \(W\), the edge density is given by \[\rho(W)=\int_{[0,1]^{2}}W(x_{1},x_{2})\,dx_{1}\,dx_{2}\] and the induced density of \(F\) in \(W\) is given by \[\rho_{F}(W)=\int_{[0,1]^{m}}\prod_{e\in E}W(x_{e_{1}},x_{e_{2}})\prod_{f\in\overline{E}}\left(1-W(x_{f_{1}},x_{f_{2}})\right)dx_{1}dx_{2}\cdots dx_{m}\,.\] We note that the constant graphon \(W_{p}\equiv p\) is the limit of the random graphs \(G(n,p)\) and that \(\rho_{F}(W_{p})=\operatorname{rand}(F,p)\). It follows from standard considerations (e.g. [3, Lemma 2.4]) that \(I(F,p)\) can be recast as an optimization problem over graphons. **Fact 1.2**.: _For every finite labeled graph \(F\) and \(p\in[0,1]\) we have_ \[I(F,p)=\sup_{W}\{\rho_{F}(W):\rho(W)=p\}\,.\] Our approach is to construct a small perturbation of \(W_{p}\equiv p\) of edge density zero that bumps up the induced density of \(F\). More precisely, in the next section, we will establish the following proposition, which immediately implies Theorem 1.1 by Fact 1.2. **Proposition 1.3**.: _For every finite labeled graph \(F\) with \(|V(F)|\geqslant 3\) and \(p\in(0,1)\), there exists a symmetric measurable function \(\Delta=\Delta(F,p):[0,1]^{2}\to\mathbb{R}\) such that:_ * \(W_{p}+\Delta:[0,1]^{2}\to[0,1]\)_,_ * \(\rho(W_{p}+\Delta)=\rho(W_{p})=p\)_, and_ * \(\rho_{F}(W_{p}+\Delta)>\rho_{F}(W_{p})=\operatorname{rand}(F,p)\)_._ In contrast to Theorem 1.1, a flag algebra computation by Even-Zohar and Linial [1, Table 2] strongly suggests that there is a graph on five vertices for which the maximum inducibility is achieved by the random bipartite graph \(G(n,n,5/6)\). Curiously, there seems to be only one graph on five vertices for which the maximum inducibility is likely achieved by a random bipartite graph. An interesting problem is to understand for which graphs this phenomenon can occur. **Problem 1.4**.: _For which graphs \(F\) and densities \(p\in(0,1)\) is \(I(F,p)\) achieved at random bipartite graphs?_ One potential direction to a partial solution to Problem 1.4 would be to eliminate some family of graphs \(F\) by a perturbative approach similar to the proof of Theorem 1.1. ## 2. Proof of Proposition 1.3 Let \(\Delta:[0,1]^{2}\to\mathbb{R}\) be a symmetric, measurable function. For a finite labeled graph \(H\), define \[\Phi_{H}(\Delta)=\int_{[0,1]^{|V_{H}|}}\prod_{e\in E(H)}\Delta(x_{e_{1}},x_{e_{2}})\,d\mathbf{x}\,.\] Note that if \(H_{0}\) and \(H_{1}\) are isomorphic, then \(\Phi_{H_{0}}(\Delta)=\Phi_{H_{1}}(\Delta)\) for all \(\Delta\). In the case when \(\Delta\) is a graphon, the function \(\Phi_{H}\) counts the density of (not necessarily induced) copies of \(H\) in \(\Delta\). 
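For a step function \(\Delta\) (i.e., \(\Delta\) constant on the cells of an equipartition of \([0,1]^{2}\)), the integral defining \(\Phi_{H}\) collapses to a finite sum over vertex assignments, so the quantity is easy to evaluate numerically. The following minimal Python sketch is purely illustrative and not part of the paper; the helper name `phi_H` and the brute-force enumeration are our own choices.

```python
import itertools
import numpy as np

def phi_H(edges, num_vertices, W):
    """Phi_H(Delta) for a step function Delta: cell (a, b) of the k x k
    equipartition of [0,1]^2 carries the constant value W[a, b]."""
    k = W.shape[0]
    total = 0.0
    # Sum over all maps from V(H) = {0, ..., num_vertices - 1} into the k cells.
    for cells in itertools.product(range(k), repeat=num_vertices):
        prod = 1.0
        for a, b in edges:
            prod *= W[cells[a], cells[b]]
        total += prod
    return total / k ** num_vertices  # each cell of [0,1] has measure 1/k

# Sanity check: for the constant graphon Delta = p, Phi_H equals p^{|E(H)|}.
triangle = [(0, 1), (1, 2), (0, 2)]
print(phi_H(triangle, 3, np.full((2, 2), 0.3)))  # 0.3**3 = 0.027
```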
Throughout this section, we fix a finite labeled graph \(F\). We identify its vertex set \(V(F)\) with \([m]\) and write \(E\) for the set of its edges and \(\overline{E}\) for the set of its non-edges. **Lemma 2.1**.: _For a symmetric function \(\Delta:[0,1]^{2}\to\mathbb{R}\), we have the expansion_ \[\rho_{F}(W_{p}+\Delta)=\operatorname{rand}(F,p)+\sum_{H}c_{H,F}(p)\Phi_{H}(\Delta),\] _where the sum is over non-empty labeled subgraphs of the complete graph \(K_{m}\) and \(c_{H,F}(p)\) are polynomials in \(p\) depending only on \(H\) and \(F\). Further, \(c_{K_{m},F}=(-1)^{|\overline{E}|}\)._ Proof.: This follows from writing \[\rho_{F}(W_{p}+\Delta)=\int_{[0,1]^{m}}\prod_{e\in E}\left(p+\Delta(x_{e_{1}},x_{e_{2}})\right)\prod_{f\in\overline{E}}\left(1-p-\Delta(x_{f_{1}},x_{f_{2}})\right)\,d{\bf x}\] and expanding the products. The key component in the proof of Proposition 1.3 is the following. **Proposition 2.2**.: _For each \(m\in\mathbb{N}\) and \(t\in\mathbb{R}\), there is a bounded symmetric measurable function \(\Delta:[0,1]^{2}\to\mathbb{R}\) such that \(\Phi_{K_{m}}(\Delta)=t\) and for all \(\varnothing\subsetneq H\subsetneq K_{m}\) we have \(\Phi_{H}(\Delta)=0\)._ Proof of Proposition 1.3 given Proposition 2.2.: Let \(\Delta_{0}\) be the bounded symmetric function guaranteed by Proposition 2.2 with \(t=(-1)^{|\overline{E}|}\). Since \(p\in(0,1)\) and \(\Delta_{0}\) is bounded, there exists \(R>0\) large enough so that \(|\Delta_{0}|/R\leqslant\min\{p,1-p\}/2\). Let \(\Delta=\Delta_{0}/R\). Then we note that \(W:=W_{p}+\Delta\) is a graphon with \[\rho(W)=\rho_{K_{2}}(W)=\rho_{K_{2}}(W_{p})+\Phi_{K_{2}}(\Delta)=\rho_{K_{2}}(W_{p})=p\] and by Lemma 2.1, \[\rho_{F}(W)=\operatorname{rand}(F,p)+(-1)^{|\overline{E}|}\cdot t\cdot R^{-\binom{m}{2}}=\operatorname{rand}(F,p)+R^{-\binom{m}{2}}\,.\qed\] It remains to prove Proposition 2.2. Note that if \(H\) is the union of vertex disjoint graphs \(H_{1}\) and \(H_{2}\), then \(\Phi_{H}(\Delta)=\Phi_{H_{1}}(\Delta)\Phi_{H_{2}}(\Delta)\). Therefore, it suffices to prove Proposition 2.2 with the second condition holding for all _connected_ \(\varnothing\subsetneq H\subsetneq K_{m}\). Let \(\mathcal{G}\) be the set of connected subgraphs of \(K_{m}\) up to isomorphism; let \(N=|\mathcal{G}|\) and \(M=\sum_{H\in\mathcal{G}}|V_{H}|\). Linearly order the elements of \(\mathcal{G}\) as \(H_{1},\dots,H_{N}\) where if we have \(H_{i}\subset H_{j}\) then \(i\geqslant j\); as such, we have that \(H_{1}=K_{m}\). For any vector \({\bf a}\in\mathbb{R}^{N}\), we will construct a weighted labeled graph \(G_{\bf a}\) on \(M\) vertices, which will be the disjoint union of weighted copies of \((H_{i})_{i\in[N]}\), as follows: for each graph \(H_{i}\), we fix one edge arbitrarily and assign it weight \(a_{i}\), and subsequently assign all other edges in \(H_{i}\) to have weight \(1\). The graph \(G_{\bf a}\) is the disjoint union of these weighted graphs. Let \(\Delta_{\bf a}:[0,1]^{2}\to\mathbb{R}\) be the symmetric function associated to the weighted graph \(G_{\bf a}\), where given a finite labeled weighted graph \(H\), we associate a symmetric function \(\Delta_{H}:[0,1]^{2}\to\mathbb{R}\) to it as follows: break \([0,1]\) into \(|V_{H}|\) intervals \(I_{1},\dots,I_{|V_{H}|}\) of equal length and in the box \(I_{a}\times I_{b}\) put the value equal to the weight of the edge \(\{a,b\}\) in \(H\). Set \({\bf e}_{j}\) to be the unit vector in \(\mathbb{R}^{N}\) that has a \(1\) in coordinate \(j\) and \(0\) in all other coordinates. 
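To make the construction of \(G_{\bf a}\) and \(\Delta_{\bf a}\) concrete, the continuation below (again our own illustration, reusing the `phi_H` helper above; the chosen ordering and edge lists are ours) treats the case \(m=3\), where the connected subgraphs of \(K_{3}\) up to isomorphism are the triangle \(H_{1}=K_{3}\), the path \(H_{2}\) on three vertices and the single edge \(H_{3}\); the printed values preview the linearity in \(a\) recorded in Fact 2.3 below.

```python
def delta_a_matrix(a):
    """Block-value matrix of the step function Delta_a for m = 3: the three
    components occupy disjoint vertex blocks, the first listed edge of H_i
    gets weight a[i], and every other edge of H_i gets weight 1."""
    comps = [([(0, 1), (1, 2), (0, 2)], 3),  # H_1 = K_3
             ([(0, 1), (1, 2)], 3),          # H_2 = path on 3 vertices
             ([(0, 1)], 2)]                  # H_3 = single edge
    W = np.zeros((8, 8))  # M = 3 + 3 + 2 = 8 vertices in total
    offset = 0
    for i, (edges, n) in enumerate(comps):
        for j, (u, v) in enumerate(edges):
            w = a[i] if j == 0 else 1.0
            W[offset + u, offset + v] = W[offset + v, offset + u] = w
        offset += n
    return W

tri = [(0, 1), (1, 2), (0, 2)]
for a1 in (1.0, 2.0):  # Phi_{K_3}(Delta_{a1 e_1}) is proportional to a1
    print(phi_H(tri, 3, delta_a_matrix([a1, 0.0, 0.0])))
```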
We first note a fact that follows immediately by definition. **Fact 2.3**.: _For each \(j\in[N]\) there is a constant \(C_{j}>0\) so that for all \(a\in\mathbb{R}\) we have \(\Phi_{H_{j}}(\Delta_{a{\bf e}_{j}})=C_{j}a\,\)._ The next fact follows from the assumption that if \(i<j\) then \(H_{i}\nsubseteq H_{j}\). **Fact 2.4**.: _Let \({\bf a},\widehat{\bf a}\in\mathbb{R}^{N}\) differ only in coordinate \(j\). Then for all \(i<j\) we have \(\Phi_{H_{i}}(\Delta_{\bf a})=\Phi_{H_{i}}(\Delta_{\widehat{\bf a}})\,\)._ We are now ready to prove Proposition 2.2. Proof of Proposition 2.2.: We will iteratively construct a sequence of vectors \({\bf a}^{(1)},\dots,{\bf a}^{(N)}\) with the following properties: * \({\bf a}^{(1)}=s_{1}{\bf e}_{1}\), \({\bf a}^{(j+1)}={\bf a}^{(j)}+s_{j+1}{\bf e}_{j+1}\) for \(s_{1},\dots,s_{N}\in\mathbb{R}\), * \(\Phi_{H_{1}}(\Delta_{{\bf a}^{(j)}})=t\) for all \(j\in[N]\), and * \(\Phi_{H_{i}}(\Delta_{{\bf a}^{(j)}})=0\) for all \(j\in[N]\) and \(1<i\leqslant j\). Notice that for any such sequence, \(\Delta_{{\bf a}^{(N)}}\) satisfies the conclusion of Proposition 2.2. _Initialization_. By Fact 2.3, we may choose \(s_{1}\in\mathbb{R}\) so that for \(\mathbf{a}^{(1)}=s_{1}\mathbf{e}_{1}\) we have \(\Phi_{H_{1}}(\Delta_{\mathbf{a}^{(1)}})=t\). This is the only constraint required at this stage. _Iteration_. Suppose that for some \(j\geqslant 1\), we have constructed a sequence \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(j)}\) satisfying the above properties. We set \(\mathbf{a}^{(j+1)}=\mathbf{a}^{(j)}+s_{j+1}\mathbf{e}_{j+1}\), for \(s_{j+1}\in\mathbb{R}\) to be chosen momentarily. By construction, the first property above is satisfied. Moreover, by Fact 2.4, for all \(i\leqslant j\), we have that for any choice of \(s_{j+1}\in\mathbb{R}\), \[\Phi_{H_{i}}(\Delta_{\mathbf{a}^{(j+1)}})=\Phi_{H_{i}}(\Delta_{\mathbf{a}^{(j)}}).\] Therefore, it suffices to choose \(s_{j+1}\in\mathbb{R}\) such that \[\Phi_{H_{j+1}}(\Delta_{\mathbf{a}^{(j+1)}})=0.\] Since \(H_{j+1}\) is connected and \(G_{\mathbf{a}^{(j)}}\) is vertex disjoint from \(G_{\mathbf{e}_{j+1}}\), it follows that for any set \(E_{j+1}\) satisfying \(\varnothing\subsetneq E_{j+1}\subsetneq E(H_{j+1})\) we have \[\int_{[0,1]^{|V_{H_{j+1}}|}}\prod_{e\in E_{j+1}}\Delta_{\mathbf{a}^{(j)}}(x_{e_{1}},x_{e_{2}})\prod_{f\in E(H_{j+1})\setminus E_{j+1}}\Delta_{\mathbf{e}_{j+1}}(x_{f_{1}},x_{f_{2}})d\mathbf{x}=0,\] which implies that \[\Phi_{H_{j+1}}(\Delta_{\mathbf{a}^{(j+1)}}) =\Phi_{H_{j+1}}(\Delta_{\mathbf{a}^{(j)}})+\Phi_{H_{j+1}}(\Delta_{s_{j+1}\mathbf{e}_{j+1}})\] \[=\Phi_{H_{j+1}}(\Delta_{\mathbf{a}^{(j)}})+s_{j+1}C_{j+1},\] where the last equality uses Fact 2.3. Setting \(s_{j+1}=-\Phi_{H_{j+1}}(\Delta_{\mathbf{a}^{(j)}})/C_{j+1}\in\mathbb{R}\) completes the iterative step. ## Acknowledgments The authors thank Emily Cairncross and Dhruv Mubayi for introducing this problem to us. M.M. is supported in part by NSF grant DMS-2137623.
2306.04568
The CYGNO experiment, a directional detector for direct Dark Matter searches
The CYGNO project aims at the development of a high precision optical readout gaseous Time Projection Chamber (TPC) for directional dark matter (DM) searches, to be hosted at Laboratori Nazionali del Gran Sasso (LNGS). CYGNO employs a He:CF$_4$ gas mixture at atmospheric pressure with a Gas Electron Multiplier (GEM) based amplification structure coupled to an optical readout comprised of sCMOS cameras and photomultiplier tubes (PMTs). This experimental setup allows 3D tracking and background rejection down to O(1) keV energy, boosting sensitivity to low WIMP masses. The characteristics of the optical readout approach in terms of light yield will be illustrated, along with the particle identification properties. The project timeline foresees, in the next 2-3 years, the realisation and installation of a 0.4 m$^3$ TPC in the underground laboratories at LNGS to act as a demonstrator. Finally, studies of the expected DM sensitivities of the CYGNO demonstrator will be presented.
F. D. Amaro, E. Baracchini, L. Benussi, S. Bianco, C. Capoccia, M. Caponero, D. S. Cardoso, G. Cavoto, A. Cortez, I. A. Costa, E. Dané, G. Dho, F. Di Giambattista, E. Di Marco, G. D'Imperio, F. Iacoangeli, H. P. L. Jùnior, G. S. P. Lopes, G. Maccarrone, R. D. P. Mano, R. R. M. Gregorio, D. J. G. Marques, G. Mazzitelli, A. G. McLean, A. Messina, C. M. B. Monteiro, R. A. Nobrega, I. F. Pains, E. Paoletti, L. Passamonti, S. Pelosi, F. Petrucci, S. Piacentini, D. Piccolo, D. Pierluigi, D. Pinci, A. Prajapati, F. Renga, R. J. C. Roque, F. Rosatelli, A. Russo, J. M. F. dos Santos, G. Saviano, N. J. C. Spooner, R. Tesauro, S. Tomassini, S. Torelli
2023-06-07T16:16:54Z
http://arxiv.org/abs/2306.04568v1
# The CYGNO experiment, a directional detector for direct Dark Matter searches ###### Abstract The CYGNO project aims at the development of a high precision optical readout gaseous Time Projection Chamber (TPC) for directional dark matter (DM) searches, to be hosted at Laboratori Nazionali del Gran Sasso (LNGS). CYGNO employs a He:CF\({}_{4}\) gas mixture at atmospheric pressure with a Gas Electron Multiplier (GEM) based amplification structure coupled to an optical readout comprised of sCMOS cameras and photomultiplier tubes (PMTs). This experimental setup allows 3D tracking and background rejection down to O(1) keV energy, boosting sensitivity to low WIMP masses. The characteristics of the optical readout approach in terms of light yield will be illustrated, along with the particle identification properties. The project timeline foresees, in the next 2-3 years, the realisation and installation of a 0.4 m\({}^{3}\) TPC in the underground laboratories at LNGS to act as a demonstrator. Finally, studies of the expected DM sensitivities of the CYGNO demonstrator will be presented. keywords: dark matter, time projection chamber, optical readout ## 1 Directional Dark Matter Search For decades, dark matter (DM) has been considered a well-established component of our Universe, even though its nature remains elusive and unknown. The leading theory predicts the existence of at least one new particle not included in the Standard Model of particle physics. Among the various candidates, Weakly Interacting Massive Particles (WIMPs) stand out, as they are predicted by models from both cosmology and particle physics. Our Galaxy is believed to reside within a DM halo made of these hypothetical neutral massive particles, which would interact only weakly with standard matter [1]. In this hypothesis, nuclear recoils of a few keV can be induced by DM elastic scattering and detected by experiments on Earth. The motion of the Sun around the centre of the Galaxy produces an apparent wind of DM particles coming from the Cygnus constellation in the laboratory rest frame. This wind imprints a directional dependence in the recoil angular distribution that no background can mimic [2]. The angular distribution will be highly dipolar, a feature which can be utilised to positively identify DM, to constrain DM halo characteristics, and to strongly reduce the impact of the well-known neutrino fog on the discovery potential of direct DM experiments [3]. ## 2 The CYGNO detector concept The CYGNO experiment aims at building a large volume gaseous Time Projection Chamber (TPC) in a back-to-back configuration with 50 cm of drift per side, filled with a He:CF\({}_{4}\) 60:40 gas mixture operated at atmospheric pressure and room temperature in the Laboratori Nazionali del Gran Sasso (LNGS) [4]. The charge freed by any ionising radiation inside the sensitive volume will be drifted towards the amplification stage, which consists of a triple Gas Electron Multiplier (GEM) structure. Here, the charge will be multiplied and, thanks to the properties of CF\({}_{4}\), light will also be produced. The readout will be optical by means of two different light detectors: PMTs and sCMOS cameras by Hamamatsu. 
The PMT is a fast-response detector and allows information on the impinging radiation to be obtained, such as the energy, through the number of photons collected, and the length of the track along the drift direction (henceforth z), thanks to the time spread of the signal. On the other hand, a sCMOS camera is a highly granular sensor with single photon sensitivity which will image the GEM plane, capturing the 2D projection of the track of the original radiation, in addition to counting the photons for the energy evaluation. By linking the information coming from the two detectors, it will be possible to achieve a three-dimensional reconstruction of the tracks with a precise measurement of the energy. ## 3 Experimental results from prototypes One of the CYGNO prototypes is LEMOn (sketch in Fig. 1), which features a 20 \(\times\) 24 cm\({}^{2}\) readout area with 20 cm of drift, equipped with a triple 50 \(\mu\)m thick GEM amplification stage. Using an \({}^{55}\)Fe source emitting 5.9 keV photons, it was possible to evaluate the detector light yield, which resulted in roughly 650 photons per keV with an average energy resolution of 15% over all drift distances. Such a large light yield, combined with the characteristics of the camera, allows a 1 keV energy threshold to be inferred. In the context of DM searches, it is of high relevance to discriminate signal nuclear recoils from background electron recoils. LEMOn was exposed to a \({}^{55}\)Fe source, which induces electron recoils, and to a \({}^{241}\)AmBe neutron source, which causes nuclear recoils of a few keV. A preliminary study, exploiting only the photon density along the track granted by the granularity of the readout, achieved an efficiency of 18% on nuclear recoils while suppressing 96% of the background at 6 keV. More thorough studies are ongoing with the help of simulations and neural network techniques to improve the rejection power [4]. ## 4 Future of CYGNO In February 2022, the LIME prototype, a 50 l mono-chamber detector with 50 cm drift length equipped with a triple 50 \(\mu\)m thick GEM stack, was installed underground at LNGS to be tested in a low-background environment, realistic for rare event searches. The goal is to validate the Monte Carlo simulations of the expected background with a staged shield configuration of copper (max 10 cm) and water (max 40 cm). In the near future, from 2023 to 2026, CYGNO-04 will be installed underground at LNGS. This detector will comprise a back-to-back configuration with 1 m of drift length split into two 50 cm drift chambers, each with a 50 \(\times\) 80 cm\({}^{2}\) readout area. The technical design report has been submitted to LNGS, and the detector will be hosted in Hall F. The aim of CYGNO-04 is to prove the scalability of the experimental technique and the capability of enhancing the radiopurity of the materials employed for the construction. Finally, a CYGNO-30 detector, with a back-to-back chamber of 1 m of total drift length and an overall sensitive volume of 30 m\({}^{3}\), would be able to contribute significantly to DM searches. Fig. 2 shows the expected limit on the Spin Independent WIMP-nucleon cross section as a function of the DM mass for a CYGNO-30-like detector with a 1 keV\({}_{\rm ee}\) energy threshold and different background assumptions, from 100 up to 10\({}^{4}\) events per year. The experiment would be competitive with the lowest limits of current experiments below 10 GeV/c\({}^{2}\) WIMP masses, but with the unique advantage of being a directional detector. 
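As a rough, hedged illustration of what the quoted light yield implies for photon statistics (a back-of-the-envelope sketch with our own assumptions, not a result from the text), one can compare the idealised Poisson limit with the measured 15% resolution:

```python
# Back-of-envelope photon budget from the LEMOn light yield quoted above.
# The 650 photons/keV figure comes from the text; the Poisson limit is our
# own illustrative comparison, ignoring all instrumental effects.
light_yield = 650.0  # photons per keV, measured with the 55Fe source

for energy_kev in (5.9, 1.0):  # 55Fe line and the inferred threshold
    n_photons = light_yield * energy_kev
    poisson_res = 100.0 / n_photons ** 0.5
    print(f"{energy_kev:3.1f} keV -> ~{n_photons:4.0f} photons, "
          f"Poisson limit ~{poisson_res:.1f}% (measured: 15% at 5.9 keV)")
```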
## 5 Acknowledgements Part of this project is funded by the European Union's Horizon 2020 research and innovation programme under the ERC Consolidator Grant Agreement No 818744.
2307.14452
Simulation of quantum algorithms using classical probabilistic bits and circuits
We discuss a new approach to simulate quantum algorithms using classical probabilistic bits and circuits. Each qubit (a two-level quantum system) is initially mapped to a vector in an eight dimensional probability space (equivalently, to a classical random variable with eight probabilistic outcomes). The key idea in this mapping is to store both the amplitude and phase information of the complex coefficients that describe the qubit state in the probabilities. Due to the identical tensor product structure of combining multiple quantum systems as well as multiple probability spaces, $n$ qubits are then mapped to a tensor product of $n$ 8-dimensional probabilistic vectors (i.e., the Hilbert space of dimension $2^n$ is mapped to a probability space of dimension $8^n$). After this initial mapping, we show how to implement the analogs of single-qubit and two-qubit gates in the probability space using correlation-inducing operations on these classical random variables. The key defining feature of both the mapping to the probability space and the transformations in this space (i.e., operations on the random variables) is that they are not linear, but instead affine. Using this architecture, the evolution of the $2^n$ complex coefficients of the quantum system can be tracked in the joint fully-correlated probabilities of the polynomial number of random variables. We then give specific procedures for implementing (1) the Deutsch-Jozsa algorithm, and (2) the Quantum Fourier Transform in the probability space. Identical to the Quantum case, simulating the Quantum Fourier Transform in the probability space requires $O(n)$ probabilistic bits and $O(n^2)$ (i.e., quadratic in the number of quantum bits) operations.
D. D. Yavuz, A. Yadav
2023-07-26T18:49:42Z
http://arxiv.org/abs/2307.14452v1
# Simulation of quantum algorithms using classical probabilistic bits and circuits ###### Abstract We discuss a new approach to simulate quantum algorithms using classical probabilistic bits and circuits. Each qubit (a two-level quantum system) is initially mapped to a vector in an eight dimensional probability space (equivalently, to a classical random variable with eight probabilistic outcomes). The key idea in this mapping is to store both the amplitude and phase information of the complex coefficients that describe the qubit state in the probabilities. Due to the identical tensor product structure of combining multiple quantum systems as well as multiple probability spaces, \(n\) qubits are then mapped to a tensor product of \(n\) 8-dimensional probabilistic vectors (i.e., the Hilbert space of dimension \(2^{n}\) is mapped to a probability space of dimension \(8^{n}\)). After this initial mapping, we show how to implement the analogs of single-qubit and two-qubit gates in the probability space using correlation-inducing operations on these classical random variables. The key defining feature of both the mapping to the probability space and the transformations in this space (i.e., operations on the random variables) is that they are not linear, but instead affine. Using this architecture, the evolution of the \(2^{n}\) complex coefficients of the quantum system can be tracked in the joint fully-correlated probabilities of the polynomial number of random variables. We then give specific procedures for implementing (1) the Deutsch-Jozsa algorithm, and (2) the Quantum Fourier Transform in the probability space. Identical to the Quantum case, simulating the Quantum Fourier Transform in the probability space requires \(O(n)\) probabilistic bits and \(O(n^{2})\) (i.e., quadratic in the number of quantum bits) operations. ## I Introduction Quantum computing has generated much enthusiasm over the last three decades due to the possibility of solving difficult computational problems more efficiently than any conceivable classical computer [1; 2; 3; 4; 5; 6]. One of the main reasons for this enthusiasm is the discovery of Shor's factoring algorithm, which is a polynomial-time algorithm for finding the prime factors of large numbers, a problem for which no efficient classical algorithm is known. A key component of Shor's factoring algorithm is the Quantum Fourier Transform, which achieves the discrete Fourier transform operation on an exponentially large state space with a polynomial number of qubits and operations [7]. It is now understood that, in addition to factoring, quantum algorithms can be used for solving a variety of problems [8], including efficient data search [9], and finding eigenvalues and eigenvectors of large matrices [10]. Since its inception, there has also been a rigorous debate regarding what constitutes the key ingredient of the computational speed-up in quantum algorithms [11; 12; 13; 14]. It is clear that the exponentially large dimension of the Hilbert space is one of the key ingredients; yet it is also clear that some degree of entanglement and high gate fidelity are also essential [15; 16; 17]. How much entanglement is needed has been the subject of a rigorous debate [16; 17]. To understand the true power of quantum computers, we need to better understand how exactly they differ from their classical counterparts. 
Much recent research also indicates that the first truly useful quantum computers will likely use a hybrid approach, where at least some part of the computation is performed classically, using, for example, classical post-processing of quantum measurement outcomes [18; 19; 20]. If at least certain sections of the quantum computation can be replaced with classical algorithms, this may significantly improve the practical applications and the impact of quantum computers. Furthermore, such classical algorithms may be useful in their own right, since they may provide more efficient means of simulating quantum many body systems. In this paper, we will discuss a new approach for simulating quantum algorithms using classical probabilistic random variables and correlation-inducing operations on these variables (i.e., circuits). The approach builds on our recent work that map quantum systems to classical probabilistic random variables [21]. In this recent work, we started with the simplest quantum system (a two-level system, i.e., a qubit) and discussed a mapping of the quantum state to a vector in a probability space (Fig. 1). The mapping is one-to-one and preserves all the information encoded in the wavefunction. Not surprisingly, to be able to store all the information encoded in the complex coefficients, we need to increase the dimension of the system: the mapping is to an eight dimensional probabilistic space from the two dimensional Hilbert space (i.e., to a physical classical random variable with eight probabilistic outcomes). Once a single-qubit quantum state is mapped, the next key question is whether the evolution of the state can be captured in the probability space. It is well known that an arbitrary evolution of a single qubit wavefunction can be achieved using combinations of Hadamard gates and phase rotations [1]. We showed how these two main operations can be implemented with appropriate transformations of the mapped vector in the probability space (i.e., using appropriate operations on the classical random variable). One key feature of the transformations in the probability space is that they are affine, but not linear. In our recent manuscript, we also introduced an analogue of the Schrodinger's equation for the wavefunction which lives in a Hilbert space of arbitrary dimension. This is a continuous differential equation that describes the evolution of the vector in the probability space under an effective "Hamiltonian". In the current work, we use this recently suggested mapping of quantum systems to probability spaces, and discuss how one can simulate quantum algorithms using classical random variables and correlation-inducing operations on these random variables. As we discussed above, each qubit is initially mapped to a classical random variable with eight probabilistic outcomes (or, to three probabilistic bits, \(p\)-bits [22; 23], since three bits are sufficient to produce eight possibilities). Due to the identical tensor product structure of combining multiple quantum systems as well as multiple probability spaces, \(n\) qubits are then mapped to a tensor product of \(n\) 8-dimensional probabilistic vectors (i.e., the Hilbert space of dimension \(2^{n}\) is mapped to a probability space of dimension \(8^{n}\)). After this initial mapping, we show how to implement analogs of single-qubit and two-qubit gates in the probability space using operations on the classical random variables (in other words transformations of the probability state vector). 
The key defining feature of both the mapping to the probability space and the transformations in this space is that they are not linear, but instead affine. After this general construction, we give specific procedures for implementing (1) the Deutsch-Jozsa algorithm [8] and (2) the Quantum Fourier transform [7] in the probability space. Identical to the Quantum case, simulating Quantum Fourier Transform in the probability space using classical random variables requires \(O(n^{2})\) operations (i.e., quadratic in the number of quantum bits). Remarkably, using this architecture, the evolution of an exponential number of complex coefficients that define the \(n\)-qubit quantum wavefunction can be tracked in the fully-correlated joint probabilities of the classical random variables. The probabilities contain the information of both the real and the imaginary parts of the complex coefficients. We also show that at the end of the quantum evolution, when a measurement is performed on a Hermitian observable, its' measurement outcomes can be calculated using the same joint probabilities. The mapping and the simulation that we discuss use classical random variables, and operations on these variables, with a number that scale polynomially with the number of qubits. However, we will not make a statement regarding the true computational efficiency of our simulator. This is because: (1) There may be an exponentially scaling physical resource that is hiding in a certain aspect of our formalism. (2) To evaluate the true computational efficiency of the simulation, a detailed study of noise and error correction is critical. (3) While the measurement outcomes of the quantum system (at the end of the evolution) can be calculated using the joint probability distribution of the classical random variables, it is not clear if this calculation can be performed efficiently under the presence of noise (this is because of the exponentially small probabilities in the joint probability distribution). We will comment on these issues in more detail in the conclusions section below. Our work has been heavily influenced by the recent investigations of quantum mechanics within the operational framework of probability theories; in particular the pioneering works of Fuchs and colleagues [24], Hardy [25], and Barrett [26]. One of the main tools in these investigations is fine-tuned operator classes that allow Symmetric Informationally Complete (SIC) measurements [27; 28; 29]. Other related research has tried to place quantum mechanics under the umbrella of probability theories that are more general than classical, sometimes referred to as post-classical theories of probability [25; 26; 30; 31; 32]. This research has identified a rich landscape and the goal is to place quantum mechanics properly in this landscape in order to better understand its unique properties. In other related prior work, we note the extensive literature that have attempted to derive some features of quantum mechanics using classical "toy" theories. A good summary of various toy theories is discussed in, for example, Ref. [33]. Several prominent examples of these are due to Spekkens [34], Bell [35], Beltrametti-Bugajski [36], Kochen-Specker [37], Aaronson [38], and Aerts [39]. Of particular importance to this work is Aaronson's model [38], which discusses representing the quantum state as a vector of probabilities, and mapping this vector to another set of probabilities using an appropriate matrix. 
However, when only represented as a vector of projected probabilities, such a matrix inevitably depends on the initial state of the wavefunction, which is very different from the approach that we consider here. This work is also related to the mapping of quantum states to probability-like distributions, typically referred to as quasiprobabilities [40; 41; 42; 43; 44; 45; 46; 47]. The most well-known example of a quasiprobability distribution is the Wigner function. It is well-known that quasi-probabilities can have negative values; in fact, the true quantum mechanical nature of the wavefunction is expressed in these negative regions. We argue that when one allows for maps and transformations that are not necessarily linear, one can capture a quantum state (as well as its' evolution) using only probabilities (i.e., negative values are not needed). We commented on these connections more thoroughly in our recent manuscript [21]. In the current paper, we will focus specifically on simulating quantum algorithms using this approach, such as the Deutsch-Jozsa algorithm and the Quantum Fourier Transform. ## II II. Preliminaries In traditional formulation of quantum mechanics, a quantum state is described by a complex wavefunction, \(|\psi\rangle\), in a Hilbert space, \(H\). This state will evolve according to Schrodinger's equation, which conserves the norm of the wavefunction. This time evolution of the quantum state can be described using an appropriate unitary matrix, \(\hat{U}\), that satisfies, \(\hat{U}^{\dagger}\hat{U}=\hat{U}\hat{U}^{\dagger}=\hat{I}\). With this evolution, the state is mapped to \(|\psi\rangle\longrightarrow\hat{U}|\psi\rangle\). In any classical probabilistic experiment, we will have a set of probabilities, which we can also think of as constituting a vector, in a probabilistic space, \(S\). We will denote such a probabilistic vector with \(\tilde{s}\). Each of the entries of this vector has to be between 0 and 1, i.e., \(0<s_{i}<1\), and furthermore, the entries need to sum to unity, \(\sum_{i}s_{i}=1\). Because the entries add up to unity, such a vector lies on certain surface in the probabilistic space, and this surface is called the simplex [48]. Similar to a quantum state, such a probabilistic vector can also evolve in time (for example, because of a change in the experimental conditions). We can view such evolution as mapping a vector in space \(S\), to another vector. We will denote such mapping with \(T:S\longrightarrow S\). Usually, such evolution is described by multiplying the vector \(\tilde{s}\) with a Stochastic matrix, \(\tilde{\mathcal{M}}\), i.e., \(T(\tilde{s})=\tilde{\mathcal{M}}\cdot\tilde{s}\). A stochastic matrix is a matrix whose columns sum up to 1. This assures that the resultant vector also is normalized; i.e., its' components add up to unity. Throughout this paper, all quantum mechanical operators will be presented by a hat (for example \(\hat{U}\)), whereas all the transformations of the simplex vectors will be presented by a tilde (for example, \(\tilde{\mathcal{M}}\)). The probabilistic vectors can also undergo affine transformations of the form \(T(\tilde{s})=\vec{a}+\tilde{M}\cdot\vec{s}\). Here, the constant "offset" vector \(\vec{a}\) and the matrix \(\tilde{M}\) should be chosen such that the mapped vector \(T(\tilde{s})\) is a valid probability distribution. The conditions for the matrix \(\tilde{M}\) such that \(T:S\longrightarrow S\) is a valid transformation is different from stochasticity. 
We will discuss these conditions in detail below. The affine nature of the transformations requires that a statistical mixture of the input vectors produce the same statistical mixture of the transformed vectors. More formally, for any two vectors \(\vec{s}\) and \(\vec{s}^{\prime}\), and two constants \(\lambda\) and \(\lambda^{\prime}\) such that \(\lambda+\lambda^{\prime}=1\), we have \(T(\lambda\vec{s}+\lambda^{\prime}\vec{s}^{\prime})=\lambda T(\vec{s})+\lambda^{\prime}T(\vec{s}^{\prime})\). This type of affine transformation of probabilistic vectors has not received much attention before, and is one of the central ideas of this work. Figure 1 summarizes the key features of our approach. We start with mapping a single qubit wavefunction to the probability simplex. Not surprisingly, to be able to store all the information that is encoded in the complex coefficients, we need to increase the dimension of the system. As shown in Fig. 1(a), the mapping is to an eight dimensional probability space. In this mapping of the qubit, we have something quite physical in mind: that is, the mapping is to a physical classical random variable with eight probabilistic outcomes. This can, for example, be visualised as a "die" with eight faces, and the vector \(\vec{s}\) stores the probabilities of these eight outcomes. The key is that these 8 probabilities store both the amplitude and the phase information in the complex qubit wavefunction. When a measurement is made on the quantum wavefunction, one finds the quantum system to be in one of the two states with probabilities given by the magnitude square of the complex coefficients. Because the map stores both the phases and the amplitudes of these complex coefficients, not surprisingly, by measuring these probabilities (i.e., by repeatedly throwing the die and measuring the components of the 8-dimensional vector \(\vec{s}\)), one can also uniquely calculate the probabilistic outcomes of the quantum system. Throughout this manuscript, we will formulate our approach using these above-mentioned eight-dimensional probabilistic vectors. However, we note that our whole scheme can instead be formulated in terms of classical random variables with only two outcomes, such as a coin flip (i.e., a probabilistic bit, or a \(p\)-bit). Eight possible outcomes require three \(p\)-bits, and one can visualize the mapping of the qubit to three physical \(p\)-bits (instead of mapping to a single "die" with eight outcomes).

Figure 1: The simplified schematic of the approach that we will study in this work. (a) We start with a qubit with wavefunction, \(|\psi_{j}\rangle\), and discuss a one-to-one mapping, \(\varphi\), from the Hilbert space \(H\) to a vector \(\vec{s}_{j}\) in an eight dimensional probability space \(S\) (which is a real Euclidean space). In this mapping of the qubit, we have something quite physical in mind: that is, the mapping is to a physical classical random variable with 8 values. This can, for example, be visualised as a "die" with 8 faces; the vector \(\vec{s}\) stores the probabilities in these 8 outcomes. (b) The wavefunction for a multi-partite quantum system is initially mapped to a tensor product of individual simplex vectors. This is due to the identical tensor product rule of combining multiple quantum systems as well as multiple probability spaces. (c) With the initial product wavefunction (unentangled state), a quantum algorithm runs through a sequence of single-qubit and two-qubit gates. Each of these operations can be mapped to a corresponding affine transformation in the probability space. The end result is that the quantum evolution in the Hilbert space of dimension \(2^{n}\) (i.e., an exponentially large number of complex coefficients) can be smoothly tracked in the probability space of dimension \(8^{n}\). At the end of the evolution, the fully-correlated joint probabilities in the \(\vec{s}\) vectors are measured. Because these probabilities contain both the amplitude and the phase information in the \(2^{n}\) complex coefficients of the quantum system, not surprisingly, the measurement outcomes of the quantum system can be calculated by measuring these joint probabilities.

We will then consider multiple qubit systems. Here, each qubit wavefunction, \(|\psi_{j}\rangle\), is mapped to an eight-dimensional simplex vector, \(\vec{s}_{j}\). Due to the identical tensor product structure of combining multi-partite systems, the combined wavefunction of the initial multi-qubit system is mapped to a tensor product of simplex vectors (this can be understood intuitively as the joint probabilities of several events happening together). This is schematically shown in Fig. 1(b). In this multi-partite mapping, we again have something quite physical in mind. \(n\) qubits are mapped into \(n\) 8-dimensional dice (or equivalently, to \(3n\) probabilistic bits), and initially, the information in the quantum wavefunction is stored in the tensor product of probabilities stored in the corresponding \(\vec{s}\) vectors. With this initial mapping, the next question that we address is whether the evolution of the quantum system can be captured in the probability simplex. For this purpose, we will first discuss how to implement analogs of single-qubit and two-qubit gates in the probability simplex. Each gate in the quantum system can be viewed as changing the values of the complex coefficients in the Hilbert space. What is remarkable is that these modifications of the complex coefficients (due to the quantum evolution) can be fully captured using corresponding affine transformations acting on single-simplex or two-simplex vectors. These affine operations can be viewed as physical experimental operations that change the probabilistic outcomes of a single "die", or specific operations that induce correlations between the two "dice". With this construction, we will then shift our focus to implementing specific quantum algorithms. Here, since any quantum algorithm can be implemented using a sequence of single-qubit and two-qubit gates, we basically track the algorithm in the probability space using a sequence of affine transformations. The end result is that the quantum evolution in the Hilbert space of dimension \(2^{n}\) (i.e., an exponentially large number of complex coefficients) can be smoothly tracked in the probability space of dimension \(8^{n}\). This is schematically shown in Fig. 1(c). We will specifically focus on the Deutsch-Jozsa algorithm and the Quantum Fourier Transform (which is the foundation of Shor's factoring algorithm). ## III Mapping of single qubit wavefunction and its evolution We first discuss mapping of the single-qubit wavefunction and its evolution in the Hilbert space. 
This section will follow closely the discussion in our recent manuscript [21], which we include here for completeness. In the following subsections, we describe the mapping of the wavefunction from the Hilbert space to the probability space, \(\varphi\left|\psi\right\rangle\), and also the mapping of the wavefunction evolution under a unitary operator \(\hat{U}\), \(\tilde{M}[\hat{U}]\). ### Mapping of the single-qubit wavefunction For a single qubit, we can describe the state \(\left|\psi\right\rangle\) in the logical qubit basis as, \[\left|\psi\right\rangle=c_{0}\left|0\right\rangle+c_{1}\left|1\right\rangle\equiv\left(\begin{array}{c}c_{0}\\ c_{1}\end{array}\right)\equiv\left(\begin{array}{c}x_{0}+iy_{0}\\ x_{1}+iy_{1}\end{array}\right)=\vec{x}+i\vec{y}\quad. \tag{1}\] Here, the states \(\left|0\right\rangle\) and \(\left|1\right\rangle\) are the logical states, and \(c_{0}\) and \(c_{1}\) are the complex coefficients satisfying the usual normalization condition, \(|c_{0}|^{2}+|c_{1}|^{2}=1\). In what follows, instead of the complex coefficients \(c_{0},c_{1}\), we will work with their real and imaginary parts \(\vec{x},\vec{y}\), which are two-dimensional vectors defined as: \[\vec{x}\equiv\mbox{Re}\left|\psi\right\rangle=\left(\begin{array}{c}x_{0}\\ x_{1}\end{array}\right)\quad,\quad\vec{y}\equiv\mbox{Im}\left|\psi\right\rangle=\left(\begin{array}{c}y_{0}\\ y_{1}\end{array}\right)\quad. \tag{2}\] We propose the following mapping \(\varphi:H\mapsto S\) of the quantum state \(\left|\psi\right\rangle\) in Hilbert space \(H\) to a vector \(\vec{s}\) in the probability space \(S\): \[\varphi\left|\psi\right\rangle=\varphi(\vec{x}+i\vec{y})=\vec{s}=\frac{1}{8}\left(\begin{array}{c}1\\ 1\\ \vdots\\ 1\end{array}\right)+\frac{1}{8}\left(\begin{array}{c}\vec{x}\\ -\vec{x}\\ \vec{y}\\ -\vec{y}\end{array}\right)\equiv\frac{1}{8}(\vec{u}+\vec{p})\quad. \tag{3}\] Here, we have defined a vector with uniform entries \(\vec{u}\equiv\vec{1}\) and also another vector that stores the deviation of the probabilities from the uniform distribution, \(\vec{p}\equiv 8\vec{s}-\vec{1}\). We note that the vector \(\vec{s}\), as defined above, represents a valid probability distribution. That is, each of the entries is between \(0\) and \(1\) (i.e., \(0<s_{i}<1\)), and these entries sum up to unity, \(\sum_{i}s_{i}=1\). The fact that we need to increase the dimension from \(2\) to \(8\) is intuitive. For each complex coefficient, we need to store two real numbers, the real part and the imaginary part. Furthermore, for each real number, we need to store the quantity with both signs. This is because, in order to map the transformations of the quantum state, we will need access to both signs of these coefficients. Hence, the factor of \(4\) increase in the dimension. The map is injective (i.e., one-to-one), but not surjective. The main insight in the mapping of Eq. (3) is that the phase and the amplitude information (for the real and imaginary parts of the complex coefficients) can be stored in how much the probabilities deviate from a purely random quantity (hence the initial "1" in all the entries of \(\vec{s}\)). A key property of the mapping of Eq. (3) is that it is not linear. By inspection, a superposition of two wavefunctions does not map to the same superposition of their mapped vectors: \(\varphi\big{(}a\ket{\psi}+b\ket{\phi}\big{)}\neq a\varphi\ket{\psi}+b\varphi\ket{\phi}\) for \(\ket{\psi},\ket{\phi}\in H;a,b\in\mathbb{C}\). 
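The following minimal Python sketch of the map in Eq. (3) is our own illustration (the function name `phi` and the test state are ours); it makes the structure of the map explicit and checks that the output is a valid probability vector:

```python
import numpy as np

def phi(psi):
    """Eq. (3): map a normalized qubit state (c0, c1) to the 8-dim
    probability vector s = (u + p)/8, with p = (x, -x, y, -y)."""
    x, y = psi.real, psi.imag
    p = np.concatenate([x, -x, y, -y])
    return (np.ones(8) + p) / 8.0

psi = np.array([1.0, 1.0j]) / np.sqrt(2)  # (|0> + i|1>)/sqrt(2)
s = phi(psi)
assert np.isclose(s.sum(), 1.0)  # entries sum to unity
print(s)  # amplitude and phase information sit in the deviations from 1/8
```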
A more explicit expression for the map, which clearly shows its affine (but not linear) nature, is: \[\varphi\ket{\psi}=\frac{1}{8}\big{(}\vec{u}+\vec{\gamma}\otimes \operatorname{Re}\ket{\psi}+\vec{\gamma}^{\prime}\otimes\operatorname{Im}\ket{ \psi}\big{)} \tag{4}\] \[\vec{\gamma}=\left(\begin{array}{c}1\\ -1\\ 0\\ 0\end{array}\right)\quad,\quad\vec{\gamma}^{\prime}=\left(\begin{array}{c}0 \\ 0\\ 1\\ -1\end{array}\right)\quad. \tag{5}\] We note that the set of states \(\vec{s}\) defined in the simplex by Eq. (3) form a convex surface. That is, for two different states \(\vec{s}\) and \(\vec{s}^{\prime}\), and for coefficients \(\lambda\) and \(\lambda^{\prime}\) such that \(\lambda+\lambda^{\prime}=1\), any combination \(\lambda\vec{s}+\lambda^{\prime}\vec{s}^{\prime}\) is also an allowed mapped state. This is similar to what is discussed in Refs. [25; 26]. We also note, however, that, differing from the prior work, the simplex vector with all of its' entries equal to \(0\) (which we can denote by \(\vec{0}\)) is not a valid mapped vector. Even if we were to include not-normalized quantum states (where the probabilities leek out of the system, for example), in the limit, \(x_{i}\to 0,y_{i}\to 0\), all of the entries for the vector in the simplex would approach \(\frac{1}{8}\), i.e., \(\vec{s}\rightarrow\frac{1}{8}\vec{u}\). When we discuss analogs of two-qubit gates, as well as quantum algorithms below, it will be useful to use a notation analogous to the logical \(\ket{0}\) and \(\ket{1}\) quantum states. For this purpose, we introduce the simplex vectors, \(\vec{s}_{0}\) and \(\vec{s}_{1}\), and correspondingly, \(\vec{p}_{0}\) and \(\vec{p}_{1}\). These are the vectors that are obtained by mapping the quantum state \(\left|\psi\right\rangle=\left|0\right\rangle\), and \(\left|\psi\right\rangle=\left|1\right\rangle\), respectively. More explicitly, these vectors are: \[\varphi\left|0\right\rangle = \bar{s}_{0}\equiv\frac{1}{8}(\bar{u}+\vec{p}_{0}),\quad\varphi \left|1\right\rangle=\bar{s}_{1}\equiv\frac{1}{8}(\bar{u}+\vec{p}_{1})\quad,\] \[\vec{p}_{0} \equiv \left(\begin{array}{c}1\\ 0\\ -1\\ 0\\ 0\\ 0\end{array}\right),\quad\vec{p}_{1}\equiv\left(\begin{array}{c}0\\ 1\\ 0\\ -1\\ 0\\ 0\\ 0\end{array}\right)\quad. \tag{6}\] The above mapped vectors from the logical \(\left|0\right\rangle\) and \(\left|1\right\rangle\) are sufficient when the quantum algorithm only requires real coefficients in the quantum state (such as the Deutsch-Jozsa algorithm). However, when imaginary components of the coefficients are necessary, the above vectors are not sufficient. We, therefore, introduce a more general version of these vectors, \(\vec{P}_{0}\) and \(\vec{P}_{1}\), which will be critical in the discussion of the Quantum Fourier Transform. Unlike the constant vectors \(\vec{p}_{0}\) and \(\vec{p}_{1}\), we allow these more general vectors to be a function of a complex number, \(c\). For any complex coefficient, \(c\), these two vectors are defined as: \[\vec{P}_{0}(c)\equiv\left(\begin{array}{c}\mathrm{Re}(c)\\ 0\\ -\,\mathrm{Re}(c)\\ 0\\ \mathrm{Im}(c)\\ 0\end{array}\right),\quad\vec{P}_{1}(c)\equiv\left(\begin{array}{c}0\\ \mathrm{Re}(c)\\ 0\\ -\,\mathrm{Re}(c)\\ 0\\ \mathrm{Im}(c)\\ 0\\ -\,\mathrm{Im}(c)\end{array}\right)\quad. \tag{7}\] which can be abstractly expressed in one statement as, \[\vec{P}_{b}(c)=\left[\mathrm{Re}(c)\bar{\gamma}+\mathrm{Im}(c)\bar{\gamma}^{ \prime}\right]\otimes\left|b\right\rangle\quad. 
\tag{8}\] for a given logical qubit state \(\left|b\right\rangle,b\in\mathbb{B}=\{0,1\}\). These vectors allow us to express the map for a more general single qubit state, \(\left|\psi\right\rangle=c_{0}\left|0\right\rangle+c_{1}\left|1\right\rangle\), in the following simplified form: \[\varphi\left|\psi\right\rangle=\tilde{s}(\psi)\equiv\frac{1}{8}[\tilde{u}+ \tilde{P}_{0}(c_{0})+\tilde{P}_{1}(c_{1})]=\frac{1}{8}[\tilde{u}+\sum_{b\in \mathbb{B}}\tilde{P}_{b}(c_{b})]\quad. \tag{9}\] We note that \(\tilde{P}_{0}(1)=\tilde{p}_{0}\) and \(\tilde{P}_{1}(1)=\tilde{p}_{1}\), whereas \(\tilde{P}_{0}(0)=\tilde{P}_{1}(0)=\tilde{0}\). We also have: \[\tilde{P}_{0}(re^{i\phi})=r\tilde{P}_{0}(e^{i\phi}),\quad\tilde{P} _{1}(re^{i\phi})=r\tilde{P}_{1}(e^{i\phi})\quad, \tag{10}\] \[\text{and, }\tilde{P}_{0}(r)=r\,\tilde{p}_{0},\quad\tilde{P}_{1}(r)=r \,\tilde{p}_{1},\quad\forall r\in\mathbb{R}\quad. \tag{11}\] We finally note that the map \(\tilde{P}_{b}:\mathbb{C}\mapsto\mathbb{R}^{8}\) also satisfies the following additive property: \[\tilde{P}_{b}(\sum_{k}c_{k})=\sum_{k}\tilde{P}_{b}(c_{k}),\forall c_{k}\in \mathbb{C}\quad. \tag{12}\] ### Single qubit transformations in the simplex The central question is what type of transformations of the probability vector, \(T:S\to S\), should we be looking for. Motivated by the mapping of Eq. (3), we look for affine transformations of the simplex vector of the form a translation added on linear combinations of the simplex vector entries. Note that the entries of \(\vec{p}\) in Eq. (3) sum up to zero; i.e., \(\sum_{i}p_{i}=\vec{u}\cdot\vec{p}=0\). Furthermore, the Euclidian norm of \(\vec{p}\) is a constant \(\|\vec{p}\|=\sqrt{2}\), since we have \(x_{0}^{2}+y_{0}^{2}+x_{1}^{2}+y_{1}^{2}=1\) (this is because of the normalization of the state \(\left|\psi\right\rangle\)). We also note that the two vectors that form the simplex vector \(\tilde{s}\) are orthogonal to each other, \(\vec{u}\cdot\vec{p}=0\). As a result, we have \(\|\vec{s}\|=\sqrt{\|\vec{u}\|^{2}+\|\vec{p}\|^{2}}/8=\sqrt{10}/8\), which is also constant. This shows that \(\tilde{s}\) lies on the intersection of a seven-dimensional hypersphere, with four seven-dimensional hyperplanes, resulting in a three dimensional hypersurface \(S\). As it will be clear below, because the quantum gates form linear combinations of the entries of \(\vec{p}\), we first view the mapping of the simplex vector \(\tilde{s}\), as instead mapping \(\vec{p}\) to another vector. We will call the matrix for this mapping to be \(\tilde{M}[\hat{U}]\) (corresponding to the unitary quantum evolution \(\hat{U}\)): \[\left|\psi\right\rangle\longrightarrow\hat{U}\left|\psi\right\rangle \Longleftrightarrow\vec{p}\longrightarrow\tilde{M}[\hat{U}]\cdot\vec{p}\quad. \tag{13}\] Expressed as \(T[\hat{U}]\) acting on the full simplex state \(\tilde{s}\), the transformation of Eq. (13) is, \(T[\hat{U}](\tilde{s})=\frac{1}{8}\left(\tilde{u}+\tilde{M}[\hat{U}]\cdot\vec{p}\right)\), which gives \(T[\hat{U}](\tilde{s})=\frac{1}{8}\left[\tilde{u}+\tilde{M}[\hat{U}]\cdot(8 \tilde{s}-\tilde{u})\right]\), or writing it slightly differently, \[T[\hat{U}](\tilde{s})=\frac{1}{8}\left(\tilde{I}_{8\times 8}-\tilde{M}[\hat{U}] \right)\cdot\tilde{u}+\tilde{M}[\hat{U}]\cdot\tilde{s}\quad. \tag{14}\] Here, the quantity \(\tilde{I}_{8\times 8}\) is the \(8\times 8\) identity matrix. Below, we will give explicit general expressions for the \(8\times 8\) matrices, \(\tilde{M}[\hat{U}]\), tracking a specific evolution, \(\hat{U}\), of the quantum state. 
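As a sketch of Eq. (14) (ours, not the authors' code; `T_affine` is our own name), the transformation can be written directly for any valid matrix \(\tilde{M}\); the state-independent offset in the first term is what makes the map affine rather than linear:

```python
def T_affine(M, s):
    """Eq. (14): T(s) = (1/8)(I - M) u + M s, with u the all-ones vector."""
    u = np.ones(8)
    return (u - M @ u) / 8.0 + M @ s
```

On mapped states \(\vec{s}=\frac{1}{8}(\vec{u}+\vec{p})\) this reduces to \(\vec{p}\longrightarrow\tilde{M}\cdot\vec{p}\), since \(T(\vec{s})=\frac{1}{8}(\vec{u}+\tilde{M}\cdot\vec{p})\); an explicit \(\tilde{M}[\hat{U}]\) is sketched after Eq. (21) below.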
With the matrix \(\tilde{M}\) given, Eq. (14) describes the explicit transformation of the probability vector, with the map \(T:S\longrightarrow S\) in the simplex. We note that the first term on the right-hand side of Eq. (14) is a translation for each of the entries of the vector (an offset). Because of this term, the map \(T:S\longrightarrow S\) is not linear (i.e., the sum of two vectors \(\vec{s}\) and \(\vec{s}^{\prime}\) would not transform as the sum of the individual transforms). However, \(T\) is an affine map. For two vectors, \(\vec{s}\) and \(\vec{s}^{\prime}\), and for coefficients \(\lambda\) and \(\lambda^{\prime}\) such that \(\lambda+\lambda^{\prime}=1\), we have \(T(\lambda\vec{s}+\lambda^{\prime}\vec{s}^{\prime})=\lambda T(\vec{s})+\lambda^{\prime}T(\vec{s}^{\prime})\).

The constraints on the above matrix \(\tilde{M}\) that make \(T:S\longrightarrow S\) a valid map are different from stochasticity. Specifically, the two necessary constraints are: (1) \(\tilde{M}\) should be such that the norm of the resulting vector is preserved, since we need to have \(\left\|\tilde{M}\cdot\vec{p}\right\|=\sqrt{2}\). Because of the specific form of the vector \(\vec{p}\), this norm conservation does not imply orthogonality of the matrix \(\tilde{M}\). By inspection, the necessary constraint is that the sum of the squares of the entries in each row must add up to unity; i.e., \(\sum_{j}\tilde{M}_{ij}^{2}=1\) for each row \(i\). (2) The rows of \(\tilde{M}\) should be related to each other such that the entries of \(\tilde{M}\cdot\vec{p}\) sum up to zero. Specifically, \(\tilde{M}\cdot\vec{p}\) should produce a column vector of the form shown in Eq. (3), with respective entries having equal amplitude and opposite signs. This assures that the resulting full simplex vector, \(\frac{1}{8}\left(\vec{u}+\tilde{M}\cdot\vec{p}\right)\), is a valid probability distribution (i.e., its entries add up to unity).

Given a general unitary matrix \(\hat{U}\) acting on a quantum state vector \(\left|\psi\right\rangle\), we note that the real and imaginary parts of the wavefunction will transform as: \[\hat{U}\left|\psi\right\rangle=\left[\mathrm{Re}(\hat{U})+i\,\mathrm{Im}(\hat{U})\right]\cdot(\vec{x}+i\vec{y})=\left[\mathrm{Re}(\hat{U})\cdot\vec{x}-\mathrm{Im}(\hat{U})\cdot\vec{y}\right]+i\left[\mathrm{Re}(\hat{U})\cdot\vec{y}+\mathrm{Im}(\hat{U})\cdot\vec{x}\right]\quad. \tag{15}\] Here, the quantities \(\mathrm{Re}(\hat{U})\) and \(\mathrm{Im}(\hat{U})\) are the real and imaginary components of the evolution operator \(\hat{U}\), respectively. This implies that, under general unitary evolution, the real and imaginary parts of the wavefunction will evolve as: \[\vec{x}\longrightarrow\left[\mathrm{Re}(\hat{U})\cdot\vec{x}-\mathrm{Im}(\hat{U})\cdot\vec{y}\right]\] \[\vec{y}\longrightarrow\left[\mathrm{Re}(\hat{U})\cdot\vec{y}+\mathrm{Im}(\hat{U})\cdot\vec{x}\right]\quad.
\tag{16}\] For the mapped vector \(\vec{s}\) in the simplex, the above evolution of the real and imaginary parts of the wavefunction implies the following transformation of the vector \(\vec{p}\): \[\vec{p}=\left(\begin{array}{c}\vec{x}\\ -\vec{x}\\ \vec{y}\\ -\vec{y}\end{array}\right)\longrightarrow\left(\begin{array}{c|c|c|c}\mathrm{Re}(\hat{U})&O&O&\mathrm{Im}(\hat{U})\\ \hline O&\mathrm{Re}(\hat{U})&\mathrm{Im}(\hat{U})&O\\ \hline\mathrm{Im}(\hat{U})&O&\mathrm{Re}(\hat{U})&O\\ \hline O&\mathrm{Im}(\hat{U})&O&\mathrm{Re}(\hat{U})\end{array}\right)\left(\begin{array}{c}\vec{x}\\ -\vec{x}\\ \vec{y}\\ -\vec{y}\end{array}\right)=\tilde{M}[\hat{U}]\cdot\vec{p}\quad. \tag{17}\]

We also note that, due to the structure of \(\vec{p}\), the following two transformations are equivalent: \[\left(\begin{array}{c|c|c|c}\mathrm{Re}(\hat{U})&O&O&\mathrm{Im}(\hat{U})\\ \hline O&\mathrm{Re}(\hat{U})&\mathrm{Im}(\hat{U})&O\\ \hline\mathrm{Im}(\hat{U})&O&\mathrm{Re}(\hat{U})&O\\ \hline O&\mathrm{Im}(\hat{U})&O&\mathrm{Re}(\hat{U})\end{array}\right)\left(\begin{array}{c}\vec{x}\\ -\vec{x}\\ \vec{y}\\ -\vec{y}\end{array}\right)=\left(\begin{array}{c|c|c|c}O&-\,\mathrm{Re}(\hat{U})&O&\mathrm{Im}(\hat{U})\\ \hline-\,\mathrm{Re}(\hat{U})&O&\mathrm{Im}(\hat{U})&O\\ \hline\mathrm{Im}(\hat{U})&O&O&-\,\mathrm{Re}(\hat{U})\\ \hline O&\mathrm{Im}(\hat{U})&-\,\mathrm{Re}(\hat{U})&O\end{array}\right)\left(\begin{array}{c}\vec{x}\\ -\vec{x}\\ \vec{y}\\ -\vec{y}\end{array}\right)\quad. \tag{18}\]

As a result of the above equivalence, we consider both of the matrices to be equivalent definitions of the transformation \(\tilde{M}[\hat{U}]\), associated to a general evolution of the state: \[\tilde{M}[\hat{U}]=\left(\begin{array}{c|c|c|c}\mathrm{Re}(\hat{U})&O&O&\mathrm{Im}(\hat{U})\\ \hline O&\mathrm{Re}(\hat{U})&\mathrm{Im}(\hat{U})&O\\ \hline\mathrm{Im}(\hat{U})&O&\mathrm{Re}(\hat{U})&O\\ \hline O&\mathrm{Im}(\hat{U})&O&\mathrm{Re}(\hat{U})\end{array}\right)=\tilde{I}_{4\times 4}\otimes\mathrm{Re}(\hat{U})+\tilde{\Lambda}\otimes\mathrm{Im}(\hat{U})\quad, \tag{19}\] or, \[\tilde{M}[\hat{U}]=\left(\begin{array}{c|c|c|c}O&-\,\mathrm{Re}(\hat{U})&O&\mathrm{Im}(\hat{U})\\ \hline-\,\mathrm{Re}(\hat{U})&O&\mathrm{Im}(\hat{U})&O\\ \hline\mathrm{Im}(\hat{U})&O&O&-\,\mathrm{Re}(\hat{U})\\ \hline O&\mathrm{Im}(\hat{U})&-\,\mathrm{Re}(\hat{U})&O\end{array}\right)=-\tilde{\Lambda}^{2}\otimes\mathrm{Re}(\hat{U})+\tilde{\Lambda}\otimes\mathrm{Im}(\hat{U})\quad, \tag{20}\] where, \[\tilde{\Lambda}=\left(\begin{array}{cccc}0&0&0&1\\ 0&0&1&0\\ 1&0&0&0\\ 0&1&0&0\end{array}\right),\quad\tilde{\Lambda}^{2}=\left(\begin{array}{cccc}0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\end{array}\right) \tag{21}\] Using the equivalence between Eq. (19) and Eq.
(20), we can prove the useful property that \(\tilde{M}[\hat{U}_{1}\hat{U}_{2}]\) and \(\tilde{M}[\hat{U}_{1}]\tilde{M}[\hat{U}_{2}]\) have the same action on any given \(\vec{p}\) state, as described in what follows, \[\tilde{M}[\hat{U}_{1}]\tilde{M}[\hat{U}_{2}]=(\tilde{I}_{4\times 4}\otimes\mathrm{Re}(\hat{U}_{1})+\tilde{\Lambda}\otimes\mathrm{Im}(\hat{U}_{1}))(\tilde{I}_{4\times 4}\otimes\mathrm{Re}(\hat{U}_{2})+\tilde{\Lambda}\otimes\mathrm{Im}(\hat{U}_{2}))\] \[=\tilde{I}_{4\times 4}\otimes\mathrm{Re}(\hat{U}_{1})\,\mathrm{Re}(\hat{U}_{2})+\tilde{\Lambda}\otimes(\mathrm{Re}(\hat{U}_{1})\,\mathrm{Im}(\hat{U}_{2})+\mathrm{Im}(\hat{U}_{1})\,\mathrm{Re}(\hat{U}_{2}))+\tilde{\Lambda}^{2}\otimes(\mathrm{Im}(\hat{U}_{1})\,\mathrm{Im}(\hat{U}_{2}))\] \[\equiv\tilde{I}_{4\times 4}\otimes(\mathrm{Re}(\hat{U}_{1})\,\mathrm{Re}(\hat{U}_{2})-\mathrm{Im}(\hat{U}_{1})\,\mathrm{Im}(\hat{U}_{2}))+\tilde{\Lambda}\otimes(\mathrm{Re}(\hat{U}_{1})\,\mathrm{Im}(\hat{U}_{2})+\mathrm{Im}(\hat{U}_{1})\,\mathrm{Re}(\hat{U}_{2}))\] \[\equiv\tilde{I}_{4\times 4}\otimes\mathrm{Re}(\hat{U}_{1}\hat{U}_{2})+\tilde{\Lambda}\otimes\mathrm{Im}(\hat{U}_{1}\hat{U}_{2})=\tilde{M}[\hat{U}_{1}\hat{U}_{2}] \tag{22}\] (In the third line, we used the equivalence \(\tilde{\Lambda}^{2}\otimes\hat{A}\equiv-\tilde{I}_{4\times 4}\otimes\hat{A}\) over \(\vec{p}\)-structured states, which follows from the block-swap structure of \(\tilde{\Lambda}^{2}\) and the sign structure of \(\vec{p}\), cf. Eq. (18).) This identity implies that, \[T[\hat{U}_{1}]\circ T[\hat{U}_{2}]=T[\hat{U}_{1}\hat{U}_{2}] \tag{23}\] which is in direct analogy to compositions of two or more unitary operations on any qubit state \(|\psi\rangle\). Moreover, this in turn shows that the transforms \(T[\hat{U}]\) for any given unitary \(\hat{U}\) are reversible, as they should be for closed quantum systems: \(T[\hat{U}]\circ T[\hat{U}^{\dagger}]=T[\hat{U}^{\dagger}]\circ T[\hat{U}]=T[\hat{I}_{2\times 2}]=\text{identity}\) (i.e., \(T[\hat{U}^{\dagger}]=T^{-1}[\hat{U}]\)). Furthermore, restricted to the simplex manifold \(S\), the transform \(T[\hat{U}]\) for any unitary \(\hat{U}\) is affine, i.e., \[T[\hat{U}](\lambda\vec{s}+(1-\lambda)\vec{s}^{\prime})=\lambda\,T[\hat{U}](\vec{s})+(1-\lambda)\,T[\hat{U}](\vec{s}^{\prime}),\quad 0\leqslant\lambda\leqslant 1 \tag{24}\]

As a specific example, we next discuss how to implement the analog of the Hadamard gate on a single qubit. A Hadamard gate is accomplished by multiplying the state vector \(|\psi\rangle\) with the following unitary matrix [1]: \[\hat{H}=\left(\begin{array}{cc}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\end{array}\right)\quad. \tag{25}\] The effect of the Hadamard gate on the quantum state, explicitly expressed in terms of the real and imaginary parts of the complex coefficients, is: \[|\psi\rangle\longrightarrow\hat{H}|\psi\rangle\] \[\left(\begin{array}{c}x_{0}+iy_{0}\\ x_{1}+iy_{1}\end{array}\right)\longrightarrow\left(\begin{array}{c}\frac{1}{\sqrt{2}}x_{0}+\frac{1}{\sqrt{2}}x_{1}+i\left(\frac{1}{\sqrt{2}}y_{0}+\frac{1}{\sqrt{2}}y_{1}\right)\\ \frac{1}{\sqrt{2}}x_{0}-\frac{1}{\sqrt{2}}x_{1}+i\left(\frac{1}{\sqrt{2}}y_{0}-\frac{1}{\sqrt{2}}y_{1}\right)\end{array}\right)\quad. \tag{26}\] By inspection, the required \(8\times 8\) matrix for the transformation of Eq. (26) is: \[\tilde{M}[\hat{H}]=\left(\begin{array}{cc|cc|cc|cc}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0&0&0&0&0&0\\ \frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}&0&0&0&0&0&0\\ \hline 0&0&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0&0&0&0\\ 0&0&\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}&0&0&0&0\\ \hline 0&0&0&0&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0&0\\ 0&0&0&0&\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}&0&0\\ \hline 0&0&0&0&0&0&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\ 0&0&0&0&0&0&\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\end{array}\right)\quad.
\tag{27}\] With the matrix \(\tilde{M}[\hat{H}]\) given as above, the full transformation of the simplex vector is given by Eq. (14), i.e., \(T[\hat{H}](\vec{s})=\frac{1}{8}\left(\tilde{I}-\tilde{M}[\hat{H}]\right)\cdot\vec{u}+\tilde{M}[\hat{H}]\cdot\vec{s}\). As we discussed in detail in our recent paper [21], the above analysis can be extended to find the transformation matrices for Rabi rotations as well as the single-qubit phase gate. For completeness, we present these matrices in Appendix A. Because any arbitrary evolution of the single-qubit wavefunction can be achieved using a combination of Rabi rotation gates and phase gates, such evolution can be fully tracked using the corresponding transformations of the vector \(\vec{s}\) in the probability space (see Fig. 8 in Appendix A for more clarity).

### Measurements on the single-qubit system

As we mentioned above, the map of the single-qubit wavefunction stores both the real and imaginary parts of the complex coefficients in an eight-dimensional vector of probabilities \(\vec{s}\). Not surprisingly, there is also a one-to-one correspondence between the measurement outcomes. That is, by measuring the probabilities of the mapped system (i.e., the components of the vector \(\vec{s}\)), we can calculate the probabilistic outcomes of the measurement of the quantum system. With the complex coefficients given in the qubit wavefunction, \(c_{0}\) and \(c_{1}\), when a quantum measurement is performed, the corresponding probabilities of finding the system in state \(|0\rangle\) and \(|1\rangle\) are \(|c_{0}|^{2}\) and \(|c_{1}|^{2}\), respectively. These quantities, in turn, can be expressed in terms of the probability components of the simplex vector: \[|c_{0}|^{2}=x_{0}^{2}+y_{0}^{2}=(1-8s_{1})^{2}+(1-8s_{5})^{2},\] \[|c_{1}|^{2}=x_{1}^{2}+y_{1}^{2}=(1-8s_{2})^{2}+(1-8s_{6})^{2}. \tag{28}\]

More generally, we can also establish a one-to-one correspondence between measurements of an observable in the quantum system and corresponding measurements in the probability space. For a given observable \(\hat{A}\) and a quantum state \(|\psi\rangle\), the average measured value of \(\hat{A}\) when the quantum system is in state \(|\psi\rangle\) is \(\langle\hat{A}\rangle_{|\psi\rangle}=\langle\psi|\,\hat{A}\,|\psi\rangle\). To establish a corresponding quantity in the probability space, we first map the operator \(\hat{A}\) in an identical way to how we mapped the evolution operator, \(\hat{U}\), above: \[\tilde{M}[\hat{A}]=\left(\begin{array}{c|c|c|c}\mathrm{Re}(\hat{A})&O&O&\mathrm{Im}(\hat{A})\\ \hline O&\mathrm{Re}(\hat{A})&\mathrm{Im}(\hat{A})&O\\ \hline\mathrm{Im}(\hat{A})&O&\mathrm{Re}(\hat{A})&O\\ \hline O&\mathrm{Im}(\hat{A})&O&\mathrm{Re}(\hat{A})\end{array}\right)=\tilde{I}_{4\times 4}\otimes\mathrm{Re}(\hat{A})+\tilde{\Lambda}\otimes\mathrm{Im}(\hat{A})\quad. \tag{29}\] It can then be shown that the same average measurement value for the quantum observable can be obtained by the following expression in terms of the mapped observable, \(\tilde{M}[\hat{A}]\), and the mapped quantum state, \(\varphi\,|\psi\rangle=\vec{s}=(\vec{u}+\vec{p})/8\): \[\langle\psi|\,\hat{A}\,|\psi\rangle=(\vec{p}^{\mathsf{T}}\cdot\tilde{M}[\hat{A}]\cdot\vec{p})/2\quad. \tag{30}\] We prove this correspondence in Appendix B. We have found that there is another informative way to evaluate the average measured value of the quantum observable \(\hat{A}\) in the probability space.
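Before describing that alternative, we note that the correspondence of Eq. (30) is straightforward to check numerically. The sketch below is an illustration under the conventions of Eqs. (3) and (29), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
I4 = np.eye(4)
Lam = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 0]])

def p_vec(psi):
    """p vector of the map in Eq. (3): (x, -x, y, -y)."""
    return np.concatenate([psi.real, -psi.real, psi.imag, -psi.imag])

def M_map(A):
    """Mapped operator of Eq. (29): I_4 (x) Re(A) + Lambda (x) Im(A)."""
    return np.kron(I4, A.real) + np.kron(Lam, A.imag)

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (B + B.conj().T) / 2            # a random Hermitian observable

p = p_vec(psi)
lhs = np.real(psi.conj() @ A @ psi)  # <psi| A |psi>
rhs = (p @ M_map(A) @ p) / 2         # the right-hand side of Eq. (30)
assert np.isclose(lhs, rhs)
```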
We can envision evolving the mapped simplex vector, \(\vec{s}\), with the transformation matrix \(\tilde{M}[\hat{A}]\) above, which we refer to as \(T[\hat{A}](\vec{s})\). We can then estimate how much this transformed vector has deviated from the initial simplex vector, which we denote with \(\langle T[\hat{A}]\rangle_{\vec{s}}\), by taking the dot product of the transformed vector with the initial one: \[\langle T[\hat{A}]\rangle_{\vec{s}}\equiv\vec{s}^{\mathsf{T}}\cdot T[\hat{A}](\vec{s}) \tag{31}\] We can then relate this quantity to the quantum measurement outcome, \(\langle\hat{A}\rangle_{|\psi\rangle}\), by noting that: \[\vec{s}^{\mathsf{T}}\cdot T[\hat{A}](\vec{s})=\frac{1}{8}(\vec{s}^{\mathsf{T}}\cdot\vec{u}+\vec{s}^{\mathsf{T}}\cdot\tilde{M}[\hat{A}]\cdot\vec{p})=\frac{1}{8}[1+\frac{1}{8}(\vec{u}^{\mathsf{T}}+\vec{p}^{\mathsf{T}})\cdot\tilde{M}[\hat{A}]\cdot\vec{p}]=\frac{1}{8}[1+\frac{1}{8}(\vec{p}^{\mathsf{T}}\cdot\tilde{M}[\hat{A}]\cdot\vec{p})] \tag{32}\] which then gives: \[\langle T[\hat{A}]\rangle_{\varphi|\psi\rangle}=\frac{1}{8}(1+\frac{1}{4}\langle\hat{A}\rangle_{|\psi\rangle})\quad. \tag{33}\] This equation forms a direct correspondence between quantum measurements and measurements performed in the probability space, for any quantum observable \(\hat{A}\).

## IV Extension to multiple qubits

### Mapping the wavefunction of two-qubits

When we have more than one qubit, the Hilbert space is given by the tensor product of the Hilbert spaces of the individual qubits; i.e., for two qubits, the state will be of the form: \[|\psi\rangle=|\psi_{1}\rangle\otimes|\psi_{2}\rangle\quad. \tag{34}\] For probabilistic spaces, we combine multiple vectors in an identical way. This has been discussed and rigorously proven in Ref. [26]; it is also implicit in the discussion of the mathematical structure of probability theory by de Finetti [48]. However, this feature of combining probability spaces is not widely known. It is usually assumed that the tensor product is a feature that is special and specific to quantum mechanics.

To map two qubits, we first envision mapping each qubit to a simplex vector, exactly as defined above with the single qubit map: \(\varphi\,|\psi_{1}\rangle=\vec{s}_{1}\) and \(\varphi\,|\psi_{2}\rangle=\vec{s}_{2}\). The combined vector in the simplex will be given by \[\vec{s}^{\prime}_{12}=\vec{s}_{1}\otimes\vec{s}_{2}\quad. \tag{35}\] In this combined vector, we again have something quite physical in mind. Instead of one, we now have two "dice", each with eight probabilistic outcomes. The 64 entries of the vector \(\vec{s}^{\prime}_{12}\) store the joint probability distribution of the outcomes of the two-"dice" experiment. We note that this form of the combined vector is not of the form of the single-qubit simplex vector as described above by Eq. (3). Specifically, looking at this combined vector more closely: \[\vec{s}^{\prime}_{12}=\vec{s}_{1}\otimes\vec{s}_{2} \tag{36}\] \[=\frac{1}{8^{2}}\big(\vec{u}+\vec{p}_{1}\big)\otimes\big(\vec{u}+\vec{p}_{2}\big)\] \[=\frac{1}{8^{2}}\big(\vec{u}\otimes\vec{u}+\vec{u}\otimes\vec{p}_{2}+\vec{p}_{1}\otimes\vec{u}+\vec{p}_{1}\otimes\vec{p}_{2}\big)\quad.\] There are two cross-terms on the right-hand side of Eq. (36), \(\vec{u}\otimes\vec{p}_{2}+\vec{p}_{1}\otimes\vec{u}\), which prevent the combined vector \(\vec{s}^{\prime}_{12}\) from taking the form of the simplex vector as defined by Eq. (3).
As will be clear below, to be able to extend unitary operations for the two qubits to transformations of the overall simplex vector (i.e., to extend transformation matrices \(\tilde{M}\) to the combined simplex vector), it is imperative that we retain the form of Eq. (3) for the combined vector. We require a joint probability distribution, which we denote by \(\vec{s}_{12}\), containing the information of the two simplex vectors, of the form, \[\vec{s}_{12}=\frac{1}{8^{2}}\big(\vec{u}\otimes\vec{u}+\vec{p}_{1}\otimes\vec{p}_{2}\big)\quad. \tag{37}\] This joint distribution, \(\vec{s}_{12}\), is of the form of Eq. (3) and does not have the cross terms. As we will discuss below in detail, for this joint distribution, \(\vec{s}_{12}\), quantum unitary operations can now be formulated by taking tensor products of operations on \(\vec{p}_{1}\) and \(\vec{p}_{2}\) with some final offset (i.e., affine operations), analogous to the procedure for the transformations mimicking single-qubit gates that we discussed above.

The procedure for obtaining the combined vector \(\vec{s}_{12}\) of the desired form is as follows. By applying appropriate affine transformations to the vectors \(\vec{s}_{1}\) and \(\vec{s}_{2}\), we can also generate a joint distribution that has the opposite signs of the above-mentioned cross terms. We then take a statistical mixture of \(\vec{s}^{\prime}_{12}\) with its copy that has the cross terms with the opposite signs. More formally, we use the following bi-affine transformation, \(\tau\): \[\vec{s}_{12}=\tau(\vec{s}_{1},\vec{s}_{2})\equiv\frac{1}{2}\left[\vec{s}_{1}\otimes\vec{s}_{2}+\Pi(\vec{s}_{1})\otimes\Pi(\vec{s}_{2})\right] \tag{38}\] Here, the transformations of the single vectors \(\Pi(\vec{s}_{1})\) and \(\Pi(\vec{s}_{2})\) are the following: \[\Pi(\vec{s}_{1})=\frac{1}{8}(\vec{u}+\tilde{\Pi}\cdot\vec{p}_{1})=\frac{1}{8}(\vec{u}-\vec{p}_{1})\quad,\] \[\Pi(\vec{s}_{2})=\frac{1}{8}(\vec{u}+\tilde{\Pi}\cdot\vec{p}_{2})=\frac{1}{8}(\vec{u}-\vec{p}_{2})\quad. \tag{39}\] In the above, the matrix \(\tilde{\Pi}\) is a projection matrix which shuffles the entries of the \(\vec{p}\) vector to map \(\vec{p}\to-\vec{p}\). We note that the transformations \(\Pi(\vec{s})\) are affine, since for any two \(\lambda\) and \(\lambda^{\prime}\) such that \(\lambda+\lambda^{\prime}=1\), and any two simplex vectors \(\vec{s}\) and \(\vec{s}^{\prime}\), we have \(\Pi(\lambda\vec{s}+\lambda^{\prime}\vec{s}^{\prime})=\lambda\Pi(\vec{s})+\lambda^{\prime}\Pi(\vec{s}^{\prime})\). Because the transformation \(\Pi(\vec{s})\) is affine, the transformation of the combined simplex vector, \(\tau(\vec{s}_{1},\vec{s}_{2})\), is also affine in each of its arguments. That is, for any two constants \(\lambda\) and \(\lambda^{\prime}\) such that \(\lambda+\lambda^{\prime}=1\) and simplex vectors \(\vec{s}_{1}\), \(\vec{s}^{\prime}_{1}\), and \(\vec{s}_{2}\), \(\vec{s}^{\prime}_{2}\), we have: \[\tau(\lambda\vec{s}_{1}+\lambda^{\prime}\vec{s}^{\prime}_{1},\vec{s}_{2})=\lambda\tau(\vec{s}_{1},\vec{s}_{2})+\lambda^{\prime}\tau(\vec{s}^{\prime}_{1},\vec{s}_{2})\quad,\] \[\tau(\vec{s}_{1},\lambda\vec{s}_{2}+\lambda^{\prime}\vec{s}^{\prime}_{2})=\lambda\tau(\vec{s}_{1},\vec{s}_{2})+\lambda^{\prime}\tau(\vec{s}_{1},\vec{s}^{\prime}_{2})\quad. \tag{40}\] The bi-affine transformation of Eq. (38) will be the starting point for the mapping and manipulation of multiple qubits and will be used continually throughout the manuscript.
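The cancellation of the cross terms produced by the map \(\tau\) can be verified directly; the following NumPy fragment is a hedged illustration of Eqs. (36)–(39), assuming the single-qubit conventions established earlier.

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.ones(8)

def p_vec(psi):
    """Single-qubit p vector: (x, -x, y, -y), cf. Eq. (3)."""
    return np.concatenate([psi.real, -psi.real, psi.imag, -psi.imag])

def rand_state():
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    return psi / np.linalg.norm(psi)

p1, p2 = p_vec(rand_state()), p_vec(rand_state())
s1, s2 = (u + p1) / 8, (u + p2) / 8
Pi1, Pi2 = (u - p1) / 8, (u - p2) / 8            # Pi(s) of Eq. (39)

s12 = 0.5 * (np.kron(s1, s2) + np.kron(Pi1, Pi2))  # tau of Eq. (38)

# The cross terms u x p2 and p1 x u have cancelled, leaving Eq. (37):
assert np.allclose(s12, (np.kron(u, u) + np.kron(p1, p2)) / 64)
assert np.isclose(s12.sum(), 1.0) and np.all(s12 >= 0)
```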
Figure 2: Simplified schematic for the affine transformation that generates the \(\vec{s}_{12}\) state with the required form, starting with the initial states \(\vec{s}_{1}\) and \(\vec{s}_{2}\). The \(\otimes\) symbol represents the tensor product of the corresponding two states, and "mean" here implies that we add the resultant tensored vectors and divide by 2, as in Eq. (41) (i.e., taking a statistical average).

Because of this, we define a new operation, which we call \(\otimes^{s}\), which essentially refers to the mapping of multiple qubits followed by this unique transformation to eliminate the cross terms: \[\varphi_{2}(\ket{\psi_{1}}\otimes\ket{\psi_{2}})\equiv\vec{s}_{1}\otimes^{s}\vec{s}_{2}\equiv\tau(\vec{s}_{1},\vec{s}_{2})=\frac{1}{2}\left[\vec{s}_{1}\otimes\vec{s}_{2}+\Pi(\vec{s}_{1})\otimes\Pi(\vec{s}_{2})\right]\quad. \tag{41}\] In Appendix C, we will take a closer look at this bivalent operation; we will define it more rigorously and show that it is closed and satisfies all the properties of the ordinary tensor operation.

Below, in all the quantum algorithms that we discuss, an initial system of unentangled qubits (i.e., with the \(n\)-qubit wavefunction in a product state) will initially be mapped to a tensor product of simplex vectors, in exactly the same manner as we described above. We will then apply a transformation similar to Eq. (41) to transform the overall simplex vector into a form similar to \(\vec{s}_{12}\) of Eq. (38) (i.e., into a form which is a constant added to the tensor product of individual \(\vec{p}\) vectors). We will then show how a sequence of single-qubit and two-qubit gates that evolve the wavefunction can be mimicked in the probability space, with the overall simplex vector smoothly following the wavefunction.

We note that the map shown in Eq. (41) can also be extended to initial non-separable two-qubit states. A general two-qubit wavefunction can be written as a linear combination of separable tensor product states: \(\ket{\psi}=\sum_{j,k}\ket{\psi_{j}}\otimes\ket{\psi_{k}}\). With each state \(\ket{\psi_{j}}\) producing coefficients in the simplex vector \(\vec{p}_{j}\), and since the map \(\varphi_{2}\) is linear for the \(\vec{p}\) vectors, we can define, \[\varphi_{2}(\sum_{j,k}\ket{\psi_{j}}\otimes\ket{\psi_{k}})\equiv\frac{1}{8^{2}}(\vec{u}^{\otimes_{2}}+\sum_{j,k}\vec{p}_{j}\otimes\vec{p}_{k})\quad. \tag{42}\]

One interesting property of the map of Eq. (3) is that the absolute phase of the quantum wavefunction matters. That is, the quantum states \(\ket{\psi}\) and \(\exp\{i\phi\}\ket{\psi}\) are mapped to different vectors \(\vec{s}\) in the probability space. A consequence of this is that, when multiple qubits are mapped, the absolute phases of each qubit wavefunction cannot be trivially combined, and the ordering of these phases becomes important. Such an ordering of the phases will be important in the Quantum Fourier Transform discussion below, and we will also discuss it more thoroughly in Appendix D.

### Implementing two-qubit gates

In this section, we discuss the implementation of operations on the two qubits using appropriate transformations of the mapped and transformed simplex vector, \(\vec{s}_{12}\). There are two types of operations that we will consider: (1) separable operations, where two independent single-qubit gates are applied to each qubit, and (2) non-separable operations, such as the entangling two-qubit controlled-NOT (CNOT) gate.
We start with separable operations on the two qubits of the form \(\hat{U}=\hat{U}_{1}\otimes\hat{U}_{2}\), where the first qubit and second qubit evolve under operators \(\hat{U}_{1}\) and \(\hat{U}_{2}\), respectively. In simplex space, we define the tensor product of operations on each of the individual simplex vectors: \(\tilde{M}_{2}[\hat{U}]\equiv\tilde{M}[\hat{U}_{1}]\otimes\tilde{M}[\hat{U}_{2}]\). More explicitly, \(\tilde{M}_{2}[\hat{U}]\) is a \(64\times 64\) matrix, which is a tensor product of two \(8\times 8\) single-simplex-vector transformation matrices as we discussed above: \[\tilde{M}_{2}[\hat{U}_{1}\otimes\hat{U}_{2}]\doteq\left(\begin{array}{c|c|c|c}\mathrm{Re}(\hat{U}_{1})&O&O&\mathrm{Im}(\hat{U}_{1})\\ \hline O&\mathrm{Re}(\hat{U}_{1})&\mathrm{Im}(\hat{U}_{1})&O\\ \hline\mathrm{Im}(\hat{U}_{1})&O&\mathrm{Re}(\hat{U}_{1})&O\\ \hline O&\mathrm{Im}(\hat{U}_{1})&O&\mathrm{Re}(\hat{U}_{1})\end{array}\right)\otimes\left(\begin{array}{c|c|c|c}\mathrm{Re}(\hat{U}_{2})&O&O&\mathrm{Im}(\hat{U}_{2})\\ \hline O&\mathrm{Re}(\hat{U}_{2})&\mathrm{Im}(\hat{U}_{2})&O\\ \hline\mathrm{Im}(\hat{U}_{2})&O&\mathrm{Re}(\hat{U}_{2})&O\\ \hline O&\mathrm{Im}(\hat{U}_{2})&O&\mathrm{Re}(\hat{U}_{2})\end{array}\right)\quad. \tag{43}\]

We apply the above transformation matrix to the combined simplex vector, \(\vec{s}_{12}\), exactly in the same manner as the single simplex vector transformations: \[T_{2}[\hat{U}](\vec{s}_{12})\equiv\frac{1}{8^{2}}(\tilde{I}_{64\times 64}-\tilde{M}_{2}[\hat{U}])\cdot\vec{u}\otimes\vec{u}+\tilde{M}_{2}[\hat{U}]\cdot\vec{s}_{12}\quad. \tag{44}\] Because the form of the combined simplex vector, \(\vec{s}_{12}\), is identical to the single simplex vector of Eq. (3), the above transformation of \(\vec{s}_{12}\) results in appropriate linear transformations of the corresponding \(\vec{p}\) vectors. More explicitly, we have \[T_{2}[\hat{U}](\vec{s}_{12})=\frac{1}{8^{2}}(\tilde{I}_{64\times 64}-\tilde{M}_{2}[\hat{U}])\cdot\vec{u}\otimes\vec{u}+\frac{1}{8^{2}}\tilde{M}_{2}[\hat{U}]\cdot(\vec{u}\otimes\vec{u}+\vec{p}_{1}\otimes\vec{p}_{2}) \tag{45}\] \[=\frac{1}{8^{2}}(\vec{u}\otimes\vec{u}+\tilde{M}[\hat{U}_{1}]\cdot\vec{p}_{1}\otimes\tilde{M}[\hat{U}_{2}]\cdot\vec{p}_{2}) \tag{46}\]

We note that, when \(\tilde{M}_{2}[\hat{U}]\equiv\tilde{M}[\hat{U}_{1}]\otimes\tilde{M}[\hat{U}_{2}]\) of Eq. (43) is explicitly evaluated, the final matrix contains terms with all four product combinations of the real and imaginary parts of the individual evolution operators, i.e., \(\mathrm{Re}(\hat{U}_{1})\otimes\mathrm{Re}(\hat{U}_{2})\), \(\mathrm{Re}(\hat{U}_{1})\otimes\mathrm{Im}(\hat{U}_{2})\), \(\mathrm{Im}(\hat{U}_{1})\otimes\mathrm{Re}(\hat{U}_{2})\), and \(\mathrm{Im}(\hat{U}_{1})\otimes\mathrm{Im}(\hat{U}_{2})\). As a result, we cannot express \(\tilde{M}[\hat{U}_{1}]\otimes\tilde{M}[\hat{U}_{2}]\) just in terms of \(\mathrm{Re}(\hat{U}_{1}\otimes\hat{U}_{2})\) and \(\mathrm{Im}(\hat{U}_{1}\otimes\hat{U}_{2})\). Hence we have the following important inequality: \[\tilde{M}_{2}[\hat{U}_{1}\otimes\hat{U}_{2}]\neq\tilde{I}_{4\times 4}\otimes\mathrm{Re}(\hat{U}_{1}\otimes\hat{U}_{2})+\tilde{\Lambda}\otimes\mathrm{Im}(\hat{U}_{1}\otimes\hat{U}_{2})\quad. \tag{47}\] We, therefore, conclude that the definition of \(\tilde{M}\) (the single-qubit operator map) is different from \(\tilde{M}_{2}\) (the two-qubit operator map).
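As a numerical sanity check of Eqs. (43)–(46) (our own sketch, not part of the original work), one can draw two random qubit states and unitaries and verify that the affine transform of Eq. (44) reproduces Eq. (46):

```python
import numpy as np

rng = np.random.default_rng(2)
I4 = np.eye(4)
Lam = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 0]])
u = np.ones(8); uu = np.kron(u, u)

def p_vec(psi):
    return np.concatenate([psi.real, -psi.real, psi.imag, -psi.imag])

def M_map(U):
    """Single-qubit map of Eq. (19)."""
    return np.kron(I4, U.real) + np.kron(Lam, U.imag)

def rand_state():
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    return psi / np.linalg.norm(psi)

def rand_unitary():
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(z)  # Q from a QR decomposition is unitary
    return q

p1, p2 = p_vec(rand_state()), p_vec(rand_state())
s12 = (uu + np.kron(p1, p2)) / 64                # Eq. (37)

U1, U2 = rand_unitary(), rand_unitary()
M2 = np.kron(M_map(U1), M_map(U2))               # Eq. (43)

T2_s12 = (np.eye(64) - M2) @ uu / 64 + M2 @ s12  # Eq. (44)
expected = (uu + np.kron(M_map(U1) @ p1, M_map(U2) @ p2)) / 64  # Eq. (46)
assert np.allclose(T2_s12, expected)
```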
As we will discuss below, for multiple-qubit separable operations \(\hat{U}=\bigotimes_{k=1}^{n}\hat{U}_{k}\), the map is analogously defined as \(\tilde{M}_{n}[\hat{U}]\equiv\bigotimes_{k=1}^{n}\tilde{M}[\hat{U}_{k}]\).

We now move to non-separable, entangling operations, such as the CNOT gate. A general two-qubit controlled unitary operation \(\hat{C}_{\hat{U}}\) can be decomposed as: \[\hat{C}_{\hat{U}}=\hat{P}_{0}\otimes\hat{I}_{2\times 2}+\hat{P}_{1}\otimes\hat{U},\quad\hat{P}_{0}=|0\rangle\!\langle 0|\,,\quad\hat{P}_{1}=|1\rangle\!\langle 1| \tag{48}\] Here, the first qubit is the control qubit, and the second qubit is the target qubit. The above unitary operation \(\hat{C}_{\hat{U}}\) applies the identity operator to the target qubit when the control qubit is in state \(|0\rangle\). When the control qubit is in state \(|1\rangle\), the target qubit evolves under the unitary operator \(\hat{U}\). Based on the above unitary controlled quantum operation, we define its simplex analog \(\tilde{M}_{2}[\hat{C}_{\hat{U}}]\) as, \[\tilde{M}_{2}[\hat{C}_{\hat{U}}]\equiv\tilde{P}_{0}\otimes\tilde{I}_{8\times 8}+\tilde{P}_{1}\otimes\tilde{M}[\hat{U}] \tag{49}\] where the two "projection" matrices, \(\tilde{P}_{0}\) and \(\tilde{P}_{1}\), acting on the first simplex vector are \[\tilde{P}_{0}=\left(\begin{array}{cccccccc}1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0\end{array}\right)\quad\tilde{P}_{1}=\left(\begin{array}{cccccccc}0&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&1\end{array}\right)\quad. \tag{50}\]

With the transformation matrix \(\tilde{M}_{2}[\hat{C}_{\hat{U}}]\) described by Eq. (49), we transform the combined simplex vector \(\vec{s}_{12}\) exactly in the manner described by Eq. (44): \(\vec{s}_{12}\longrightarrow\frac{1}{8^{2}}(\tilde{I}_{64\times 64}-\tilde{M}_{2}[\hat{C}_{\hat{U}}])\cdot\vec{u}\otimes\vec{u}+\tilde{M}_{2}[\hat{C}_{\hat{U}}]\cdot\vec{s}_{12}\). We note that this transformation is a valid transformation of the simplex vector since (1) the operation \(\hat{C}_{\hat{U}}\) is unitary as a whole, implying that \(T_{2}[\hat{C}_{\hat{U}}]\) is affine and conserves probability under transformations, and (2) we can explicitly prove that \(\tilde{M}_{2}[\hat{C}_{\hat{U}}]\cdot\tilde{P}\) for some valid \(\tilde{P}\) always has the form such that it is orthogonal to \(\vec{u}\otimes\vec{u}\), as was suggested earlier in Eq. (2). Let \(\tilde{P}=\sum_{jk}\vec{p}_{j}\otimes\vec{p}_{k}\); then, \[\vec{u}^{\mathsf{T}}\otimes\vec{u}^{\mathsf{T}}\cdot(\tilde{M}_{2}[\hat{C}_{\hat{U}}]\cdot\tilde{P})=\sum_{j,k}\left(\vec{u}^{\mathsf{T}}\otimes\vec{u}^{\mathsf{T}}\cdot(\tilde{P}_{0}\otimes\tilde{I}_{8\times 8})\cdot(\vec{p}_{j}\otimes\vec{p}_{k})+\vec{u}^{\mathsf{T}}\otimes\vec{u}^{\mathsf{T}}\cdot(\tilde{P}_{1}\otimes\tilde{M}[\hat{U}])\cdot(\vec{p}_{j}\otimes\vec{p}_{k})\right)\] \[=\sum_{j,k}\left((\vec{u}^{\mathsf{T}}\cdot\tilde{P}_{0}\cdot\vec{p}_{j})\overbrace{(\vec{u}^{\mathsf{T}}\cdot\vec{p}_{k})}^{0}+\overbrace{(\vec{u}^{\mathsf{T}}\cdot\tilde{P}_{1}\cdot\vec{p}_{j})(\vec{u}^{\mathsf{T}}\cdot\tilde{M}[\hat{U}]\cdot\vec{p}_{k})}^{0}\right)=0\]

In order to make these ideas more concrete, we next consider the creation of the simplex version \(\vec{s}(\Phi^{+})\) of the two-qubit entangled Bell state \(|\Phi^{+}\rangle\).
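A hedged numerical sketch of this construction is given below (our illustration, reusing the conventions above); it reproduces, step by step, the analytic walk-through that follows.

```python
import numpy as np

I4, I8 = np.eye(4), np.eye(8)
u = np.ones(8); uu = np.kron(u, u)
Lam = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 0]])

def M_map(U):
    """Eq. (19); for the real gates used here, Im(U) = 0."""
    return np.kron(I4, U.real) + np.kron(Lam, U.imag)

def T2(M2, s):
    """Two-qubit affine transform of Eq. (44)."""
    return (np.eye(64) - M2) @ uu / 64 + M2 @ s

p0 = np.array([1, 0, -1, 0, 0, 0, 0, 0.0])  # Eq. (6)
p1 = np.array([0, 1, 0, -1, 0, 0, 0, 0.0])
P0t = np.diag([1, 0, 1, 0, 1, 0, 1, 0.0])   # projection matrices of Eq. (50)
P1t = np.diag([0, 1, 0, 1, 0, 1, 0, 1.0])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0.0]])

s00 = (uu + np.kron(p0, p0)) / 64                 # Eq. (51)
s = T2(np.kron(M_map(H), I8), s00)                # Hadamard on the first vector
M_CX = np.kron(P0t, I8) + np.kron(P1t, M_map(X))  # Eq. (49) with U = X
s_bell = T2(M_CX, s)                              # the CNOT analog

# The result has the Bell-state form of Eq. (60):
expected = (uu + (np.kron(p0, p0) + np.kron(p1, p1)) / np.sqrt(2)) / 64
assert np.allclose(s_bell, expected)
```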
The simplex circuit that accomplishes this is shown schematically in Fig. 3. Here, we start with the \(\vec{s}_{0}\otimes\vec{s}_{0}\) state (exactly mimicking a two-qubit quantum system starting in the \(|00\rangle\) state) and use the above-introduced map \(\tau\) to produce the \(\vec{s}_{00}\) state, \[\vec{s}_{00}\equiv\vec{s}_{0}\otimes^{s}\vec{s}_{0}=\tau(\vec{s}_{0},\vec{s}_{0})=\frac{1}{8^{2}}(\vec{u}\otimes\vec{u}+\vec{p}_{0}\otimes\vec{p}_{0})\quad. \tag{51}\]

Figure 3: The circuit for creating a Bell state in the probability space. The dashed line marked as \(\tau\) indicates the application of the bi-affine map \(\tau\) (see Eq. (38)) to the initial state; i.e., the state after the dashed line in this case will be \(\tau(\vec{s}_{0},\vec{s}_{0})=\vec{s}_{00}\), as defined in Eq. (51).

We then apply the analog of the Hadamard gate to the first simplex vector, transforming that vector into: \[\vec{s}^{\prime}=T_{2}[\hat{H}\otimes\hat{I}](\vec{s}_{00})=\frac{1}{8^{2}}\big(\vec{u}\otimes\vec{u}+\tilde{M}[\hat{H}]\cdot\vec{p}_{0}\otimes\tilde{I}_{8\times 8}\cdot\vec{p}_{0}\big) \tag{52}\] \[=\frac{1}{8^{2}}\big[\vec{u}\otimes\vec{u}+(\vec{P}_{0}(1/\sqrt{2})+\vec{P}_{1}(1/\sqrt{2}))\otimes\vec{p}_{0}\big] \tag{53}\] \[=\frac{1}{8^{2}}\big[\vec{u}\otimes\vec{u}+\frac{1}{\sqrt{2}}(\vec{p}_{0}+\vec{p}_{1})\otimes\vec{p}_{0}\big]=\frac{1}{8^{2}}\big[\vec{u}\otimes\vec{u}+\frac{1}{\sqrt{2}}\big(\vec{p}_{00}+\vec{p}_{10}\big)\big]\quad. \tag{54}\] Here, we have introduced the following useful notation, which we will use extensively throughout the manuscript \[\vec{p}_{00}\equiv\vec{p}_{0}\otimes\vec{p}_{0}\quad,\quad\vec{p}_{01}\equiv\vec{p}_{0}\otimes\vec{p}_{1}\quad,\] \[\vec{p}_{10}\equiv\vec{p}_{1}\otimes\vec{p}_{0}\quad,\quad\vec{p}_{11}\equiv\vec{p}_{1}\otimes\vec{p}_{1}\quad. \tag{55}\] More generally, for \(n\) qubits, we will use an extended version of this notation: \[\bigotimes_{i=1}^{n}\vec{p}_{q_{i}}\equiv\vec{p}_{\mathbf{q}},\quad\mathbf{q}=(q_{1},\ldots,q_{n})\in\mathbb{B}^{n}\quad. \tag{56}\]

After the equivalent of the Hadamard gate on the first vector, we next apply the analog of the CNOT gate to obtain the Bell state in the probability space: \[\vec{s}(\Phi^{+})=T_{2}[\hat{C}_{\hat{X}}](\vec{s}^{\prime})=\frac{1}{8^{2}}[\vec{u}\otimes\vec{u}+\tilde{M}_{2}[\hat{C}_{\hat{X}}]\cdot\frac{1}{\sqrt{2}}(\vec{p}_{00}+\vec{p}_{10})] \tag{57}\] \[=\frac{1}{8^{2}}\big[\vec{u}\otimes\vec{u}+\frac{1}{\sqrt{2}}\big(\tilde{P}_{0}\otimes\tilde{I}_{8\times 8}+\tilde{P}_{1}\otimes\tilde{M}[\hat{X}]\big)\cdot\big(\vec{p}_{0}\otimes\vec{p}_{0}+\vec{p}_{1}\otimes\vec{p}_{0}\big)\big] \tag{58}\] \[=\frac{1}{8^{2}}\big[\vec{u}\otimes\vec{u}+\frac{1}{\sqrt{2}}\big(\vec{p}_{0}\otimes\vec{p}_{0}+\vec{p}_{1}\otimes\vec{p}_{1}\big)\big] \tag{59}\] \[=\frac{1}{8^{2}}\big(\vec{u}\otimes\vec{u}+\frac{1}{\sqrt{2}}\big(\vec{p}_{00}+\vec{p}_{11}\big)\big) \tag{60}\] Hence we see that the state \(\vec{s}(\Phi^{+})\) has the form that exactly mimics the quantum system being in a two-qubit entangled Bell state of the form \(|\psi\rangle=\frac{1}{\sqrt{2}}\big(|00\rangle+|11\rangle\big)\).

### Mapping states and operators for more than two qubits

Extending the map for more than two qubits is straightforward. We first view each qubit as being mapped to a corresponding eight-dimensional \(\vec{s}\) vector, exactly in the manner described by Eq. (3).
The tensor product of the qubit wavefunctions would then map to the tensor product of probability vectors, \(\vec{s}_{1}\otimes\vec{s}_{2}\otimes\cdots\otimes\vec{s}_{n}\). However, identical to the two-qubit case, this would produce cross terms in the final resultant vector. We then use the procedure outlined above and use the simplex tensor operation \(\otimes^{s}\) to obtain \[\varphi_{n}\left(\bigotimes_{k=1}^{n}|\psi_{k}\rangle\right)\equiv\vec{s}_{1}\otimes^{s}\vec{s}_{2}\cdots\otimes^{s}\vec{s}_{n}=\frac{1}{8^{n}}(\vec{u}^{\otimes_{n}}+\bigotimes_{k=1}^{n}\vec{p}_{k})=\vec{s}_{1,n}\quad. \tag{61}\] This can be achieved by recursive application of the affine map \(\tau\) introduced earlier, in the following manner: \[\varphi_{n}\left(\bigotimes_{k=1}^{n}|\psi_{k}\rangle\right)\equiv\tau(\vec{s}_{1},\tau(\vec{s}_{2},\ldots\tau(\vec{s}_{n-1},\vec{s}_{n})\ldots))\quad. \tag{62}\] We provide a schematic for the circuit that achieves the recursive application of this affine map in Fig. 4. The overall map \(\varphi_{n}\) is, as before, also affine for each of the simplex state vectors. We note that since the action of each bi-affine transformation \(\tau\) can be applied in constant time and memory, we can implement the map \(\varphi_{n}\) (i.e., the circuit shown in Fig. 4) for any large \(n\) in \(O(n)\) time and memory.

Figure 4: Simplified schematic for the operations involved in the recursive application of the bi-affine map \(\tau\) as defined in Eq. (62) (cf. Fig. 2 and refer to Appendix C for details). At each stage we generate a new distribution using the bi-affine map \(\tau\) and propagate that distribution to the next stage. After \(n\) stages, the circuit produces a combined simplex state \(\vec{s}_{1,n}\) corresponding to the \(n\)-qubit state. From this diagram it is clear that the runtime/time complexity of \(\varphi_{n}\) grows linearly with \(n\), because only \(n\) stages are required, with each costing a constant overhead.

For non-separable states, the map is defined based on Eq. (42) as, \[\varphi_{n}\left(\sum_{j}\bigotimes_{i=1}^{n}|\psi_{j_{i}}\rangle\right)=\frac{1}{8^{n}}(\vec{u}^{\otimes_{n}}+\sum_{j}\bigotimes_{i=1}^{n}\vec{p}_{j_{i}}) \tag{63}\] where each \(\vec{p}_{j_{i}}\) corresponds to the \(p\)-state of each \(\varphi\,|\psi_{j_{i}}\rangle\).

For mapping unitary evolution, we first consider separable (non-entangling) operators of the form \(\bigotimes_{i=1}^{n}\hat{U}_{i}\), which is a tensor product of the evolution of each qubit by operator \(\hat{U}_{i}\). Following exactly the same strategy as we discussed above for two qubits, we define the map of this tensor product as a tensor product of individual mapped operators: \[\tilde{M}_{n}[\bigotimes_{i=1}^{n}\hat{U}_{i}]\equiv\bigotimes_{i=1}^{n}\tilde{M}[\hat{U}_{i}]\quad. \tag{64}\] Here, the map for each unitary operator acting on qubit \(i\), \(\hat{U}_{i}\), is exactly as defined above: \[\tilde{M}[\hat{U}_{i}]=\left(\begin{array}{c|c|c|c}\mathrm{Re}(\hat{U}_{i})&O&O&\mathrm{Im}(\hat{U}_{i})\\ \hline O&\mathrm{Re}(\hat{U}_{i})&\mathrm{Im}(\hat{U}_{i})&O\\ \hline\mathrm{Im}(\hat{U}_{i})&O&\mathrm{Re}(\hat{U}_{i})&O\\ \hline O&\mathrm{Im}(\hat{U}_{i})&O&\mathrm{Re}(\hat{U}_{i})\end{array}\right)=\tilde{I}_{4\times 4}\otimes\mathrm{Re}(\hat{U}_{i})+\tilde{\Lambda}\otimes\mathrm{Im}(\hat{U}_{i})\quad.
\tag{65}\] More generally, entangling (non-separable) operations can be expressed as a sum of separable operations of the form \(\sum_{j}\bigotimes_{i=1}^{n}\hat{U}_{j_{i}}\), and their map to the probability space is: \[\tilde{M}_{n}[\sum_{j}\bigotimes_{i=1}^{n}\hat{U}_{j_{i}}]\equiv\sum_{j}\bigotimes_{i=1}^{n}\tilde{M}[\hat{U}_{j_{i}}]\quad. \tag{66}\] The transformation of the overall simplex vector with this mapped evolution is: \[T_{n}[\sum_{j}\bigotimes_{i=1}^{n}\hat{U}_{j_{i}}](\vec{S})=\frac{1}{8^{n}}(\tilde{I}_{8^{n}\times 8^{n}}-\tilde{M}_{n}[\sum_{j}\bigotimes_{i=1}^{n}\hat{U}_{j_{i}}])\cdot\vec{u}^{\otimes_{n}}+\tilde{M}_{n}[\sum_{j}\bigotimes_{i=1}^{n}\hat{U}_{j_{i}}]\cdot\vec{S} \tag{67}\] \[=\frac{1}{8^{n}}(\tilde{I}_{8^{n}\times 8^{n}}-\sum_{j}\bigotimes_{i=1}^{n}\tilde{M}[\hat{U}_{j_{i}}])\cdot\vec{u}^{\otimes_{n}}+\sum_{j}\bigotimes_{i=1}^{n}\tilde{M}[\hat{U}_{j_{i}}]\cdot\vec{S}\quad, \tag{68}\] for some given mapped \(n\)-qubit simplex state vector \(\vec{S}\).

### Measurements on the \(n\)-qubit system

The map defined above for \(n\) qubits stores the real and imaginary parts of the complex coefficients in the corresponding probabilities of the \(\vec{s}\) vector. Identical to the single-qubit maps that we discussed above, because the real and imaginary parts are stored in the mapped simplex vector, there is a one-to-one correspondence between the measurement probabilities of the quantum wavefunction and the measured individual entries (i.e., probabilities) of the simplex vector. Following the strategy introduced for single-qubit measurements, for a given \(n\)-qubit total wavefunction \(\left|\psi_{tot}\right\rangle\), the corresponding mapped simplex vector \(\vec{s}_{tot}=\varphi_{n}\left|\psi_{tot}\right\rangle\), and the qubit projection operator \(\hat{M}_{\mathbf{q}}=\left|\mathbf{q}\right\rangle\!\!\left\langle\mathbf{q}\right|\), the following connection holds (cf. Eq. (33)): \[\vec{s}_{tot}^{\mathsf{T}}\cdot T[\hat{M}_{\mathbf{q}}](\vec{s}_{tot})=\langle T[\hat{M}_{\mathbf{q}}]\rangle_{\vec{s}_{tot}}=\frac{1}{8^{n}}(1+\frac{1}{4^{n}}|\left\langle\mathbf{q}|\psi_{tot}\right\rangle|^{2}),\quad\mathbf{q}\in\mathbb{B}^{n}\quad. \tag{69}\] To give a specific example, consider that at the end of the quantum evolution, we are interested in finding the probability that the qubit system is in the state \(\left|000...0\right\rangle\) after a measurement in that basis. This probability is \(|\left\langle 000...0|\psi_{tot}\right\rangle|^{2}\), and can be calculated directly from the components of the simplex vector \(\vec{s}_{tot}\): \[|\left\langle 000...0|\psi_{tot}\right\rangle|^{2}=(1-8^{n}s_{tot,1})^{2}+(1-8^{n}s_{tot,5})^{2}\quad. \tag{70}\]

Furthermore, we note that in a quantum system, the outcomes of measurements do not depend on the absolute phase of the wavefunction. As we mentioned above, different absolute phases for the quantum wavefunction result in different maps in the simplex. A critical point of consideration is whether the measurements we make in the simplex are independent of the phase ordering we choose for a particular state. It can, in fact, be proven that the phase ordering operations do not affect the measurements we make in the logical basis (see Appendix E for the proof), establishing the consistency of the phase ordering operations that we introduce in Appendix D. For measurements more general than just a projection operator, a connection that is identical to the single-qubit case presented in Eq. (33) holds.
That is, for any given quantum observable \(\hat{A}\), quantum state \(\left|\psi_{tot}\right\rangle\), and the corresponding simplex state \(\vec{s}_{tot}=\varphi_{n}\left|\psi_{tot}\right\rangle\) (note here that the phase order has not been specified because of its irrelevance), we have the following connection between the quantum measurement and the simplex measurement (see Appendix E for the proof): \[\vec{s}_{tot}^{\mathsf{T}}\cdot T[\hat{A}](\vec{s}_{tot})=\langle T[\hat{A}]\rangle_{\vec{s}_{tot}}=\frac{1}{8^{n}}(1+\frac{1}{4^{n}}\langle\hat{A}\rangle_{|\psi_{tot}\rangle})\quad. \tag{71}\]

## V The Deutsch-Jozsa algorithm

In the Deutsch-Jozsa problem [8], we are given a black-box quantum computer, known as an oracle, that implements some function \(f\). The function \(f\) takes \(n\)-bit binary values as input and produces either a 0 or a 1 as output for each such value. We are promised that the function is either constant (0 on all inputs or 1 on all inputs) or balanced (0 for exactly half of the input domain and 1 for the other half). The task then is to determine if \(f\) is constant or balanced by using the oracle. For a classical deterministic algorithm, an exponential number of evaluations of the function is required. For a quantum algorithm, only a single query to the function \(f\) is sufficient. The Deutsch-Jozsa algorithm is critically important in the history of quantum computation, since it was the first algorithm to explicitly show that there can be an exponential speed-up if quantum computing is used.

One thing to note is that in the Deutsch-Jozsa quantum algorithm, complex values for the coefficients are not needed; i.e., only the real values and the signs are important. Because of that, for the mapping of each qubit, we need the components of the simplex vectors that store only the real parts of the coefficients; i.e., for each mapped qubit, only the \(\vec{p}_{0}\) and \(\vec{p}_{1}\) vectors introduced above in Section III are sufficient.

In our implementation, we follow quite closely the main steps in the quantum algorithm. We start with \(n+1\) simplex vectors. The first \(n\) simplex vectors are initialized to their \(\vec{p}_{0}\) state, while the final vector is in the \(\vec{p}_{1}\) state. Our initial state is therefore: \[\frac{1}{8^{n+1}}\left[\vec{u}^{\otimes_{n+1}}+\left(\overbrace{\vec{p}_{0}\otimes\vec{p}_{0}\otimes\cdots\otimes\vec{p}_{0}}^{n\ \text{simplex vectors}}\right)\otimes\vec{p}_{1}\right]\quad. \tag{72}\] We then proceed with applying a Hadamard gate to each of the simplex vectors to obtain: \[\frac{1}{8^{n+1}}\left[\vec{u}^{\otimes_{n+1}}+\frac{1}{\sqrt{2^{n+1}}}\left(\vec{p}_{0}+\vec{p}_{1}\right)\otimes\left(\vec{p}_{0}+\vec{p}_{1}\right)\otimes\cdots\otimes\left(\vec{p}_{0}+\vec{p}_{1}\right)\otimes\left(\vec{p}_{0}-\vec{p}_{1}\right)\right]\quad, \tag{73}\] \[=\frac{1}{8^{n+1}}\left[\vec{u}^{\otimes_{n+1}}+\frac{1}{\sqrt{2^{n+1}}}\sum_{z=0}^{2^{n}-1}\vec{p}_{\mathbf{z}}\otimes\left(\vec{p}_{0}-\vec{p}_{1}\right)\right]\quad.\] Now, similar to the quantum case, we have the function \(f\) implemented as an oracle. The oracle maps the state \(\vec{p}_{\mathbf{z}}\otimes\vec{p}_{y}\) to \(\vec{p}_{\mathbf{z}}\otimes\vec{p}_{y^{\prime}}\), where \(y^{\prime}=y\oplus f(z)\) and \(z\) denotes the decimal equivalent of \((\mathbf{z})_{2}\). Here, \(\oplus\) denotes addition modulo 2.
Applying this oracle to the state above gives: \[\frac{1}{8^{n+1}}\left[\vec{u}^{\otimes_{n+1}}+\frac{1}{\sqrt{2^{n+1}}}\sum_{z=0}^{2^{n}-1}\vec{p}_{\mathbf{z}}\otimes\left(\vec{p}_{0^{\prime}}-\vec{p}_{1^{\prime}}\right)\right]\quad. \tag{74}\] where we have \(0^{\prime}=0\oplus f(z)\) and \(1^{\prime}=1\oplus f(z)\) for the \((n+1)^{\rm th}\) bit, respectively. Noting that for each \(z\) there are only two possibilities for \(f(z)\), either 0 or 1, the above state equals: \[\frac{1}{8^{n+1}}\left[\vec{u}^{\otimes_{n+1}}+\frac{1}{\sqrt{2^{n+1}}}\sum_{z=0}^{2^{n}-1}(-1)^{f(z)}\vec{p}_{\mathbf{z}}\otimes\left(\vec{p}_{0}-\vec{p}_{1}\right)\right]\quad. \tag{75}\] At this point, the \((n+1)^{\rm th}\) vector is redundant and can be ignored.

Figure 5: Simplex version of the Deutsch-Jozsa algorithm. The dashed line marked as \(\tau\) indicates recursive application of the bi-affine map \(\tau\) as shown in Eq. (62). Therefore, the state before the application of the Hadamard gates is as indicated in Eq. (72). The quantum oracle evaluating the function \(f\) can be represented by a unitary evolution, \(\hat{U}_{f}\). This unitary evolution is encoded in the black-box simplex transformation, \(\tilde{M}_{n+1}[\hat{U}_{f}]\), such that it has the desired action on the simplex states.
In this section, we will first review the main steps in the Quantum Fourier Transform operation and then discuss its implementation in the probability space. Let's consider an exponentially large sequence of numbers, \(x_{j}\), of length \(L\): \(\{0\leqslant x_{j}\leqslant 1:j\in\{0,1,\ldots,L-1\}\}\). The discrete Fourier transform of such a sequence is given by the following expression: \[y_{k}=\frac{1}{\sqrt{L}}\sum_{j=0}^{L-1}e^{2\pi i\,jk/L}x_{j},\forall k\in\{0,1, \ldots,L-1\}\quad. \tag{79}\] Because this transformation is unitary we can envision a quantum procedure that achieves the above transformation for the expansion coefficients of the quantum state in a certain basis. Quantum Fourier Transform, which forms a critical step in Shor's factoring algorithm, specifies a method for transformation of the components of the basis states in a manner identical to Eq. (79). More explicitly, for a sequence of length \(L=2^{n}\), we design a quantum unitary evolution matrix \(\hat{Q}_{n}\) on \(n=\log_{2}L\) qubits which has the following action on a given logical basis state \(\left|j_{1},j_{2},\ldots,j_{n}\right\rangle\equiv\left|j\right\rangle\), \[\hat{Q}_{n}\left|j_{1},j_{2},\ldots,j_{n}\right\rangle=\hat{Q}_{n}\left|j \right\rangle\equiv\frac{1}{\sqrt{2^{n}}}\sum_{k=0}^{2^{n}-1}e^{2\pi ijk/2^{n} }\left|k\right\rangle\quad. \tag{80}\] where each \(j_{\nu},k_{\nu}\in\mathbb{B}\) and \(j\), \(k\) are the decimal equivalents of the binary representations: \((j_{1}\,j_{2}\cdots j_{n})_{2}\) and \((k_{1}\,k_{2}\cdots k_{n})_{2}\), respectively. Following the definition of above, if we have a general state, \[\left|x\right\rangle=\sum_{j=0}^{2^{n}-1}x_{j}\left|j\right\rangle \tag{81}\] that stores the sequence \(\{0\leqslant x_{j}\leqslant 1:j\in\{0,1,\ldots,2^{n}-1\}\}\), then the application of the unitary operator \(\hat{Q}_{n}\) to this state provides us with a state that stores the discrete Fourier transform of the aforementioned sequence of coefficients. This can be seen by noting that: \[\hat{Q}_{n}\left|x\right\rangle =\sum_{j=0}^{2^{n-1}}x_{j}\hat{Q}_{n}\left|j\right\rangle=\sum_{j =0}^{2^{n}-1}x_{j}\frac{1}{\sqrt{2^{n}}}\sum_{k=0}^{2^{n}-1}e^{2\pi ijk/2^{n} }\left|k\right\rangle \tag{82}\] \[=\sum_{k=0}^{2^{n}-1}\overbrace{\frac{1}{\sqrt{2^{n}}}\left( \sum_{j=0}^{2^{n}-1}e^{2\pi i\,jk/2^{n}}x_{j}\right)}^{\underbrace{y_{k}}_{ \left|k\right\rangle}}\left|k\right\rangle=\sum_{k=0}^{2^{n}-1}y_{k}\left|k \right\rangle=\left|y\right\rangle, \tag{83}\] While the unitary operator, \(\hat{Q}_{n}\) is of dimension \(2^{n}\times 2^{n}\) and acts on an exponentially large state space, remarkably, the Quantum Fourier Transform operation can be implemented using \(O(n^{2})\) single-qubit and two-qubit gates. This can most readily be seen by writing the effect of \(\hat{Q}_{n}\) on a basis state \(\ket{j}\) in the following product form: \[\hat{Q}_{n}\ket{j}=\frac{1}{(\sqrt{2})^{n}}(\ket{0}+e^{2\pi i(0.j_{n})_{2}}\ket{1} )(\ket{0}+e^{2\pi i(0.j_{n-1}j_{n})_{2}}\ket{1})\cdots(\ket{0}+e^{2\pi i(0.j_{1} j_{2}j_{3}...j_{n})_{2}}\ket{1})\quad. 
\tag{84}\] In order to implement the simplex version of the Quantum Fourier Transform, we first write the above product form in the probability space, as a tensor product of the \(\vec{p}\) vectors: \[\tilde{M}[\hat{Q}_{n}]\cdot\vec{p}_{j_{1},j_{2},\ldots,j_{n}}=\frac{1}{(\sqrt{2})^{n}}(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{n})_{2}}))\otimes(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{n-1}j_{n})_{2}}))\otimes\cdots\otimes(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{1}j_{2}j_{3}...j_{n})_{2}}))\quad. \tag{85}\]

There is a well-known procedure for implementing the Quantum Fourier Transform using a sequence of Hadamard gates and controlled-phase rotations. This procedure is first implemented in reverse order on the qubits, and then transformed into the desired form by using a final set of SWAP operations. To implement the simplex analog of the Quantum Fourier Transform, we follow this procedure gate by gate. We give a circuit diagram in Fig. 6 that generates the state in Eq. (84) but in reverse order; we call this operation \(\tilde{M}[\hat{Q}^{\prime}_{n}]\).

Figure 6: Circuit for implementing \(\tilde{M}[\hat{Q}^{\prime}_{n}]\). The dashed line marked as \(\tau\) indicates recursive application of the bi-affine map \(\tau\) as shown in Eq. (62). The expressions for the rotation gates \(\hat{R}_{k}\) are provided in Eq. (86). At the end of this circuit, identical to the quantum case, the output bits are in reverse order. Furthermore, there is also another complication, due to how the absolute phase of the wavefunction can be distributed (ordered) into individual simplex vectors. To obtain the Fourier Transform operation identical to the quantum case, post-processing involves the application of a phase ordering operation followed by \(\lfloor n/2\rfloor\) SWAP operations (see Fig. 7).

In Fig. 6, each gate \(H\) is a Hadamard gate on that specific simplex vector, and each controlled-rotation gate \(\hat{R}_{k}\) is the following matrix: \[\hat{R}_{k}\equiv\left(\begin{array}{cc}1&0\\ 0&e^{2\pi i/2^{k}}\end{array}\right)\quad. \tag{86}\] We now outline the procedure step by step as follows. First, we start with the initial product state of the simplex vectors \(\vec{s}_{j_{1}}\otimes\vec{s}_{j_{2}}\otimes\cdots\otimes\vec{s}_{j_{n}}\). This product state is obtained with the identical procedure that we discussed above, for example, in the Deutsch-Jozsa algorithm. Each qubit is mapped to a simplex vector using the mapping of Eq. (3), \(\ket{\psi_{j_{i}}}\rightarrow\vec{s}_{j_{i}}\). We then apply the recursive bi-affine map \(\tau\) to get the simplex analog of the initial product state \(\ket{j}\) (cf. Eqs. (61) and (62)): \[\varphi_{n}\ket{j}=\frac{1}{8^{n}}(\vec{u}^{\otimes_{n}}+\vec{p}_{j_{1}}\otimes\vec{p}_{j_{2}}\otimes\cdots\otimes\vec{p}_{j_{n}})=\frac{1}{8^{n}}(\vec{u}^{\otimes_{n}}+\vec{p}_{j_{1},j_{2},\ldots,j_{n}})\quad. \tag{87}\] We then transform the first simplex vector, \(\vec{s}_{j_{1}}\), by a Hadamard gate followed by a sequence of \(n-1\) controlled rotation gates [70].
If we track this evolution gate-by-gate, below are the simplex states that are produced:

* The first Hadamard gate \[(\tilde{M}[\hat{H}]\cdot\vec{p}_{j_{1}})\otimes\vec{p}_{j_{2},j_{3},\ldots,j_{n}}=\frac{1}{\sqrt{2}}\big(\vec{p}_{0}+(-1)^{j_{1}}\vec{p}_{1}\big)\otimes\big(\vec{p}_{j_{2},j_{3},\ldots,j_{n}}\big)\] \[=\frac{1}{\sqrt{2}}\big(\vec{p}_{0}+\vec{P}_{1}\big(e^{i\pi j_{1}}\big)\big)\otimes\vec{p}_{j_{2},j_{3},\ldots,j_{n}}\] \[=\frac{1}{\sqrt{2}}\big(\vec{p}_{0}+\vec{P}_{1}\big(e^{2\pi i(0.j_{1})_{2}}\big)\big)\otimes\vec{p}_{j_{2},j_{3},\ldots,j_{n}}\]
* Controlled \(\hat{R}_{2}\) rotation \(\hat{C}_{\hat{R}_{2}}^{(2,1)}\): \[\tilde{M}_{n}[\hat{C}_{\hat{R}_{2}}^{(2,1)}]\cdot\left(\frac{1}{\sqrt{2}}\big(\vec{p}_{0}+\vec{P}_{1}\big(e^{2\pi i(0.j_{1})_{2}}\big)\big)\otimes\vec{p}_{j_{2},j_{3},\ldots,j_{n}}\right)\] \[=\frac{1}{\sqrt{2}}\big(\tilde{M}[\hat{R}_{2}^{j_{2}}]\cdot\big(\vec{p}_{0}+\vec{P}_{1}\big(e^{2\pi i(0.j_{1})_{2}}\big)\big)\big)\otimes\vec{p}_{j_{2},j_{3},\ldots,j_{n}}\] \[=\frac{1}{\sqrt{2}}\big(\vec{p}_{0}+\vec{P}_{1}\big(e^{2\pi i((0.j_{1})_{2}+j_{2}/2^{2})}\big)\big)\otimes\vec{p}_{j_{2},j_{3},\ldots,j_{n}}=\frac{1}{\sqrt{2}}\big(\vec{p}_{0}+\vec{P}_{1}\big(e^{2\pi i(0.j_{1}j_{2})_{2}}\big)\big)\otimes\vec{p}_{j_{2},j_{3},\ldots,j_{n}}\] \[\vdots\]
* By continuing this sequence, after the controlled \(\hat{R}_{n}\) rotation \(\hat{C}_{\hat{R}_{n}}^{(n,1)}\) the state will be: \[\tilde{M}_{n}[\hat{C}_{\hat{R}_{n}}^{(n,1)}]\cdot\left(\frac{1}{\sqrt{2}}\big(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{1}j_{2}\dots j_{n-1})_{2}})\big)\otimes\vec{p}_{j_{2},j_{3},\ldots,j_{n}}\right)\] \[=\frac{1}{\sqrt{2}}\big(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{1}j_{2}\dots j_{n})_{2}})\big)\otimes\vec{p}_{j_{2},j_{3},\ldots,j_{n}}\]

Using a procedure similar to the above, we next transform the second simplex state, \(\vec{s}_{j_{2}}\), via \(n-2\) controlled rotations, again starting with a Hadamard gate. For this case, the second simplex state will be transformed appropriately, producing the following overall output \(\vec{p}\) vector: \[\frac{1}{(\sqrt{2})^{2}}(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{1}j_{2}\dots j_{n})_{2}}))\otimes(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{2}j_{3}\dots j_{n})_{2}}))\otimes\vec{p}_{j_{3},\dots,j_{n}}\quad. \tag{88}\] As shown in Fig. 6, this procedure is continued until the final simplex state. At the end of the procedure, the last simplex vector, \(\vec{s}_{j_{n}}\), is evolved by the application of a single Hadamard gate. The final output \(\vec{p}\) vector can be rewritten in the same manner as before, giving us the form required in Eq. (84) but in reverse order, \[\frac{1}{(\sqrt{2})^{n}}(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{1}j_{2}j_{3}\dots j_{n})_{2}}))\otimes\cdots\otimes(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{n-1}j_{n})_{2}}))\otimes(\vec{p}_{0}+\vec{P}_{1}(e^{2\pi i(0.j_{n})_{2}}))\quad. \tag{89}\]

Figure 7: Post-processing after the method \(\tilde{M}[\hat{Q}^{\prime}_{n}]\) proposed in Fig. 6. Here we show an example of making a first phase-ordered state. The SWAP operation after the phase ordering operation \(\Gamma_{n}^{(n)}\) will make up a first phase-ordered state.

This form achieves the quantum Fourier transform operation of Eq. (84), but in reverse order. By applying \(\lfloor n/2\rfloor\) SWAP operations, we would accomplish the quantum Fourier transform operation of Eq. (84). We next discuss the complication that the overall absolute phase of the wavefunction creates for the simplex vectors.
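Before turning to that complication, we note that each single-qubit step of the walk-through above can be checked numerically through the correspondence of Eq. (13). The sketch below is our own illustration and sidesteps the multi-qubit phase-ordering issue entirely.

```python
import numpy as np

I4 = np.eye(4)
Lam = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 0]])

def p_vec(psi):
    return np.concatenate([psi.real, -psi.real, psi.imag, -psi.imag])

def M_map(U):
    """Single-qubit map of Eq. (19)."""
    return np.kron(I4, U.real) + np.kron(Lam, U.imag)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
def R(k):
    """Rotation gate of Eq. (86)."""
    return np.diag([1.0, np.exp(2j * np.pi / 2**k)])

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

U = R(3) @ R(2) @ H                  # a typical per-qubit QFT gate sequence
# Eq. (13): applying the mapped gates in sequence tracks U|psi> exactly.
p_out = M_map(R(3)) @ M_map(R(2)) @ M_map(H) @ p_vec(psi)
assert np.allclose(p_out, p_vec(U @ psi))
```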
For the case of the quantum algorithm, because the phases are treated as numerical factors, the expressions in Eqs. (80) and (84) are equivalent. However, the same cannot be said for the corresponding simplex expressions. Therefore, for post processing we first apply a particular phase ordering operation and then \(\lfloor n/2\rfloor\) SWAP operations. In Fig. 7, we give an example of constructing a specific phase order for the Fourier transformed states. After application of \(\tilde{M}[\hat{Q}_{n}]\), this specific phase ordering operation adds all the phases and stores them in the last state of all the product states when expanding the state in Eq. (80). Then the \(\lfloor n/2\rfloor\) SWAP operations will reverse each of the product states in the expansion, thereby shifting the position of the phases. The final state that we produce will be of the following form, which we refer to as the first phase-ordered state: \[\frac{1}{(\sqrt{2})^{n}}\sum_{k_{1}\in\mathbb{B}}\cdots\sum_{k_{n}\in\mathbb{B}}\tilde{P}_{k_{1},\ldots,k_{n}}^{(1)}(e^{2\pi ijk/2^{n}})=\frac{1}{(\sqrt{2})^{n}}\sum_{k_{1}\in\mathbb{B}}\cdots\sum_{k_{n}\in\mathbb{B}}\tilde{P}_{k_{1}}(e^{2\pi ijk/2^{n}})\otimes\vec{p}_{k_{2},\ldots,k_{n}}\quad. \tag{90}\] From this we can generate other phase ordered states by just applying appropriate order switching operations. The details of these operations are discussed in Appendix D. Since we have the capacity to order phases of the states after performing the Fourier transform operation, it is important that we explicitly mention this in the operator notation. We do this by using an additional order subscript \(\sigma\): \(\tilde{M}[\hat{Q}_{n}]_{\sigma}\), where \(\sigma\in\{1,2,\ldots,n\}\) denotes the final phase ordering of the state after the Fourier transform operation has been performed. The transform \(T\) over the simplex for the Fourier transform operation \(\tilde{M}[\hat{Q}_{n}]_{\sigma}\) will be denoted by \(T[\hat{Q}_{\sigma}^{(n)}]\). Finally, we note that \(T[\hat{Q}_{\sigma}^{(n)}]\) has the desired effect on a general state as well. As before, starting with a general state in some phase order \(\sigma\) [71], \[\varphi_{n}^{\sigma}\ket{x}=\bar{s}^{(\sigma)}(x)\doteq\frac{1}{8^{n}}(\bar{u}^{\otimes_{n}}+\sum_{q=0}^{2^{n}-1}\tilde{P}_{\mathbf{q}}^{(\sigma)}(x_{q})),\quad\mathbf{q}\in\mathbb{B}^{n},q=[\mathbf{q}], \tag{91}\] we apply the Fourier transform operation and simplify, \[T[Q_{\sigma}^{(n)}](\bar{s}^{(\sigma)}(x)) = \frac{1}{8^{n}}\left(\bar{u}^{\otimes_{n}}+\tilde{M}[\hat{Q}_{n}]_{\sigma}\cdot\left(\sum_{q=0}^{2^{n}-1}\bar{P}_{\bf q}^{(\sigma)}(x_{q})\right)\right) \tag{92}\] \[= \frac{1}{8^{n}}\left(\bar{u}^{\otimes_{n}}+\left(\sum_{q=0}^{2^{n}-1}\tilde{M}[\hat{Q}_{n}]_{\sigma}\cdot\bar{P}_{\bf q}^{(\sigma)}(x_{q})\right)\right) \tag{93}\] \[= \frac{1}{8^{n}}\left(\bar{u}^{\otimes_{n}}+\sum_{q=0}^{2^{n}-1}\frac{1}{(\sqrt{2})^{n}}\sum_{k=0}^{2^{n}-1}\bar{P}_{\bf k}^{(\sigma)}(e^{2\pi i\,qk/2^{n}}x_{q})\right),\quad k=[{\bf k}] \tag{94}\] \[= \frac{1}{8^{n}}\left(\bar{u}^{\otimes_{n}}+\sum_{k=0}^{2^{n}-1}\bar{P}_{\bf k}^{(\sigma)}\left(\frac{1}{(\sqrt{2})^{n}}\sum_{q=0}^{2^{n}-1}e^{2\pi i\,qk/2^{n}}x_{q}\right)\right),\quad y_{k}\equiv\frac{1}{(\sqrt{2})^{n}}\sum_{q=0}^{2^{n}-1}e^{2\pi i\,qk/2^{n}}x_{q} \tag{95}\] \[= \frac{1}{8^{n}}\left(\bar{u}^{\otimes_{n}}+\sum_{k=0}^{2^{n}-1}\bar{P}_{\bf k}^{(\sigma)}(y_{k})\right)=\bar{s}^{(\sigma)}(y)=\varphi_{n}^{\sigma}\left|y\right\rangle\quad. \tag{96}\] We, therefore, see that the final state indeed stores the Fourier transform of the input sequence in the same phase order.
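As a sanity check on Eqs. (92)-(96), the coefficients \(y_{k}=(1/\sqrt{2^{n}})\sum_{q}e^{2\pi i\,qk/2^{n}}x_{q}\) are a unitary discrete Fourier transform of the input sequence, which a few lines of numerics (ours, under no assumptions beyond the formula itself) confirm:

```python
import numpy as np

# Sketch (ours): the transform produced in Eqs. (92)-(96) is the unitary DFT
# with a positive-exponent sign convention.
n, N = 3, 2**3
rng = np.random.default_rng(1)
x = rng.normal(size=N) + 1j * rng.normal(size=N)
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
y = F @ x
assert np.allclose(y, np.sqrt(N) * np.fft.ifft(x))  # matches the sign convention
assert np.allclose(F.conj().T @ F, np.eye(N))       # the transform is unitary
```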
Note that to go from Eq. (94) to (95) we have used the additive property from Eq. (12). ## VII Conclusions and future directions In conclusion, we have discussed a new approach to simulate quantum algorithms using classical probabilistic bits and circuits. Each qubit (a two-level quantum system) is initially mapped to a vector in an eight-dimensional probability space. Due to the identical tensor product structure of combining multiple quantum systems as well as multiple probability spaces, \(n\) qubits are then mapped to a tensor product of \(n\) 8-dimensional probabilistic vectors (i.e., the Hilbert space of dimension \(2^{n}\) is mapped to a probability space of dimension \(8^{n}\)). After this initial mapping, we showed how to implement analogs of single-qubit and two-qubit gates in the probability simplex. Remarkably, these results show that an exponentially large number of complex coefficients in the quantum evolution can be tracked in the probability space with a similar number of operations performed in the probability simplex. We also discussed how to simulate (1) the Deutsch-Jozsa algorithm in the probability space, and (2) the Quantum Fourier transform in the probability space. Identical to the quantum case, implementing the Quantum Fourier Transform in the probability space requires a polynomial number of gates (in an exponentially large probability space). Our work shows that the initial state and the evolution of an \(n\)-qubit quantum computer can be captured using a polynomial number of fully-correlated classical random variables and affine circuits. One exciting future direction is to show whether our approach constitutes a truly efficient simulation of quantum evolution, or whether there is an exponential overhead hiding in a certain aspect of our formalism. As the state of the quantum computer evolves in the Hilbert space, the entries of the simplex vector (i.e., the specific joint probabilities) become exponentially small (similar to how the magnitudes of the complex coefficients become exponentially small in a quantum computer). As a result, we believe understanding how the evolution is affected by noise is critical. To be able to claim efficient simulation, a detailed study of noise and error correction in probabilistic circuits of the form that we describe here is essential. This is one clear future research direction. We think it is possible that error correction is more straightforward with classical random variables in the probability space, since we are allowed to make a polynomial number of copies of the individual simplex vectors and introduce a redundancy. We also note that the issue of noise and error correction is still an active research area for quantum computers. It is known that the threshold theorem is not applicable when there is correlated noise affecting all the qubits simultaneously in the quantum computer [49; 50; 51; 52]. Recent work has shown that such errors can happen when the qubits are coupled to a common bosonic bath (which inevitably happens in every quantum computer) [53; 54; 55]. We discussed how correlated decay between the qubit levels causes an error on each qubit that scales with the number of qubits, thereby again violating one of the basic assumptions of the threshold theorem (the assumption that the gate errors are smaller than a certain threshold) [56].
We also note another important issue: while the \(\vec{p}\) vector tracks the complex coefficients of the quantum evolution, it is the total simplex vector, \(\vec{s}_{tot}\), that is physical and that contains the probabilities (and the final measurements are performed on this vector). In other words, the quantum evolution is tracked in the deviation of the probabilities from the uniform distribution, \(\vec{u}\). As a result, the individual entries of the simplex vector, \(\vec{s}_{tot}\), always remain exponentially small. This is different from the quantum case. In quantum systems, at the end of the evolution, the probability can be concentrated at a certain state. In our formalism, tracking this evolution, the entries of the \(\vec{p}\) vector would concentrate at a certain state, but this would not happen for the \(\vec{s}_{tot}\) vector (due to the initial uniform distribution in the definition of the \(\vec{s}_{tot}\) vector). An important open question is the following: if we perform measurements on the final output \(\vec{s}_{tot}\) vector (i.e., if we sample from the final joint distribution), how efficiently can we simulate the final measurement outcomes of the \(n\)-qubit quantum system (keeping in mind that we are allowed to make a polynomial number of copies of the simplex vector \(\vec{s}_{tot}\))? We believe the approach presented here has practical and fundamental implications. On the practical side, as mentioned above, our approach may provide a unique way to simulate quantum systems that may be more efficient than currently possible. Within this context, an exciting immediate experimental direction is to experimentally demonstrate the simplex transformations for a single qubit that we have discussed. It may be possible to extend the recent experimental work of Datta and colleagues on probabilistic bits (p-bits) [22; 23]. One near-term future goal would be to observe the analog of the single-qubit Hadamard gate and "Rabi rotations" in the simplex using an appropriate circuit acting on 3 \(p\)-bits. We have discussed a specific procedure for implementing "Rabi rotations" using classical random variables and probabilities in our recent work [21]. Another goal would be to implement the analog two-qubit gates using correlation-inducing operations on multiple \(p\)-bits. On the fundamental side, we have shown that the \(n\)-qubit state of a quantum computer can be tracked in the deviations of probabilities from a uniform distribution. For capturing the initial state and time evolution of a quantum system, the mathematical structure of complex wavefunctions that live in a Hilbert space is not necessary. We think it is also possible that progress along the above posed questions will help clarify the quantum/classical boundary [57; 58; 59], as well as the quantum measurement problem [60; 61]. We also note that, throughout this paper, we have focused on simulating quantum algorithms using classical probabilistic bits and circuits. We have not gone into a detailed discussion of the concepts of entanglement [62; 63; 64; 65], nonlocality [65], contextuality [66], or the reality of the quantum state [67; 68; 69]. A rigorous discussion of these important concepts is beyond the scope of this work. We finally note that the Quantum Fourier Transform is arguably the most important step in Shor's celebrated factoring algorithm [5; 6].
An exciting future direction is to extend our analysis and provide the specific procedures for implementing an analog of Shor's factoring algorithm in the probability simplex. ## VIII Acknowledgements We would like to thank Ben Lemberger and Volkan Rodoplu for many helpful discussions. D. D. Yavuz would also like to thank Bin Yan for an early discussion on the subject. This work was supported by the National Science Foundation (NSF) Grant No. 2016136 for the QLCI center Hybrid Quantum Architectures and Networks (HQAN), and also by the University of Wisconsin-Madison, through the Vilas Associates award. ## Appendix A Rabi rotations and phase gates in simplex space The unitary operation for a general Rabi rotation with angle \(\theta\) is given by the following action on the logical basis states: \[\hat{Y}_{\theta}\left|0\right\rangle = \cos\left(\frac{\theta}{2}\right)\left|0\right\rangle+\sin\left(\frac{\theta}{2}\right)\left|1\right\rangle\quad, \tag{97}\] \[\hat{Y}_{\theta}\left|1\right\rangle = \sin\left(\frac{\theta}{2}\right)\left|0\right\rangle-\cos\left(\frac{\theta}{2}\right)\left|1\right\rangle\quad. \tag{98}\] In this logical qubit basis, this operation can be represented by the following \(2\times 2\) unitary matrix: \[\hat{Y}_{\theta}=\left(\begin{array}{cc}\cos\left(\frac{\theta}{2}\right)&\sin\left(\frac{\theta}{2}\right)\\ \sin\left(\frac{\theta}{2}\right)&-\cos\left(\frac{\theta}{2}\right)\end{array}\right)\quad. \tag{99}\] Similarly, the single qubit phase gate \(\hat{Z}_{\phi}\) has the following action on the logical states: \[\hat{Z}_{\phi}\left|0\right\rangle=\left|0\right\rangle\quad,\quad\hat{Z}_{\phi}\left|1\right\rangle=e^{i\phi}\left|1\right\rangle\quad. \tag{100}\] This phase gate, \(\hat{Z}_{\phi}\), can be represented in the same matrix notation as: \[\hat{Z}_{\phi}=\left(\begin{array}{cc}1&0\\ 0&e^{i\phi}\end{array}\right)=\left(\begin{array}{cc}1&0\\ 0&\cos\phi\end{array}\right)+i\left(\begin{array}{cc}0&0\\ 0&\sin\phi\end{array}\right)\quad. \tag{101}\] The actions of these operators in the Bloch sphere picture are shown visually in Figs. 8(a) and 8(b), respectively. Using the linear operator representations in Eq. (99) and Eq. (101), we can map these operations to their simplex counterparts. The simplex transformation matrix for the Rabi rotations is given by (cf. Eq. (19)): \[\tilde{M}[\hat{Y}_{\theta}]=\left(\begin{array}{cc|cc|cc|cc}\cos\left(\frac{\theta}{2}\right)&\sin\left(\frac{\theta}{2}\right)&0&0&0&0&0&0\\ \sin\left(\frac{\theta}{2}\right)&-\cos\left(\frac{\theta}{2}\right)&0&0&0&0&0&0\\ \hline 0&0&\cos\left(\frac{\theta}{2}\right)&\sin\left(\frac{\theta}{2}\right)&0&0&0&0\\ 0&0&\sin\left(\frac{\theta}{2}\right)&-\cos\left(\frac{\theta}{2}\right)&0&0&0&0\\ \hline 0&0&0&0&\cos\left(\frac{\theta}{2}\right)&\sin\left(\frac{\theta}{2}\right)&0&0\\ 0&0&0&0&\sin\left(\frac{\theta}{2}\right)&-\cos\left(\frac{\theta}{2}\right)&0&0\\ \hline 0&0&0&0&0&0&\cos\left(\frac{\theta}{2}\right)&\sin\left(\frac{\theta}{2}\right)\\ 0&0&0&0&0&0&\sin\left(\frac{\theta}{2}\right)&-\cos\left(\frac{\theta}{2}\right)\end{array}\right)\quad. \tag{102}\] From \(\tilde{M}[\hat{Y}_{\theta}]\) we can further define the affine transform \(T[\hat{Y}_{\theta}]\) that can be applied to a simplex state to steer the state along the longitudes of the Bloch sphere.
Figure 8: This figure illustrates the action of Rabi rotations \(\hat{Y}_{\theta}\) and phase gates \(\hat{Z}_{\phi}\) on a qubit state in the Bloch sphere picture.
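As a concrete check of how these simplex matrices track the quantum gates, the sketch below (ours, not the paper's code) builds \(\tilde{M}[\hat{A}]\) from the block pattern written out explicitly in Appendix B (cf. Eq. (110)), together with the \(\vec{p}\)-vector layout \((\vec{x},-\vec{x},\vec{y},-\vec{y})\) used there, and verifies \(\tilde{M}[\hat{Y}_{\theta}]\cdot\vec{p}(\psi)=\vec{p}(\hat{Y}_{\theta}\psi)\):

```python
import numpy as np

# Sketch (ours), assuming the p-vector layout p = (x, -x, y, -y) with
# x = Re(psi), y = Im(psi) and the block structure of M[A] from Eq. (110).
def p_vector(psi):
    x, y = psi.real, psi.imag
    return np.concatenate([x, -x, y, -y])

def M_tilde(A):
    R, I = A.real, A.imag
    O = np.zeros_like(R)
    return np.block([[R, O, O, I],
                     [O, R, I, O],
                     [I, O, R, O],
                     [O, I, O, R]])

theta = 0.7
Y = np.array([[np.cos(theta / 2),  np.sin(theta / 2)],
              [np.sin(theta / 2), -np.cos(theta / 2)]])   # Eq. (99)
psi = np.array([0.6, 0.8j])                               # example state

# M[Y_theta] acting on p(psi) tracks Y_theta acting on psi:
assert np.allclose(M_tilde(Y) @ p_vector(psi), p_vector(Y @ psi))
```

Because \(\hat{Y}_{\theta}\) is real, `M_tilde(Y)` reduces to the block-diagonal matrix in Eq. (102).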
Similarly, the transformation matrix for implementing the phase gate will be (again using Eq. (19)): \[\tilde{M}[\hat{Z}_{\phi}]=\left(\begin{array}{cc|cc|cc|cc}1&0&0&0&0&0&0&0\\ 0&\cos\phi&0&0&0&0&0&\sin\phi\\ \hline 0&0&1&0&0&0&0&0\\ 0&0&0&\cos\phi&0&\sin\phi&0&0\\ \hline 0&0&0&0&1&0&0&0\\ 0&\sin\phi&0&0&0&\cos\phi&0&0\\ \hline 0&0&0&0&0&0&1&0\\ 0&0&0&\sin\phi&0&0&0&\cos\phi\end{array}\right)\quad. \tag{103}\] As before, this provides us with the affine transform \(T[\hat{Z}_{\phi}]\) for steering states along the latitudes of the Bloch sphere. Using these two affine transformations (\(T[\hat{Y}_{\theta}]\) and \(T[\hat{Z}_{\phi}]\)), we can steer the simplex state, as the quantum wavefunction evolves, to any corresponding point \((\theta,\phi)\) on the Bloch sphere; these two rotations are sufficient to steer the qubit to any location \((\theta,\phi)\). ## Appendix B Proof of measurement correspondence The measurement correspondence that we aim to prove is: \[\langle\psi|\,\hat{A}\,|\psi\rangle=(\bar{p}^{\sf T}\cdot\tilde{M}[\hat{A}]\cdot\bar{p})/2\quad. \tag{104}\] To prove this statement we expand the left hand side and the right hand side into their real and imaginary components. Again writing the wavefunction \(|\psi\rangle\) as: \[|\psi\rangle\equiv\bar{x}+i\bar{y} \tag{105}\] where \(\bar{x}\equiv{\rm Re}\,|\psi\rangle\), \(\bar{y}\equiv{\rm Im}\,|\psi\rangle\), and \(\hat{A}={\rm Re}(\hat{A})+i\,{\rm Im}(\hat{A})\), the left hand side can then be expanded as follows: \[\langle\psi|\,\hat{A}\,|\psi\rangle = (\bar{x}^{\sf T}-i\bar{y}^{\sf T})\cdot({\rm Re}(\hat{A})+i\,{\rm Im}(\hat{A}))\cdot(\bar{x}+i\bar{y}) \tag{106}\] \[= (\bar{x}^{\sf T}\cdot{\rm Re}(\hat{A})\cdot\bar{x}-\bar{x}^{\sf T}\cdot{\rm Im}(\hat{A})\cdot\bar{y}+\bar{y}^{\sf T}\cdot{\rm Re}(\hat{A})\cdot\bar{y}+\bar{y}^{\sf T}\cdot{\rm Im}(\hat{A})\cdot\bar{x})\] \[+ i(\bar{x}^{\sf T}\cdot{\rm Re}(\hat{A})\cdot\bar{y}+\bar{x}^{\sf T}\cdot{\rm Im}(\hat{A})\cdot\bar{x}-\bar{y}^{\sf T}\cdot{\rm Re}(\hat{A})\cdot\bar{x}+\bar{y}^{\sf T}\cdot{\rm Im}(\hat{A})\cdot\bar{y})\quad.\] If \(\hat{A}\) is an observable then it is a Hermitian operator, implying that \(\mathrm{Re}(\hat{A})\) is symmetric and \(\mathrm{Im}(\hat{A})\) is anti-symmetric. As a result: \[\tilde{x}^{\mathsf{T}}\cdot\mathrm{Im}(\hat{A})\cdot\tilde{x}=\tilde{y}^{\mathsf{T}}\cdot\mathrm{Im}(\hat{A})\cdot\tilde{y}=0 \tag{107}\] and, \[\tilde{x}^{\mathsf{T}}\cdot\mathrm{Re}(\hat{A})\cdot\tilde{y}=(\tilde{x}^{\mathsf{T}}\cdot\mathrm{Re}(\hat{A})\cdot\tilde{y})^{\mathsf{T}}=\tilde{y}^{\mathsf{T}}\cdot\mathrm{Re}(\hat{A})\cdot\tilde{x}\quad. \tag{108}\] Hence, the imaginary part of the right hand side of Eq. (106) is identically equal to zero because of the Hermiticity of \(\hat{A}\), which reduces the left hand side of Eq. (104) to: \[\langle\psi|\,\hat{A}|\psi\rangle=\tilde{x}^{\mathsf{T}}\cdot\mathrm{Re}(\hat{A})\cdot\tilde{x}-\tilde{x}^{\mathsf{T}}\cdot\mathrm{Im}(\hat{A})\cdot\tilde{y}+\tilde{y}^{\mathsf{T}}\cdot\mathrm{Re}(\hat{A})\cdot\tilde{y}+\tilde{y}^{\mathsf{T}}\cdot\mathrm{Im}(\hat{A})\cdot\tilde{x}\quad. \tag{109}\]
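Before completing the algebraic proof, a quick numerical spot-check of Eq. (104) is possible (a sketch of ours, using the same \(\vec{p}\)-vector and block conventions as above):

```python
import numpy as np

# Sketch (ours): spot-check of Eq. (104) for a random Hermitian observable.
rng = np.random.default_rng(0)
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (B + B.conj().T) / 2                     # random Hermitian observable
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

x, y = psi.real, psi.imag
p = np.concatenate([x, -x, y, -y])           # p-vector layout (x, -x, y, -y)
R, I, O = A.real, A.imag, np.zeros((2, 2))
M = np.block([[R, O, O, I], [O, R, I, O], [I, O, R, O], [O, I, O, R]])

assert np.isclose((psi.conj() @ A @ psi).real, (p @ M @ p) / 2)   # Eq. (104)
```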
The right hand side of Eq. (104) can be evaluated by explicitly writing \(\tilde{p}\) and \(\tilde{M}[\hat{A}]\) according to the definitions in Section III: \[\frac{1}{2}(\tilde{p}^{\mathsf{T}}\cdot\tilde{M}[\hat{A}]\cdot\tilde{p})=\frac{1}{2}\left(\begin{array}{cccc}\tilde{x}^{\mathsf{T}}&-\tilde{x}^{\mathsf{T}}&\tilde{y}^{\mathsf{T}}&-\tilde{y}^{\mathsf{T}}\end{array}\right)\left(\begin{array}{c|c|c|c}\mathrm{Re}(\hat{A})&O&O&\mathrm{Im}(\hat{A})\\ \hline O&\mathrm{Re}(\hat{A})&\mathrm{Im}(\hat{A})&O\\ \hline\mathrm{Im}(\hat{A})&O&\mathrm{Re}(\hat{A})&O\\ \hline O&\mathrm{Im}(\hat{A})&O&\mathrm{Re}(\hat{A})\end{array}\right)\left(\begin{array}{c}\tilde{x}\\ -\tilde{x}\\ \tilde{y}\\ -\tilde{y}\end{array}\right)\] \[=\frac{1}{2}\left(\begin{array}{cccc}\tilde{x}^{\mathsf{T}}&-\tilde{x}^{\mathsf{T}}&\tilde{y}^{\mathsf{T}}&-\tilde{y}^{\mathsf{T}}\end{array}\right)\left(\begin{array}{c}\mathrm{Re}(\hat{A})\cdot\tilde{x}-\mathrm{Im}(\hat{A})\cdot\tilde{y}\\ -(\mathrm{Re}(\hat{A})\cdot\tilde{x}-\mathrm{Im}(\hat{A})\cdot\tilde{y})\\ \mathrm{Im}(\hat{A})\cdot\tilde{x}+\mathrm{Re}(\hat{A})\cdot\tilde{y}\\ -(\mathrm{Im}(\hat{A})\cdot\tilde{x}+\mathrm{Re}(\hat{A})\cdot\tilde{y})\end{array}\right)\] \[=\tilde{x}^{\mathsf{T}}\cdot\mathrm{Re}(\hat{A})\cdot\tilde{x}-\tilde{x}^{\mathsf{T}}\cdot\mathrm{Im}(\hat{A})\cdot\tilde{y}+\tilde{y}^{\mathsf{T}}\cdot\mathrm{Re}(\hat{A})\cdot\tilde{y}+\tilde{y}^{\mathsf{T}}\cdot\mathrm{Im}(\hat{A})\cdot\tilde{x}\quad. \tag{110}\] This expression is exactly equal to the reduced left hand side in Eq. (109), proving the original expression written in Eq. (104). ## Appendix C The simplex tensor operation We start with the definition of the bivalent simplex tensor operation \(\otimes^{s}\), which was defined above in Section IV: \[\vec{s}_{12}=\vec{s}_{1}\otimes^{s}\vec{s}_{2}=\frac{1}{2}\big(\vec{s}_{1}\otimes\vec{s}_{2}+\Pi\big(\vec{s}_{1}\big)\otimes\Pi\big(\vec{s}_{2}\big)\big)=\frac{1}{8^{2}}\big(\vec{u}^{\otimes_{2}}+\vec{p}_{1}\otimes\vec{p}_{2}\big)\quad. \tag{111}\] As we discussed above, the main idea behind this operation is that it is an equal statistical mixture of \(\vec{s}_{1}\otimes\vec{s}_{2}\) and \(\Pi\big(\vec{s}_{1}\big)\otimes\Pi\big(\vec{s}_{2}\big)\), so that the above mentioned cross terms are eliminated, producing a combined vector with the desired form, \(\vec{s}_{12}\). We note that it is evident from above that this operation is closed for two vectors, since the final state vector \(\vec{s}_{12}\) has the same form of deviations over the uniform distribution. Next we must show that the operation \(\otimes^{s}\) remains closed even for more than two state vectors.
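Before the induction argument, the two-vector identity in Eq. (111) itself is easy to verify numerically (a sketch of ours; the vectors below are generic stand-ins, since only the algebraic cancellation is being tested):

```python
import numpy as np

# Sketch (ours): the equal mixture of s1 (x) s2 and Pi(s1) (x) Pi(s2)
# removes the cross terms, as claimed in Eq. (111).
rng = np.random.default_rng(2)
u, p1, p2 = rng.normal(size=8), rng.normal(size=8), rng.normal(size=8)
s1, s2 = (u + p1) / 8, (u + p2) / 8
Pi_s1, Pi_s2 = (u - p1) / 8, (u - p2) / 8
lhs = (np.kron(s1, s2) + np.kron(Pi_s1, Pi_s2)) / 2
rhs = (np.kron(u, u) + np.kron(p1, p2)) / 64
assert np.allclose(lhs, rhs)
```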
We prove this by induction. We know that the operation is closed for \(n=2\); taking it to be true for \(n=r-1\), we can show that it holds true for \(n=r\). The form of the state vector for \(n=r-1\) will be \[\vec{s}_{1,r-1}=\frac{1}{8^{r-1}}\big(\vec{u}^{\otimes_{r-1}}+\bigotimes_{k=1}^{r-1}\vec{p}_{k}\big)\quad. \tag{112}\] We next combine the vector \(\vec{s}_{1,r-1}\) with \(\vec{s}_{r}\), by explicitly evaluating each of the two components: \[\vec{s}_{1,r-1}\otimes\vec{s}_{r}=\frac{1}{8^{r}}\big(\vec{u}^{\otimes_{r-1}}+\bigotimes_{k=1}^{r-1}\vec{p}_{k}\big)\otimes(\vec{u}+\vec{p}_{r})=\frac{1}{8^{r}}\big(\vec{u}^{\otimes_{r}}+\vec{u}^{\otimes_{r-1}}\otimes\vec{p}_{r}+\bigotimes_{k=1}^{r-1}\vec{p}_{k}\otimes\vec{u}+\bigotimes_{k=1}^{r}\vec{p}_{k}\big) \tag{113}\] and, \[\Pi(\vec{s}_{1,r-1})\otimes\Pi(\vec{s}_{r})=\frac{1}{8^{r}}\big(\vec{u}^{\otimes_{r-1}}-\bigotimes_{k=1}^{r-1}\vec{p}_{k}\big)\otimes(\vec{u}-\vec{p}_{r})=\frac{1}{8^{r}}\big(\vec{u}^{\otimes_{r}}-\vec{u}^{\otimes_{r-1}}\otimes\vec{p}_{r}-\bigotimes_{k=1}^{r-1}\vec{p}_{k}\otimes\vec{u}+\bigotimes_{k=1}^{r}\vec{p}_{k}\big) \tag{114}\] We then use the definition of the \(\otimes^{s}\) operation, which is an equal statistical mixture of the two terms evaluated above: \[\vec{s}_{1r}=\vec{s}_{1,r-1}\otimes^{s}\vec{s}_{r}=\frac{1}{2}\big(\vec{s}_{1,r-1}\otimes\vec{s}_{r}+\Pi\big(\vec{s}_{1,r-1}\big)\otimes\Pi\big(\vec{s}_{r}\big)\big)=\frac{1}{8^{r}}\big(\vec{u}^{\otimes_{r}}+\bigotimes_{k=1}^{r}\vec{p}_{k}\big)\quad. \tag{115}\] This completes the proof, since the vector \(\vec{s}_{1r}\) has the desired form. Now that we have established closure, we next prove associativity. For three state vectors, the combinations can be made in any sequence such that we maintain the overall order of the state vectors; to show this we expand the two sequences: \[\left(\vec{s}_{1}\otimes^{s}\vec{s}_{2}\right)\otimes^{s}\vec{s}_{3}=\vec{s}_{12}\otimes^{s}\vec{s}_{3}=\frac{1}{8^{3}}(\vec{u}^{\otimes_{3}}+\vec{p}_{1}\otimes\vec{p}_{2}\otimes\vec{p}_{3}) \tag{116}\] This can be seen by substituting \(r=3\) in Eq. (115). The other sequence is, \[\vec{s}_{1}\otimes^{s}(\vec{s}_{2}\otimes^{s}\vec{s}_{3})=\vec{s}_{1}\otimes^{s}\vec{s}_{23}=\frac{1}{2}\big(\vec{s}_{1}\otimes\vec{s}_{23}+\Pi(\vec{s}_{1})\otimes\Pi(\vec{s}_{23})\big) \tag{117}\] where, \[\vec{s}_{1}\otimes\vec{s}_{23}=\frac{1}{8^{3}}(\vec{u}+\vec{p}_{1})\otimes(\vec{u}^{\otimes_{2}}+\vec{p}_{2}\otimes\vec{p}_{3})=\frac{1}{8^{3}}(\vec{u}^{\otimes_{3}}+\vec{u}\otimes\vec{p}_{2}\otimes\vec{p}_{3}+\vec{p}_{1}\otimes\vec{u}^{\otimes_{2}}+\vec{p}_{1}\otimes\vec{p}_{2}\otimes\vec{p}_{3}) \tag{118}\] and, \[\Pi(\vec{s}_{1})\otimes\Pi(\vec{s}_{23})=\frac{1}{8^{3}}(\vec{u}-\vec{p}_{1})\otimes(\vec{u}^{\otimes_{2}}-\vec{p}_{2}\otimes\vec{p}_{3})=\frac{1}{8^{3}}(\vec{u}^{\otimes_{3}}-\vec{u}\otimes\vec{p}_{2}\otimes\vec{p}_{3}-\vec{p}_{1}\otimes\vec{u}^{\otimes_{2}}+\vec{p}_{1}\otimes\vec{p}_{2}\otimes\vec{p}_{3}) \tag{119}\] proving that the sequence of combination does not matter. Lastly, we end this Appendix by noting that the simplex tensor operation \(\otimes^{s}\) is inherently distributive, as it derives its definition from the bi-affine map \(\tau\) introduced in the main text, Eq. (38). ## Appendix D Ordering of Phases We first utilize the notation introduced in the main manuscript to understand the ordering of phases, and then introduce the ordering operators.
In the most general case, note that a single qubit state \(\left|\psi\right\rangle=c_{0}\left|0\right\rangle+c_{1}\left|1\right\rangle\) is mapped as follows, \[\varphi\left|\psi\right\rangle=\vec{s}(\psi)\doteq\frac{1}{8}(\vec{u}+\vec{P}_{0}(c_{0})+\vec{P}_{1}(c_{1}))=\frac{1}{8}(\vec{u}+\sum_{b\in\mathbb{B}}\vec{P}_{b}(c_{b})), \tag{120}\] where, \[\vec{P}_{b}(c)=\tilde{\gamma}\otimes\text{Re}(c\left|b\right\rangle)+\tilde{\gamma}^{\prime}\otimes\text{Im}(c\left|b\right\rangle)=(\text{Re}(c)\tilde{\gamma}+\text{Im}(c)\tilde{\gamma}^{\prime})\otimes\left|b\right\rangle \tag{121}\] such that \(\vec{P}_{b}(1)=\vec{p}_{b}\), \(\vec{P}_{b}(0)=\vec{0}\). We start by considering the mapping of two qubits in logical states \(\left|b\right\rangle\) and \(\left|b^{\prime}\right\rangle\), respectively. Let's associate two absolute phases, \(\phi\) and \(\phi^{\prime}\), with these logical states. When we consider the mapping of this system to the probability space, using the above notation, we would have: \[re^{i\phi}\left|b\right\rangle\otimes r^{\prime}e^{i\phi^{\prime}}\left|b^{\prime}\right\rangle=rr^{\prime}e^{i(\phi+\phi^{\prime})}\left|bb^{\prime}\right\rangle\longrightarrow\vec{P}_{b}(re^{i\phi})\otimes\vec{P}_{b^{\prime}}(r^{\prime}e^{i\phi^{\prime}})=rr^{\prime}\vec{P}_{b}(e^{i\phi})\otimes\vec{P}_{b^{\prime}}(e^{i\phi^{\prime}})\quad. \tag{122}\] We note that, although the phases that each state carries in the quantum case commute and can be associated to any state, the same is not true for the corresponding states in the simplex version. More explicitly, for the quantum case, all three of the expressions below refer to exactly the same quantum state of the two-qubit wavefunction: \[e^{i\phi}\left|b\right\rangle\otimes e^{i\phi^{\prime}}\left|b^{\prime}\right\rangle=e^{i\phi^{\prime}}\left|b\right\rangle\otimes e^{i\phi}\left|b^{\prime}\right\rangle=e^{i(\phi+\phi^{\prime})}\left|bb^{\prime}\right\rangle\quad. \tag{123}\] When this state is mapped to the probability space, due to an additional redundancy in the mapping, the following simplex vectors are not equivalent to each other: \[\vec{P}_{b}(e^{i\phi})\otimes\vec{P}_{b^{\prime}}(e^{i\phi^{\prime}})\neq\vec{P}_{b}(e^{i\phi^{\prime}})\otimes\vec{P}_{b^{\prime}}(e^{i\phi})\neq\vec{P}_{b}(e^{i(\phi+\phi^{\prime})})\otimes\vec{p}_{b^{\prime}}\neq\vec{p}_{b}\otimes\vec{P}_{b^{\prime}}(e^{i(\phi+\phi^{\prime})})\quad. \tag{124}\] Each permutation of the phase in the simplex version can be defined as a different ordering of phases. We denote each specific ordering of the phases by \(\sigma\). The operations on the simplex vectors can be defined so as to follow the transformation in the quantum case for each permutation in Eq. (124). The only difference will be the way we store the absolute phase information of the quantum state in the simplex vectors. Each distinct ordering (\(\sigma\), the specific permutation of phases) will define a different set of intermediate states of the simplex vectors under the same set of operations. The orderings that provide a set of intermediate states equivalent to how quantum states transform are the ones in which we collect all the phases in a single overall phase.
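The inequivalence in Eq. (124) can be seen directly in a small numerical sketch (ours). The exact definitions of \(\tilde{\gamma}\) and \(\tilde{\gamma}^{\prime}\) are given in Section III, outside this excerpt; here we assume \(\tilde{\gamma}=(1,-1,0,0)^{\mathsf{T}}\) and \(\tilde{\gamma}^{\prime}=(0,0,1,-1)^{\mathsf{T}}\), which is consistent with the \(\vec{p}\)-vector layout \((\vec{x},-\vec{x},\vec{y},-\vec{y})\) used in Appendix B:

```python
import numpy as np

# Sketch of Eq. (124), ASSUMING gamma = (1,-1,0,0) and gamma' = (0,0,1,-1).
gamma, gamma_p = np.array([1., -1, 0, 0]), np.array([0., 0, 1, -1])
e0 = np.array([1., 0])                                   # |b> = |0>

def P(c, b):  # Eq. (121): P_b(c) = (Re(c) gamma + Im(c) gamma') (x) |b>
    return np.kron(c.real * gamma + c.imag * gamma_p, b)

phi, phi_p = 0.4, 1.1
v1 = np.kron(P(np.exp(1j * phi), e0), P(np.exp(1j * phi_p), e0))
v2 = np.kron(P(np.exp(1j * (phi + phi_p)), e0), P(1 + 0j, e0))  # sigma = 1
assert not np.allclose(v1, v2)   # same quantum state, distinct simplex vectors
```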
We refer to these specific orderings for the two qubit case as \(\sigma=1\) and \(\sigma=2\), respectively, defined as: \[\sigma=1: \vec{P}^{(1)}_{bb^{\prime}}\big(e^{i(\phi+\phi^{\prime})}\big)=\vec{P}_{b}\big(e^{i(\phi+\phi^{\prime})}\big)\otimes\vec{p}_{b^{\prime}} \tag{125}\] \[\text{and, }\sigma=2: \vec{P}^{(2)}_{bb^{\prime}}\big(e^{i(\phi+\phi^{\prime})}\big)=\vec{p}_{b}\otimes\vec{P}_{b^{\prime}}\big(e^{i(\phi+\phi^{\prime})}\big) \tag{126}\] Importantly, as we will prove below, the final measurement outcomes for the simplex states do not depend on which specific phase ordering we choose. This is similar to the quantum case, where the measurements of any observable for a given quantum state do not depend on the absolute phase of the wavefunction. We believe it is interesting that the absolute phase information of the quantum wavefunction can be stored in different phase orderings of the simplex vectors. We leave a detailed study of the full implications of this for future work. For the subsequent sections we will only be concerned with the phase orderings \(\sigma=1\) and \(\sigma=2\) as described above, where all the absolute phases are combined in a single overall phase factor. Moreover, in the following subsection we specify linear operations \(\Gamma^{(n)}_{\sigma}\) and \(\Omega^{(n)}_{\sigma}\) that allow us to achieve and switch between these different ordered states. We now introduce affine transformations that allow us to switch between different phase-ordered states. Let \(\vec{P}\) be the state where the absolute phase information of each qubit is stored in the corresponding simplex vector, i.e., \[\vec{P}=\vec{P}_{b}\big(e^{i\phi}\big)\otimes\vec{P}_{b^{\prime}}\big(e^{i\phi^{\prime}}\big) \tag{127}\] Using the definitions and notation introduced in Section III, Eq. (7) and Eq. (8), this state can be rewritten in the following form: \[\vec{P}=\big(\cos\phi\,\tilde{\gamma}+\sin\phi\,\tilde{\gamma}^{\prime}\big)\otimes|b\rangle\otimes\big(\cos\phi^{\prime}\,\tilde{\gamma}+\sin\phi^{\prime}\,\tilde{\gamma}^{\prime}\big)\otimes|b^{\prime}\rangle \tag{128}\] If \(\tilde{\omega}\) is a permutation operation with the following action, \[\tilde{\omega}\cdot\big(\vec{u}_{2\times 1}\otimes\vec{v}_{4\times 1}\big)=\vec{v}_{4\times 1}\otimes\vec{u}_{2\times 1} \tag{129}\] then \(\tilde{P}\) can be rewritten with the help of \(\tilde{\omega}\) as, \[\tilde{P} =(I_{4\times 4}\otimes\tilde{\omega}^{\mathsf{T}}\tilde{\omega}\otimes I_{2\times 2})\cdot\left[(\cos\phi\,\tilde{\gamma}+\sin\phi\,\tilde{\gamma}^{\prime})\otimes\big(|b\rangle\otimes(\cos\phi^{\prime}\,\tilde{\gamma}+\sin\phi^{\prime}\,\tilde{\gamma}^{\prime})\big)\otimes|b^{\prime}\rangle\right] \tag{130}\] \[=(I_{4\times 4}\otimes\tilde{\omega}^{\mathsf{T}}\otimes I_{2\times 2})\cdot\left[(\cos\phi\,\tilde{\gamma}+\sin\phi\,\tilde{\gamma}^{\prime})\otimes(\cos\phi^{\prime}\,\tilde{\gamma}+\sin\phi^{\prime}\,\tilde{\gamma}^{\prime})\otimes|bb^{\prime}\rangle\right] \tag{131}\] which is equivalent to: \[\tilde{P}=(I_{4\times 4}\otimes\tilde{\omega}^{\mathsf{T}}\otimes I_{2\times 2})\cdot\left(\begin{array}{c}\tilde{\xi}\\ -\tilde{\xi}\\ \tilde{\zeta}\\ -\tilde{\zeta}\end{array}\right)\otimes|bb^{\prime}\rangle \tag{132}\] where, \[\tilde{\xi}=\left(\begin{array}{c}\cos\phi\,\cos\phi^{\prime}\\ -\cos\phi\,\cos\phi^{\prime}\\ \cos\phi\,\sin\phi^{\prime}\\ -\cos\phi\,\sin\phi^{\prime}\end{array}\right),\quad\tilde{\zeta}=\left(\begin{array}{c}\sin\phi\,\cos\phi^{\prime}\\ -\sin\phi\,\cos\phi^{\prime}\\ \sin\phi\,\sin\phi^{\prime}\\ -\sin\phi\,\sin\phi^{\prime}\end{array}\right), \tag{133}\]
and, \[\tilde{\omega}=\left(\begin{array}{cccccccc}1&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&1&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&1\end{array}\right),\quad\tilde{\omega}^{\mathsf{T}}=\tilde{\omega}^{2}=\left(\begin{array}{cccccccc}1&0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&1\end{array}\right) \tag{134}\] On the application of the phase ordering operations the resultant states should be of the form \(\tilde{P}^{(1)}=\tilde{P}_{b}(e^{i(\phi+\phi^{\prime})})\otimes\tilde{p}_{b^{\prime}}\), \[\tilde{P}^{(1)}=\left(\begin{array}{c}\cos(\phi+\phi^{\prime})\\ -\cos(\phi+\phi^{\prime})\\ \sin(\phi+\phi^{\prime})\\ -\sin(\phi+\phi^{\prime})\end{array}\right)\otimes|b\rangle\otimes\tilde{\gamma}\otimes|b^{\prime}\rangle=(I_{4\times 4}\otimes\tilde{\omega}^{\mathsf{T}}\otimes I_{2\times 2})\cdot\left(\begin{array}{c}\cos(\phi+\phi^{\prime})\tilde{\gamma}\\ -\cos(\phi+\phi^{\prime})\tilde{\gamma}\\ \sin(\phi+\phi^{\prime})\tilde{\gamma}\\ -\sin(\phi+\phi^{\prime})\tilde{\gamma}\end{array}\right)\otimes|bb^{\prime}\rangle \tag{135}\] and \(\tilde{P}^{(2)}=\tilde{p}_{b}\otimes\tilde{P}_{b^{\prime}}(e^{i(\phi+\phi^{\prime})})\), \[\tilde{P}^{(2)}=\tilde{\gamma}\otimes|b\rangle\otimes\left(\begin{array}{c}\cos(\phi+\phi^{\prime})\\ -\cos(\phi+\phi^{\prime})\\ \sin(\phi+\phi^{\prime})\\ -\sin(\phi+\phi^{\prime})\end{array}\right)\otimes|b^{\prime}\rangle=(I_{4\times 4}\otimes\tilde{\omega}^{\mathsf{T}}\otimes I_{2\times 2})\cdot(\tilde{\iota}\otimes I_{4\times 4})\cdot\left(\begin{array}{c}\cos(\phi+\phi^{\prime})\tilde{\gamma}\\ -\cos(\phi+\phi^{\prime})\tilde{\gamma}\\ \sin(\phi+\phi^{\prime})\tilde{\gamma}\\ -\sin(\phi+\phi^{\prime})\tilde{\gamma}\end{array}\right)\otimes|bb^{\prime}\rangle \tag{136}\] where \(\tilde{\iota}\) is a symmetric permutation matrix with an action analogous to that of \(\tilde{\omega}\), \[\tilde{\iota}\cdot(u_{4\times 1}\otimes v_{4\times 1})=v_{4\times 1}\otimes u_{4\times 1}, \tag{137}\] \[\tilde{\iota}=\left(\begin{array}{cccccccccccccccc}1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0\\ 0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\ 0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0\\ 0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1\end{array}\right) \tag{138}\] This implies that, to be able to switch between different phase-orderings, we need to form linear combinations of the components of \(\vec{\xi}\) and \(\vec{\zeta}\) in \(\tilde{P}\).
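Before constructing those linear combinations explicitly, the stated properties of the permutation \(\tilde{\omega}\) in Eq. (134) can be confirmed with a short numerical check (ours; the check exercises only the matrix properties, not the paper's tensor index convention):

```python
import numpy as np

# Sketch (ours): omega is an order-3 permutation, so omega^T = omega^2.
perm = [0, 2, 4, 6, 1, 3, 5, 7]   # row r of omega has its 1 in column perm[r]
omega = np.zeros((8, 8))
omega[np.arange(8), perm] = 1.0
assert np.allclose(omega.T, omega @ omega)                 # omega^T = omega^2
assert np.allclose(np.linalg.matrix_power(omega, 3), np.eye(8))
```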
This can be done by various linear operations; we specify the following for the phase ordering operations, \[\tilde{\Gamma}=\left(\begin{array}{cccc|cccc|cccc|cccc}I_{4\times 4}&O&O&O&O&O&O&O&O&O&O&I_{4\times 4}&O&O&O&O\\ O&I_{4\times 4}&O&O&O&O&O&O&O&O&I_{4\times 4}&O&O&O&O&O\\ O&O&I_{4\times 4}&I_{4\times 4}&O&O&O&O&O&O&O&O&O&O&O&O\\ O&O&I_{4\times 4}&I_{4\times 4}&O&O&O&O&O&O&O&O&O&O&O&O\\ \hline O&O&O&O&I_{4\times 4}&O&O&O&O&O&O&O&O&O&O&I_{4\times 4}\\ O&O&O&O&O&I_{4\times 4}&O&O&O&O&O&O&O&O&I_{4\times 4}&O\\ O&O&O&O&O&O&I_{4\times 4}&I_{4\times 4}&O&O&O&O&O&O&O&O\\ O&O&O&O&O&O&I_{4\times 4}&I_{4\times 4}&O&O&O&O&O&O&O&O\\ \hline O&O&I_{4\times 4}&O&O&O&O&O&I_{4\times 4}&O&O&O&O&O&O&O\\ O&O&O&I_{4\times 4}&O&O&O&O&O&I_{4\times 4}&O&O&O&O&O&O\\ O&O&O&O&O&O&O&O&O&O&I_{4\times 4}&I_{4\times 4}&O&O&O&O\\ O&O&O&O&O&O&O&O&O&O&I_{4\times 4}&I_{4\times 4}&O&O&O&O\\ \hline O&O&O&O&O&O&I_{4\times 4}&O&O&O&O&O&I_{4\times 4}&O&O&O\\ O&O&O&O&O&O&O&I_{4\times 4}&O&O&O&O&O&I_{4\times 4}&O&O\\ O&O&O&O&O&O&O&O&O&O&O&O&O&O&I_{4\times 4}&I_{4\times 4}\\ O&O&O&O&O&O&O&O&O&O&O&O&O&O&I_{4\times 4}&I_{4\times 4}\end{array}\right)\] Using this and transforming back, we obtain the required ordering operations, \[\Gamma_{1}^{(2)} =(I_{4\times 4}\otimes\tilde{\omega}^{\mathsf{T}}\otimes I_{2\times 2})\cdot\tilde{\Gamma}\cdot(I_{4\times 4}\otimes\tilde{\omega}\otimes I_{2\times 2}), \tag{139}\] \[\Gamma_{2}^{(2)} =(I_{4\times 4}\otimes\tilde{\omega}^{\mathsf{T}}\otimes I_{2\times 2})\cdot(\tilde{\iota}\otimes I_{4\times 4})\cdot\tilde{\Gamma}\cdot(\tilde{\iota}\otimes I_{4\times 4})\cdot(I_{4\times 4}\otimes\tilde{\omega}\otimes I_{2\times 2})\quad. \tag{140}\]
Figure 9: Phase ordering operations \(\Gamma_{\sigma}^{(2)}\) and the order switching operation \(\Omega^{(2)}\).
We note that by using the above relations, we can transform between \(\Gamma_{1}^{(2)}\) and \(\Gamma_{2}^{(2)}\) using a unique symmetric orthogonal transformation matrix, \(\Omega^{(2)}\), which we refer to as an order switching operator for two \(\bar{p}\) vectors: \[\Omega^{(2)} =(I_{4\times 4}\otimes\tilde{\omega}^{\intercal}\otimes I_{2\times 2})\cdot(\tilde{\iota}\otimes I_{4\times 4})\cdot(I_{4\times 4}\otimes\tilde{\omega}\otimes I_{2\times 2}), \tag{141}\] \[\Gamma_{2}^{(2)} =\Omega^{(2)}\cdot\Gamma_{1}^{(2)}\cdot\Omega^{(2)},\quad\Gamma_{1}^{(2)}=\Omega^{(2)}\cdot\Gamma_{2}^{(2)}\cdot\Omega^{(2)}\quad. \tag{142}\] The diagram that is shown in FIG. 9 summarizes the above operations for transforming between different phase-orderings when two qubits are mapped to the corresponding simplex vectors. Finally, the affine transformations corresponding to these operations are denoted by \(\tilde{\Gamma}_{\sigma}^{(2)}\) and \(\tilde{\Omega}^{(2)}\): \[\tilde{\Gamma}_{\sigma}^{(n)}=T[\Gamma_{\sigma}^{(n)}],\quad\tilde{\Omega}_{\sigma}^{(n)}=T[\Omega_{\sigma}^{(n)}]\quad. \tag{143}\] Above we have discussed transformation matrices to switch between different phase-ordered vectors when any two qubits are mapped to the probability simplex. When the number of qubits is larger than two, the required order generating and switching operations can be obtained from the ones that are used in the two qubit case. As an example, let's consider the case of four qubits mapped to the probability space.
The first order operation in this case is given by the following expression, \[\Gamma_{1}^{(4)}=(\Gamma_{1}^{(2)}\otimes I_{8\times 8}\otimes I_{8\times 8})\cdot(I_{8\times 8}\otimes\Gamma_{1}^{(2)}\otimes I_{8\times 8})\cdot(I_{8\times 8}\otimes I_{8\times 8}\otimes\Gamma_{1}^{(2)}) \tag{144}\] The circuit diagram for producing this operation is shown in FIG. 10.
Figure 10: A circuit diagram for the application of the first phase ordering operation in the case of four qubits.
Now that we have the first order operation, we can find all the other ones using three order switching operations \(\Omega_{1}^{(4)},\Omega_{2}^{(4)}\) and \(\Omega_{3}^{(4)}\). The commutative diagram in FIG. 11 provides a visual for the action of the three order switching operations.
Figure 11: Action of the three order switching operations in the case of four qubit simplex states.
Therefore, each of the switching operations can be decomposed as, \[\Omega_{1}^{(4)} =\Omega^{(2)}\otimes I_{8\times 8}\otimes I_{8\times 8} \tag{145}\] \[\Omega_{2}^{(4)} =I_{8\times 8}\otimes\Omega^{(2)}\otimes I_{8\times 8} \tag{146}\] \[\Omega_{3}^{(4)} =I_{8\times 8}\otimes I_{8\times 8}\otimes\Omega^{(2)} \tag{147}\] Using these, we can then obtain all the other phase ordering operations \(\Gamma_{\sigma}^{(4)},\sigma\geq 2\), \[\Gamma_{2}^{(4)} =\Omega_{1}^{(4)}\cdot\Gamma_{1}^{(4)}\cdot\Omega_{1}^{(4)} \tag{148}\] \[\Gamma_{3}^{(4)} =\Omega_{2}^{(4)}\cdot\Gamma_{2}^{(4)}\cdot\Omega_{2}^{(4)} \tag{149}\] \[\Gamma_{4}^{(4)} =\Omega_{3}^{(4)}\cdot\Gamma_{3}^{(4)}\cdot\Omega_{3}^{(4)}\quad. \tag{150}\] Finally, we can comment on the mapping of a general \(n\)-qubit quantum state. Because we have the ability to put a state in a definite phase ordering, we additionally define phase ordered maps \(\varphi_{n}^{\sigma}\) that act as described below. Given some general \(n\)-qubit quantum state in the logical basis: \[\left|\psi\right\rangle=\sum_{\mathbf{q}\in\mathbb{B}^{n}}c_{\mathbf{q}}\left|\mathbf{q}\right\rangle \tag{151}\] and \(\sigma\in\mathbb{Z}_{n}+1=\{m|\,1\leqslant m\leqslant n,m\in\mathbb{Z}^{+}\}\), \[\varphi_{n}^{\sigma}\left|\psi\right\rangle=\bar{s}^{(\sigma)}\big(\psi\big)\doteq\frac{1}{8^{n}}\big(\bar{u}^{\otimes_{n}}+\sum_{\mathbf{q}\in\mathbb{B}^{n}}\bar{P}_{\mathbf{q}}^{(\sigma)}(c_{\mathbf{q}})\big) \tag{152}\] ## Appendix E Measurement invariance of phase ordering In this Appendix we show that the measurement of an observable in the logical basis is independent of the phase ordering chosen for the states. Moreover, we also prove that this allows us to measure states in any choice of basis other than the logical basis. From the main text we know that for a single-qubit quantum state \(\left|\psi\right\rangle=\sum_{u\in\mathbb{B}}c_{u}\left|u\right\rangle\), its mapped simplex state \[\vec{s}=\varphi\left|\psi\right\rangle=\frac{1}{8}\left(\vec{u}+\overbrace{\sum_{u\in\mathbb{B}}\vec{P}_{u}(c_{u})}^{\bar{P}}\right)\quad, \tag{153}\] and a quantum observable \(\hat{A}\), the following connection holds (cf. Eq. (30)): \[\left\langle\psi\right|\hat{A}\left|\psi\right\rangle=\frac{1}{2}(\bar{P}^{\mathsf{T}}\cdot\tilde{M}[\hat{A}]\cdot\bar{P})\quad, \tag{154}\] We know that the set of Pauli operators and the identity matrix \(\mathcal{P}=\{\hat{\sigma}_{0}=\hat{I}_{2\times 2},\hat{\sigma}_{1}=\hat{X},\hat{\sigma}_{2}=\hat{Y},\hat{\sigma}_{3}=\hat{Z}\}\) forms a complete and orthogonal basis for any \(2\times 2\) operator.
Hence, for any \(2\times 2\) quantum observable \(\hat{A}\) we would have the following decomposition, \[\hat{A}=\sum_{i=0}^{3}a^{i}\,\hat{\sigma}_{i}\quad, \tag{155}\] where each of the coefficients \(a^{i}\) is real (because \(\hat{A}\) is Hermitian) and can be obtained by the trace formula: \(a^{i}=\mathrm{Tr}\big(\hat{\sigma}_{i}\hat{A}\big)\). Using this fact and expanding the state \(\left|\psi\right\rangle\) and \(\vec{s}\) on the left and right hand sides of Eq. (154), we obtain: \[\sum_{u,v\in\mathbb{B}^{2}}\sum_{i=0}^{3}c_{u}^{*}c_{v}\,a^{i}\left\langle u\right|\hat{\sigma}_{i}\left|v\right\rangle=\sum_{u,v\in\mathbb{B}^{2}}\sum_{i=0}^{3}c_{u}^{*}a^{i}\sigma_{i}^{uv}\,c_{v}=\frac{1}{2}\left(\sum_{u,v\in\mathbb{B}^{2}}\bar{P}_{u}^{\mathsf{T}}(c_{u})\cdot\tilde{M}[\sum_{i=0}^{3}a^{i}\hat{\sigma}_{i}]\cdot\bar{P}_{v}(c_{v})\right)\quad, \tag{156}\] From the definition of the \(\tilde{M}\) operator map we see that: \[\tilde{M}[\sum_{i=0}^{3}a^{i}\hat{\sigma}_{i}]=\sum_{i=0}^{3}a^{i}\tilde{M}[\hat{\sigma}_{i}],\,\because a^{i}\in\mathbb{R},\,\forall\,i\in\{0,1,2,3\} \tag{157}\] As a consequence, we can write the following equivalence for any \((u,v)\in\mathbb{B}^{2}\), \((c_{u},c_{v})\in\mathbb{C}^{2}\), and \(i\in\{0,1,2,3\}\): \[2\,c_{u}^{*}c_{v}\sigma_{i}^{uv}=\tilde{P}_{u}^{\mathsf{T}}(c_{u})\cdot\tilde{M}[\hat{\sigma}_{i}]\cdot\tilde{P}_{v}(c_{v})\quad. \tag{158}\] With this notation and considerations for the single-qubit case in mind, let's move on to the \(n\)-qubit case. The quantum state \(\ket{\psi}=\sum_{\mathbf{q}\in\mathbb{B}^{n}}c_{\mathbf{q}}\ket{\mathbf{q}}\) under some phase ordering \(\omega\in\{1,2,\ldots,n\}\) can be mapped to a simplex state as follows: \[\varphi_{n}^{\omega}\ket{\psi}=\vec{s}^{(\omega)}=\frac{1}{8^{n}}\left(\vec{u}^{\otimes_{n}}+\overbrace{\sum_{\mathbf{q}\in\mathbb{B}^{n}}\tilde{P}_{\mathbf{q}}^{(\omega)}(c_{\mathbf{q}})}^{\tilde{P}^{(\omega)}}\right)\quad.
\tag{159}\] Now for a given \(2^{n}\times 2^{n}\) quantum observable \(\hat{A}\), which we can decompose as: \[\hat{A}=\sum_{\mu,\ldots,\zeta,\ldots,\eta}a^{\mu,\ldots,\zeta,\ldots,\eta}\overbrace{\hat{\sigma}_{\mu}\otimes\cdots\otimes\hat{\sigma}_{\zeta}}^{\omega\text{ terms}}\otimes\cdots\otimes\hat{\sigma}_{\eta}\quad, \tag{160}\] where \(a^{\mu,\ldots,\zeta,\ldots,\eta}\in\mathbb{R},\forall(\mu,\ldots,\zeta,\ldots,\eta)\in\{0,1,2,3\}^{n}\), we follow the steps below: \[\frac{1}{2^{n}}((\tilde{P}^{(\omega)})^{\mathsf{T}}\cdot\tilde{M}_{n}[\hat{A}]\cdot\tilde{P}^{(\omega)})=\frac{1}{2^{n}}\left(\sum_{\mathbf{q},\mathbf{k}}(\tilde{P}_{\mathbf{q}}^{(\omega)}(c_{\mathbf{q}}))^{\mathsf{T}}\cdot\tilde{M}_{n}[\hat{A}]\cdot\tilde{P}_{\mathbf{k}}^{(\omega)}(c_{\mathbf{k}})\right)\] \[=\frac{1}{2^{n}}\left(\sum_{\mathbf{q},\mathbf{k}}(\tilde{P}_{\mathbf{q}}^{(\omega)}(c_{\mathbf{q}}))^{\mathsf{T}}\cdot\tilde{M}_{n}\left[\sum_{\mu,\ldots,\zeta,\ldots,\eta}a^{\mu,\ldots,\zeta,\ldots,\eta}\hat{\sigma}_{\mu}\otimes\cdots\otimes\hat{\sigma}_{\zeta}\otimes\cdots\otimes\hat{\sigma}_{\eta}\right]\cdot\tilde{P}_{\mathbf{k}}^{(\omega)}(c_{\mathbf{k}})\right)\] \[=\frac{1}{2^{n}}\left(\sum_{\mathbf{q},\mathbf{k}}\sum_{\mu,\ldots,\zeta,\ldots,\eta}a^{\mu,\ldots,\zeta,\ldots,\eta}\left(\tilde{p}_{q_{1}}^{\mathsf{T}}\otimes\cdots\otimes\tilde{P}_{q_{\omega}}^{\mathsf{T}}(c_{\mathbf{q}})\otimes\cdots\otimes\tilde{p}_{q_{n}}^{\mathsf{T}}\right)\cdot(\tilde{M}[\hat{\sigma}_{\mu}]\otimes\cdots\otimes\tilde{M}[\hat{\sigma}_{\zeta}]\otimes\cdots\otimes\tilde{M}[\hat{\sigma}_{\eta}])\cdot(\tilde{p}_{k_{1}}\otimes\cdots\otimes\tilde{P}_{k_{\omega}}(c_{\mathbf{k}})\otimes\cdots\otimes\tilde{p}_{k_{n}})\right)\quad(\because a^{\mu,\ldots,\zeta,\ldots,\eta}\in\mathbb{R},\forall(\mu,\ldots,\zeta,\ldots,\eta)\in\{0,1,2,3\}^{n})\] \[=\frac{1}{2^{n}}\left(\sum_{\mathbf{q},\mathbf{k}}\sum_{\mu,\ldots,\zeta,\ldots,\eta}a^{\mu,\ldots,\zeta,\ldots,\eta}\left(\tilde{p}_{q_{1}}^{\mathsf{T}}\cdot\tilde{M}[\hat{\sigma}_{\mu}]\cdot\tilde{p}_{k_{1}}\right)\cdots\left(\tilde{P}_{q_{\omega}}^{\mathsf{T}}(c_{\mathbf{q}})\cdot\tilde{M}[\hat{\sigma}_{\zeta}]\cdot\tilde{P}_{k_{\omega}}(c_{\mathbf{k}})\right)\cdots\left(\tilde{p}_{q_{n}}^{\mathsf{T}}\cdot\tilde{M}[\hat{\sigma}_{\eta}]\cdot\tilde{p}_{k_{n}}\right)\right)\] \[=\frac{1}{2^{n}}\left(\sum_{\mathbf{q},\mathbf{k}}\sum_{\mu,\ldots,\zeta,\ldots,\eta}a^{\mu,\ldots,\zeta,\ldots,\eta}\left(2\,\sigma_{\mu}^{q_{1}k_{1}}\right)\cdots\left(2\,c_{\mathbf{q}}^{*}\,\sigma_{\zeta}^{q_{\omega}k_{\omega}}c_{\mathbf{k}}\right)\cdots\left(2\,\sigma_{\eta}^{q_{n}k_{n}}\right)\right)\quad(\text{cf. Eq. (158)})\] \[=\sum_{\mathbf{q},\mathbf{k}}c_{\mathbf{q}}^{*}c_{\mathbf{k}}\left\langle\mathbf{q}\right|\hat{A}\left|\mathbf{k}\right\rangle=\langle\psi|\hat{A}|\psi\rangle\quad,\] which is exactly the quantum measurement value \(\langle\hat{A}\rangle\), proving the validity of the premise of this appendix. We finally note that, in place of the logical basis states, let us suppose that we have the basis set \(\left\{\left|\phi_{i}\right\rangle\right|i\in\left\{0,1,\ldots,2^{n}-1\right\}\right\}\), and the operator \(\hat{\Phi}\) that links the logical basis to this new basis of states: \[\hat{\Phi}\left|\mathbf{q}\right\rangle=\left|\phi_{q}\right\rangle,\quad q=\left[\mathbf{q}\right]\in\left\{0,1,\ldots,2^{n}-1\right\}\quad.
\tag{161}\] We can then construct a quantum observable \(\hat{A}_{q}=\hat{\Phi}\hat{M}_{\mathbf{q}}\hat{\Phi}^{\dagger}=\left|\phi_{q}\right\rangle\!\!\left\langle\phi_{q}\right|\) which measures any simplex state \(\tilde{s}=\varphi_{n}\left|\psi\right\rangle\) (note here that the phase order has not been specified because of its irrelevance) in the newly chosen basis set \(\left\{\left|\phi_{i}\right\rangle\right|i\in\left\{0,1,\ldots,2^{n}-1\right\}\right\}\): \[\langle T[\hat{A}_{q}]\rangle_{\tilde{s}}=\tilde{s}\cdot T[\hat{A}_{q}](\tilde{s})=\frac{1}{8^{n}}(1+\frac{1}{4^{n}}|\left\langle\phi_{q}|\psi\right\rangle|^{2}),\quad q=\left[\mathbf{q}\right]\in\left\{0,1,\ldots,2^{n}-1\right\} \tag{162}\]
2305.02449
Bayesian Safety Validation for Failure Probability Estimation of Black-Box Systems
Estimating the probability of failure is an important step in the certification of safety-critical systems. Efficient estimation methods are often needed due to the challenges posed by high-dimensional input spaces, risky test scenarios, and computationally expensive simulators. This work frames the problem of black-box safety validation as a Bayesian optimization problem and introduces a method that iteratively fits a probabilistic surrogate model to efficiently predict failures. The algorithm is designed to search for failures, compute the most-likely failure, and estimate the failure probability over an operating domain using importance sampling. We introduce three acquisition functions that aim to reduce uncertainty by covering the design space, optimize the analytically derived failure boundaries, and sample the predicted failure regions. Results show this Bayesian safety validation approach provides a more accurate estimate of failure probability with orders of magnitude fewer samples and performs well across various safety validation metrics. We demonstrate this approach on three test problems, a stochastic decision making system, and a neural network-based runway detection system. This work is open sourced (https://github.com/sisl/BayesianSafetyValidation.jl) and currently being used to supplement the FAA certification process of the machine learning components for an autonomous cargo aircraft.
Robert J. Moss, Mykel J. Kochenderfer, Maxime Gariel, Arthur Dubois
2023-05-03T22:22:48Z
http://arxiv.org/abs/2305.02449v2
# Bayesian Safety Validation for Black-Box Systems ###### Abstract Accurately estimating the probability of failure for safety-critical systems is important for certification. Estimation is often challenging due to high-dimensional input spaces, dangerous test scenarios, and computationally expensive simulators; thus, efficient estimation techniques are important to study. This work reframes the problem of black-box safety validation as a Bayesian optimization problem and introduces an algorithm, _Bayesian safety validation_, that iteratively fits a probabilistic surrogate model to efficiently predict failures. The algorithm is designed to search for failures, compute the most-likely failure, and estimate the failure probability over an operating domain using importance sampling. We introduce a set of three acquisition functions that focus on reducing uncertainty by covering the design space, optimizing the analytically derived failure boundaries, and sampling the predicted failure regions. Mainly concerned with systems that only output a binary indication of failure, we show that our method also works well in cases where more output information is available. Results show that Bayesian safety validation achieves a better estimate of the probability of failure using orders of magnitude fewer samples and performs well across various safety validation metrics. We demonstrate the algorithm on three test problems with access to ground truth and on a real-world safety-critical subsystem common in autonomous flight: a neural network-based runway detection system. This work is open sourced1 and currently being used to supplement the FAA certification process of the machine learning components for an autonomous cargo aircraft. Footnote 1: [https://github.com/sisl/BayesianSafetyValidation.jl](https://github.com/sisl/BayesianSafetyValidation.jl) ## Nomenclature \begin{tabular}{r l l} \(f\) & = & black-box system under test \\ \(\hat{f}\) & = & predicted mean of probabilistic surrogate model \\ \(\hat{\sigma}\) & = & predicted standard deviation of probabilistic surrogate model \\ \(\hat{g}\) & = & surrogate model binary failure classification \\ \(p\) & = & operational likelihood model (target/nominal distribution) \\ \(q\) & = & importance sampling proposal distribution \\ \(\mathbf{x}\) & = & input vector from the design space \\ \end{tabular} ## I Introduction Certifying safety-critical autonomous systems is a crucial step for their safe deployment in aviation. Examples of safety-critical systems include those for detect and avoid [1, 2], collision avoidance [3], runway detection [4], and auto-land [5]. One way to provide a quantitative measure of safety is to estimate the probability of system failure. The process of estimating the probability of failure can highlight areas of weakness in the system (by uncovering failures) and can show how well the system performs in their operating environments. The rarity of failures makes it challenging to accurately estimate failure probability, especially when using computationally expensive simulators [6]. Therefore, it is important to efficiently sample the design space when searching for failures (using a minimum set of inputs) and to maximize a measure of confidence in the resulting failure probability estimate. A standard approach to estimating this rare-event probability involves Monte Carlo (MC) sampling to generate a set of system inputs from a likelihood model of the operating environment.
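A minimal sketch of this baseline estimator is shown below; the failure indicator `f` and the operational model are illustrative stand-ins we chose for demonstration, not the paper's system or models:

```python
import numpy as np

# Sketch (illustrative): baseline Monte Carlo failure probability estimate.
rng = np.random.default_rng(0)

def f(x):                                # hypothetical binary failure indicator
    return float(np.linalg.norm(x) > 2.5)

X = rng.normal(size=(100_000, 2))        # samples from a stand-in model p(x)
p_fail_mc = np.mean([f(x) for x in X])   # unbiased, but needs many samples
```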
Estimating this rare-event probability through Monte Carlo sampling can be computationally expensive and usually requires a large number of samples to minimize the variance of the estimate [6]. A variance-reduction technique to more efficiently estimate the failure probability uses _importance sampling_ [7, 8] to instead draw samples from a different distribution, called the proposal, and then re-weight the expectation based on the likelihood ratio between the operational model and the proposal (details left for section III.A). Importance sampling is especially useful in the safety-critical case due to the unbiased failure probability estimate [8]. Bayesian optimization algorithms such as the cross-entropy method (CEM) [9, 10] have been adapted to the problem of rare-event estimation through a multi-level procedure [6, 11], but rely on a real-valued system output with a defined failure threshold to adaptively narrow the search. Arief et al. [12] proposed a deep importance sampling approach for rare-event estimation of black-box systems (Deep-PrAE) but rely on similar assumptions regarding the output of the system. In our problem, the system under test outputs only a binary value indicating failure, so these methods cannot be used effectively. Population-based methods, like population Monte Carlo (PMC) [13] and optimized population Monte Carlo (O-PMC) [14], make no assumption about the system outputs and use adaptive importance sampling [15] to iteratively estimate the optimal proposal distribution. The PMC algorithms use self-normalized importance sampling (SNIS) to estimate the probability in question, which can be slightly biased [8]. Population-based approaches often require a large number of system evaluations to adequately converge (see Luengo et al. [16] for a comprehensive survey of Monte Carlo estimation algorithms). Vazquez and Bect [17] and Wang et al. [18] consider the problem of failure probability estimation when dealing with computationally expensive systems. They fit a Gaussian process surrogate model to the underlying real-valued system (i.e., not the system output indication of failure) and then estimate the failure probability over this surrogate; similar to work from Renganathan et al. [19] for the multifidelity case. Those methods may not work on binary-valued systems or scale to complex systems such as image-based neural networks. He and Schumann [20] propose a framework for analyzing safety-critical deep neural networks using Bayesian statistics to iteratively fit a decision boundary from a predefined dictionary of shapes. They use a boundary acquisition function that is based on expected improvement [21], requiring a definition of an \(\epsilon\)-threshold around the predicted boundaries at \(0.5\pm\epsilon\). Our proposed approach constructs the probabilistic surrogate model so that a failure boundary can be analytically derived. With the goal of sample efficiency, this work reformulates the safety validation problem [22] as a Bayesian optimization problem [23, 24, 25] and introduces a set of acquisition functions, each with its own safety validation objective. Applying a Bayesian approach allows us to fit a probabilistic surrogate model to a minimal set of design points evaluated from the true system and then estimate failure probability using importance sampling on the inexpensive surrogate.
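The importance-sampling estimator referenced above can be sketched as follows; the operational model, proposal, and failure region are assumptions we chose for illustration:

```python
import numpy as np
from scipy import stats

# Sketch (illustrative): importance sampling draws from a proposal q
# concentrated near failures and re-weights by the likelihood ratio p/q.
rng = np.random.default_rng(0)
p = stats.multivariate_normal(mean=[0.0, 0.0])   # operational (target) model
q = stats.multivariate_normal(mean=[2.0, 2.0])   # proposal near failure region

def f(x):
    return float(np.linalg.norm(x) > 2.5)

X = q.rvs(size=10_000, random_state=rng)
w = p.pdf(X) / q.pdf(X)                          # likelihood ratios
p_fail_is = np.mean(np.array([f(x) for x in X]) * w)   # unbiased estimate
```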
As a real-world case study, we use the proposed algorithm to estimate the probability of failure for a neural network-based runway detection system where the design space consists of the glide slope angle and the distance to runway. The parametric design space is used to generate an input image of a runway in simulation, conditioned on the knowledge that the aircraft is in an approach, and the output is a binary value of failure (i.e., a misdetection). The goals of this work are to: (1) estimate the probability of failure for a black-box safety-critical subsystem, (2) focus on sample efficiency using the minimal number of data points, (3) find realistic cases using a model of the environment the system will be operating in, weighting the failures based on their operational likelihood, (4) characterize the entire set of failure regions to identify model weaknesses for further development, and (5) ensure the entire design space is adequately covered. The proposed _Bayesian safety validation_ (BSV) algorithm can be applied to general black-box systems to find failures, determine the most-likely failure, and estimate the overall failure probability. An open-source Julia framework1 was developed to extend this work to other black-box systems and reproduce the results in this paper. Footnote 1: [https://github.com/sisl/BayesianSafetyValidation.jl](https://github.com/sisl/BayesianSafetyValidation.jl) ## II Background To understand the methods developed in this work, we will provide the necessary background by first introducing the problem of safety validation and then will briefly discuss Gaussian processes and their use in Bayesian optimization. ### Safety Validation Safety validation has three primary tasks [22] shown in fig. 1. The first task, _falsification_, is the process of finding any input that results in system failure. The second task, _most-likely failure analysis_, tries to find the failures with maximum likelihood. And the third task, _failure probability estimation_, estimates the probability that a failure will occur. In focusing on failure probability estimation, we can achieve all three safety validation tasks. This is because when we estimate the probability of failure, we generate a distribution of failures. Thus, we achieve falsification by finding failures in the process of constructing the distribution and can easily compute the most-likely failure by maximizing the input likelihood across the distribution. Motivated to achieve all three safety validation tasks, this work develops an efficient approach to estimate probability of failure for black-box systems. For a survey of existing black-box safety validation algorithms, including falsification and most-likely failure analysis, we refer to Corso et al. [22]. In the case of _black-box safety validation_, we treat the system \(f\) as a "black box" and attempt to perform the three tasks described above. The black-box assumption means that the only way to interact with the system is by passing inputs \(\mathbf{x}\) and observing outputs \(y=f(\mathbf{x})\). This is in contrast to _white-box validation_ which requires information about the internals of the system to prove properties of safety [26, 27, 28]. In choosing to perform black-box validation, we can apply the developed methods to more general systems, particularly to systems with neural network components. Although recent work has focused on verifying deep neural networks [29, 30], scaling to large networks remains a challenge. 
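The way the three tasks build on one another can be summarized in a short sketch (ours) over a set of already-evaluated samples; the likelihood function is a placeholder for the operational model \(p\):

```python
import numpy as np

# Sketch (ours): given evaluated inputs X with binary outcomes y, any
# failure falsifies the system, and the failure maximizing the operational
# likelihood p(x) is the most-likely failure.
def falsify_and_most_likely_failure(X, y, likelihood):
    failures = X[y == 1]
    if len(failures) == 0:
        return None                                     # no falsifying input
    return failures[np.argmax(likelihood(failures))]    # most-likely failure
```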
### B. Bayesian Optimization and Probabilistic Surrogate Models

The basic optimization problem [24] is to maximize (or minimize) a real-valued function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) subject to \(\mathbf{x}\) lying in the design space \(\mathcal{X}\subseteq\mathbb{R}^{n}\): \[\underset{\mathbf{x}}{\text{maximize}}\ \ f(\mathbf{x})\quad\text{subject to}\quad\mathbf{x}\in\mathcal{X} \tag{1}\] Bayesian optimization is a black-box approach to globally optimize the objective \(f\) without requiring any information about the internals of the function, e.g., no requirement on gradient information [25, 31]. The main idea is to iteratively fit a _probabilistic surrogate model_--such as a Gaussian process [32]--to evaluation points of the true objective function and then propose new design points to evaluate based on the information and uncertainty quantified in the surrogate. Bayesian optimization is especially useful when \(f\) is computationally expensive to evaluate and the surrogate \(\hat{f}\) is fast to evaluate in comparison [31]. Figure 2 illustrates a Bayesian optimization example where the next sampled design point \(\mathbf{x}^{\prime}\) (shown as a green triangle) maximizes the _upper-confidence bound_ (UCB) acquisition function [24]: \[\mathbf{x}^{\prime}=\underset{\mathbf{x}\in\mathcal{X}}{\text{arg max}}\ \hat{f}(\mathbf{x})+\lambda\hat{\sigma}(\mathbf{x}) \tag{2}\] where \(\hat{f}\) is the mean of the surrogate model, \(\hat{\sigma}\) is the standard deviation, and \(\lambda\geq 0\) controls the trade-off between exploration (based on the uncertainty) and exploitation (based on the mean). Using a probabilistic approach when fitting the surrogate model allows us to use uncertainty in the underlying objective when acquiring subsequent samples.

### C. Gaussian Processes

One method for constructing a probabilistic surrogate model is to use a _Gaussian process_ (GP) [32]. Given true observations from the objective function, a GP is defined as a distribution over possible underlying functions that describe the observations [24] (illustrated in fig. 2 as purple dashed lines showing five functions sampled from the GP).

Figure 1: The three tasks of safety validation.

Given the set of \(n\) inputs \(\mathbf{X}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\) and \(n\) true observations \(\mathbf{y}=[y_{1},\ldots,y_{n}]\) where \(y_{i}=f(\mathbf{x}_{i})\), a Gaussian process is parameterized by a mean function \(\mathbf{m}(\mathbf{X})\), generally set to the zero-mean function \(\mathbf{m}(\mathbf{X})_{i}=m(\mathbf{x}_{i})=0\) if no prior information is given, and a kernel function \(\mathbf{K}(\mathbf{X},\mathbf{X})\) that captures the correlations between data points as covariances. The output of the kernel function is an \(n\times n\) matrix where the element \(\mathbf{K}(\mathbf{X},\mathbf{X})_{i,j}=k(\mathbf{x}_{i},\mathbf{x}_{j})\). The kernel \(k(\mathbf{x}_{i},\mathbf{x}_{j})\) may be selected based on spatial information about the relationship of neighboring data points in design space (e.g., if the relationship is smooth, then one can use a squared exponential kernel [24]).
In this work, we use the isotropic Matern \(1/2\) kernel [33] with length scale \(\ell=\exp(-1/10)\) and signal standard deviation \(s_{\sigma}=\exp(-1/10)\): \[k(\mathbf{x}_{i},\mathbf{x}_{j})=s_{\sigma}^{2}\exp\left(-|\mathbf{x}_{i}-\mathbf{x}_{j}|/\ell\right) \tag{3}\] The choice of kernel and its parameters can be separately optimized depending on the problem (see Williams and Rasmussen [32]); for this work, the Matern kernel was chosen because it can capture more variation between neighboring values [32]. Using the mean function and kernel parameterization and conditioning on the true observations \(\mathbf{y}\), the GP produces estimates at new points \(\mathbf{X}^{\prime}\) of the function it is trying to approximate as \[\mathbf{\hat{y}}\mid\mathbf{y}\sim\mathcal{N}\big{(}\mathbf{\mu}(\mathbf{X},\mathbf{X}^{\prime},\mathbf{y}),\mathbf{\Sigma}(\mathbf{X},\mathbf{X}^{\prime})\big{)} \tag{4}\] where \[\mathbf{\mu}(\mathbf{X},\mathbf{X}^{\prime},\mathbf{y})=\mathbf{m}(\mathbf{X}^{\prime})+\mathbf{K}(\mathbf{X}^{\prime},\mathbf{X})\mathbf{K}(\mathbf{X},\mathbf{X})^{-1}(\mathbf{y}-\mathbf{m}(\mathbf{X})) \tag{5}\] \[\mathbf{\Sigma}(\mathbf{X},\mathbf{X}^{\prime})=\mathbf{K}(\mathbf{X}^{\prime},\mathbf{X}^{\prime})-\mathbf{K}(\mathbf{X}^{\prime},\mathbf{X})\mathbf{K}(\mathbf{X},\mathbf{X})^{-1}\mathbf{K}(\mathbf{X},\mathbf{X}^{\prime}). \tag{6}\] Across the domain \(\mathbf{X}^{\prime}\), these estimates \(\mathbf{\hat{y}}\) can now be used as surrogates for the true function \(f(\mathbf{x}^{\prime})\) for \(\mathbf{x}^{\prime}\in\mathbf{X}^{\prime}\).

**Predicting a probability with a Gaussian process.** Because our system \(f\) returns discrete values in \(\{0,1\}\) and we want to predict a real-valued probability in \([0,1]\), we consider this a binary classification problem [34, 35]. We construct the GP to predict the logits \(\mathbf{\hat{z}}\) (which we naturally define with zero mean to indicate no prior knowledge about failures) and then apply the logistic function (i.e., inverse logit or sigmoid) to get the predictions \(\mathbf{\hat{y}}\): \[\mathbf{\hat{z}}\mid\mathrm{logit}(\mathbf{y})\sim\mathcal{N}\big{(}\mathbf{\mu}(\mathbf{X},\mathbf{X}^{\prime},\mathrm{logit}(\mathbf{y})),\mathbf{\Sigma}(\mathbf{X},\mathbf{X}^{\prime})\big{)} \tag{7}\] \[\mathrm{logit}(y_{i})=\log\left(\frac{\phi(y_{i})}{1-\phi(y_{i})}\right)/s \tag{8}\] \[\mathbf{\hat{y}}=\phi^{-1}\left(\mathrm{logit}^{-1}(\mathbf{\hat{z}})\right)=\phi^{-1}\left(\frac{1}{1+\exp(-s\mathbf{\hat{z}})}\right) \tag{9}\] where \(\phi(y_{i})=y_{i}(1-\epsilon)+(1-y_{i})\epsilon\) and \(\phi^{-1}(\hat{y}_{i})=(\hat{y}_{i}-\epsilon)/(1-2\epsilon)\) to ensure well-defined logits, and \(s\) controls the steepness of the sigmoid curve. We set \(\epsilon=10^{-5}\) and \(s=10^{-1}\) for our experiments. This construction can still be used even if \(f\) already outputs values in \([0,1]\) instead of binary indicators; the GP will fit directly to the provided failure probability of each point. When the output is binary, applying the logit transformations ensures that the prediction lies in \([0,1]\) and can be interpreted probabilistically. Other approaches to predict a probability using a Gaussian process explore the case where \(f\) is bounded and can be modeled as a Beta distribution [36]. We chose the logit approach, which allows us to analytically compute failure boundaries.

Figure 2: An example maximization problem using Gaussian process Bayesian optimization with UCB exploration.
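To make eqs. (3)-(9) concrete, the following minimal Julia sketch hand-rolls a one-dimensional version of the logit-space GP prediction. It is illustrative only, assuming toy data and a zero-mean prior; the actual implementation builds on the GaussianProcesses.jl package:

```julia
using LinearAlgebra

# Matern 1/2 kernel, eq. (3), with the paper's hyperparameters.
const ℓ, sσ = exp(-1/10), exp(-1/10)
k(xi, xj) = sσ^2 * exp(-norm(xi - xj) / ℓ)

# Logit transforms, eqs. (8)-(9), with ϵ = 1e-5 and s = 1e-1.
const ϵ, s = 1e-5, 1e-1
ϕ(y)     = y*(1 - ϵ) + (1 - y)*ϵ          # squash {0,1} away from the boundary
ϕinv(ŷ)  = (ŷ - ϵ) / (1 - 2ϵ)             # undo the squashing
logit(y) = log(ϕ(y) / (1 - ϕ(y))) / s     # binary observation → logit-space target

# Zero-mean GP posterior mean over the logits, eq. (5).
function posterior_logits(X, y, X′)
    K  = [k(xi, xj) for xi in X,  xj in X]
    K′ = [k(x′, xi) for x′ in X′, xi in X]
    return K′ * (K \ logit.(y))
end

# Map posterior logits back to failure probabilities, eq. (9).
predict(ẑ) = ϕinv.(1 ./ (1 .+ exp.(-s .* ẑ)))

X, y = [0.0, 1.0, 2.0], [0.0, 1.0, 1.0]   # toy binary failure observations
p̂ = predict(posterior_logits(X, y, [0.5, 1.5]))
```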
## III Problem Formulation

We reframe the black-box safety validation problem as a Bayesian optimization problem and use a Gaussian process surrogate model to predict failures. Bayesian optimization is a natural approach to optimize some function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), e.g., a black-box system. But our problem uses a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{B}\), where \(\mathbb{B}\) represents the Boolean domain (returning \(\mathtt{true}\) for failures and \(\mathtt{false}\) for non-failures, which can also be interpreted as 1 and 0, respectively). Instead of maximizing or minimizing \(f\), we reframe the problem to find failure regions through exploration, refine failure boundaries, and refine likely failure regions by sampling the theoretically optimal failure distribution [8, 37, 38]. We introduce a set of three acquisition functions that accomplish these objectives and call the acquisition procedure _failure search and refinement_ (FSAR), shown together in algorithm 1. Although we are primarily interested in the more restrictive case where \(f\) outputs a Boolean, we define the procedure to also work when \(f\) outputs a probabilistic value of failure (demonstrated in section IV.F). Throughout, we use the fact that the surrogate model provides a probabilistic interpretation of the failure predictions regardless of the type of system outputs, namely Boolean or probabilistic.

**Uncertainty exploration.** To find failures and provide coverage of the design space \(\mathcal{X}\), we want to explore areas with high uncertainty. The first proposed acquisition function is a simple search over the uncertainty provided by the Gaussian process \(\hat{\sigma}(\mathbf{x})\) to find the points \(\mathbf{x}\in\mathcal{X}\) with maximal uncertainty: \[\mathbf{x}_{1}^{\prime}=\operatorname*{arg\,max}_{\mathbf{x}\in\mathcal{X}}\hat{\sigma}(\mathbf{x}) \tag{10}\] This will ensure that the design space \(\mathcal{X}\) is fully explored in the limit [32] (noting that in practice the limiting factor is the \(\mathcal{O}(n^{3})\) time for the Gaussian process to fit \(n\) data points due to the \(n\times n\) matrix inversion [39]).

**Boundary refinement.** To better characterize the areas of all failure regions, we want to refine the known failure boundaries to tighten them as much as possible. Because our surrogate \(\hat{f}(\mathbf{x})\) is modeled as a logistic function (shown in eq. (9)), we can take its derivative and get the analytical form \[\mu^{\prime}(\mathbf{x})=\hat{f}(\mathbf{x})(1-\hat{f}(\mathbf{x})) \tag{11}\] where \(\mu^{\prime}(\mathbf{x})\) is maximal when \(\hat{f}(\mathbf{x})=0.5\), thus giving us the failure boundary at the peaks. Therefore, the second proposed acquisition function selects the point that maximizes the upper confidence of \(\mu^{\prime}\) to refine the failure boundary: \[\mathbf{x}_{2}^{\prime}=\operatorname*{arg\,max}_{\mathbf{x}\in\mathcal{X}}\bigl{(}\mu^{\prime}(\mathbf{x})+\lambda\hat{\sigma}(\mathbf{x})\bigr{)}p(\mathbf{x})^{1/t} \tag{12}\] where the upper confidence provides an overestimate and the trade-off parameter is set to \(\lambda=0.1\) in our experiments. The operational model \(p(\mathbf{x})\) is used to first focus on the failure boundary with high operational likelihood, then decay the emphasis of the likelihood as a function of the current iteration \(t\) (here using an inverse decay of \(1/t\)).
This will first acquire likely points along the boundaries, then refine all of the boundaries because as \(t\rightarrow\infty\), \(p(\mathbf{x})^{1/t}\to 1\).

**Failure region sampling.** It has been shown [8, 37, 38] that the optimal importance sampling distribution is \[q_{\text{opt}}\propto f(\mathbf{x})p(\mathbf{x}), \tag{13}\] which, intuitively, is the distribution of failures (when \(f(\mathbf{x})=1\)) over the likely region (weighted by \(p(\mathbf{x})\)). Yet this is exactly what we are trying to estimate, and sampling this distribution would require a prohibitive number of evaluations of \(f\), which we would like to avoid. Therefore, the third acquisition function we propose uses the surrogate to get the upper confidence of the failure prediction \[\hat{h}(\mathbf{x}) =\hat{f}(\mathbf{x})+\lambda\hat{\sigma}(\mathbf{x}) \tag{14}\] \[\hat{g}(\mathbf{x}) =\mathds{1}\bigl{\{}\hat{h}(\mathbf{x})\geq 0.5\bigr{\}} \tag{15}\] and then using the estimated failure region \(\hat{g}\) we draw a sample from the distribution \[\mathbf{x}_{3}^{\prime}\sim\hat{g}(\mathbf{x})p(\mathbf{x}). \tag{16}\] Here we use the indicator function \(\mathds{1}\{\cdot\}\) that returns 1 when the input is \(\mathtt{true}\) and 0 otherwise. Sampling from the approximate failure distribution defined by the surrogate helps to refine likely failure regions to ensure a better estimate of the probability of failure. But if the system under test \(f\) outputs a probability of failure value in \([0,1]\) instead of a binary failure indication, then we can use this information and sample from the following distribution that weights towards those failures that have higher confidence: \[\mathbf{x}_{3}^{\prime}\sim\hat{g}(\mathbf{x})\hat{h}(\mathbf{x})p(\mathbf{x}) \tag{17}\] The proposed acquisition functions work both for the more restrictive case of a binary system \(f:\mathbb{R}^{n}\rightarrow\{0,1\}\) and for a system \(f:\mathbb{R}^{n}\rightarrow[0,1]\) that outputs a probabilistic value of failure (which can be interpreted as confidence or stochasticity). We define _failure region sampling_ using the more general distribution in eq. (17) because it works for both types of system outputs. When a granular measure of system failure is available, it can be used to make a more informative surrogate model. One use of such a surrogate would be to analyze the severity of failure based on the regions with the highest failure confidence. When using this type of surrogate during system development, developers could focus on addressing or mitigating those failures with both high confidence and high operational likelihood as a form of triage. If only binary failure information is available, developers could simply focus on those failures with high operational likelihood. Handling binary-valued systems is the more general setting and thus the primary focus of this work, but we also demonstrate a probability-valued case in section IV.F. Algorithm 1 describes the full _failure search and refinement_ procedure to compute the subsequent points from the three proposed acquisition functions, and fig. 3 provides an illustrative example.
```
1:  function FailureSearchAndRefinement(\(\mathcal{GP},p,t\))
2:    \(\hat{f}\leftarrow\textsc{MeanFunction}(\mathcal{GP})\)
3:    \(\hat{\sigma}\leftarrow\textsc{StandardDeviationFunction}(\mathcal{GP})\)
4:    \(\#\) [1] uncertainty exploration
5:    \(\mathbf{x}_{1}^{\prime}\leftarrow\arg\max_{\mathbf{x}\in\mathcal{X}}\hat{\sigma}(\mathbf{x})\)
6:    \(\#\) [2] boundary refinement
7:    \(\mu^{\prime}(\mathbf{x})\leftarrow\hat{f}(\mathbf{x})(1-\hat{f}(\mathbf{x}))\)  \(\triangleright\) compute failure boundaries
8:    \(\mathbf{x}_{2}^{\prime}\leftarrow\arg\max_{\mathbf{x}\in\mathcal{X}}\bigl(\mu^{\prime}(\mathbf{x})+\lambda\hat{\sigma}(\mathbf{x})\bigr)p(\mathbf{x})^{1/t}\)
9:    \(\#\) [3] failure region sampling
10:   \(\hat{h}(\mathbf{x})\leftarrow\hat{f}(\mathbf{x})+\lambda\hat{\sigma}(\mathbf{x})\)  \(\triangleright\) upper confidence of the failure prediction, eq. (14)
11:   \(\hat{g}(\mathbf{x})\leftarrow\mathds{1}\{\hat{h}(\mathbf{x})\geq 0.5\}\)  \(\triangleright\) estimated failure region, eq. (15)
12:   \(\mathbf{x}_{3}^{\prime}\sim\hat{g}(\mathbf{x})\hat{h}(\mathbf{x})p(\mathbf{x})\)  \(\triangleright\) sample a likely failure point, eq. (17)
13:   return \(\{\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime},\mathbf{x}_{3}^{\prime}\}\)
14:  end
```
**Algorithm 1** The _failure search and refinement_ (FSAR) acquisition procedure.

### A. Importance Sampling Estimate of Failure Probability

To compute an efficient and unbiased estimate of the probability of failure, we use _importance sampling_ [8]. Probability estimation can be defined as computing the expectation of the Boolean-valued function \(f\) over the _target/nominal distribution_ \(p\) (what we call the _operational likelihood model_ in this work) as \[\mathbb{P}[f(\mathbf{x})]=\operatorname*{\mathbb{E}}_{\mathbf{x}\sim p}[f(\mathbf{x})]=\int_{\mathcal{X}}p(\mathbf{x})f(\mathbf{x})\ dx. \tag{18}\] In general, the expectation of the indicator function of an event \(A\), denoted \(\mathds{1}\{A\}\), is equal to the probability of that event occurring: \(\operatorname*{\mathbb{E}}[\mathds{1}\{A\}]=\mathbb{P}[A]\). In our problem, we define \(f:\mathbb{R}^{n}\rightarrow\mathbb{B}\) as a Boolean-valued function for convenience. Nevertheless, the following work could easily be extended to a real-valued function \(v:\mathbb{R}^{n}\rightarrow\mathbb{R}\) where failures are defined by violating some safety threshold \(c\), i.e., \(f(\mathbf{x})=\mathds{1}\{v(\mathbf{x})\geq c\}\). Now to approximate the expectation--and therefore the probability of failure--we can use \(n\) samples from \(p\): \[\operatorname*{\mathbb{E}}_{\mathbf{x}\sim p}\left[f(\mathbf{x})\right]\approx\frac{1}{n}\sum_{i=1}^{n}f(\mathbf{x}_{i}) \tag{19}\] If failures are rare under the distribution \(p\) (i.e., \(f(\mathbf{x}_{i})\) is rarely equal to \(1\) when \(\mathbf{x}_{i}\sim p\)), then we may need an extremely large number of samples from \(p\) to get an accurate estimate. But this would require prohibitively many system evaluations of \(f\). Instead, importance sampling states that we can sample from some other distribution \(q\), called the _proposal distribution_, and re-weight the outputs of \(f\) based on the _likelihood ratio_ [8]. To see this, consider the following [40]: \[\operatorname*{\mathbb{E}}_{\mathbf{x}\sim p}\left[f(\mathbf{x})\right] =\int_{\mathcal{X}}p(\mathbf{x})f(\mathbf{x})\ dx\] (expected value of \(f(\mathbf{x})\), same as eq. (18)) \[=\int_{\mathcal{X}}p(\mathbf{x})\frac{q(\mathbf{x})}{q(\mathbf{x})}f(\mathbf{x})\ dx\] (introduce proposal \(\frac{q(\mathbf{x})}{q(\mathbf{x})}=1\)) \[=\int_{\mathcal{X}}q(\mathbf{x})\frac{p(\mathbf{x})}{q(\mathbf{x})}f(\mathbf{x})\ dx\] (reorder to isolate likelihood ratio \(\frac{p(\mathbf{x})}{q(\mathbf{x})}\)) \[=\operatorname*{\mathbb{E}}_{\mathbf{x}\sim q}\left[\frac{p(\mathbf{x})}{q(\mathbf{x})}f(\mathbf{x})\right]\] (definition of expectation over \(q\)) \[\approx\frac{1}{n}\sum_{i=1}^{n}\frac{p(\mathbf{x}_{i})}{q(\mathbf{x}_{i})}f(\mathbf{x}_{i})\] (importance sampling estimate using samples \(\mathbf{x}_{i}\sim q\)) Now we can use samples from \(q\) to approximate the probability over \(p\).
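As a quick numerical illustration of this identity, here is a hedged Julia sketch assuming Distributions.jl; the choices of \(p\), \(q\), and the failure threshold are arbitrary toy values, not the paper's setup:

```julia
using Distributions, Statistics, Random

# Toy numerical check of the importance sampling re-weighting identity.
Random.seed!(0)
p = Normal(0, 1)                 # operational (target/nominal) model
q = Normal(4, 1)                 # proposal concentrated near the failure region
f(x) = x > 3.5                   # rare binary failure indicator under p

n  = 10_000
mc = mean(f, rand(p, n))                           # plain Monte Carlo: almost always 0
xs = rand(q, n)
is = mean(pdf.(p, xs) ./ pdf.(q, xs) .* f.(xs))    # re-weighted (importance) estimate

truth = ccdf(p, 3.5)   # ≈ 2.3e-4; the IS estimate matches with far lower variance
```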
But selecting the right \(q\)-proposal distribution is a challenge and an important topic of research in itself (see Bugallo et al. [15]). To address this challenge, we could use a uniform proposal over the design space \(q=\mathcal{U}_{\mathcal{X}}\) and replace the expensive function calls to \(f\) with inexpensive evaluations of the surrogate \(\hat{f}\) using orders of magnitude more samples. Thus our problem simplifies to estimating \[\hat{P}_{\text{fail}}=\operatorname*{\mathbb{E}}_{\mathbf{x}\sim p}\left[\hat{f}(\mathbf{x})\right]=\operatorname*{\mathbb{E}}_{\mathbf{x}\sim q}\left[\frac{p(\mathbf{x})}{q(\mathbf{x})}\hat{f}(\mathbf{x})\right]\approx\frac{1}{n}\sum_{i=1}^{n}\frac{p(\mathbf{x}_{i})}{q(\mathbf{x}_{i})}\hat{f}(\mathbf{x}_{i}). \tag{20}\] Yet using a uniform distribution can induce variance in the estimate [41]. Therefore, an even further simplification is to use a discretized set of \(n\) points \(\tilde{\mathcal{X}}\) over the range \(\mathcal{X}\) as our proposal. We assign equal likelihood to each point \(\mathbf{x}_{i}\in\tilde{\mathcal{X}}\), namely \(q(\mathbf{x}_{i})=\frac{1}{n}\sum_{j=1}^{n}p(\mathbf{x}_{j})\). Then eq. (20) becomes \[\hat{P}_{\text{fail}}\approx\frac{1}{n}\sum_{i=1}^{n}\frac{p(\mathbf{x}_{i})}{\frac{1}{n}\sum_{j=1}^{n}p(\mathbf{x}_{j})}\hat{f}(\mathbf{x}_{i})=\frac{\sum_{i=1}^{n}p(\mathbf{x}_{i})\hat{f}(\mathbf{x}_{i})}{\sum_{i=1}^{n}p(\mathbf{x}_{i})}=\frac{\mathbf{w}^{\top}\mathbf{\hat{y}}}{\sum_{i=1}^{n}w_{i}} \tag{21}\] where \(\mathbf{w}=[p(\mathbf{x}_{1}),\dots,p(\mathbf{x}_{n})]\) and \(\mathbf{\hat{y}}=[\hat{f}(\mathbf{x}_{1}),\dots,\hat{f}(\mathbf{x}_{n})]\) for \(\mathbf{x}_{i}\in\tilde{\mathcal{X}}\). Here we are using _likelihood weighting_, which is a special case of importance sampling [38, 41]. Using a discretized set of points as the proposal distribution has lower variance than sampling the uniform space, but may not scale well to higher dimensions. For this paper, we use a simplified \(500\times 500\) discrete grid as the proposal for two-dimensional systems. To incorporate better proposal distributions when scaling to higher dimensions, see Bugallo et al. [15] for recent adaptive importance sampling work. Importantly, we show that our discrete choice works well, but a better proposal would only benefit the following approach.

### B. Bayesian Safety Validation

The proposed algorithm, _Bayesian safety validation_ (BSV), takes as input the black-box system \(f:\mathbb{R}^{n}\rightarrow\mathbb{B}\), an operational likelihood model \(p:\mathbb{R}^{n}\rightarrow\mathbb{R}\), and a proposal distribution \(q:\mathbb{R}^{n}\rightarrow\mathbb{R}\), each taking an input \(\mathbf{x}\in\mathbb{R}^{n}\) of some parametric space, and iteratively refits a probabilistic surrogate model given points selected by the three _failure search and refinement_ acquisition functions described in section III. The first acquisition function, _uncertainty exploration_, explores areas with high uncertainty to provide coverage and search for failure regions. The next acquisition function, _boundary refinement_, selects operationally likely points that refine the failure boundaries to better characterize likely failure regions (and includes a decaying weighted operational likelihood to refine all failure boundaries in the limit).
The final acquisition function, _failure region sampling_, is based on the theoretically optimal importance sampling \(q\)-proposal distribution [37] and will sample from the likely failure regions to ensure a better estimate of the probability of failure. After the algorithm runs for \(T\) iterations, a total of \(3T\) sampled points have been used to fit the surrogate model. The three safety validation tasks are then computed (lines 10-12). Falsification and most-likely failure analysis use only the true observations \(y_{i}\in\mathbf{Y}\) and actual inputs \(\mathbf{x}_{i}\in\mathbf{X}\) to find those inputs that led to failures and the most-likely failure, respectively. The final surrogate model is then used to efficiently compute an importance sampling estimate of the probability of failure. Algorithm 2 describes the Bayesian safety validation algorithm and fig. 4 illustrates the process.

```
1:  function BayesianSafetyValidation(\(f,p,q,T\))
2:    \(\mathcal{GP}\leftarrow\textsc{InitializeGaussianProcess}(m,k)\)
3:    \(\mathbf{X},\mathbf{Y}\leftarrow\emptyset,\emptyset\)
4:    for \(t\gets 1\) to \(T\)
5:      \(\mathbf{X}^{\prime}\leftarrow\textsc{FailureSearchAndRefinement}(\mathcal{GP},p,t)\)  \(\triangleright\) select new design points (defined in algorithm 1)
6:      \(\mathbf{Y}^{\prime}\leftarrow\{f(\mathbf{x}^{\prime})\mid\mathbf{x}^{\prime}\in\mathbf{X}^{\prime}\}\)  \(\triangleright\) evaluate true system \(f\) across design points
7:      \(\mathbf{X},\mathbf{Y}\leftarrow\mathbf{X}\cup\mathbf{X}^{\prime},\mathbf{Y}\cup\mathbf{Y}^{\prime}\)  \(\triangleright\) append to input and output sets
8:      \(\mathcal{GP}\leftarrow\textsc{Fit}(\mathcal{GP},\mathbf{X},\mathbf{Y})\)  \(\triangleright\) refit surrogate model over all points conditioned on observations
9:    end
10:   \(\mathbf{X}_{\mathrm{fail}}\leftarrow\{\mathbf{x}_{i}\mid\mathbf{x}_{i}\in\mathbf{X},y_{i}\in\mathbf{Y},\mathbbm{1}\{y_{i}\}\}\)  \(\triangleright\) (1) falsification (set of all true failures)
11:   \(\mathbf{x}^{*}\leftarrow\arg\max_{\mathbf{x}_{i}\in\mathbf{X},y_{i}\in\mathbf{Y}}p(\mathbf{x}_{i})\mathbbm{1}\{y_{i}\}\)  \(\triangleright\) (2) most-likely failure analysis
12:   \(\hat{P}_{\mathrm{fail}}\leftarrow\frac{1}{n}\sum_{i=1}^{n}\frac{p(\mathbf{x}_{i})}{q(\mathbf{x}_{i})}\mathbbm{1}\left\{\hat{f}(\mathbf{x}_{i})\geq 0.5\right\}\)  \(\triangleright\) (3) failure probability estimation (using importance sampling)
13:   return \(\mathbf{X}_{\mathrm{fail}},\mathbf{x}^{*},\hat{P}_{\mathrm{fail}}\)  \(\triangleright\) return all three safety validation tasks
14:  end
```
**Algorithm 2** Bayesian safety validation algorithm.

Figure 4: The proposed _Bayesian safety validation_ algorithm used for all three safety validation tasks.

## IV Experiments and Results

To test the effectiveness of the Bayesian safety validation algorithm, we ran experiments across several different example systems and a real-world case study using a prototype neural network-based runway detection system. We split the experiments into two sections: 1) comparison against existing methods for rare-event simulation (this tests the full Bayesian safety validation algorithm), and 2) comparison of the Gaussian process-based approach with different sampling/selection methods (this tests the failure search and refinement acquisition functions). We ran an ablation study to empirically show the influence of each acquisition function on the performance of the safety validation tasks.
We demonstrate the algorithm on a system that outputs a probabilistic value of failure (instead of strictly binary) to show the algorithm's general applicability to a less restrictive problem. Lastly, we report results on the runway detection system as a real-world case study.

### A. Simplified Test Problems

Three example toy problems with access to the true value of \(P_{\text{fail}}\) were used for testing. The first problem (called Representative) was chosen based on the observed shape of the failure region of the runway detection system, which is our primary case study and a system for which we do not have access to the true failure probability. The representative toy problem is modeled using Booth's function [24] \(f(\mathbf{x})=(x_{1}+2x_{2}-7)^{2}+(2x_{1}+x_{2}-5)^{2}\leq 200\), thresholded to make it a binary function. We define the operational parameters to be over the range \([-10,5]\) for both \(x_{1}\) and \(x_{2}\) and set the operational likelihood model as \(x_{1}\sim\mathcal{N}_{\text{trunc}}(-10,1.5;[-10,5])\) and \(x_{2}\sim\mathcal{N}(-2.5,1)\), where \(\mathcal{N}_{\text{trunc}}(\mu,\sigma;[a,b])\) is the normal distribution truncated between \([a,b]\). The second toy problem (called Squares) has two square, disjoint failure regions to test the exploration of BSV and the refinement of rigid and disjoint failure boundaries. The operational parameters are over the range \([0,10]\) for both \(x_{1}\) and \(x_{2}\), each with the operational likelihood model of \(\mathcal{N}(5,1)\). The third toy problem (called Mixture) has three smooth, disjoint failure regions and is designed to test the failure region refinement characteristic of BSV using a multimodal operational model. The operational range is over \([-6,6]\) with identical Gaussian mixture models that have two equal components of \(\mathcal{N}_{\text{trunc}}(2,1;[-6,6])\) and \(\mathcal{N}_{\text{trunc}}(-2,1;[-6,6])\). Similar to the representative example, we define this last problem as the thresholded Himmelblau function [42]: \(f(\mathbf{x})=(x_{1}^{2}+x_{2}-11)^{2}+(x_{1}+x_{2}^{2}-7)^{2}\leq 15\). The test problems and their operational models are shown in fig. 5.

Fig. 5: Failure regions (in red) for the test problems. Operational models are illustrated as subplots/contours. The true system is shown above the surrogate model failure classification, which is fit with \(999\) samples using BSV.

### B. Neural Network-based Runway Detection System

As a real-world case of a safety-critical subsystem, we chose a common application in autonomous flight: runway detection (RWD) using neural networks. Synthetic images were generated using the flight simulator X-Plane [43], sampling over different parameters of the final approach to land (e.g., glide slope angle and distance to runway). We search over this parametric space instead of dealing directly in pixel or image space. We use an operational model that centers around likely glide slope angles with a small standard deviation, namely \(\alpha\sim\mathcal{N}(3,0.5)\), and a model that increases the likelihood of requiring a detection as the distance to the runway decreases, namely \(d\sim\mathcal{N}_{\text{trunc}}(0,1;[0,4])\). The parametric space is continuous in glide slope angle \(\alpha\in[1,7]\) degrees and distance to runway \(d\in[0.1,4]\) nmi. These models can be learned from historical flight data for more accurate estimates of the failure probability.
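A minimal Julia sketch of this operational model, assuming independent marginals and the Distributions.jl package (the package itself wraps these models in `OperationalParameters`, as shown in the appendix):

```julia
using Distributions

# Operational model for the runway detection case study.
p_α = Normal(3, 0.5)                  # glide slope angle [deg], likely near 3°
p_d = truncated(Normal(0, 1), 0, 4)   # distance to runway [nmi], likelier when close

# Assuming independent marginals, the joint operational likelihood over x = [α, d].
p(x) = pdf(p_α, x[1]) * pdf(p_d, x[2])

p([3.0, 0.5])   # a highly likely approach condition
p([6.5, 3.9])   # an unlikely condition: steep glide slope, far from the runway
```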
Treated as a black box, the runway detector is a convolutional neural network that processes runway images from a front-facing RGB camera. The network predicts the runway corners and the runway bounding box and is intended to be used as a subsystem to provide position estimates during autonomous landing [44]. Figure 6 illustrates several example images with detected runway corners and bounding boxes. A failure is defined as a misdetection (i.e., a false negative). Since the system is designed to be active only during the landing phase, we condition on the aircraft being on the approach. The use of a simulator means the runway detection system can be stressed outside the normal flight envelope to better characterize the full range of system failures, with a potentially dangerous-to-fly example shown in fig. 6(c).

### C. Safety Validation Metrics

In this section, we define several metrics to measure how well BSV and the baselines perform across the three safety validation tasks (see section II.A).

**Falsification metrics.** The total number of failure cases, or more generally, the proportion of all system evaluations that resulted in failures, is the primary metric used to assess falsification (sometimes called the failure rate): \[R_{\text{fail}}=\frac{\text{number of failures}}{\text{total number of evaluations}}=\frac{|\mathbf{X}_{\text{fail}}|}{|\mathbf{X}|} \tag{22}\]

**Most-likely failure analysis metrics.** The goal of most-likely failure analysis, as the name suggests, is to determine the failure with maximum operational likelihood. A natural way to assess the relative performance of this task against baselines is to compare the likelihood of the determined most-likely failure: \[\mathcal{L}^{*}=\max_{\mathbf{x}_{i}\in\mathbf{X},\mathbf{y}_{i}\in\mathbf{Y}}p(\mathbf{x}_{i})\mathds{1}\{y_{i}\} \tag{23}\]

**Failure probability estimation metrics.** Because probability of failure estimation is the primary objective of this work--capturing all three safety validation tasks--we look at the performance across several different metrics. When we have access to the true \(P_{\text{fail}}\) (e.g., in the toy examples), we can measure the relative error in the estimated \(\hat{P}_{\text{fail}}\): \[\hat{\Delta}_{\text{fail}}=\frac{|P_{\text{fail}}-\hat{P}_{\text{fail}}|}{P_{\text{fail}}} \tag{24}\] Measured as a proportion, relative error can be interpreted as the percent difference in the estimate and makes it easier to compare performance across problems. We are also interested in analyzing the failure likelihood distribution \(\{\log p(\mathbf{x}_{i})\}_{\mathbf{x}_{i}\in\mathbf{X}_{\text{fail}}}\). Distributions with higher concentration in operationally likely regions are preferred as they cover more relevant example failures.

Fig. 6: Applying the neural network runway detector to simulated runway conditions in X-Plane.

**Coverage of design space.** To measure the coverage of the design space, an average dispersion coverage metric has been used in the context of safety validation to estimate how well the sampled points cover the input space [22, 45]: \[C_{\text{input}}(\mathbf{X})=1-\frac{1}{\delta}\sum_{j=1}^{n}\frac{\min(d_{j}(\mathbf{X}),\delta)}{n} \tag{25}\] where \(C_{\text{input}}(\mathbf{X})\in[0,1]\) and the metric is defined over a grid of \(n\) points, separated by \(\delta\). The distance function \(d_{j}(\mathbf{X})\) is defined as the minimum distance between the \(j\)th point in the grid and a point in \(\mathbf{X}\) [22, 45].
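A minimal Julia sketch of these metrics (eqs. (22)-(25)); the data `X`, `Y`, the likelihood `p`, the `grid`, and the tolerance `δ` are placeholders supplied by the caller:

```julia
using LinearAlgebra, Statistics

R_fail(Y) = count(Y) / length(Y)                              # eq. (22), failure rate

# eq. (23); assumes at least one failure was found, otherwise `maximum` throws.
L_star(X, Y, p) = maximum(p(x) for (x, y) in zip(X, Y) if y)

Δ_fail(P, P̂) = abs(P - P̂) / P                                 # eq. (24), relative error

function C_input(X, grid, δ)                                  # eq. (25), input coverage
    d(g) = minimum(norm(g - x) for x in X)   # distance from grid point to nearest sample
    return 1 - mean(min(d(g), δ) for g in grid) / δ
end
```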
When ground truth is available, we are also interested in the characterization of the failure and non-failure regions over the entire domain as predicted by the surrogate model. We define \(C_{\text{output}}\in[0,1]\) as the proportion of the output space that the surrogate and the true system agree upon. This can be interpreted as the surrogate classification accuracy.

### D. Baseline Methods

To compare against existing rare-event estimation algorithms, we test the proposed BayesianSafetyValidation algorithm (algorithm 2) against standard Monte Carlo (MC) sampling and population Monte Carlo (PMC) [13], which uses self-normalized importance sampling [8]. PMC requires an initial adaptive proposal \(q_{\text{PMC}}\), which we set equal to the operational likelihood model for each of the example problems. The experiments were run for \(T_{\text{max}}=100\) iterations using \(N_{q}=50\) samples per iteration and across 3 RNG seeds. This results in \(N_{q}T_{\text{max}}(T_{\text{max}}+1)/2=252{,}500\) total samples per seed. Because sampling-based methods like MC and PMC tend to require many samples to adequately estimate the rare event [6], we only test these methods on the example toy problems, as scaling to the runway detection system would be prohibitively expensive. Motivated by sample efficiency, we focus our comparison on the relative error in the estimated probability of failure as a function of the number of samples, defined in eq. (24).

Figure 7: Test problem baseline results. Shaded regions show standard error in (a) and standard deviation in (b).

Figure 7(a) shows the error curves over the number of samples, which is equivalent to the number of system evaluations. It is clear that BSV outperforms both MC and PMC in reducing the error in the probability of failure estimate using several orders of magnitude fewer samples, converging before 1000 samples in each case and closer to 100 samples in the first two problems. Results also indicate that BSV has lower variance compared to MC and PMC, which can partially be explained by the fact that two of the three acquisition functions take deterministic maximums and only one, the failure region sampling acquisition, samples predicted failure points stochastically. To test the proposed FailureSearchAndRefinement procedure (algorithm 1), we use the same GP fitting technique as in algorithm 2 but replace the selection process in line 5 with several baseline methods. The baselines we use are Latin hypercube sampling (LHS) [46], Sobol sequence selection [47], discrete grid selection, and uniform sampling. Each technique is defined over the entire operational domain. Importantly, we note that all methods fit the selected points to the same initial Gaussian process and use the same importance sampling procedure defined in algorithm 2. The failure search and refinement (FSAR) approach is the only method that uses incremental information to optimize the subsequent points. Figure 7(b) illustrates the relative error in the estimate when running BSV for \(T=333\) iterations (\(N=999\) system evaluations), run over 3 RNG seeds with shaded regions reporting standard deviation (noting that Sobol and discrete do not use stochasticity). Exponential smoothing is applied to the curves, with the raw values shown as thin lines of the same color.
Using FSAR for acquiring subsequent points outperforms the baselines by orders of magnitude in the first two problems, and is comparable to Sobol sequence selection in the third problem but with more stability in the estimate. Table 1 reports the quantitative results from the baseline experiments. FSAR achieves the best performance across the various safety validation metrics and comparable input coverage relative to the baselines.

### E. Ablation Study

To empirically test the importance of all three acquisition functions, we perform an ablation study on the disjoint Squares problem to determine the effect of the combinations of acquisition functions across the safety validation metrics. The Squares example problem was chosen based on having two disjoint failure regions with precise boundaries, one of which is less likely than the other. Thus, this problem requires a careful balance between boundary refinement and deliberate exploration for multiple potential failure regions. Each ablation was run with 90 samples over 5 RNG seeds for a fair comparison (i.e., when using all three acquisitions, each one gets a third of the budget). Results in table 2 indicate that the individual acquisitions perform well on the single metric they were designed for (i.e., _exploration_ covers the input space, _failure sampling_ has the highest failure rate, yet _boundary refinement_ requires exploration in order to avoid exploiting a single failure mode). Using all three acquisition functions balances the safety validation metrics and achieves the smallest error in the probability of failure estimate \(\hat{\Delta}_{\text{fail}}\) while also finding a failure with the highest relative likelihood. In table 2, arrows indicate whether the given metric is better to be high (\(\uparrow\)) or low (\(\downarrow\)).

\begin{table} \begin{tabular}{l l r r r r r} \hline \hline Example Problem & Selection Method & \(R_{\text{fail}}\uparrow\) & \(\mathcal{L}^{*}\uparrow\) & \(\hat{\Delta}_{\text{fail}}\downarrow\) & \(C_{\text{input}}\uparrow\) & \(C_{\text{output}}\uparrow\) \\ \hline \multirow{5}{*}{Representative} & Latin hypercube sampling & 0.332 & \(2.49\times 10^{-8}\) & 0.44777 & 0.682 & 0.9922 \\ & Sobol sequence sampling & 0.335 & \(4.98\times 10^{-8}\) & 0.51179 & 0.719 & 0.9912 \\ & Discrete grid selection & 0.336 & \(5.62\times 10^{-8}\) & 0.58225 & **0.774** & 0.9933 \\ & Uniform sampling & 0.342 & \(4.02\times 10^{-8}\) & 0.25978 & 0.673 & 0.9919 \\ & Failure search and refinement & **0.585** & \(\mathbf{5.78\times 10^{-8}}\) & **0.00667** & 0.638 & **0.9998** \\ \hline \multirow{5}{*}{Squares} & Latin hypercube sampling & 0.0525 & 0.00192 & 0.24486 & 0.682 & 0.9907 \\ & Sobol sequence sampling & 0.0525 & 0.00142 & 0.27050 & 0.719 & 0.9935 \\ & Discrete grid selection & 0.0439 & 0.00196 & 0.26473 & **0.774** & 0.9894 \\ & Uniform sampling & 0.0532 & 0.00086 & 0.17017 & 0.673 & 0.9909 \\ & Failure search and refinement & **0.4800** & **0.00253** & **0.00727** & 0.643 & **1.0** \\ \hline \multirow{5}{*}{Mixture} & Latin hypercube sampling & 0.0441 & 0.0349 & 0.10469 & 0.682 & 0.9864 \\ & Sobol sequence sampling & 0.0505 & 0.0369 & 0.00787 & 0.719 & 0.9881 \\ & Discrete grid selection & 0.0479 & 0.0345 & 0.00279 & **0.774** & 0.9900 \\ & Uniform sampling & 0.0576 & 0.0360 & 0.04919 & 0.673 & 0.9871 \\ & Failure search and refinement & **0.4640** & **0.0383** & **0.00124** & 0.663 & **0.9984** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison against other sampling/selection methods.
### F. Test on Probabilistic-Valued System

As mentioned in section III, the GP construction and proposed acquisition functions were designed to estimate failure probability over binary-valued systems that indicate failure, but the same techniques are applicable when the system outputs a probabilistic value of failure (that can be interpreted as confidence in the output, distance to failure boundary, or stochasticity of the system--which has been addressed by similar approaches from Gong and Pan [48]). Using the same Himmelblau function [42] defined for the Mixture problem in section IV.A, we change the system to output a measure of failure (where \(f(\mathbf{x})\geq 0.5\) means failure): \(f(\mathbf{x})=\text{logit}^{-1}\left(c-\left((x_{1}^{2}+x_{2}-11)^{2}+(x_{1}+x_{2}^{2}-7)^{2}\right)\right)\) with the threshold \(c=15\) (same as the previous problem), passing the output through a sigmoid with steepness \(s=1/c\) so that it can be interpreted as a probability. Figure 8 illustrates this example with a final relative error of \(\hat{\Delta}_{\text{fail}}=0.012\), a falsification rate of \(53.6\%\) of samples, an input coverage of \(C_{\text{input}}=0.653\), and output coverage of \(C_{\text{output}}=0.9998\).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Acquisition(s) & \(R_{\text{fail}}\uparrow\) & \(\mathcal{L}^{*}\uparrow\) & \(\hat{\Delta}_{\text{fail}}\downarrow\) & \(C_{\text{input}}\uparrow\) & \(C_{\text{output}}\uparrow\) \\ \hline [1] exploration & 0.044 & 0.00029 & 0.04382 & **0.233** & 0.9795 \\ [2] boundary refinement & 0.411 & 0.00025 & 0.93670 & 0.056 & 0.9598 \\ [3] failure sampling & **0.707** & 0.00186 & 0.21682 & 0.066 & 0.9604 \\ [1, 2] exploration + boundary refinement & 0.189 & 0.00205 & 0.18483 & 0.159 & **0.9801** \\ [2, 3] boundary refinement + failure sampling & 0.436 & 0.00197 & 0.24817 & 0.087 & 0.9647 \\ [1, 3] exploration + failure sampling & 0.318 & 0.00171 & 0.10057 & 0.163 & 0.9761 \\ [1, 2, 3] exploration + boundary refinement + failure sampling & 0.298 & **0.00243** & **0.03553** & 0.138 & 0.9777 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study of the effect of the three _failure search and refinement_ acquisition functions.

Figure 8: Test on the Mixture (Himmelblau) problem, where the output is a probability value indicating distance to the failure boundary at \(f(\mathbf{x})=0.5\). Top row illustrates BSV and the FSAR acquisition functions after \(N=999\) true observations (shown as red/green squares), with a uniform operational model \(p\) shown as subplots. The uniform model helps highlight that the failure region sampling is now more influenced by those failures that are farther away from the failure threshold \(c\), shown as yellow peaks. The bottom row shows the surrogate model and ground truth, where "soft" is the probabilistic output and "hard" is the binary failure classification.

### G. Real-World Case Study Results

After empirically validating the BSV algorithm on the example problems with access to the ground truth, we now report the performance on a real-world example: a runway detection system. Figure 9(a) shows the final surrogate after running BSV for \(T=333\) iterations (resulting in 999 sampled points). The algorithm focused the search budget on the highly likely regions of the design space and found several disjoint failure modes. We can efficiently determine the most-likely failure, which is indicated in fig. 9(a).
In total, we found 571 failures out of 999 evaluations, shown in table 3 as the failure rate \(R_{\text{fail}}=57.2\%\). The primary goal, estimating failure probability, is shown to quickly converge in fig. 9(c) after just over 400 system evaluations. The final estimated failure probability was \(\hat{P}_{\text{fail}}=5.8\times 10^{-3}\). If we instead used Monte Carlo sampling of \(p\), we would expect to find only about 6 failures in the 999 system evaluations. One way to characterize the spread of failures is to plot the log-likelihood of the observed failures under the operational model. Shown in fig. 9(b), the right-skewed peak of the distribution indicates that the failures that were found have high likelihood and thus are more useful failures to fix first before system deployment. Five different iterations of the BSV algorithm and acquisition functions for the RWD system are illustrated in fig. 10, and table 3 reports the safety validation metrics, noting that \(\hat{\Delta}_{\text{fail}}\) and \(C_{\text{output}}\) are not reported since we do not have access to the true failure boundaries of the system. Results show that we can characterize failure regions, generate a set of likely failures, compute the most-likely failure, and use the surrogate model to estimate the probability of failure of the runway detector in a small number of samples: only 999 in our experiments. Post-analysis could even further characterize the failure boundary by focusing on the likely region centered around the glide slope angle of 3 degrees. We demonstrate the BSV algorithm on a two-dimensional case, but this work could be scaled to higher-dimensional problems that incorporate additional environmental parameter models such as roll angle, time-of-day, weather, and across different airport runways. Binois and Wycoff [49] provide a survey on methods, challenges, and guidelines when modeling high-dimensional problems using Gaussian processes for Bayesian optimization.

Figure 10: Five different iterations of _Bayesian safety validation_ showing the probabilistic surrogate model (red for failure) and acquisition functions (lighter colors indicate maximums) applied to the runway detector. The red and green squares overlaid on the surrogate are the true system evaluations and the red circles in the acquisition functions show the next selected point. Notice the low uncertainty in the concentration of points around the likely glide slope region (the operational models for glide slope and distance to runway are shown as subplots). The likelihood decay in the boundary refinement acquisition is illustrated as the spread of the likelihood influence as it dissipates over time. Finally, the refined failure region using \(N=999\) samples represents the predicted distribution of failures.

## V Conclusion

In this paper, we reframe the black-box safety validation problem as a Bayesian optimization problem and introduce an iterative algorithm, _Bayesian safety validation_ (BSV), to build a probabilistic surrogate model that predicts system failures and uses importance sampling to efficiently estimate the failure probability. In the process, we propose a set of acquisition functions, called _failure search and refinement_ (FSAR), that each help achieve the safety validation tasks, primarily by covering the design space to search for failures and refining likely failure boundaries and regions.
The Gaussian process construction allows us to analytically derive the predicted failure boundaries, and we show that the combination of acquisition functions is important to find more failures, find more likely failures, and minimize error in the failure probability estimate. Primarily interested in cases where the black-box system only outputs a binary indication of failure, we also show that our method works well in the less restrictive case where the system outputs a real-valued measure of failure confidence, severity, or distance. As a real-world example, this technique was applied to validate an image-based neural network runway detection system in simulation. Alongside traditional DO-178C procedures [50], this work is currently being used to supplement the FAA certification process of an autonomous cargo aircraft [44]. Extensions of this work include investigating efficient proposal distributions when scaling to higher input dimensions. The use of a simulator allows us to quickly assess the performance of systems, yet validating that the simulator correctly captures reality is an important research challenge being addressed by the sim-to-real community [51]. For the runway detection case, one way to perform this validation would be to run BSV and select a representative subset of safe-to-fly points to flight test, then compare their outputs. We emphasize that the exact values of the most-likely failure likelihood and the estimated probability of failure are largely dependent on the choice of operational model and learning these models from collected flight data would provide a more realistic understanding of the true probability of failure. This work is open sourced and available at [https://github.com/sisl/BayesianSafetyValidation.jl](https://github.com/sisl/BayesianSafetyValidation.jl).

## Appendix

### Open-Source Interface

This work has been open sourced2 as a Julia package3 to be applied to other black-box systems and is intended to fit into the suite of safety validation tools when considering autonomous aircraft certification. To extend to another system, a user can implement the following interface:

Footnote 2: The package and experiment code are available at [https://github.com/sisl/BayesianSafetyValidation.jl](https://github.com/sisl/BayesianSafetyValidation.jl).

Footnote 3: We make use of the GaussianProcesses.jl package [52].

```
abstract type SystemParameters end
function reset() end
function initialize() end
function generate_input(sample::Vector)::Input end
function evaluate(input::Input)::Vector{Bool} end
```

Below is an example of setting up a system with a two-dimensional operational model, running the Bayesian safety validation algorithm to learn the surrogate, and then computing the three safety validation tasks.

```
using BayesianSafetyValidation

system_params = RunwayDetectionSystemParameters() # defined by user as <: SystemParameters
px1 = OperationalParameters("distance", [0.1, 4], TruncatedNormal(0, 1.0, 0, 4))
px2 = OperationalParameters("slope", [1, 7], Normal(3, 0.5))
model = [px1, px2]

surrogate  = bayesian_safety_validation(system_params, model; T=100)
X_failures = falsification(surrogate.x, surrogate.y)
ml_failure = most_likely_failure(surrogate.x, surrogate.y, model)
p_failure  = p_estimate(surrogate, model)
```

## Acknowledgments

We would like to thank Jean-Guillaume Durand and Anthony Corso for their thoughtful and continuous insights into this research. We thank Rob Timpe and Alexander Bridi for the development of the runway detection neural network. We also thank Harrison Delecki for the inspiration for fig. 1.
This work was supported by funding from Xwing, Inc.
2310.06499
Non-relativistic torque and Edelstein effect in noncollinear magnets
The Edelstein effect is the origin of the spin-orbit torque: a current-induced torque that is used for the electrical control of ferromagnetic and antiferromagnetic materials. This effect originates from the relativistic spin-orbit coupling, which necessitates utilizing materials with heavy elements. Here we show that in magnetic materials with non-collinear magnetic order, the Edelstein effect and consequently also a current-induced torque can exist even in the absence of the spin-orbit coupling. Using group symmetry analysis, model calculations, and realistic simulations on selected compounds, we identify large classes of non-collinear magnet candidates and demonstrate that the current-driven torque is of similar magnitude as the celebrated spin-orbit torque in conventional transition metal structures. We also show that this torque can exist in an insulating material, which could allow for highly efficient electrical control of magnetic order.
Rafael González-Hernández, Philipp Ritzinger, Karel Výborný, Jakub Železný, Aurélien Manchon
2023-10-10T10:15:58Z
http://arxiv.org/abs/2310.06499v1
# Non-relativistic torque and Edelstein effect in noncollinear magnets

###### Abstract

The Edelstein effect is the origin of the spin-orbit torque: a current-induced torque that is used for the electrical control of ferromagnetic and antiferromagnetic materials. This effect originates from the relativistic spin-orbit coupling, which necessitates utilizing materials with heavy elements. Here we show that in magnetic materials with non-collinear magnetic order, the Edelstein effect and consequently also a current-induced torque can exist even in the absence of the spin-orbit coupling. Using group symmetry analysis, model calculations, and realistic simulations on selected compounds, we identify large classes of non-collinear magnet candidates and demonstrate that the current-driven torque is of similar magnitude as the celebrated spin-orbit torque in conventional transition metal structures. We also show that this torque can exist in an insulating material, which could allow for highly efficient electrical control of magnetic order.

## I Introduction

In materials and heterostructures with spin-orbit coupling, the interconnection between the spin and momentum degrees of freedom of the electronic Bloch states underscores a rich landscape of microscopic "spin-orbitronics" phenomena, such as the anomalous Hall effect [1], anisotropic magnetoresistance [2], the spin Hall effect [3], the Dzyaloshinskii-Moriya interaction, and spin-orbit torques [4; 5]. To maximize these effects, materials displaying reasonably large spin-orbit coupling are necessary, which implies using metals with large atomic numbers Z, such as Pt, W, Bi, etc. Some of these elements are, however, scarce, expensive, and environmentally unfriendly. In addition, arbitrarily large spin-orbit coupling does not necessarily lead to arbitrarily large spin-orbitronics phenomena [6; 7] because of the competition with crystal field and exchange. Contrary to common conception, though, spin-orbit coupling is not a mandatory ingredient to obtain spin-momentum locking. In fact, as noticed by Pekar and Rashba in the mid-sixties [8], electronic states in materials with a spatially inhomogeneous magnetization display a spin texture in momentum space that shares similarities with the one obtained through spin-orbit coupling. In other words, non-collinear magnetism mimics spin-orbit coupling to some extent and can support a number of phenomena that are well known in spin-orbit coupled materials, such as electric-dipole spin resonance [8; 9], the topological Hall effect [10; 11; 12], the spin Hall effect [13], and the magnetic spin Hall effect [14; 15], the latter being specific to magnetic materials. It is therefore natural to wonder whether another hallmark of spin-orbit coupled materials, the Edelstein effect [16; 17] (also called the Rashba-Edelstein effect, the inverse spin-galvanic effect, or the magneto-electric effect), and its associated spin-orbit torque can also be achieved in spin-orbit-free non-collinear magnets. The Edelstein effect refers to the generation of a non-equilibrium spin density by an applied electric field in non-centrosymmetric semiconducting or metallic materials and heterostructures with spin-orbit coupling. The magnitude of the nonequilibrium spin density is governed by the competition between the spin-orbit coupling energy and the crystal field energy associated with inversion symmetry breaking.
In magnetic materials, the spin-momentum locking is governed by the magnetic exchange between localized and itinerant electrons, rather than by the atomic spin-orbit coupling, suggesting that a large Edelstein effect can be obtained in non-centrosymmetric magnetic materials. A possible advantage of such a mechanism is that it does not require the presence of heavy elements, and it could exist even in materials with negligible spin-orbit coupling, such as organic magnetic materials. In addition, since the magnitude of the Edelstein effect is directly related to the magnetic configuration of the material, it should be highly tunable using an external magnetic field. A first step towards the realization of this non-relativistic Edelstein effect was established recently in a model non-collinear coplanar kagome antiferromagnet with in-plane broken inversion symmetry [18]. In this particular case, the current-driven spin density is polarized out of plane. What makes this Edelstein effect particularly appealing is that the nonequilibrium spin density can exert a torque on the local magnetization that is responsible for generating this spin density. This feature suggests that current-driven dynamics, high-frequency excitation, and reversal might exhibit properties fundamentally different from those usually observed in spin-transfer and spin-orbit torque devices. We have recently noticed such a torque in a work studying spin-transfer torque in magnetic junctions composed of centrosymmetric non-collinear antiferromagnets [19]. Here, we study the non-relativistic Edelstein effect in non-collinear systems in detail, focusing on systems where the spin-orbit torque is normally studied, i.e., bulk non-centrosymmetric magnets and heterostructures with current flowing parallel to the interface. We demonstrate that wide classes of antiferromagnets lacking a center of inversion can support the current-driven Edelstein effect in the absence of spin-orbit coupling, and therefore also current-driven spin torques. We establish general symmetry principles to discover new materials, propose selected promising candidates, demonstrate and quantify the effect in specific materials, and extend the idea to the case of magnetic multilayers. We also show the existence of non-relativistic antisymmetric spin textures in reciprocal space, which are in some cases directly related to the Edelstein effect, and discuss their general symmetry properties. We implemented an algorithm for determining the symmetry of the non-relativistic Edelstein effect as well as other non-relativistic phenomena, and we released it within an open-source code. Remarkably, we show that the non-relativistic Edelstein effect can also be present in insulating materials. This could allow for controlling the magnetic order by a voltage in the absence of any Ohmic conduction, resulting in a much higher efficiency than the conventional current-induced torques.

## II General principles

### Conditions for an antisymmetric spin texture

In non-magnetic materials lacking inversion symmetry, the relativistic Edelstein effect is associated with an antisymmetric spin texture in the reciprocal space [20]. These spin textures arise from the spin-momentum locking imposed by the spin-orbit coupling and are characterized by a spin direction that varies in momentum space. In the absence of spin-orbit coupling and in the presence of non-collinear magnetism, one expects non-relativistic analogs of the antisymmetric spin textures.
Therefore, before addressing the non-relativistic Edelstein effect and its associated torque, we first consider the conditions of the emergence of such antisymmetric spin textures. Recently, spin textures in the absence of spin-orbit coupling have been studied in non-collinear [14; 18; 21] as well as in collinear magnetic materials [21; 22; 23; 24; 25]. In collinear systems, however, the direction of spin is _fixed_ and only the magnitude and sign of the spin-splitting varies in momentum space. In addition, most of the non-relativistic spin-textures studied so far (with the exception of Ref. [18]) are typically symmetric in momentum \(\mathbf{k}\), \(\mathbf{S}_{\mathbf{n}\mathbf{k}}=\mathbf{S}_{n-\mathbf{k}}\), \(n\) being the band index, which forbids the realization of the non-relativistic Edelstein effect at the level of the magnetic unit cell. In the absence of relativistic spin-orbit coupling, the spin and orbital degrees of freedom are decoupled, which also means that the spin is not coupled to the lattice. In such a case the symmetry of magnetic systems is described by the so-called spin space groups [26; 27]. In addition to crystallographic symmetry operations that form the magnetic space groups, which describe the relativistic symmetry of magnetic systems, the spin space groups contain also pure spin rotations. Elements of the spin space groups can be written in the form \(\{R_{s}||R|\boldsymbol{\tau}\}\), where \(R_{s}\) denotes the spin-rotation, \(R\) is a crystallographic point group operation, i.e. a proper or improper rotation, and \(\boldsymbol{\tau}\) is a translation. We denote symmetry operations that contain time-reversal as \(\{R_{s}||R|\boldsymbol{\tau}\}^{\prime}\). In a 3D periodic system, the rules for the existence of spin-splitting are simple to determine. For an arbitrary \(\mathbf{k}\)-point (that is, a \(\mathbf{k}\)-point lying away from any high-symmetry lines or planes), the only symmetry operations that can keep the spin invariant are the combined space-inversion and time-reversal (the so-called \(P\mathcal{T}\) symmetry), a pure spin rotation, translation, or any combination of these symmetry operations. In a \(P\mathcal{T}\) symmetric system, the bands with opposite spin must be degenerate, which is known as Kramers degeneracy and holds even in the presence of spin-orbit coupling. If a pure spin rotation is present, the spin of all states must lie along the spin-rotation axis. If more than one spin rotation with different spin axes is present, this cannot be satisfied without a spin degeneracy. This can also be seen from the fact that spin rotations along different axes do not commute. Since translation does not change spin, the same conclusions apply to symmetry operations that contain translation. Thus spin-splitting exists in all systems, except those that have a \(P\mathcal{T}\) symmetry or two spin rotation axes. In ferromagnetic systems, spin splitting can exist anywhere in the Brillouin zone since no symmetry operations connecting states with opposite spin exist. In antiferromagnetic materials, there are specific lines or planes where the bands with opposite spin must be degenerate, see Ref. [28]. This has been studied systematically for collinear antiferromagnets [21; 22; 23; 25; 29]. Note that the spin-split collinear antiferromagnets have sometimes been referred to as "alternagnets" [24]. In a collinear magnetic system, any spin rotation along the magnetic axis is a symmetry. 
Thus if there exists another spin rotation around a perpendicular axis, the bands must be degenerate. Such a spin rotation must contain a translation (otherwise the system could not be magnetic) and can only be a \(180^{\circ}\) rotation, which in a collinear system has the same effect on the magnetic order as time-reversal. The existence of such a symmetry thus implies that the system is invariant under a \(\mathcal{T}\boldsymbol{\tau}\) (a combined time-reversal and translation) symmetry. Collinear antiferromagnetic systems can thus be separated into three types. In systems with \(P\mathcal{T}\) symmetry, bands are spin degenerate even with spin-orbit coupling. In systems with \(\mathcal{T}\boldsymbol{\tau}\) but broken \(P\mathcal{T}\) symmetry, spin-splitting occurs only when spin-orbit coupling is present. Finally, in systems with broken \(P\mathcal{T}\) and \(\mathcal{T}\boldsymbol{\tau}\) symmetries, a non-relativistic spin-splitting can be present. In non-collinear magnets, this classification does not hold, since time reversal does not have the same effect as a \(180^{\circ}\) spin rotation, and spin rotations with different angles can also occur.

The existence of antisymmetric spin textures is governed by symmetries that transform \(\mathbf{k}\rightarrow-\mathbf{k}\). This involves in particular the inversion symmetry, which implies \(\mathbf{S}_{n\mathbf{k}}=\mathbf{S}_{n-\mathbf{k}}\). In systems with inversion symmetry, any spin texture must thus be symmetric. In a coplanar system, a combined spin-rotation and time-reversal operation \(\{R_{s}(\hat{\mathbf{n}}_{\perp},180^{\circ})||E|\mathbf{0}\}^{\prime}\) is a symmetry. Here \(R_{s}(\hat{\mathbf{n}}_{\perp},180^{\circ})\) denotes a spin rotation by \(180^{\circ}\) around the direction \(\hat{\mathbf{n}}_{\perp}\) perpendicular to the magnetic plane. As a consequence, it must hold that \(\mathbf{S}_{n\mathbf{k}}^{||}=\mathbf{S}_{n-\mathbf{k}}^{||}\) and \(\mathbf{S}_{n\mathbf{k}}^{\perp}=-\mathbf{S}_{n-\mathbf{k}}^{\perp}\), where \(\mathbf{S}^{||}\) and \(\mathbf{S}^{\perp}\) denote the components of spin parallel and perpendicular to the plane, respectively. In a coplanar system, the only antisymmetric component is thus perpendicular to the magnetic plane, as in the case studied by Hayami et al. [18]. We note that even when all magnetic moments lie within a plane, the electron spins can contain an out-of-plane component. In a collinear magnetic system this is not possible, however, since in this case spin is a good quantum number and all spins must lie along a single axis. There, any non-relativistic spin splitting must thus be symmetric in momentum.

### Conditions for nonequilibrium spin densities and torques

Let us now turn our attention towards the non-relativistic Edelstein effect. The nonequilibrium properties of materials obtained via the Kubo formula are often parsed into so-called Fermi surface and Fermi sea contributions [30; 31], the former being even under \(\mathcal{T}\) and the latter being odd. In the context of the Edelstein effect, the \(\mathcal{T}\)-even Fermi surface contribution is related to the antisymmetric spin texture in momentum space [16; 17], whereas the \(\mathcal{T}\)-odd Fermi sea contribution is related to the Berry curvature in mixed spin-momentum space in the weak scattering limit [32; 33; 34].
As a consequence, in spin-orbit coupled non-centrosymmetric magnetic heterostructures, the Fermi surface contribution produces the so-called field-like torque, whereas the Fermi sea contribution is responsible for the antidamping-like torque [5]. Notice that the \(\mathcal{T}\)-odd Fermi sea contribution can also be nonzero in \(P\mathcal{T}\)-symmetric antiferromagnets with Kramers degeneracy. Furthermore, for manipulating the magnetic order in more complex magnetic systems, especially in antiferromagnets, the nonequilibrium spin density one should be concerned with is the _local_ one, projected on the magnetic sublattices, rather than the _global_ one, at the level of the magnetic unit cell [35; 30]. The \(\mathcal{T}\)-even component of the local Edelstein effect can be understood as originating from the antisymmetric spin texture obtained upon _projecting_ on the local atom. Such a "hidden" texture can again exist even in systems with Kramers degeneracy [36]. Consequently, the symmetry conditions that allow for the existence of an Edelstein effect and a torque on the magnetic order are distinct from those for the existence of antisymmetric spin textures.

The symmetry of the non-relativistic global and local Edelstein effects and the resulting torque can be determined in a similar fashion as for the relativistic one, just replacing the magnetic space groups with spin groups. The key symmetry that needs to be broken for the existence of the Edelstein effect is the inversion symmetry. This holds regardless of the presence of spin-orbit coupling. For the global Edelstein effect, the global inversion symmetry must be broken, whereas for the local Edelstein effect, it has to be broken locally (see, e.g., [36]). This means that for the presence of the Edelstein effect on a given magnetic site, there must be no inversion symmetry operation that would leave this site invariant. As already mentioned, in magnets the Edelstein effect can be nonzero even without spin-orbit coupling, similar, for example, to the spin Hall effect [37]. This applies even to collinear magnets; however, in such a case the induced spin density must be oriented along the magnetic order and does not lead to a torque (although it could play a role, for example, in the presence of magnons). Consequently, we focus here on non-collinear magnetic systems. In the presence of a pure spin rotation \(\{R_{s}(\hat{\mathbf{n}},\theta)||E|\boldsymbol{\tau}\}\), where \(\boldsymbol{\tau}\) could also be zero, the global Edelstein effect must obey \(\mathbf{S}\,||\,\hat{\mathbf{n}}\). In the presence of a spin rotation coupled with time-reversal \(\{R_{s}(\hat{\mathbf{n}},\theta)||E|\boldsymbol{\tau}\}^{\prime}\), it obeys \(\mathbf{S}^{\mathrm{even}}\,||\,\hat{\mathbf{n}}\) and \(\mathbf{S}^{\mathrm{odd}}\perp\hat{\mathbf{n}}\). The same holds for the local Edelstein effect as long as the site is invariant under \(\boldsymbol{\tau}\). Consequently, in coplanar systems, \(\mathbf{S}^{\mathrm{even}}\) must be oriented perpendicular to the magnetic plane and \(\mathbf{S}^{\mathrm{odd}}\) must lie within the plane, for both the global and the local Edelstein effects. To determine the full symmetry of the non-relativistic Edelstein effect, it is necessary to consider all the symmetry operations of the spin group. We have implemented an algorithm for determining all spin group symmetry operations of a given magnetic system within the freely available open-source _Symmetr_ code [38].
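To make the symmetry determination concrete, the following minimal numpy sketch group-averages a response tensor to find its symmetry-allowed form. It is only an illustration of the projection idea, not the Symmetr implementation: the transformation law assumed here is that the induced spin transforms with the spin rotation \(R_{s}\), the electric field with the spatial operation \(R\), and that time-reversal flips the sign of the \(\mathcal{T}\)-odd part.

```python
import numpy as np

def allowed_edelstein_forms(ops, t_parity):
    """Group-average projector for the Edelstein tensor chi (S_i = chi_ij E_j).

    ops      : list of (Rs, R, has_TR) with Rs the 3x3 spin rotation acting on
               the induced spin, R the 3x3 spatial operation acting on the
               electric field, and has_TR a time-reversal flag
    t_parity : +1 to constrain the T-even part of chi, -1 for the T-odd part
    """
    P = np.zeros((9, 9))
    for Rs, R, has_TR in ops:
        sign = float(t_parity) if has_TR else 1.0
        # chi -> sign * Rs @ chi @ R.T acts on vec(chi) as sign * kron(Rs, R)
        P += sign * np.kron(Rs, R)
    P /= len(ops)
    # eigenvectors of the group average with eigenvalue 1 span the allowed forms
    w, v = np.linalg.eig(P)
    return [v[:, i].real.reshape(3, 3) for i in range(len(w))
            if abs(w[i] - 1.0) < 1e-8]

# sanity check: a pure inversion center (Rs = identity, R = -identity) forbids
# any Edelstein response, so no allowed form should survive
ops = [(np.eye(3), np.eye(3), False), (np.eye(3), -np.eye(3), False)]
print(allowed_edelstein_forms(ops, t_parity=+1))   # -> []
```

If the supplied operations form a group, the average is a projector and its fixed points span the allowed tensor components; the inversion example above correctly returns no allowed form.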
The process of determining the non-relativistic symmetry is described in detail in the Supplementary Materials. We have utilized this program to explore the symmetry of non-collinear materials from the MAGNDATA database of magnetic materials. We have analyzed the symmetry of 484 non-collinear magnetic materials and have found that the global Edelstein effect is allowed in 160 of these materials, whereas the local Edelstein effect on a magnetic sublattice is allowed in 355 compounds. The full list is given in the Supplementary Materials [39]. As also described in the Supplementary Materials, the Symmetr code allows one to directly obtain the non-relativistic symmetry of the Edelstein effect (as well as of other phenomena) for materials from MAGNDATA. Among the notable materials whose crystal structure admits both a global and a local (sublattice) torque, we identified ferroelectric antiferromagnets such as orthorhombic DyFeO\({}_{3}\), hexagonal HoMnO\({}_{3}\), YbMnO\({}_{3}\), and LuFeO\({}_{3}\), as well as metallic antiferromagnets such as \(\alpha\)-Mn, Tb\({}_{3}\)Ge\({}_{5}\) and Tb\({}_{5}\)Ge\({}_{4}\). Interestingly, the centrosymmetric metallic antiferromagnets Mn\({}_{5}\)Si\({}_{3}\), Mn\({}_{3}\)(Sn, Ge, As), and Mn\({}_{3}\)CuN do not display a global torque but do admit a local torque on the individual magnetic sublattices. These torques are expected to induce magnetic excitations and potentially magnetic order reversal. In the following, we explicitly compute the global and local Edelstein effects in both LuFeO\({}_{3}\) and Mn\({}_{3}\)Sn as an illustration of both cases.

## III Non-relativistic Edelstein effect in non-collinear antiferromagnets

To calculate the Edelstein effect and torque we use the Kubo formula within the constant relaxation time approximation. We only consider an Edelstein effect linear in the electric field: \(\delta S_{i}=\chi_{ij}E_{j}\), where \(\delta S_{i}\) is the induced spin, \(E_{j}\) is the electric field, and \(\chi_{ij}\) is a response tensor. The \(\mathcal{T}\)-even and \(\mathcal{T}\)-odd components are computed using the Kubo formula derived in Refs. [33; 40], replacing the torque operator with the spin operator,

\[\chi_{ij}^{\text{even}}=-\frac{e\hbar}{\pi}\sum_{\mathbf{k},m,n}\frac{\text{Re}[\bra{\psi_{\mathbf{k}n}}\hat{S}_{i}\ket{\psi_{\mathbf{k}m}}\bra{\psi_{\mathbf{k}m}}\hat{v}_{j}\ket{\psi_{\mathbf{k}n}}]\,\Gamma^{2}}{[(\varepsilon_{F}-\varepsilon_{\mathbf{k}n})^{2}+\Gamma^{2}][(\varepsilon_{F}-\varepsilon_{\mathbf{k}m})^{2}+\Gamma^{2}]}, \tag{1}\]

\[\chi_{ij}^{\text{odd}}=2e\hbar\sum_{\mathbf{k}}\sum_{n}^{\text{occ.}}\sum_{m}^{\text{unocc.}}\text{Im}[\bra{\psi_{\mathbf{k}n}}\hat{S}_{i}\ket{\psi_{\mathbf{k}m}}\bra{\psi_{\mathbf{k}m}}\hat{v}_{j}\ket{\psi_{\mathbf{k}n}}]\times\frac{\Gamma^{2}-(\varepsilon_{\mathbf{k}n}-\varepsilon_{\mathbf{k}m})^{2}}{[(\varepsilon_{\mathbf{k}n}-\varepsilon_{\mathbf{k}m})^{2}+\Gamma^{2}]^{2}}. \tag{2}\]

Here \(\psi_{\mathbf{k}n}\) is the Bloch function of band \(n\), \(\mathbf{k}\) is the Bloch wave vector, \(\varepsilon_{\mathbf{k}n}\) is the band energy, \(\varepsilon_{F}\) is the Fermi energy, \(\hat{v}_{j}\) is the velocity operator, \(e>0\) is the elementary charge, \(\hat{S}_{i}\) is the spin operator, and \(\Gamma\) is a parameter that describes the strength of disorder, related to the relaxation time by \(\tau=\hbar/2\Gamma\). To calculate the local Edelstein effect on a given sublattice, a local spin operator is used instead.
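For concreteness, a minimal numpy sketch of Eqs. (1)-(2) at a single \(\mathbf{k}\)-point is given below. The prefactors \(e\hbar\) and the Brillouin-zone normalization are left out, and the random \(4\times 4\) matrix and the spin/velocity operators in the usage lines are placeholders, not a physical model; the sketch only illustrates how the matrix elements and energy denominators combine.

```python
import numpy as np

def edelstein_kubo_k(evals, evecs, S_op, v_op, e_f, gamma):
    """k-resolved chi^even / chi^odd of Eqs. (1)-(2); the prefactors e, hbar
    and the 1/N_k factor are left out (sum the return values over a k-mesh).
    evals, evecs : eigenvalues and eigenvector columns of H(k)
    S_op, v_op   : spin and velocity operators in the same orbital basis
    """
    S = evecs.conj().T @ S_op @ evecs            # S[n, m] = <psi_n|S|psi_m>
    V = evecs.conj().T @ v_op @ evecs
    SV = S * V.T                                 # SV[n, m] = S_nm * V_mn

    lor = 1.0 / ((e_f - evals) ** 2 + gamma**2)  # Lorentzian broadening factors
    even = -(gamma**2 / np.pi) * np.sum(np.real(SV) * np.outer(lor, lor))

    dE = evals[:, None] - evals[None, :]         # eps_n - eps_m
    occ = evals <= e_f
    pair = np.outer(occ, ~occ)                   # n occupied, m unoccupied
    odd = 2.0 * np.sum(np.imag(SV)[pair]
                       * ((gamma**2 - dE**2) / (dE**2 + gamma**2) ** 2)[pair])
    return even, odd

# toy usage on a random 4-level "Hamiltonian"
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = 0.5 * (H + H.conj().T)
evals, evecs = np.linalg.eigh(H)
Sz = 0.5 * np.kron(np.diag([1.0, -1.0]), np.eye(2))          # spin-z, 2 orbitals
vx = np.kron(np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]))  # stand-in velocity
print(edelstein_kubo_k(evals, evecs, Sz, vx, e_f=0.0, gamma=0.1))
```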
In the limit \(\Gamma\to 0\), Eq. (1) reduces to the semiclassical Boltzmann constant-relaxation-time formula, which scales as \(1/\Gamma\), whereas Eq. (2) reduces to the so-called intrinsic formula, which is \(\Gamma\)-independent and can be understood in terms of the Berry curvature in mixed spin-momentum space [34].

### A non-coplanar 3Q antiferromagnet

An example of a non-relativistic Edelstein effect in a non-collinear _coplanar_ antiferromagnet was recently given by Hayami et al. [18]. In that case, the coplanarity of the magnetic texture forces the current-driven spin density to be oriented perpendicular to the magnetic plane. Here, we adopt a triangular antiferromagnet with a 3Q spin texture, as displayed in Fig. 1(a). This magnetic texture can be stabilized in the presence of 4-spin interactions [41; 42] and hosts the quantum anomalous Hall effect [10; 11; 12]. The 3Q texture is also commonly observed in three-dimensional materials such as \(\gamma\)-FeMn [43] and pyrochlores [44]. We use a simple tight-binding model with a 3Q spin texture to illustrate the physical properties of such systems. The model is given by

\[H=-\sum_{\langle ab\rangle\alpha}t_{ab}c_{a\alpha}^{\dagger}c_{b\alpha}+J\sum_{a,\alpha\beta}(\boldsymbol{\sigma}\cdot\mathbf{m}_{a})_{\alpha\beta}c_{a\alpha}^{\dagger}c_{a\beta}. \tag{3}\]

Here, \(c^{\dagger}\) and \(c\) denote the creation and annihilation operators, respectively; \(a,b\) denote the site indices and \(\alpha,\beta\) the spin indices. The first term is the nearest-neighbor hopping term, with \(t_{ab}\) the hopping magnitude. The second term represents the coupling of the conduction electrons to the on-site magnetic moments. Here \(\mathbf{m}_{a}\) is the magnetic moment direction, \(J\) is the exchange parameter, and \(\boldsymbol{\sigma}\) is the vector of Pauli matrices. We only consider nearest-neighbor hopping. To break the inversion symmetry we use two different hopping magnitudes, as shown in Fig. 1(b). This could be understood as due to the presence of another atom, illustrated in Fig. 1(a). The band structures of the 3Q antiferromagnet are given in Figs. 1(c) and (f), without and with inversion symmetry breaking. In the absence of inversion symmetry breaking, the band structure is doubly degenerate. Breaking the inversion symmetry lifts the band degeneracy [Fig. 1(f)] and results in a spin texture, shown in Fig. 1(e). This spin texture contains both symmetric and antisymmetric components, the latter giving rise to the current-driven Edelstein effect and its torque. When the inversion symmetry is broken, we observe a finite Edelstein effect, as shown in Figs. 1(d) and (g). Several features are worth noticing. First, because the magnetic texture of the 3Q antiferromagnet spans the 3D space, the current-driven spin density possesses all three components, \(S_{x}\), \(S_{y}\) and \(S_{z}\), which strikingly contrasts with the result of Ref. [18]. Second, both \(\mathcal{T}\)-even and \(\mathcal{T}\)-odd components contribute with similar magnitude. Finally, for the set of parameters adopted in this calculation, i.e., exchange and hopping energies of comparable magnitude (\(\Delta=2t=4t^{\prime}=2\) eV), we obtain a nonequilibrium spin density of about \(10^{-11}\,\hbar\)m/V. For the sake of comparison, in a two-dimensional Rashba gas, the nonequilibrium spin density is [17] \(S_{\text{surf}}^{R}/eE=(\alpha_{\text{R}}/\hbar^{2})(m_{0}/\pi\Gamma)\). Taking \(\Gamma=0.1\) eV, \(m_{0}\) being the free electron mass, \(\alpha_{\rm R}=10^{-9}\) eV\(\cdot\)m as the typical Rashba strength expected in transition metal heterostructures [45], and \((3\,\mathrm{\AA})^{2}\) as the unit cell area, the Edelstein effect yields \(\chi_{S}\sim 3.6\times 10^{-11}\,\hbar\)m/V, which is in the same range as our calculations for the two-dimensional 3Q system reported in Fig. 1.
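A hedged sketch of how a Bloch matrix for Eq. (3) can be assembled is shown below. The helper function and the two-site usage example are illustrative assumptions, not the model used in the calculations above; reproducing Fig. 1 would additionally require the four-sublattice 3Q unit cell of the triangular lattice and its full nearest-neighbor bond list with the two hoppings \(t\) and \(t^{\prime}\).

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bloch(k, n_sites, bonds, moments, J):
    """Bloch matrix of Eq. (3) for one magnetic unit cell.

    bonds   : list of (a, b, delta, t): hopping t between sites a and b, with
              b displaced by the Cartesian vector delta (each bond listed once)
    moments : (n_sites, 3) unit vectors m_a of the local moments
    """
    H = np.zeros((2 * n_sites, 2 * n_sites), dtype=complex)
    I2 = np.eye(2)
    for a, b, delta, t in bonds:
        ph = -t * np.exp(1j * np.dot(k, delta))   # spin-diagonal hopping
        H[2*a:2*a+2, 2*b:2*b+2] += ph * I2
        H[2*b:2*b+2, 2*a:2*a+2] += np.conj(ph) * I2
    for a, m in enumerate(moments):               # on-site exchange J sigma.m_a
        H[2*a:2*a+2, 2*a:2*a+2] += J * (m[0]*SX + m[1]*SY + m[2]*SZ)
    return H

# minimal usage: a two-site chain with canted moments
k = np.array([0.3])
bonds = [(0, 1, np.array([0.0]), 1.0),   # intra-cell bond
         (1, 0, np.array([1.0]), 1.0)]   # bond to site 0 of the next cell
m = np.array([[0.0, 0.0, 1.0], [np.sin(2.0), 0.0, np.cos(2.0)]])
print(np.linalg.eigvalsh(h_bloch(k, 2, bonds, m, J=-2.0)))
```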
### A centrosymmetric antiferromagnet: Mn\({}_{3}\)Sn

In antiferromagnets, and in general in more complex magnetic systems, the magnetic dynamics is not determined by the global Edelstein effect, but rather by the local Edelstein effect on each magnetic sublattice. Consequently, in antiferromagnets, it is the local rather than the global inversion symmetry breaking that is necessary for the existence of the Edelstein effect and the current-induced torque [30]. An example of an antiferromagnet with a global inversion symmetry but a local inversion symmetry breaking is the well-known non-collinear antiferromagnet Mn\({}_{3}\)Sn [46; 47]. In this material, the global Edelstein effect vanishes but the local Edelstein effect is allowed on each sublattice, even in the absence of spin-orbit coupling. The crystal and magnetic structure of Mn\({}_{3}\)Sn is given in Fig. 2(c). Mn\({}_{3}\)Sn has six magnetic sublattices, which are composed of three pairs of sites with equivalent moments connected by inversion symmetry. The inversion partners are denoted by \({}^{\prime}\) in Fig. 2(b). Due to the inversion symmetry, the Edelstein effect on the two inversion-connected sites must be opposite. The local Edelstein effect tends to drive the system into a state where the magnetic moments of the inversion-connected sites are not parallel, and thus it acts against the exchange. As such, it is unlikely to reverse the magnetic order, but it can excite different magnon modes. Leaving the rigorous analysis of the magnetic dynamics to future studies, we emphasize that the global inversion symmetry can be broken, for example, by an interface. Then the two sites with the same moments are no longer connected by inversion symmetry and consequently can experience an Edelstein effect of the same sign, enabling the electric manipulation of the magnetic order. We evaluate the Edelstein effect in Mn\({}_{3}\)Sn using ab-initio calculations with and without spin-orbit coupling (see the Methods section for details of the calculation setup). The result of the calculation for \(\Gamma=0.01\) eV is shown in Fig. 2(a). We find a substantial \(\mathcal{T}\)-even and \(\mathcal{T}\)-odd Edelstein effect on all sublattices. Including the spin-orbit coupling does not change the results substantially, similar to previous calculations of the spin Hall effect in this material [13]. Our calculations agree well with the symmetry analysis shown in the Supplementary Materials. Notice that, again, the magnitude of the current-driven spin density is rather large, corresponding to a Rashba strength of \(10^{-9}-10^{-10}\) eV\(\cdot\)m. We note that current-induced switching of Mn\({}_{3}\)Sn has been experimentally observed in Mn\({}_{3}\)Sn/non-magnetic metal heterostructures [48; 49; 50; 51].
The switching has been attributed to the spin Hall effect from the non-magnetic metal layer and to spin-transfer torque and inter-grain spin-transfer torque; however, it is possible that the non-relativistic Edelstein effect also contributes.

Figure 1: The 3Q triangular antiferromagnet: (a) Sketch of the triangular lattice with 3Q non-coplanar configuration of the magnetic moments. The green atoms break the planar inversion symmetry. (b) Top view of the triangular lattice. The hopping parameters of the black and red bonds are \(t\) and \(t^{\prime}\), respectively. (c, f) Band structure for \(t^{\prime}=t\) and \(t^{\prime}=t/2\), respectively. (e) In-plane spin texture in momentum space at energy \(\varepsilon=-5\) eV, corresponding to the red dashed line in panel (f). In the absence of inversion symmetry breaking, \(t^{\prime}=t\), the degenerate bands display compensating spin textures in momentum space. When \(t^{\prime}\neq t\), the band degeneracy is lifted and the spin textures no longer compensate. Notice that a momentum-asymmetric out-of-plane spin texture is also present (not shown). (d, g) \(\mathcal{T}\)-even (d) and \(\mathcal{T}\)-odd (g) contributions for \(t^{\prime}=t/2\) and \(\varepsilon=-4\) eV when rotating the electric field direction in the (x, y) plane. We set \(t=1\) eV, \(\Gamma=0.1\) eV, and the exchange is \(J=-2\) eV.

### A non-centrosymmetric antiferromagnet: LuFeO\({}_{3}\)

As an example of a real non-collinear antiferromagnet that can exhibit a global non-relativistic Edelstein effect and torque, we consider hexagonal LuFeO\({}_{3}\), a multiferroic with antiferromagnetic order. In bulk, LuFeO\({}_{3}\) is typically orthorhombic. However, the hexagonal phase has been stabilized in thin layers [52] and can also be stabilized in the bulk [53]. It has a non-collinear coplanar antiferromagnetic structure with magnetic space group (MSG) #185.201 (P6\({}_{3}\)c'm'), as presented in Fig. 3(a) [52; 53]. The inversion symmetry is broken in this material by the crystal structure, which suggests the possibility of non-relativistic spin torques. The system has a small net moment \(\sim 0.02\,\mu_{\mathrm{B}}\) along the \(z\) direction (weak ferromagnetism). This moment is of relativistic origin, and thus in the absence of spin-orbit coupling the magnetic structure is perfectly compensated. Apart from the magnetic order, hexagonal LuFeO\({}_{3}\) also exhibits a ferroelectric order that is present below \(\sim 1000\,\mathrm{K}\), and the material has attracted considerable attention for its multiferroic properties and the possibility of magneto-electric coupling [52; 53; 54]. The non-relativistic electronic structure is shown in Fig. 3(b). The material is insulating; here we only show the valence bands, which are relevant to our calculations. As can also be seen in Fig. 3(b), the bands are spin-split and thus there is also a non-relativistic spin texture, shown in Fig. 3(c) for two cuts through the Brillouin zone. Due to the coplanarity of the magnetic order, the spin texture is symmetric for the \(S_{x}\) and \(S_{y}\) components and antisymmetric for the \(S_{z}\) component. The \(S_{z}\) component is non-zero but very small. For the calculation of the Edelstein effect, we move the Fermi level into the valence band to simulate doping. Our symmetry analysis in the Supplementary Materials shows that the Edelstein effect is allowed in LuFeO\({}_{3}\) even with no spin-orbit coupling.
Results of the calculation for \(\Gamma=0.01\,\mathrm{eV}\) with and without spin-orbit coupling are given in Fig. 4. We calculate both the global Edelstein effect and the local one for all Fe sublattices. For brevity, though, we only show here the result for the global effect and for one sublattice; the full results are given in the Supplementary Materials. Our calculations reveal a large non-relativistic global and local Edelstein effect, in good agreement with the symmetry analysis. Without spin-orbit coupling, only the \(\mathcal{T}\)-odd component of the global effect is allowed [Fig. 4(b)]. With spin-orbit coupling, the global \(\mathcal{T}\)-even component also appears [Fig. 4(a)]. We find that the effect of spin-orbit coupling is quite small for the \(\mathcal{T}\)-odd component, but fairly large for the \(\mathcal{T}\)-even component. For the local effect [Fig. 4(c, d)], both \(\mathcal{T}\)-even and \(\mathcal{T}\)-odd components are allowed.

An important remark is in order. The \(\mathcal{T}\)-even component has to vanish within the gap since it is a Fermi surface property [see Eq. (1)]. The global \(\mathcal{T}\)-odd component vanishes within the gap as well [Fig. 4(b)]. However, we find that the local \(\mathcal{T}\)-odd components on the Fe atoms are non-zero within the gap, as shown in Fig. 4(d) and in the Supplementary Materials. Only the \(xz\) and \(yz\) components are non-zero within the gap, reaching a constant value. Such a result is intriguing, as within the gap there is no Ohmic conduction and thus heat dissipation is absent. This could consequently allow for electric-field control of the magnetic order in the absence of Ohmic dissipation. The existence of a spin-orbit torque in an insulator was previously studied in topological materials [34; 55]. Our results are similar, except that in the case of LuFeO\({}_{3}\) the origin of the torque is non-relativistic, due to the coexistence of the non-collinear magnetic order with inversion symmetry breaking. We point out that the torque is not quantized, contrary to the quantized magnetoelectric effect in topological insulators [56], and is therefore unlikely to be of topological origin. We are also not aware of any topological properties of LuFeO\({}_{3}\). The \(\mathcal{T}\)-odd torque is governed by the Berry curvature in mixed spin-momentum space and involves electrically-driven interband transitions, resulting in a finite (but not quantized) value in the gap. We note that in metals the torques induced by an electric field are accompanied by an electric current and are thus often referred to as "current-induced" torques. However, even in metals, the torques are in fact due to the electric field rather than to the current flow, although the torque cannot exist without Ohmic conduction. In non-centrosymmetric insulating magnets, though, Ohmic conduction is suppressed while the electrically-driven torque remains sizable, as demonstrated in LuFeO\({}_{3}\). This opens promising perspectives for the dissipation-free electrical control of magnetization.

Figure 2: (a), (b) Calculation of the \(\mathcal{T}\)-even (a) and \(\mathcal{T}\)-odd (b) local Edelstein effect in Mn\({}_{3}\)Sn with (dashed lines) and without (solid lines) spin-orbit coupling. The individual lines correspond to different tensor components, see Eqs. (1)-(2). (c) The crystal and magnetic structure of Mn\({}_{3}\)Sn.
### Non-centrosymmetric heterostructures

The examples we have discussed so far all have inversion symmetry broken (globally or locally) in the bulk of their crystal structure. Such a constraint, however, severely restricts the Edelstein effect to the materials listed in the Supplementary Materials. For this reason, we propose to exploit the broken inversion symmetry taking place at the interface between a non-collinear antiferromagnet and an adjacent metal. Such heterostructures are commonly utilized for spin-orbit torque, where the ferro- or antiferromagnet is typically interfaced with a heavy-element metal such as platinum [5]. This simple but instrumental configuration allows for observing the spin-orbit torque in a wide variety of systems and enables interfacial engineering of the spin-orbit torque properties. The same concept can be applied to the non-relativistic Edelstein effect. When a non-collinear magnetic material with inversion symmetry is interfaced with a different material, the broken inversion symmetry can result in a non-relativistic Edelstein effect, which in turn generates a torque on the magnetic order. We illustrate this concept using the example of the well-known non-collinear antiferromagnet Mn\({}_{3}\)Ir, whose crystal and magnetic structures are displayed in Fig. 5(a). In this material, each magnetic site is an inversion center and thus no Edelstein effect is allowed in the bulk. To break the inversion symmetry we consider a thin layer of Mn\({}_{3}\)Ir interfaced with a thin layer of a non-magnetic material. When Mn\({}_{3}\)Ir is grown along the [001] direction, the non-relativistic Edelstein effect in such a heterostructure is only allowed for an electric field along the [001] direction. In such a case no electric current can flow, however. Thus we instead consider Mn\({}_{3}\)Ir grown along the [111] direction. For this orientation, the symmetry is lowered and the Edelstein effect is allowed for an electric field oriented along the interface. We consider a structure composed of 12 atomic layers of Mn\({}_{3}\)Ir, as shown in Fig. 5(b). The individual atomic layers are shown in Fig. 5(c).

Figure 3: (a) The crystal and magnetic structure of hexagonal LuFeO\({}_{3}\). (b) LuFeO\({}_{3}\) band structure without spin-orbit coupling. The color denotes the \(S_{x}\) projection. \(X\) points represent opposite \(k_{x}\) coordinates, \(Z\) points represent opposite \(k_{z}\) coordinates, and \(P\) points represent opposite \(k_{x},k_{y},k_{z}\) coordinates in the Brillouin zone. Antisymmetric (_odd_ in \(\mathbf{k}\)) and symmetric (_even_) spin splitting is labeled along the corresponding \(k\)-paths. (c) The spin texture of LuFeO\({}_{3}\) at the Fermi surface for a Fermi level 0.45 eV below the top of the valence band. We plot the spin texture for two planes corresponding to \(k_{z}=0.25\) Å\({}^{-1}\) and \(k_{z}=-0.25\) Å\({}^{-1}\). The center of the figure lies at the \(\Gamma\) point. The arrows show the spin, and we use the color to highlight the \(z\)-component of the spin since it would be hard to distinguish otherwise.

Figure 4: Edelstein effect calculation in LuFeO\({}_{3}\) with (dashed lines) and without spin-orbit coupling (solid lines). Here zero energy corresponds to the top of the valence band. (a) \(\mathcal{T}\)-even component of the total Edelstein effect. (b) \(\mathcal{T}\)-odd component of the total Edelstein effect. (c) \(\mathcal{T}\)-even component of the local Edelstein effect. (d) \(\mathcal{T}\)-odd component of the local Edelstein effect.
We utilize a simple tight-binding model that is not meant to give quantitative predictions, but rather to confirm that the effect can exist and to illustrate its basic properties. The model is analogous to the one we have used for the 3Q antiferromagnet. It is composed of \(s\) electrons on each site with nearest-neighbor hopping and exchange coupling to the atomic magnetic moments. Similar models have been utilized to demonstrate other properties of Mn\({}_{3}\)Ir and similar antiferromagnets, such as the non-relativistic spin currents [13] or the anomalous Hall effect [57]. We do not include any spin-orbit coupling in the model. The Hamiltonian is given by Eq. (3), where we consider nearest-neighbor hopping \(t=1\) eV and magnetic exchange \(J=1.7\) eV on the Mn atoms (the gray and gold atoms, representing the non-magnetic layer and the Ir atoms, respectively, are not endowed with magnetic moments). The calculated Edelstein effect is shown in Fig. 5(c) for each sublattice and atomic layer. For this calculation we have used \(\Gamma=0.01\,\mathrm{eV}\) and \(\varepsilon_{F}=0\,\mathrm{eV}\). Our calculations are fully in agreement with the symmetry analysis, shown in the Supplementary Materials. Both \(\mathcal{T}\)-even and \(\mathcal{T}\)-odd components are present. We find the largest effect close to the interfaces, although the current-driven spin density remains sizable in the center of the Mn\({}_{3}\)Ir layer. A large effect is found both at the interface with the non-magnetic layer and at the top surface, which illustrates that the presence of another layer is in principle not necessary. In Fig. 5, we only give the results for the tensor components that correspond to an in-plane electric field. Interestingly, the out-of-plane components do not vanish even though no current can flow in the out-of-plane direction, similarly to the case of LuFeO\({}_{3}\). In this case, however, since the system is metallic, the out-of-plane electric field is screened and thus the effect is hard to observe in practice.

## IV Discussion and Conclusion

The torque induced by the non-relativistic Edelstein effect shares important similarities with the conventional spin-orbit torque. Both torques are electrically driven self-induced torques that necessitate inversion symmetry breaking. The key difference, though, is that the torque due to the non-relativistic Edelstein effect does not originate from spin-orbit coupling, but rather from the non-collinear magnetic order. As a consequence, the microscopic origin of these two torques is quite distinct. In the non-relativistic limit, spin is conserved and the torque is directly associated with a spin current: the torque corresponds to spin sources [19] and can be understood as a local transfer of spin angular momentum within the magnetic unit cell. In the present work, we have computed the non-relativistic Edelstein effect in four different systems, all displaying inversion symmetry breaking, either in the magnetic unit cell or locally. It is quite remarkable that all the examples discussed here display a sizable electrically induced spin density, in spite of the absence of spin-orbit coupling.
For the sake of comparison, in our previous calculations of the _relativistic_ Edelstein effect in the collinear antiferromagnet Mn\({}_{2}\)Au, we found a magnitude of \(\chi_{S}\sim 4.3\times 10^{-11}\,\mathrm{\hbar m/V}\) for \(\Gamma=0.01\,\mathrm{eV}\)[30], corresponding to a Rashba strength of \(10^{-9}\) eV\(\cdot\)m, similar to the magnitude found in our realistic simulations on LuFeO\({}_{3}\) and Mn\({}_{3}\)Sn. Further systematic studies are necessary to determine the conditions for a maximal non-relativistic Edelstein effect.

A central feature of the non-relativistic nature of the torque is its dependence on the magnetic order. In most magnetic systems, the magnetic exchange is much larger than any other magnetic interactions or torques acting on the system. Hence, during the dynamics of the magnetic order, the angles between the individual magnetic moments stay approximately unchanged. Therefore, the dynamics of the magnetic order is described by an overall rotation of all magnetic moments and a small canting. In the non-relativistic limit (ignoring the small canting), the rotated states are connected by a spin rotation and, consequently, the corresponding torques must also be transformed by this spin rotation. Specifically, any torque acting on magnetic moment \(\mathbf{M}_{i}\) can be written as \(\mathbf{T}_{i}=\mathbf{M}_{i}\times\mathbf{B}_{i}\), where \(\mathbf{B}_{i}\) is an effective magnetic field. When the magnetic moments are rotated by a rotation \(R\), the torque reads \(\mathbf{T}_{i}=R\mathbf{M}_{i}\times R\mathbf{B}_{i}\). This is quite distinct from the conventional spin-orbit torque, for which the two most important terms are the field-like torque, in which \(\mathbf{B}_{i}\) is independent of \(\mathbf{M}\), and the anti-damping torque, in which \(\mathbf{B}_{i}\sim\mathbf{M}_{i}\times\mathbf{p}\), \(\mathbf{p}\) being some constant direction [5]. Because of the dependence of the non-relativistic torque on the magnetic order, it may be difficult to realize reversible switching, since there can be no magnetic configuration for which the effective field \(\mathbf{B}_{i}\) vanishes. This might not be such a limitation in practice, however, since some spin-orbit coupling is always present, which may enable deterministic switching even in cases where the non-relativistic torque is dominant. Furthermore, deterministic switching could be achieved by using field-assisted switching or precise pulse timing. In the presence of antiferromagnetic domain walls, the non-relativistic torque could provide an additional source of spin current and therefore enhance or quench the domain-wall mobility, depending on the wall configuration. In fact, the very dependence of the torque on the magnetic ordering makes the interplay between the flowing electrons and the magnetic order particularly rich and, as such, the non-relativistic torque is well adapted to excite magnetic modes and self-oscillations, which, in antiferromagnets, are particularly appealing for THz applications.

###### Acknowledgements.

We acknowledge the Grant Agency of the Czech Republic Grant No. 22-21974S. A. M. acknowledges support from the Excellence Initiative of Aix-Marseille Université - A*Midex, a French "Investissements d'Avenir" program.

## Methods

The DFT calculations use the VASP code [58], and we use the Wannier90 code [59] to construct the Wannier Hamiltonian that serves as an input to the linear response calculations. For LuFeO\({}_{3}\) we use a 9\(\times\)9\(\times\)3 k-point mesh and a 520 eV cutoff energy.
For the Wannierization we use the \(s\) and \(d\) orbitals for Fe, the \(s\) and \(p\) orbitals for O, and a frozen energy window of \(\varepsilon_{F}\pm 3\) eV. For Mn\({}_{3}\)Sn we use an 11\(\times\)11\(\times\)11 k-point mesh and set the cutoff energy to 520 eV. For the Wannierization we use the \(s\), \(p\), and \(d\) orbitals for the Mn atoms, the \(s\) and \(p\) orbitals for the Sn atoms, and we set the frozen energy window to \(\varepsilon_{F}\pm 2\) eV. For the linear response calculations we use the Lines code [60]. This code uses a Wannier or tight-binding Hamiltonian as an input. This Hamiltonian is then Fourier-transformed to a dense mesh in reciprocal space, which is used for evaluating the Kubo formulas as described in Ref. [14].
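A minimal sketch of this Fourier-interpolation step is shown below, assuming the \(H(\mathbf{R})\) blocks and lattice vectors have already been parsed from the Wannier90 output (real `_hr.dat` files also carry degeneracy weights, which are omitted here):

```python
import numpy as np

def wannier_interpolate(HR, R_vecs, k_frac):
    """Fourier-interpolate a real-space Wannier Hamiltonian to one k-point.

    HR     : (nR, nw, nw) hopping blocks H(R) between Wannier functions
    R_vecs : (nR, 3) lattice vectors in fractional coordinates
    k_frac : (3,) k-point in fractional coordinates
    """
    phases = np.exp(2j * np.pi * (R_vecs @ k_frac))   # e^{i k . R}
    Hk = np.tensordot(phases, HR, axes=(0, 0))        # sum_R e^{ikR} H(R)
    return 0.5 * (Hk + Hk.conj().T)                   # enforce hermiticity

# toy demo: an on-site block plus symmetric nearest-neighbor blocks; on a dense
# k-mesh, each interpolated Hk would be diagonalized with np.linalg.eigh and
# fed into the Kubo formulas of Eqs. (1)-(2)
HR = np.array([np.diag([0.0, 1.0]), 0.3 * np.ones((2, 2)), 0.3 * np.ones((2, 2))])
R_vecs = np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0]])
print(np.linalg.eigvalsh(wannier_interpolate(HR, R_vecs, np.array([0.25, 0.0, 0.0]))))
```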
2306.03207
H2-Mapping: Real-time Dense Mapping Using Hierarchical Hybrid Representation
Constructing a high-quality dense map in real-time is essential for robotics, AR/VR, and digital twins applications. As Neural Radiance Field (NeRF) greatly improves the mapping performance, in this paper, we propose a NeRF-based mapping method that enables higher-quality reconstruction and real-time capability even on edge computers. Specifically, we propose a novel hierarchical hybrid representation that leverages implicit multiresolution hash encoding aided by explicit octree SDF priors, describing the scene at different levels of detail. This representation allows for fast scene geometry initialization and makes scene geometry easier to learn. Besides, we present a coverage-maximizing keyframe selection strategy to address the forgetting issue and enhance mapping quality, particularly in marginal areas. To the best of our knowledge, our method is the first to achieve high-quality NeRF-based mapping on edge computers of handheld devices and quadrotors in real-time. Experiments demonstrate that our method outperforms existing NeRF-based mapping methods in geometry accuracy, texture realism, and time consumption. The code will be released at: https://github.com/SYSU-STAR/H2-Mapping
Chenxing Jiang, Hanwen Zhang, Peize Liu, Zehuan Yu, Hui Cheng, Boyu Zhou, Shaojie Shen
2023-06-05T19:28:34Z
http://arxiv.org/abs/2306.03207v2
# H\({}_{2}\)-Mapping: Real-time Dense Mapping Using Hierarchical Hybrid Representation

###### Abstract

Constructing a high-quality dense map in real-time is essential for robotics, AR/VR, and digital twins applications. As Neural Radiance Field (NeRF) greatly improves the mapping performance, in this paper, we propose a NeRF-based mapping method that enables higher-quality reconstruction and real-time capability even on edge computers. Specifically, we propose a novel hierarchical hybrid representation that leverages implicit multiresolution hash encoding aided by explicit octree SDF priors, describing the scene at different levels of detail. This representation allows for fast scene geometry initialization and makes scene geometry easier to learn. Besides, we present a coverage-maximizing keyframe selection strategy to address the forgetting issue and enhance mapping quality, particularly in marginal areas. To the best of our knowledge, our method is the first to achieve high-quality NeRF-based mapping on edge computers of handheld devices and quadrotors in real-time. Experiments demonstrate that our method outperforms existing NeRF-based mapping methods in geometry accuracy, texture realism, and time consumption. The code will be released at [https://github.com/SYSU-STAR/H2-Mapping](https://github.com/SYSU-STAR/H2-Mapping).

## I Introduction

Using robots to build highly-detailed dense maps in real-time benefits advanced autonomous robot navigation, AR/VR, and digital twins applications. These maps enable robots to perform high-level tasks and provide humans with real-time feedback on the environment, allowing them to adjust the robot's tasks promptly as needed. Besides, high-fidelity maps serve as critical assets for AR/VR and digital twins. The automatic and faithful recreation of environments in real-time using robots can be more efficient and time-saving than manual or offline reconstruction methods. To be suitable for real-time and high-quality robot mapping in unknown environments with limited onboard computation power, a mapping system must meet four key requirements: (1) adaptability to growing scenes, allowing the robot to dynamically expand the map without prior knowledge of the scene; (2) a high level of detail; (3) real-time capability and high memory efficiency; and (4) novel view synthesis ability, which allows rendering high-quality images from views apart from the sparse input views. This is particularly important for creating scenes for AR/VR applications.

In robotics, mapping has been studied for decades. Previous works utilize explicit scene representations like occupancy grids [1], TSDF [2, 3, 4, 5, 6, 7], surfels [8, 9], and meshes [10] to achieve real-time performance. However, these methods face challenges in balancing memory consumption and mapping accuracy [11] and are weak in novel view synthesis. In recent years, implicit representations have gained popularity following the introduction of NeRF [12]. Several works [13, 14, 15] employ NeRF to overcome limitations associated with explicit representations and achieve better mapping results in various aspects. These NeRF-based methods can produce high-fidelity reconstructions using less memory and generate high-quality images from novel views by continuously querying the scene attributes. However, implicit representations describe the scene as high-dimensional features and neural networks that lack physical meaning, resulting in long training times.
As a result, these methods cannot run in real-time even on the most powerful edge computers like the AGX Orin (as evaluated in Sec. IV-A6). Aiming to design a real-time and high-quality robot mapping method that fulfills the four requirements mentioned above, we propose a NeRF-based mapping method using a hierarchical hybrid representation. Our approach accelerates the optimization of the implicit representation with the aid of an easy-to-optimize explicit representation, describing the scene at different levels of detail. For the coarse scene geometry, we describe it with explicit octree SDF priors. Specifically, we incrementally build a sparse voxel octree with a large voxel size, where we store the optimizable SDF of each leaf node's vertex. To represent geometry details and texture, we use implicit multiresolution hash encoding [16] to encode high-resolution scene properties in a memory-efficient way. By using octree SDF priors to capture coarse geometry efficiently, the multiresolution hash encoding can focus solely on the residual geometry, which is much simpler to learn than the complete geometry, thereby improving the geometry accuracy and convergence rate. To further speed up, we leverage a simple yet effective method to initialize the octree SDF priors. We project the voxel vertices to the depth image and calculate the associated SDF values. This initialization is based on the observation that a single measurement is usually sufficient to provide a promising estimate of the coarse SDF values. Therefore, such a representation can obtain accurate geometry early on, which accelerates the optimization of texture with higher fidelity. Besides, to realize higher mapping accuracy, we propose a coverage-maximizing keyframe selection strategy to address the crucial forgetting issue in the online mapping task. Our method avoids redundant sample calculations across all keyframes [13] and ensures quality in marginal areas, without increasing the number of training samples [15]. Our method achieves faster and higher-quality NeRF-based mapping. To summarize, our contributions are as follows:

* A hierarchical hybrid representation with an effective initialization technique that enables real-time dense mapping with high-fidelity details and dynamic expansion ability, even on edge computers.
* An effective coverage-maximizing keyframe selection strategy that mitigates the forgetting issue and improves quality, especially in marginal areas.
* Extensive experiments showing that our method achieves superior mapping results with less runtime compared to existing NeRF-based mapping methods. To the best of our knowledge, our method is the first to run a NeRF-based mapping method onboard in real-time.

Fig. 1: We tested our methods on a handheld device (a) and quadrotors (b). Our method builds a high-quality map in real-time on edge computers and can support robotic applications.

## II Related Works

### _Explicit Dense Mapping_

Various explicit representations have been used to store scene information for dense mapping. Octomap [1] uses probabilistic occupancy estimation to represent occupied, free, and unknown space. As a pioneer in using SDF for dense mapping, KinectFusion [3] leverages volumetric SDF to enable real-time tracking and mapping. Subsequent works improve the scalability [4, 6], the efficiency [2, 7], and the global consistency [5]. Moreover, [8] stores surfels to represent the environment, and [10] represents the robot's surroundings as a watertight 3D mesh.
These methods are well known for their fast processing speed, which can be attributed to the physical meaning of explicit representations that makes them easy to optimize. However, they require large amounts of memory to handle highly-detailed mapping [11] and are incapable of realistic rendering from novel views.

### _Implicit Dense Mapping_

Implicit representations utilize latent features and neural networks to represent a 3D scene in a high-dimensional space. DeepSDF [17] and Occupancy Networks [18] have shown the potential of implicit representations to model geometry. Recently, NeRF [19] further shows promising results in realistic novel view synthesis from sparse input views. Numerous studies [13, 14, 20, 15] have been inspired by NeRF [19] and utilize implicit representations for incremental dense mapping. These methods achieve more compact and accurate results than explicit representations. The NeRF-based mapping pipeline consists of two main components: (1) scene representation and (2) keyframe selection strategy.

#### II-B1 Scene representation

iMap [13] demonstrates, for the first time, that an MLP can serve as the only scene representation. To overcome the limited representation capacity of a single MLP, NICE-SLAM [14] introduces multi-resolution dense grids to store encoded features of the scene, and MLPs are used to unfold the hidden information. But the pre-allocated grids make NICE-SLAM less scalable and memory-inefficient. Vox-Fusion [15], instead, only allocates voxels to the area containing the surface, forcing the network to learn more details in those regions. Nonetheless, due to the difficulty of optimizing implicit representations, it is challenging for these methods to meet the real-time requirements of robotics applications. In contrast, our method utilizes a hierarchical hybrid representation for acceleration and accuracy improvement. This approach lets the implicit representation handle only the residual geometry and texture, taking advantage of the explicit structure. Optimizing the residual geometry is generally easier and faster.

In order to speed up, some previous works aim to accelerate geometry convergence by incorporating geometry priors. INGeo [21], for instance, scales up the initial density prediction by a factor to increase density as it approaches the surface. However, it requires manual configuration and does not provide a principled way to set the scaling factor. Go-Surf [20] initializes its feature grid and geometry decoder to ensure that the initial SDF represents a sphere centered at the scene origin, but this initialization process cannot adapt to map expansion and has little effect on the observed region. In contrast, thanks to the hybrid representation, our method can directly initialize the explicit representation by projecting to the input depth image, which speeds up the texture optimization process with higher fidelity by providing accurate geometry at an early stage. Therefore, our method can be deployed to robots for accurate mapping in real time.

Fig. 2: The pipeline of \(\mathrm{H}_{2}\)-Mapping. Taking RGB-D images from sensors and poses from other tracking modules, we utilize expanded octree SDF priors and multiresolution hash encoding to represent the scene from rough to detailed. Additionally, the proposed coverage-maximizing keyframe selection strategy ensures quality in the edge regions.
#### II-B2 Keyframe selection strategy

iMap [13] allocates samples to every keyframe and calculates the loss distribution for selecting keyframes, which can be redundant. NICE-SLAM [14] selects optimized keyframes based on the overlap with the current frame. This strategy can keep the geometry outside the current field of view static by using a fixed, pre-trained decoder, but it cannot perform well in marginal areas that are seldom observed. Vox-Fusion [15] adds a new keyframe to be optimized based on the ratio of newly allocated voxels to the currently observed voxels. All keyframes are selected to sample the same number of pixels for ray casting, leading to an increasing number of training samples over time. In contrast, our coverage-maximizing keyframe selection strategy ensures that all allocated voxels are covered with minimal iteration rounds, thereby improving the mapping quality, especially in edge regions.

## III H\({}_{2}\)-Mapping

In this work, we propose a real-time and high-quality mapping method, as outlined in Fig. 2. Given a set of sequential poses and RGB-D frames, we utilize a hierarchical hybrid representation (Sec. III-A) to depict the scene geometry and appearance. By employing a coverage-maximizing keyframe selection strategy (Sec. III-B), we use the volume rendering approach of NeRF [19] to obtain the depth and color of each sampled ray (Sec. III-C) and then optimize the hierarchical hybrid representation (Sec. III-D).

### _Hierarchical Hybrid Representation_

To accelerate the optimization of the implicit representation, we propose a hierarchical hybrid representation that explicitly stores SDF priors in an expanded octree and uses implicit multiresolution hash encoding to handle only the residual geometry and texture.

#### III-A1 Expanded Octree SDF Priors

_Octree SDF priors:_ When a new frame is received, we allocate new voxels based on the given pose and depth image and incrementally maintain a sparse voxel octree that covers all visible areas. We only add voxels containing more than ten points to the sparse voxel octree, to reduce the impact of measurement noise. For each voxel, we store an optimizable SDF at every vertex to represent the coarse geometry of the scene. The coarse SDF \(s^{c}\) of any sample point in a leaf node is obtained from its surrounding eight vertices through the trilinear interpolation function \(TriLerp(\cdot)\):

\[s^{c}=TriLerp(\mathbf{p},\{s^{c}_{k}\}),\quad k\in V, \tag{1}\]

where \(\mathbf{p}\) is the position of the sample point, \(s^{c}_{k}\) is the optimizable SDF of a surrounding vertex, and \(V\) is the set of eight vertices of the leaf node. To accelerate the convergence rate, we provide an initial SDF to each \(s^{c}_{k}\) when allocating new voxels. As shown in the left figure of Fig. 3, we project every vertex of each voxel onto the corresponding pixel in the RGB-D camera's frame to obtain an approximate SDF at that position:

\[s^{c}_{prior}=\mathbf{D}(\mathbf{u})-d_{\mathbf{p}}, \tag{2}\]

where \(d_{\mathbf{p}}\) is the z-axis distance between the sensor and the vertex position \(\mathbf{p}\), \(\mathbf{u}\) is the projected pixel, and \(\mathbf{D}(\mathbf{u})\) is the depth value at the pixel \(\mathbf{u}\). To avoid unreasonable SDF priors due to occlusion, we only provide the prior to the vertices where \((\mathbf{D}(\mathbf{u})-d_{\mathbf{p}})<\sqrt{6}\times\) (VOXEL SIZE). The right figure in Fig. 3 shows the reconstruction results using only the SDF priors, without any optimization. These coarse geometry priors accelerate the geometry optimization and then enhance the scene's appearance by providing accurate geometry in the early stage, which is evaluated in Sec. IV-B1.

Fig. 3: Process of octree SDF priors and the reconstruction results using only the SDF priors without any optimization.

Fig. 4: Different situations when using voxel expansion (c, d) or not (b) as the surface is near the voxel's boundary.
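A minimal numpy sketch of the prior initialization of Eq. (2), including the occlusion guard, is given below. The function name and the exact camera conventions (pinhole intrinsics \(K\), world-to-camera transform) are assumptions for illustration, not the released implementation:

```python
import numpy as np

def sdf_priors(vertices_w, depth, K, T_cw, voxel_size):
    """Initialize coarse vertex SDFs by projection into a depth image (Eq. (2)).

    vertices_w : (N, 3) voxel-vertex positions in world coordinates
    depth      : (H, W) depth image;  K : 3x3 pinhole intrinsics
    T_cw       : 4x4 world-to-camera transform
    Returns (N,) SDF priors and a validity mask (occlusion guard).
    """
    H, W = depth.shape
    p_c = (T_cw @ np.c_[vertices_w, np.ones(len(vertices_w))].T)[:3].T
    d_p = p_c[:, 2]                              # z-distance to the sensor
    uv = (K @ p_c.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    in_img = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (d_p > 0)
    sdf = np.zeros(len(d_p))
    sdf[in_img] = depth[v[in_img], u[in_img]] - d_p[in_img]   # D(u) - d_p
    valid = in_img & (sdf < np.sqrt(6) * voxel_size)          # occlusion guard
    return sdf, valid
```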
These coarse geometry priors accelerate the geometry optimization and then enhance the the scene's appearance by providing accurate geometry in the early stage, which is evaluated in Sec.IV-B1. Fig. 4: Different situations when using voxel expansion (c,d) or not (b) as the surface is near the voxel’s boundary. Fig. 3: Process of octree SDF priors and the reconstruction results using only the SDF priors without any optimization. Expanded Voxels AllocationIf the surface is close to the voxel's boundary, the accurate SDF at the position of the vertex near the surface will be close to 0. Therefore, it is possible for the SDF priors stored in that vertex to be optimized to the wrong sign, leading to the loss of the surface. To ensure that a surface will be created, we expand a new voxel if all the points obtained from back-projecting the depth image in the voxel are located at the edge. In Fig.4, for example, the accurate SDF priors of the upper vertices should be positive but are close to 0 (Fig.4). Any slight disturbance in the optimization may cause these values to become negative, resulting in no surface being reconstructed (Fig.4). However, if we allocate an extra voxel on top of it, regardless of the sign to which the vertex near the surface is optimized, a surface will always be built (Fig.4). #### Iii-A2 Multiresolution Hash Encoding In Sec.III-A1, we efficiently obtain a coarse SDF of the scene. In order to obtain the scene's appearance and more detailed geometry, we employed a multiresolution hash encoding approach inspired by Instant-NGP [16]. Differing from the SDF implementation in Instant-NGP [16], we only utilize the multiresolution hash encoding to handle the residual SDF which is easy to learn than the complete SDF of the scene. The multiresolution hash encoding works by arranging the surrounding voxels of a particular sample point at \(L\) resolution levels. At each level, \(F\) dimensional features are assigned to the corners of the voxels by looking up a hash table. To obtain the feature of the sample point, tri-linear interpolation is performed, and the feature at each level is concatenated. We employ two multiresolution hash encoding and shallow MLP attached to individually represent the color and residual SDF of the scene in a compact manner: \[s=s^{c}+\mathcal{M}_{s}(\phi^{s};\theta_{s}^{w}),\quad\mathbf{c}=\mathcal{M}_ {c}(\phi^{c};\theta_{c}^{w}). \tag{3}\] where \(\phi^{c}\) and \(\phi^{s}\) are \(L\times F\) dimensional features obtained from the multiresolution hash encoding, \(\mathcal{M}_{s}\) and \(\mathcal{M}_{c}\), parameterized by \(\theta_{s}^{w}\) and \(\theta_{c}^{w}\), are MLPs to output the residual SDF prediction \(s\) and color prediction \(\mathbf{c}\) (three dimensions for \(R\), \(G\), \(B\)), respectively. ### _Coverage-maximizing Keyframe Selection_ For a new input RGB-D frame, we insert this frame as a new keyframe if the ratio \(N_{o}/(N_{c}+N_{l})\) is smaller than a threshold, where \(N_{c}\) is the number of currently observed voxels, \(N_{l}\) is the number of voxels observed at the last inserted keyframe, and \(N_{o}\) is the number their mutual voxels. Our keyframe insertion strategy ensures that the frames in the keyframe set have relatively little overlap. To select the optimized keyframes from the keyframe set, we employ a coverage-maximizing keyframe selection strategy, as illustrated in Fig. 2. At the initial time step \(t_{0}\), all voxels are labeled as unobserved. 
We begin by selecting \(K\) keyframes that cover the largest number of voxels from the entire keyframe set. We mark these covered voxels as observed, and then optimize these selected keyframes and the current frame jointly. In the next time step \(t_{1}\), we use the same coverage-maximizing strategy but only for voxels that are still labeled as unobserved. If all voxels have been labeled as observed, we reset the voxels that were previously marked as observed to unobserved and repeat the above process. By using this strategy iteratively, all the scene areas can be covered. As shown in Fig. 5, most of the voxels are covered in the first time step. In Fig. 5 and, the strategy continues to cover other remaining parts of the scene, ensuring the reconstruction quality of the edge regions. In Sec.IV-B3, we further evaluate this strategy. ### _SDF-based Volume rendering_ Like Vox-Fusion [15], we only sample points along the ray that intersects with any voxel. And then get rendered color \(\mathbf{C}\) and depth \(D\) for each ray as follows: \[w_{j} =\sigma(\frac{s_{j}}{tr})\cdot\sigma(-\frac{s_{j}}{tr}) \tag{4}\] \[\mathbf{C} =\frac{1}{\sum_{j=0}^{N-1}w_{j}}\sum_{j=0}^{N-1}w_{j}\cdot \mathbf{c}_{j},\ D=\frac{1}{\sum_{j=0}^{N-1}w_{j}}\sum_{j=0}^{N-1}w_{j}\cdot d _{j},\] where \(\sigma(\cdot)\) is the sigmoid function, \(s_{j}\) and \(\mathbf{c}_{j}\) are the predicted SDF and color obtained from the hierarchical hybrid representation described in Sec. III-A, \(N\) is the number of samples along the ray, \(tr\) is a truncation distance and \(d_{j}\) is the sample's depth along the ray. ### _Optimization Process_ #### Iii-D1 Loss Function We apply loss functions like Vox-Fusion [15]: RGB Loss (\(\mathcal{L}_{rgb}\)), Depth Loss (\(\mathcal{L}_{d}\)), Free Space Loss (\(\mathcal{L}_{fs}\)) and SDF Loss (\(\mathcal{L}_{sdf}\)) on a batch of rays \(R\). \[\begin{split}\mathcal{L}_{fs}&\!\!=\!\frac{1}{|R|} \!\!\sum_{r\in R}\!\!\frac{1}{P_{r}^{fs}}\!\!\sum_{p\in P^{fr}}\!\!(s_{p}\!-\! \text{tr})^{2},\ \mathcal{L}_{sdf}\!\!=\!\frac{1}{|R|}\!\!\sum_{r\in R}\!\!\frac{1}{P_{r}^{tr}} \!\!\sum_{p\in P_{r}^{tr}}\!\!(s_{p}\!-\!s_{p}^{gt})^{2}\\ \mathcal{L}_{d}&\!\!=\!\frac{1}{|R|}\!\!\sum_{r\in R }\!\!\|D_{r}\!-\!D_{r}^{gt}\|,\ \mathcal{L}_{rgb}\!=\!\frac{1}{|R|}\!\!\sum_{r\in R}\!\!\|\mathbf{C}_{r}\!-\! \mathbf{C}_{r}^{gt}\|\end{split} \tag{5}\] where \(P_{r}^{fs}\) is a set of points on the ray \(r\) that lies between the camera and the truncation region of the surface measured by the depth sensor, \(P_{r}^{tr}\) is a set of points within the truncation area. (\(D_{r}\), \(D_{r}^{gt}\)) and (\(\mathbf{C}_{r}\), \(\mathbf{C}_{r}^{gt}\)) are rendered and input depth and color. \(s_{p}\) is the predicted SDF and \(s_{p}^{gt}\) is the difference between the distance to point \(p\) on the ray \(r\) and the depth measurement of that ray. The final loss function is a weighted Fig. 5: This figure illustrates the voxels covered by the selected keyframes in three consecutive frames, which are represented by red bounding boxes. (a) covers most of the voxels in the scene. (b) and (c) covers most of the remaining voxels, such as the toilet and poster areas on the wall. sum of these loss functions. The weights are determined by \(\alpha_{sdf}\), \(\alpha_{fs}\),\(\alpha_{d}\), and \(\alpha_{rgb}\). 
### _SDF-based Volume Rendering_

Like Vox-Fusion [15], we only sample points along rays that intersect with any voxel, and then obtain the rendered color \(\mathbf{C}\) and depth \(D\) for each ray as follows:

\[w_{j}=\sigma\left(\frac{s_{j}}{tr}\right)\cdot\sigma\left(-\frac{s_{j}}{tr}\right), \tag{4}\]
\[\mathbf{C}=\frac{1}{\sum_{j=0}^{N-1}w_{j}}\sum_{j=0}^{N-1}w_{j}\cdot\mathbf{c}_{j},\quad D=\frac{1}{\sum_{j=0}^{N-1}w_{j}}\sum_{j=0}^{N-1}w_{j}\cdot d_{j},\]

where \(\sigma(\cdot)\) is the sigmoid function, \(s_{j}\) and \(\mathbf{c}_{j}\) are the predicted SDF and color obtained from the hierarchical hybrid representation described in Sec. III-A, \(N\) is the number of samples along the ray, \(tr\) is a truncation distance, and \(d_{j}\) is the sample's depth along the ray.
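Eq. (4) reduces to a few tensor operations per ray; a minimal sketch, where the tensor shapes (`sdf`, `depth`: \((N,)\); `color`: \((N,3)\)) and the truncation value are assumptions:

```python
import torch

def render_ray(sdf, color, depth, tr=0.05):
    """Per-ray rendering of Eq. (4): w_j = sigmoid(s_j/tr) * sigmoid(-s_j/tr)
    peaks where the SDF crosses zero (the surface), so color and depth are
    weight-normalized averages concentrated around the surface samples."""
    w = torch.sigmoid(sdf / tr) * torch.sigmoid(-sdf / tr)
    w = w / w.sum().clamp_min(1e-8)      # normalize weights along the ray
    C = (w[:, None] * color).sum(dim=0)  # rendered RGB, shape (3,)
    D = (w * depth).sum()                # rendered depth (scalar)
    return C, D
```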
### _Optimization Process_

#### III-D1 Loss Function

We apply the same loss functions as Vox-Fusion [15]: an RGB loss (\(\mathcal{L}_{rgb}\)), a depth loss (\(\mathcal{L}_{d}\)), a free space loss (\(\mathcal{L}_{fs}\)), and an SDF loss (\(\mathcal{L}_{sdf}\)), computed on a batch of rays \(R\):

\[\begin{split}\mathcal{L}_{fs}&=\frac{1}{|R|}\sum_{r\in R}\frac{1}{|P_{r}^{fs}|}\sum_{p\in P_{r}^{fs}}(s_{p}-tr)^{2},\quad\mathcal{L}_{sdf}=\frac{1}{|R|}\sum_{r\in R}\frac{1}{|P_{r}^{tr}|}\sum_{p\in P_{r}^{tr}}(s_{p}-s_{p}^{gt})^{2},\\ \mathcal{L}_{d}&=\frac{1}{|R|}\sum_{r\in R}\|D_{r}-D_{r}^{gt}\|,\quad\mathcal{L}_{rgb}=\frac{1}{|R|}\sum_{r\in R}\|\mathbf{C}_{r}-\mathbf{C}_{r}^{gt}\|\end{split} \tag{5}\]

where \(P_{r}^{fs}\) is the set of points on ray \(r\) that lie between the camera and the truncation region of the surface measured by the depth sensor, and \(P_{r}^{tr}\) is the set of points within the truncation region. \((D_{r},D_{r}^{gt})\) and \((\mathbf{C}_{r},\mathbf{C}_{r}^{gt})\) are the rendered and input depth and color, respectively. \(s_{p}\) is the predicted SDF, and \(s_{p}^{gt}\) is the difference between the distance to point \(p\) on the ray \(r\) and the depth measurement of that ray. The final loss function is a weighted sum of these loss functions, with weights \(\alpha_{sdf}\), \(\alpha_{fs}\), \(\alpha_{d}\), and \(\alpha_{rgb}\):

\[\mathcal{L}=\alpha_{sdf}\mathcal{L}_{sdf}+\alpha_{fs}\mathcal{L}_{fs}+\alpha_{d}\mathcal{L}_{d}+\alpha_{rgb}\mathcal{L}_{rgb} \tag{6}\]

#### III-D2 Adaptive Early Ending

In the training process, the current frame and the selected keyframes are optimized several times. As shown in Fig. 10, the average number of iterations needed to reach PSNR convergence varies across scenarios. Therefore, to adaptively choose an appropriate iteration count that balances time consumption and mapping precision in various scenarios, we employ an early stopping policy: optimization ends once the total loss exceeds twice the average total loss of the current training round, indicating that further optimization would yield only a little improvement.
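A minimal sketch of how Eqs. (5)-(6) and the adaptive early ending could be wired together; the loss-term layout, the weight values, and the `step_fn` interface (one gradient step on the weighted loss of Eq. (6), returning its scalar value) are illustrative assumptions, not the paper's code.

```python
def ray_losses(s, s_gt, in_trunc, D, D_gt, C, C_gt, tr=0.05):
    """Loss terms of Eq. (5) for one ray (all arguments are torch tensors;
    `in_trunc` is a boolean mask, assumed non-trivial): free-space samples
    are pushed toward SDF = tr, truncation-region samples regress s_gt."""
    l_fs = ((s[~in_trunc] - tr) ** 2).mean()
    l_sdf = ((s[in_trunc] - s_gt[in_trunc]) ** 2).mean()
    l_d = (D - D_gt).abs()               # depth residual
    l_rgb = (C - C_gt).abs().sum()       # color residual
    return l_fs, l_sdf, l_d, l_rgb

def optimize_frame(step_fn, weights=(1.0, 1.0, 0.1, 1.0), max_iters=10):
    """Optimization loop with adaptive early ending: stop once the current
    total loss exceeds twice the running average of this round, since further
    iterations would bring little improvement."""
    history = []
    for _ in range(max_iters):
        total = step_fn(weights)         # weighted sum of Eq. (6), one step
        if history and total > 2.0 * (sum(history) / len(history)):
            break                        # adaptive early ending
        history.append(total)
    return history
```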
## IV Experiment

To evaluate the performance of our proposed method, we compare its reconstruction accuracy and time consumption with other NeRF-based RGB-D mapping systems on both the synthetic Replica dataset [22] and the real-world ScanNet dataset [23]. Additionally, we conduct an ablation study to demonstrate the effectiveness of each module in our approach. Furthermore, we deploy our method on a handheld device and quadrotors with limited computational power to test its mapping performance.

### _Mapping and Rendering Evaluation_

#### IV-A1 Implementation Details

In our method, the voxel size of the octree's leaf node is \(10cm\), \(tr=5cm\), and the maximum number of iterations is \(10\). For the multiresolution hash encoding, \(L=4\), \(F=2\), \(T=2^{19}\), and the scale difference between adjacent levels is \(2\). Due to the octree SDF priors, we only use a one-layer MLP of size \(64\) to decode the geometry features. The appearance decoder is a two-layer MLP of size \(64\). \(4096\) pixels are selected in each iteration to generate rays, and the distance between adjacent sampled points is \(1cm\). The number of keyframes selected to be optimized is \(K=10\).

#### IV-A2 Baselines

We select two advanced NeRF-based dense RGB-D SLAM methods that are currently open-source, NICE-SLAM [14] and Vox-Fusion [15], for comparison. However, since we solely focus on incremental mapping, we remove their tracking components and instead provide the ground-truth pose. All other aspects remain unchanged.

#### IV-A3 Metrics

To evaluate scene geometry, we use the Depth L1 Error \([cm]\), Accuracy \([cm]\), Completion \([cm]\), and Completion Ratio \([<5cm\,\%]\) of the reconstructed mesh. Besides, we use SSIM and PSNR to evaluate scene appearance on rendered images from all training views (interpolation) and distant novel views (extrapolation). Only the portions of the mesh included in voxels are considered, and non-depth regions are not measured in Depth L1 Error, SSIM, and PSNR.

#### IV-A4 Evaluation on Replica [22]

In Table I, we present a quantitative comparison of the reconstruction and rendering performance of our method and the baselines. The results demonstrate that our approach outperforms the baselines on both 2D and 3D metrics. Additionally, we provide a qualitative analysis of the reconstructed mesh and rendered images. Notably, in Fig. 6, the mesh obtained by our method appears smoother in areas such as the sofa and floor, while our approach exhibits enhanced geometric details, particularly for smaller objects like chair legs and vases. Fig. 7 shows that our method can generate renderings with more realistic details from both training views and novel views.

Fig. 6: Reconstructed mesh on the Replica dataset [22]. Our method closely approximates the ground truth mesh and can capture richer geometric details in small objects, such as chair legs and vases.

Fig. 7: Image rendering results on the Replica dataset [22]. (a) Rendering results from the input views (interpolation). (b) Rendering results from the novel views (extrapolation). The images rendered by our method have more high-fidelity details and rich textures, such as pillows and bed sheets.

Fig. 8: Textured mesh on the ScanNet dataset [23]. Our method produces more accurate geometry and higher-fidelity textures.

#### IV-A6 Runtime Analysis

We select Room0 from Replica [22] and Scene0000 from ScanNet [23] to evaluate the runtime by comparing our method with the baselines in Table II. We report the average frame processing time (FPT) on RTX 4090, RTX 2080Ti, AGX Orin, and Orin NX separately. Our method is much faster than previous work on various devices.

\begin{table}
\begin{tabular}{c||c||c c c c} \hline \hline
 & Method & \multicolumn{4}{c}{Speed FPT (s)} \\ \cline{3-6}
 & & 4090 & 2080Ti & AGX Orin & Orin NX \\ \hline
\multirow{3}{*}{Replica Room0} & NICE-SLAM & 2.69 & 6.35 & 13.01 & 19.03 \\
 & Vox-Fusion & 0.34 & 0.94 & 1.68 & 3.52 \\
 & **H\({}_{2}\)-Mapping** & **0.05** & **0.16** & **0.36** & **0.66** \\ \hline
\multirow{3}{*}{ScanNet Scene0000} & NICE-SLAM & 3.03 & 6.55 & 14.37 & 19.39 \\
 & Vox-Fusion & 0.67 & 2.04 & 3.12 & 21.57 \\
 & **H\({}_{2}\)-Mapping** & **0.07** & **0.24** & **0.53** & **1.07** \\ \hline \hline
\end{tabular}
\end{table}
TABLE II: Runtime Analysis on Multiple Devices (average frame processing time, FPT, in seconds).

### _Ablation Study_

#### IV-B1 Octree SDF Priors

Octree SDF priors represent an easy-to-optimize explicit structure with a computationally efficient initialization process. As a result, the reconstructed geometry depicted in Fig. 9 is more accurate at the beginning of optimization, leading to a rapid PSNR increase for the first frame during the early stage. Furthermore, by promptly providing a coarse geometry and utilizing the implicit multiresolution hash encoding exclusively for the residual part, the accuracy and completion metrics in Fig. 9 eventually converge to a lower level, leading to higher PSNR for the same average iteration count on the entire sequence in Fig. 10. The PSNR gain arises because accurate geometry ensures that the gradient of the color prediction mainly affects the surface region during backpropagation. Therefore, by enabling a faster and more accurate geometry reconstruction, this hybrid representation achieves the same reconstruction quality with fewer training iterations. This is particularly meaningful for robotic applications with limited computing power.

Fig. 9: Ablation study of octree SDF priors on one frame. We choose the first frame in Room0 of Replica [22] to evaluate how the mapping performance of the corresponding region changes with increasing optimization iterations.

#### IV-B2 Expanded Voxels Allocation

Table III shows that the expanded voxels allocation technique has a greater impact on completion. Besides, Fig. 11 shows visualization results for Office3 and Office4 of Replica [22]. In the left part of (a) and (b), holes are generated in regions where the surface is close to the voxel's boundary, making it easy for the SDF to be optimized to the wrong sign. However, as shown in the right part of (a) and (b), the expansion technique can reduce the holes caused by this optimization sensitivity.

Fig. 11: Ablation study on the expanded voxels allocation technique. (a) and (b) show results for Office3 and Office4 of Replica [22], respectively. This technique produces fewer holes during mapping (No use: left; Use: right).

#### IV-B3 Coverage-maximizing Keyframe Selection

Table III demonstrates that our keyframe selection strategy can significantly improve the PSNR and Acc. metrics. As shown in Fig. 12, our strategy can better optimize the ceiling area in Office2 of Replica [22]. Since the number of images corresponding to this area is low in the overall keyframe set, it is rare for previous methods to optimize this region using the random strategy. However, our keyframe selection strategy achieves complete coverage of all the voxels in the keyframe set with minimal iteration rounds, greatly increasing the probability of optimizing the edge regions.

Fig. 12: Ablation study of the coverage-maximizing keyframe selection strategy on Office2 of Replica [22]. It shows that our keyframe selection strategy leads to better reconstruction performance in marginal areas such as the ceiling.

#### IV-B4 Adaptive Early Ending

The blue curve in Fig. 10 shows that increasing the number of iterations leads to higher PSNR values, but the rate of improvement gradually slows down. As the number of iterations varies when using the adaptive early ending, we calculate the average iteration time and the corresponding PSNR, represented by the red star. The results demonstrate that this strategy adaptively leads to different average iteration times that are close to convergence in various scenarios, which helps to reduce optimization time without compromising accuracy.

Fig. 10: Ablation study of the adaptive early ending and octree SDF priors on the entire data sequence. We compare changes in PSNR for different fixed iteration counts in Room0 (left) and Room1 (right) of Replica [22]. The blue curve and green curve represent the results with and without octree SDF priors, respectively. The red star denotes the average iteration time and PSNR using the adaptive early ending.

### _Real-World SLAM Demonstration_

We demonstrate our mapping method with a tracking module, which completes a SLAM system, on a handheld device and quadrotors. Specifically, we employ the Realsense L515 as the vision sensor to provide RGB-D images, and a modified VINS-Mono [24] incorporating depth constraints as the tracking module to estimate the pose. The handheld device is powered by an AGX Orin, and the quadrotor is equipped with an Orin NX. All programs run onboard. Fig. 1 illustrates the results of our real-world experiments. We use the handheld device to reconstruct an apartment (mesh surface: \(\approx 127m^{2}\)) and the quadrotors to map a part of the flight arena (mesh surface: \(\approx 58m^{2}\)). The final mesh is extracted by marching cubes [25], and all optimization is performed within the mapping procedure without any post-processing or additional training time. To the best of our knowledge, our method is the first to achieve high-quality NeRF-based mapping in real-time on edge computers. More details can be found in the attached video.
## V Conclusion

We propose H\({}_{2}\)-Mapping, a novel NeRF-based dense mapping system that utilizes a hierarchical hybrid representation and can be deployed on edge computers for real-time and high-quality robot mapping. The coarse geometry is represented explicitly using octree SDF priors for fast initialization and convergence, while high-resolution geometric details and texture are encoded implicitly using multiresolution hash encoding in a memory-efficient manner. Furthermore, we propose a coverage-maximizing keyframe selection strategy to improve the reconstruction quality in marginal areas. Baseline comparisons demonstrate that our method outperforms prior work in both mapping quality and time consumption. Besides, ablation studies show that the hierarchical hybrid representation effectively accelerates geometry and texture optimization, and the proposed keyframe selection strategy guarantees reconstruction accuracy even in edge areas. However, our method currently cannot handle dynamic objects or long-term pose drift, and further speed-up is required.
2306.17272
Recognizing $\mathbf{W_2}$ Graphs
Let $G$ be a graph. A set $S \subseteq V(G)$ is independent if its elements are pairwise non-adjacent. A vertex $v \in V(G)$ is shedding if for every independent set $S \subseteq V(G) \setminus N[v]$ there exists $u \in N(v)$ such that $S \cup \{u\}$ is independent. An independent set $S$ is maximal if it is not contained in another independent set. An independent set $S$ is maximum if the size of every independent set of $G$ is not bigger than $|S|$. The size of a maximum independent set of $G$ is denoted $\alpha(G)$. A graph $G$ is well-covered if all its maximal independent sets are maximum, i.e. the size of every maximal independent set is $\alpha(G)$. The graph $G$ belongs to class $\mathbf{W_2}$ if every two pairwise disjoint independent sets in $G$ are included in two pairwise disjoint maximum independent sets. If a graph belongs to the class $\mathbf{W_2}$ then it is well-covered. Finding a maximum independent set in an input graph is an NP-complete problem. Recognizing well-covered graphs is co-NP-complete. The complexity status of deciding whether an input graph belongs to the $\mathbf{W_2}$ class is not known. Even when the input is restricted to well-covered graphs, the complexity status of recognizing graphs in $\mathbf{W_2}$ is not known. In this article, we investigate the connection between shedding vertices and $\mathbf{W_2}$ graphs. On the one hand, we prove that recognizing shedding vertices is co-NP-complete. On the other hand, we find polynomial solutions for restricted cases of the problem. We also supply polynomial characterizations of several families of $\mathbf{W_2}$ graphs.
Vadim E. Levit, David Tankus
2023-06-29T19:24:50Z
http://arxiv.org/abs/2306.17272v1
# Recognizing \(\mathbf{W_{2}}\) Graphs

###### Abstract

Let \(G\) be a graph. A set \(S\subseteq V(G)\) is _independent_ if its elements are pairwise nonadjacent. A vertex \(v\in V(G)\) is _shedding_ if for every independent set \(S\subseteq V(G)\setminus N[v]\) there exists \(u\in N(v)\) such that \(S\cup\{u\}\) is independent. An independent set \(S\) is _maximal_ if it is not contained in another independent set. An independent set \(S\) is _maximum_ if the size of every independent set of \(G\) is not bigger than \(|S|\). The size of a maximum independent set of \(G\) is denoted \(\alpha(G)\). A graph \(G\) is _well-covered_ if all its maximal independent sets are maximum, i.e. the size of every maximal independent set is \(\alpha(G)\). The graph \(G\) belongs to class \(\mathbf{W_{2}}\) if every two pairwise disjoint independent sets in \(G\) are included in two pairwise disjoint maximum independent sets. If a graph belongs to the class \(\mathbf{W_{2}}\) then it is well-covered. Finding a maximum independent set in an input graph is an NP-complete problem. Recognizing well-covered graphs is co-NP-complete. The complexity status of deciding whether an input graph belongs to the \(\mathbf{W_{2}}\) class is not known. Even when the input is restricted to well-covered graphs, the complexity status of recognizing graphs in \(\mathbf{W_{2}}\) is not known. In this article, we investigate the connection between shedding vertices and \(\mathbf{W_{2}}\) graphs. On the one hand, we prove that recognizing shedding vertices is co-NP-complete. On the other hand, we find polynomial solutions for restricted cases of the problem. We also supply polynomial characterizations of several families of \(\mathbf{W_{2}}\) graphs.

## 1 Introduction

### The classes \(\mathbf{W_{k}}\)

An _independent set_ of vertices in a graph \(G\) is a set of vertices \(S\subseteq V(G)\) whose elements are pairwise nonadjacent. An independent set is _maximal_ if it is not a subset of another independent set. An independent set is _maximum_ if \(G\) does not contain an independent set of a higher cardinality. The cardinality of a maximum independent set in \(G\) is denoted \(\alpha(G)\).
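These definitions are easy to verify directly on small graphs, which makes them concrete before the complexity discussion; the exponential-time sketch below is for illustration only, with networkx as an assumed dependency (it is not part of the paper).

```python
from itertools import combinations
import networkx as nx

def is_independent(G, S):
    """True if the vertices of S are pairwise nonadjacent."""
    return all(not G.has_edge(u, v) for u, v in combinations(S, 2))

def maximal_independent_sets(G):
    """Enumerate all maximal independent sets by brute force."""
    V = set(G)
    for r in range(len(V) + 1):
        for S in map(set, combinations(V, r)):
            if is_independent(G, S) and all(
                    not is_independent(G, S | {v}) for v in V - S):
                yield S

def is_well_covered(G):
    """Well-covered: every maximal independent set has size alpha(G)."""
    return len({len(S) for S in maximal_independent_sets(G)}) == 1

def is_shedding(G, v):
    """v is shedding if every independent set S avoiding N[v] extends by
    some neighbor u of v while remaining independent."""
    rest = set(G) - set(G[v]) - {v}           # V(G) \ N[v]
    for r in range(len(rest) + 1):
        for S in map(set, combinations(rest, r)):
            if is_independent(G, S) and not any(
                    is_independent(G, S | {u}) for u in G[v]):
                return False
    return True

if __name__ == "__main__":
    G = nx.cycle_graph(4)   # C4: maximal independent sets {0,2} and {1,3}
    print(is_well_covered(G))                    # True
    print([v for v in G if is_shedding(G, v)])   # [] for C4
```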
2305.01140
Geometric Latent Diffusion Models for 3D Molecule Generation
Generative models, especially diffusion models (DMs), have achieved promising results for generating feature-rich geometries and advancing foundational science problems such as molecule design. Inspired by the recent huge success of Stable (latent) Diffusion models, we propose a novel and principled method for 3D molecule generation named Geometric Latent Diffusion Models (GeoLDM). GeoLDM is the first latent DM model for the molecular geometry domain, composed of autoencoders encoding structures into continuous latent codes and DMs operating in the latent space. Our key innovation is that for modeling the 3D molecular geometries, we capture its critical roto-translational equivariance constraints by building a point-structured latent space with both invariant scalars and equivariant tensors. Extensive experiments demonstrate that GeoLDM can consistently achieve better performance on multiple molecule generation benchmarks, with up to 7\% improvement for the valid percentage of large biomolecules. Results also demonstrate GeoLDM's higher capacity for controllable generation thanks to the latent modeling. Code is provided at \url{https://github.com/MinkaiXu/GeoLDM}.
Minkai Xu, Alexander Powers, Ron Dror, Stefano Ermon, Jure Leskovec
2023-05-02T01:07:22Z
http://arxiv.org/abs/2305.01140v1
# Geometric Latent Diffusion Models for 3D Molecule Generation ###### Abstract Generative models, especially diffusion models (DMs), have achieved promising results for generating feature-rich geometries and advancing foundational science problems such as molecule design. Inspired by the recent huge success of Stable (latent) Diffusion models, we propose a novel and principled method for 3D molecule generation named Geometric Latent Diffusion Models (GeoLDM). GeoLDM is the first latent DM model for the molecular geometry domain, composed of autoencoders encoding structures into continuous latent codes and DMs operating in the latent space. Our key innovation is that for modeling the 3D molecular geometries, we capture its critical roto-translational equivariance constraints by building a point-structured latent space with both invariant scalars and equivariant tensors. Extensive experiments demonstrate that GeoLDM can consistently achieve better performance on multiple molecule generation benchmarks, with up to 7% improvement for the valid percentage of large biomolecules. Results also demonstrate GeoLDM's higher capacity for controllable generation thanks to the latent modeling. Code is provided at [https://github.com/MinkaiXu/GeoLDM](https://github.com/MinkaiXu/GeoLDM). ## 1 Introduction Generative modeling for feature-rich geometries is an important task for many science fields. Typically, geometries can be represented as point clouds where each point is embedded in the Cartesian coordinates and labeled with rich features. Such structures are ubiquitous in scientific domains, _e.g._, we can represent molecules as atomic graphs in 3D (Schutt et al., 2017) and proteins as proximity spatial graphs over amino acids (Jing et al., 2021). Therefore, developing effective geometric generative models holds great promise for scientific discovery problems such as material and drug design (Pereira et al., 2016; Graves et al., 2020; Townshend et al., 2021). Recently, considerable progress has been achieved with machine learning approaches, especially deep generative models. For example, Gebauer et al. (2019); Luo and Ji (2021) and Satorras et al. (2021) proposed data-driven methods to generate 3D molecules (in silico) with autoregressive and flow-based models respectively. However, despite great potential, the results are still unsatisfactory with low chemical validity and small molecule size, due to the insufficient capacity of the underlying generative models (Razavi et al., 2019). Most recently, diffusion models (DMs) (Ho et al., 2020; Song et al., 2021) have emerged with surprising results on image synthesis (Meng et al., 2022) and beyond (Kong et al., 2021; Li et al., 2022). DMs define a diffusion process that gradually perturbs the data, and learn neural networks to reverse this corruption by progressive denoising. Then the denoising network can conduct generation by iteratively cleaning data initialized from random noise. Several studies have also applied such frameworks to the geometric domain, especially molecular structures (Hoogeboom et al., 2022; Wu et al., 2022; Anand and Achim, 2022). However, the existing models typically run DMs directly in the atomic feature space, which typically is composed of diverse physical quantities, _e.g._, charge, atom types, and coordinates.
These features are multi-modal with discrete, integer, and continuous variables, making unified Gaussian diffusion frameworks sub-optimal (Hoogeboom et al., 2022; Wu et al., 2022) or requiring sophisticated, decomposed modeling of different variables (Anand and Achim, 2022). Besides, the high dimensionality of input features also increases DM modeling difficulty, since the model's training and sampling require function forward and backward computation in the full input dimension. Therefore, the validity rate of generated molecules is still not satisfying enough, and an ideal approach would be a more flexible and expressive framework for modeling complex structures. In this paper, we propose a novel and principled method to overcome the above limitations by utilizing a smoother latent space, named Geometric Latent Diffusion Models (GeoLDM). GeoLDM is set up as (variational) autoencoders (AEs) with DMs operating on the latent space. The encoder maps the raw geometries into a lower-dimensional representational space, and DMs learn to model the smaller and smoother distribution of latent variables. For modeling the 3D molecular geometry, our key innovation is constructing sufficient conditions for latent space to satisfy the critical 3D roto-translation equivariance constraints, where simply equipping latent variables with scalar-valued1 (_i.e._, invariant) variables lead to extremely poor generation quality. Technically, we realize this constraint by building the latent space as point-structured latents with both invariant and equivariant variables, which in practice is implemented by parameterizing encoding and decoding functions with advanced equivariant networks. To the best of our knowledge, we are the first work to incorporate equivariant features, _i.e._, tensors, into the latent space modeling. Footnote 1: In this paper, we will use “scalar” and “tensor” to interchangeably refer to type-0 (invariant) and type-1 (equivariant) features, following the common terminologies used in geometric literature. A unique advantage of GeoLDM is that unlike previous DM methods operating in the feature domain, we explicitly incorporate a latent space to capture the complex structures. This unified formulation enjoys several strengths. First, by mapping raw features into regularized latent space, the latent DMs learn to model a much smoother distribution. This alleviates the difficulty of directly modeling complex structures' likelihood, and is therefore more expressive. Besides, the latent space enables GeoLDM to conduct training and sampling with a lower dimensionality, which can also benefit the generative modeling complexity. Furthermore, the use of latent variables also allows for better control over the generation process, which has shown promising results in text-guided image generation (Rombach et al., 2022). This enables the user to generate specific types of molecules with desired properties. Finally, our framework is very general and can be extended to various downstream molecular problems where DMs have shown promising results, _i.e._, target drug design (Lin et al., 2022) and antigen-specific antibody generation (Luo et al., 2022). We conduct detailed evaluations of GeoLDM on multiple benchmarks, including both unconditional and property-conditioned molecule generation. Results demonstrate that GeoLDM can consistently achieve superior generation performance on all the metrics, with up to 7% higher valid rate for large biomolecules. 
Empirical studies also show significant improvement for controllable generation thanks to latent modeling. All the empirical results demonstrate that GeoLDM enjoys a significantly higher capacity to explore the chemical space and generate structurally novel and chemically feasible molecules. ## 2 Related Work **Latent Generative Models.** To improve the generative modeling capacity, a lot of research (Dai and Wipf, 2019; Yu et al., 2022) has been conducted to learn more expressive generative models over the latent space. VQ-VAEs (Razavi et al., 2019) proposed to discretize latent variables and use autoregressive models to learn an expressive prior there. Ma et al. (2019) instead employed flow-based models as the latent prior, with applications on non-autoregressive text generation. Another line of research is inspired by variational autoencoder's (VAE's) problem that the simple Gaussian priors cannot accurately match the encoding posteriors and therefore generate poor samples, and Dai and Wipf (2019); Aneja et al. (2021) therefore proposed to use VAEs and energy-based models respectively to learn the latent distribution. Most recently, several works successfully developed latent DMs with promising results on various applications, ranging from image (Vahdat et al., 2021), point clouds (Zeng et al., 2022), to text (Li et al., 2022) generation. Among them, the most impressive success is Stable Diffusion models (Rombach et al., 2022), which show surprisingly realistic text-guided image generation results. Despite the considerable progress we have achieved, existing latent generative methods mainly work on latent space only filled with typical _scalars_, without any consideration for equivariance. By contrast, we study the novel and challenging task where the latent space also contains equivariant _tensors_.

Figure 1: Illustration of GeoLDM. The encoder \(\mathcal{E}_{\phi}\) encodes molecular features \(\mathbf{x},\mathbf{h}\) into equivariant latent variables \(\mathbf{z}_{\mathrm{x}},\mathbf{z}_{\mathrm{h}}\), and the latent diffusion transitions \(q(\mathbf{z}_{\mathrm{x},t},\mathbf{z}_{\mathrm{h},t}|\mathbf{z}_{\mathrm{x},t-1},\mathbf{z}_{\mathrm{h},t-1})\) gradually add noise until the latent codes converge to Gaussians. Symmetrically, for generation, an initial latent \(\mathbf{z}_{\mathrm{x},T},\mathbf{z}_{\mathrm{h},T}\) is sampled from standard normal distributions and progressively refined by equivariant denoising dynamics \(\mathbf{\epsilon}_{\theta}(\mathbf{z}_{\mathrm{x}},\mathbf{z}_{\mathrm{h}})\). The final latents \(\mathbf{z}_{\mathrm{x}},\mathbf{z}_{\mathrm{h}}\) are further decoded back to molecular point clouds with the decoder \(\mathcal{D}_{\xi}\).

**Molecule Generation in 3D.** Although extensive prior work has focused on generating molecules as 2D graphs (Jin et al., 2018; Liu et al., 2018; Shi et al., 2020), interest has recently increased in 3D generation. G-Schnet and G-SphereNet (Gebauer et al., 2019; Luo and Ji, 2021) employed autoregressive approaches to build molecules by sequential attachment of atoms or molecular fragments. Similar frameworks have also been applied to structure-based drug design (Li et al., 2021; Peng et al., 2022; Powers et al., 2022). However, this autoregressive approach requires careful formulation of a complex action space and action ordering. Other studies utilized atomic density grids, by which the entire molecule can be generated in "one step" by outputting a density over the voxelized 3D space (Masuda et al., 2020).
However, these density grids lack the desirable equivariance property and require a separate fitting algorithm. In the past year, DMs have attracted attention for molecule generation in 3D (Hoogeboom et al., 2022; Wu et al., 2022), with successful application in downstream tasks like target drug generation (Lin et al., 2022), antibody design (Luo et al., 2022), and protein design (Anand and Achim, 2022; Trippe et al., 2022). However, existing models mainly still work on the original atomic space, while our method works on the fundamentally different and more expressive latent space. ## 3 Background ### Problem Definition In this paper, we consider generative modeling of molecular geometries from scratch. Let \(d\) be the dimension of node features, then each molecule is represented as point clouds \(\mathcal{G}=\langle\mathbf{x},\mathbf{h}\rangle\), where \(\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{N})\in\mathbb{R}^{N\times 3}\) is the atom coordinates matrix and \(\mathbf{h}=(\mathbf{h}_{1},\ldots,\mathbf{h}_{N})\in\mathbb{R}^{N\times d}\) is the node feature matrix, such as atomic type and charges. We consider the following two generation tasks: **(I) Unconditional generation.** With a collection of molecules \(\mathcal{G}\), learn parameterized generative models \(p_{\theta}(\mathcal{G})\) which can generate diverse and realistic molecules \(\hat{\mathcal{G}}\) in 3D. **(II) Controllable generation.** With molecules \(\mathcal{G}\) labeled with certain properties \(s\), learn conditional generation models \(p_{\theta}(\mathcal{G}|s)\) which can conduct controllable molecule generation given desired property value \(s\). ### Equivariance _Equivariance_ is ubiquitous for geometric systems such as molecules, where vector features like atomic forces or dipoles should transform accordingly _w.r.t._ the coordinates (Thomas et al., 2018; Weiler et al., 2018; Fuchs et al., 2020; Batzner et al., 2021). Formally, a function \(\mathcal{F}\) is defined as equivariant _w.r.t_ the action of a group \(G\) if \(\mathcal{F}\circ S_{g}(\mathbf{x})=T_{g}\circ\mathcal{F}(\mathbf{x}),\forall g \in G\) where \(S_{g},T_{g}\) are transformations for a group element \(g\)(Serre et al., 1977). In this work, we consider the Special Euclidean group SE(3), _i.e._, the group of rotation and translation in 3D space, where transformations \(T_{g}\) and \(S_{g}\) can be represented by a translation \(\mathbf{t}\) and an orthogonal matrix rotation \(\mathbf{R}\). In molecules the features \(\mathbf{h}\) are SE(3)-invariant while the coordinates will be affected2 as \(\mathbf{R}\mathbf{x}+\mathbf{t}=(\mathbf{R}\mathbf{x}_{1}+\mathbf{t},\ldots,\mathbf{ R}\mathbf{x}_{N}+\mathbf{t})\). This requires our learned likelihood to be invariant to roto-translations. Such property has been shown important for improving the generalization capacity of 3D geometric modeling (Satorras et al., 2021; Xu et al., 2022). Footnote 2: We follow the convention to use \(\mathbf{R}\mathbf{x}\) to denote applying group actions \(\mathbf{R}\) on \(\mathbf{x}\), which formally is calculated as \(\mathbf{x}\mathbf{R}^{T}\). ### Diffusion Models for Non-geometric Domains Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) are latent variable models that model the data \(\mathbf{x}_{0}\) as Markov chains \(\mathbf{x}_{T}\cdots\mathbf{x}_{0}\), with intermediate variables sharing the same dimension. 
DMs can be described with two Markovian processes: a forward _diffusion_ process \(q(\mathbf{x}_{1:T}\mid\mathbf{x}_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}\mid \mathbf{x}_{t-1})\) and a reverse _denoising_ process \(p_{\theta}(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\theta}( \mathbf{x}_{t-1}\mid\mathbf{x}_{t})\). The forward process gradually adds Gaussian noise to data \(\mathbf{x}_{t}\): \[q(\mathbf{x}_{t}\mid\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1- \beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I}), \tag{1}\] where the hyperparameter \(\beta_{1:T}\) controls the amount of noise added at each timestep \(t\). The \(\beta_{1:T}\) are chosen such that samples \(\mathbf{x}_{T}\) can approximately converge to standard Gaussians, _i.e._, \(q(\mathbf{x}_{T})\approx\mathcal{N}(0,\mathbf{I})\). Typically, this forward process \(q\) is predefined without trainable parameters. The generation process of DMs is defined as learning a parameterized reverse _denoising_ process, which aims to incrementally denoise the noisy variables \(\mathbf{x}_{T:1}\) to approximate clean data \(\mathbf{x}_{0}\) in the target data distribution: \[p_{\theta}(\mathbf{x}_{t-1}\mid\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1}; \mathbf{\mu}_{\theta}(\mathbf{x}_{t},t),\rho_{t}^{2}\mathbf{I}), \tag{2}\] where the initial distribution \(p(\mathbf{x}_{T})\) is defined as \(\mathcal{N}(0,\mathbf{I})\). The means \(\mathbf{\mu}_{\theta}\) typically are neural networks such as U-Nets for images or Transformers for text, and the variances \(\rho_{t}\) typically are also predefined. As latent variable models, the forward process \(q(\mathbf{x}_{1:T}|\mathbf{x}_{0})\) can be viewed as a fixed posterior, to which the reverse process \(p_{\theta}(\mathbf{x}_{0:T})\) is trained to maximize the variational lower bound of the likelihood of the data \(\mathcal{L}_{\text{v}\mathbf{\eta}}=\mathbb{E}_{q(\mathbf{x}_{1:T}|\mathbf{x}_{0})} \Big{[}\log\frac{q(\mathbf{x}_{T}|\mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{T})}+ \sum_{t=2}^{T}\log\frac{q(\mathbf{x}_{t-1}|\mathbf{x}_{0},\mathbf{x}_{t})}{p_{ \theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}-\log p_{\theta}(\mathbf{x}_{0}| \mathbf{x}_{1})\Big{]}\). However, directly optimizing this objective is known to suffer serious training instability (Nichol and Dhariwal, 2021). Instead, Song and Ermon (2019); Ho et al. (2020) suggest a simple surrogate objective up to irrelevant constant terms: \[\mathcal{L}_{DM}=\mathbb{E}_{\mathbf{x}_{0},\mathbf{\epsilon}\sim\mathcal{N}(0, \mathbf{I}),t}\big{[}w(t)||\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t)||^ {2}\big{]}, \tag{3}\] where \(\mathbf{x}_{t}=\alpha_{t}\mathbf{x}_{0}+\sigma_{t}\mathbf{\epsilon}\), with \(\alpha_{t}=\sqrt{\prod_{s=1}^{t}(1-\beta_{s})}\) and \(\sigma_{t}=\sqrt{1-\alpha_{t}^{2}}\) are parameters from the tractable diffusion distributions \(q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\alpha_{t}\mathbf{ x}_{0},\sigma_{t}^{2}\mathbf{I})\). \(\mathbf{\epsilon}_{\theta}\) comes from the widely adopted parametrization of the means \(\mu_{\theta}(\mathbf{x}_{t},t):=\frac{1}{\sqrt{1-\beta_{t}}}\big{(}\mathbf{x }_{t}-\frac{\beta_{t}}{\sqrt{1-\alpha_{t}^{2}}}\mathbf{\epsilon}_{\theta}( \mathbf{x}_{t},t)\big{)}\). The reweighting terms are \(w(t)=\frac{\beta_{t}^{2}}{2\rho_{t}^{2}(1-\beta_{t})(1-\alpha_{t}^{2})}\), while in practice simply setting it as \(1\) often promotes the sampling quality. 
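For concreteness, this simplified objective takes only a few lines; the sketch below is a generic, non-geometric illustration with a placeholder linear beta schedule and an assumed denoiser interface `eps_net(x, t)`, not the GeoLDM training code.

```python
import torch

def make_schedule(T=1000):
    """Placeholder linear beta schedule with the alpha_t, sigma_t above."""
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas = torch.sqrt(torch.cumprod(1.0 - betas, dim=0))
    sigmas = torch.sqrt(1.0 - alphas ** 2)
    return alphas, sigmas

def dm_loss(eps_net, x0, alphas, sigmas):
    """Eq. (3) with w(t) = 1: draw a timestep, diffuse x0 through
    q(x_t | x_0) = N(alpha_t x_0, sigma_t^2 I), and regress the noise."""
    t = torch.randint(0, len(alphas), (x0.shape[0],))
    eps = torch.randn_like(x0)
    shape = (-1,) + (1,) * (x0.dim() - 1)
    x_t = alphas[t].view(shape) * x0 + sigmas[t].view(shape) * eps
    return ((eps - eps_net(x_t, t)) ** 2).mean()
```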
Intuitively, the model \(\mathbf{\epsilon}_{\theta}\) is trained to predict the noise vector \(\mathbf{\epsilon}\) to denoise diffused samples \(\mathbf{x}_{t}\) at every step \(t\) towards a cleaner one \(\mathbf{x}_{t-1}\). After training, we can draw samples with \(\mathbf{\epsilon}_{\theta}\) by the iterative ancestral sampling: \[\mathbf{x}_{t-1}=\tfrac{1}{\sqrt{1-\beta_{t}}}(\mathbf{x}_{t}-\tfrac{\beta_{t }}{\sqrt{1-\alpha_{t}^{2}}}\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t))+\rho_{t} \mathbf{\epsilon}, \tag{4}\] with \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The sampling chain is initialized from Gaussian prior \(\mathbf{x}_{T}\sim p(x_{T})=\mathcal{N}(\mathbf{x}_{T};\mathbf{0},\mathbf{I})\). ## 4 Method In this section, we formally describe Geometric Latent Diffusion Models (GeoLDM). Our work is inspired by the recent success of stable (latent) diffusion models (Rombach et al., 2022), but learning latent representations for the geometric domain is however challenging (Winter et al., 2021). We address these challenges by learning a faithful point-structured latent space with both invariant and equivariant variables, and elaborate on the design details of geometric autoencoding and latent diffusion in Section 4.1 and Section 4.2 respectively. Finally, we briefly summarize the simple training and sampling scheme in Section 4.3, and further discuss extensions for conditioning mechanisms in Section 4.4. A high-level schematic is provided in Figure 1. ### Geometric Autoencoding We are interested in first compressing the geometries \(\mathcal{G}=\langle\mathbf{x},\mathbf{h}\rangle\in\mathbb{R}^{N\times(3+d)}\) (see Section 3.1 for details) into lower-dimensional latent space. We consider the classic autoencoder (AE) framework, where the encoder \(\mathcal{E}_{\phi}\) encodes \(\mathcal{G}\) into latent domain \(\mathbf{z}=\mathcal{E}_{\phi}(\mathbf{x},\mathbf{h})\) and the decoder \(\mathcal{D}_{\xi}\) learns to decode \(\mathbf{z}\) back to data domain \(\tilde{\mathbf{x}}\), \(\tilde{\mathbf{h}}=\mathcal{D}_{\xi}(\mathbf{z})\). The whole framework can be trained by minimizing the reconstruction objective \(\mathbf{d}(\mathcal{D}(\mathcal{E}(\mathcal{G})),\mathcal{G})\), _e.g._, \(L_{p}\) norms. However, this classic autoencoding scheme is non-trivial in the geometric domain. Considering we follow SE(3) group in this paper (see Section 3.2), the typical parameterization of latent space as invariant scalar-valued features (Kingma and Welling, 2013) is very challenging: **Proposition 4.1**.: _(Winter et al., 2022) Learning autoencoding functions \(\mathcal{E}\) and \(\mathcal{D}\) to represent geometries \(\mathcal{G}\) in scalar-valued (i.e., invariant) latent space **necessarily** requires an additional **equivariant** function \(\psi\) to store **suitable** group actions such that \(\mathcal{D}(\psi(\mathcal{G}),\mathcal{E}(\mathcal{G}))=T_{\psi(\mathcal{G}) }\circ\hat{\mathcal{D}}(\mathcal{E}(\mathcal{G}))=\mathcal{G}\)._ The idea of this proposition is that Geometric AE requires an additional function \(\psi\) to represent appropriate group actions for encoding, and align output and input positions for decoding, to solve the reconstruction task. We leave a more detailed explanation with examples in Appendix A. For euclidean groups SE(n), Winter et al. (2022) suggests implementing \(\psi\) as equivariant ortho-normal vectors in the unit n-dimensional sphere \(S^{n}\). 
In our method, instead of separately representing and applying the equivariance with \(\psi\), we propose to also incorporate equivariance into \(\mathcal{E}\) and \(\mathcal{D}\) by constructing latent features as point-structured variables \(\mathbf{z}=\langle\mathbf{z}_{\mathrm{x}},\mathbf{z}_{\mathrm{h}}\rangle\in \mathbb{R}^{N\times(3+k)}\), which holds 3-d equivariant and \(k\)-d invariant latent features \(\mathbf{z}_{\mathrm{x}}\) and \(\mathbf{z}_{\mathrm{h}}\) for each node. This in practice can be implemented by parameterizing \(\mathcal{E}\) and \(\mathcal{D}\) with equivariant graph neural networks (EGNNs) (Satorras et al., 2021b), which extract both invariant and equivariant embeddings with the property: \[\mathbf{R}\mathbf{z}_{\mathrm{x}}+\mathbf{t},\mathbf{z}_{\mathrm{h}}=\mathcal{E}_{ \phi}(\mathbf{R}\mathbf{x}+\mathbf{t},\mathbf{h});\mathbf{R}\mathbf{x}+\mathbf{t}, \mathbf{h}=\mathcal{D}_{\xi}(\mathbf{R}\mathbf{z}_{\mathrm{x}}+\mathbf{t}, \mathbf{z}_{\mathrm{h}}), \tag{5}\] for all rotations \(\mathbf{R}\) and translations \(\mathbf{t}\). We provide parameterization details of EGNNs in Appendix C. The latent points \(\mathbf{z}_{\mathrm{x}}\) can perform the role of \(\psi\) required in Proposition 4.1, to align the orientation of outputs towards inputs. Furthermore, this point-wise latent space follows the inherent structure of geometries \(\mathcal{G}\), thereby achieving good reconstructions. Then the encoding and decoding processes can be formulated by \(q_{\phi}(\mathbf{z}_{\mathrm{x}},\mathbf{z}_{\mathrm{h}}|\mathbf{x},\mathbf{ h})=\mathcal{N}(\mathcal{E}_{\phi}(\mathbf{x},\mathbf{h}),\sigma_{0}\mathbf{I})\) and \(p_{\xi}(\mathbf{x},\mathbf{h}|\mathbf{z}_{\mathrm{x}},\mathbf{z}_{\mathrm{h}})= \prod_{i=1}^{N}p_{\xi}(x_{i},h_{i}|\mathbf{z}_{\mathrm{x}},\mathbf{z}_{ \mathrm{h}})\) respectively. Following Xu et al. (2022); Hoogeboom et al. (2022) that linear subspaces with the center of gravity always being zero can induce translation-invariant distributions, we also define distributions of latent \(\mathbf{z}_{\mathrm{x}}\) and reconstructed \(\mathbf{x}\) on the subspace that \(\sum_{i}\mathbf{z}_{\mathrm{x},i}\) (or \(\mathbf{x}_{i}\)) \(=0\). The whole framework can be effectively optimized by: \[\begin{split}&\mathcal{L}_{AE}=\mathcal{L}_{recon}+\mathcal{L}_{ rng},\\ &\mathcal{L}_{recon}=-\mathbb{E}_{q_{\phi}(\mathbf{z}_{\mathrm{x}}, \mathbf{z}_{\mathrm{h}}|\mathbf{x},\mathbf{h})}p_{\xi}(\mathbf{x},\mathbf{h}| \mathbf{z}_{\mathrm{x}},\mathbf{z}_{\mathrm{h}}),\end{split} \tag{6}\] which is a reconstruction loss combined with a regularization term. The reconstruction loss in practice is calculated as \(L_{2}\) norm or cross-entropy for continuous or discrete features. For the \(\mathcal{L}_{rng}\) terms we experimented with two variants: _KL-reg_(Rombach et al., 2022), a slight Kullback-Leibler penalty of \(q_{\phi}\) towards standard Gaussians similar to variational AE; and _ES-reg_, an early-stop \(q_{\phi}\) training strategy to avoid a scattered latent space. The regularization prevents latent embeddings from arbitrarily high variance and is thus more suitable for learning the latent DMs (LDMs). ### Geometric Latent Diffusion Models With the equivariant autoencoding functions \(\mathcal{E}_{\phi}\) and \(\mathcal{D}_{\xi}\), now we can represent structures \(\mathcal{G}\) using lower-dimensional latent variables \(\mathbf{z}\) while still keeping geometric properties. 
Compared with the original atomic features which are high-dimensional with complicated data types and scales, the encoded latent space significantly benefits likelihood-based generative models since: (i) as described in Section 4.1, our proposed AEs can be viewed as _regularized autoencoders_(Ghosh et al., 2020), where the latent space is more compact and smoothed, thereby improving DM's training; (ii) latent codes also enjoy lower dimensionality and benefit the generative modeling complexity, since DMs typically operate in the full dimension of inputs. Existing latent generative models for images (Vahdat et al., 2021; Esser et al., 2021) and texts (Li et al., 2022) usually rely on typical autoregressive or diffusion models to model the scalar-valued latent space. By contrast, a fundamental challenge for our method is that the latent space \(\mathbf{z}\) contains not only scalars (_i.e._, invariant features) \(\mathbf{z}_{\text{h}}\) but also tensors (_i.e._, equivariant features) \(\mathbf{z}_{\text{x}}\). This requires the distribution of latent DMs to satisfy the critical invariance: \[p_{\theta}(\mathbf{z}_{\text{x}},\mathbf{z}_{\text{h}})=p_{\theta}(\mathbf{R} \mathbf{z}_{\text{x}},\mathbf{z}_{\text{h}}),\ \forall\ \mathbf{R}. \tag{7}\] Xu et al. (2022) proved that this can be achieved if the initial distribution \(p(\mathbf{z}_{\text{x},T},\mathbf{z}_{\text{h},T})\) is invariant while the transitions \(p_{\theta}(\mathbf{z}_{\text{x},t-1},\mathbf{z}_{\text{h},t-1}|\mathbf{z}_{ \text{x},t},\mathbf{z}_{\text{h},t})\) are equivariant: \[p_{\theta}(\mathbf{z}_{\text{x},t-1},\mathbf{z}_{\text{h},t-1}| \mathbf{z}_{\text{x},t},\mathbf{z}_{\text{h},t})=\\ p_{\theta}(\mathbf{R}\mathbf{z}_{\text{x},t-1},\mathbf{z}_{ \text{h},t-1}|\mathbf{R}\mathbf{z}_{\text{x},t},\mathbf{z}_{\text{h},t}),\ \forall\ \mathbf{R}. \tag{8}\] Xu et al. (2022); Hoogeboom et al. (2022) further show that this can be realized by implementing the denoising dynamics \(\mathbf{\epsilon}_{\theta}\) with equivariant networks such that: \[\mathbf{R}\mathbf{z}_{\text{x},t-1}+\mathbf{t},\mathbf{z}_{\text{h},t-1}=\mathbf{ \epsilon}_{\theta}(\mathbf{R}\mathbf{z}_{\text{x},t}+\mathbf{t},\mathbf{z}_{\text {h},t},t),\ \forall\ \mathbf{R}\text{ and }\mathbf{t}. \tag{9}\] which in practice we parameterize as time-conditional EG-NNs. More model details are also provided in Appendix C. Similar to the encoding posterior, in order to keep translation invariance, all the intermediate states \(\mathbf{z}_{\text{x},t},\mathbf{z}_{\text{h},t}\) are also required to lie on the subspace by \(\sum_{i}\mathbf{z}_{\text{x},t,i}=0\) by moving the center of gravity. Analogous to Equation (3), now we can train the model by: \[\mathcal{L}_{LDM}=\mathbb{E}_{\mathcal{E}(\mathcal{G}),\mathbf{\epsilon}\sim \mathcal{N}(0,\mathbf{I}),t}\big{[}w(t)||\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}( \mathbf{z}_{\text{x},t},\mathbf{z}_{\text{h},t},t)||^{2}\big{]}, \tag{10}\] with \(w(t)\) simply set as \(1\) for all steps \(t\). **Theoretical analysis.** The combined objective for the whole framework, _i.e._, \(\mathcal{L}_{AE}+\mathcal{L}_{LDM}\), appears similar to the standard VAE objective with an additional regularization. We make the formal justification that considering neglecting the minor \(\mathcal{L}_{reg}\) term, \(\mathcal{L}=\mathcal{L}_{recon}+\mathcal{L}_{LDM}\) is theoretically an SE(3)-invariant variational lower bound of log-likelihood: **Theorem 4.2**.: _(informal) Let \(\mathcal{L}:=\mathcal{L}_{recon}+\mathcal{L}_{LDM}\). 
With certain weights \(w(t),\)\(\mathcal{L}\) is an SE(3)-invariant variational lower bound to the log-likelihood, i.e., for any geometries \(\langle\mathbf{x},\mathbf{h}\rangle\), we have:_ \[\mathcal{L}(\mathbf{x},\mathbf{h})\geq-\mathbb{E}_{p_{\text{data }}}[\log p_{\theta,\xi}(\mathbf{x},\mathbf{h})],\text{ and}\] \[\mathcal{L}(\mathbf{x},\mathbf{h})=\mathcal{L}(\mathbf{R} \mathbf{x}+\mathbf{t},\mathbf{h}),\ \forall\ rotation\ \mathbf{R}\text{ and translation }\mathbf{t},\] _where \(p_{\theta,\xi}(\mathbf{x},\mathbf{h})=\mathbb{E}_{p_{\theta}(\mathbf{z}_{ \text{x}},\mathbf{z}_{\text{h}})}p_{\xi}(\mathbf{x},\mathbf{h}|\mathbf{z}_{ \text{x}},\mathbf{z}_{\text{h}})\) is the marginal distribution of \(\langle\mathbf{x},\mathbf{h}\rangle\) under GeoLDM model._ Furthermore, for the induced marginal distribution \(p_{\theta,\xi}(\mathbf{x},\mathbf{h})\), we also hold the equivariance property that: **Proposition 4.3**.: _With decoders and latent DMs defined with equivariant distributions, the marginal \(p_{\theta,\xi}(\mathbf{x},\mathbf{h})=\mathbb{E}_{p_{\theta}(\mathbf{z}_{ \text{x}},\mathbf{z}_{\text{h}})}p_{\xi}(\mathbf{x},\mathbf{h}|\mathbf{z}_{ \text{x}},\mathbf{z}_{\text{h}})\) is an SE(3)-invariant distribution._ These theoretical analysis suggest that GeoLDM is parameterized and optimized in an SE(3)-invariant fashion, which is a critical inductive bias for geometric generative models (Satorras et al., 2021; Xu et al., 2022) and provides explanations as to why our framework can achieve better 3D geometries generation quality. We provide the full statements and proofs in Appendix B ### Training and Sampling With the proposed formulation and practical parameterization, we now present the training and sampling schemes for GeoLDM. While objectives for training Geometric AEs and LDMs are already defined in Equations (6) and (10), it is still unclear whether the two components should be trained one by one, or optimized simultaneously by backpropagation through reparameterizing (Kingma and Welling, 2013). Previous work about latent DMs for image generation (Sinha et al., 2021; Rombach et al., 2022) shows that the two-stage training strategy usually leads to better performance, and we notice similar phenomena in our experiments. This means we first train AE with regularization, and then train the latent DMs on the latent embeddings encoded by the pre-trained encoder. A formal description of the training process is provided in Algorithm 1. With GeoLDM we can formally define a residual generative distribution \(p_{\theta,\xi}(\mathbf{x},\mathbf{h},\mathbf{z}_{\mathbf{x}},\mathbf{z}_{ \mathbf{h}})=p_{\theta}(\mathbf{z}_{\mathbf{x}},\mathbf{z}_{\mathbf{h}})p_{ \xi}(\mathbf{x},\mathbf{h}|\mathbf{z}_{\mathbf{x}},\mathbf{z}_{\mathbf{h}})\), where \(p_{\theta}\) refers to the latent DM modeling the point-structured latent codes, and \(p_{\xi}\) denotes the decoder. We can generate molecular structures by first sampling equivariant latent embeddings from \(p_{\theta}\) and then translating them back to the original geometric space with \(p_{\xi}\). The pseudo-code of the sampling procedure is provided in Algorithm 2. For the number of nodes \(N\), in the above sections, we assume it to be predefined for each data point. In practice, we need to sample different numbers \(N\) for generating molecules of different sizes. We follow the common practice (Satorras et al., 2021) to first count the distribution \(p(N)\) of molecular sizes on the training set. 
Then for generation, we can first sample \(N\sim p(N)\) and then generate latent variables and node features in size \(N\). ### Controllable Generation Similar to other generative models (Kingma and Welling, 2013; Van Den Oord et al., 2016), DMs are also capable of controllable generation with given conditions \(s\), by modeling conditional distributions \(p(\mathbf{z}|s)\). This in DMs can be implemented with conditional denoising networks \(\mathbf{\epsilon}_{\theta}(\mathbf{z},t,s)\), with the critical difference that it takes additional inputs \(s\). In the molecular domain, desired conditions \(s\) typically are chemical properties, which are much lower-dimensional than the text prompts for image generations (Rombach et al., 2022; Ramesh et al., 2022). Therefore, instead of sophisticated cross-attention mechanisms used in text-guided image generation, we follow Hoogeboom et al. (2022) and simply parameterize the conditioning by concatenating \(s\) to node features. Besides, as a whole framework, we also adopt similar concatenation methods for the encoder and decoder,, \(\mathcal{E}_{\phi}(\mathbf{x},\mathbf{h},s)\) and \(\mathcal{D}_{\xi}(\mathbf{z}_{\mathbf{x}},\mathbf{z}_{\mathbf{h}},s)\), to further shift the latent codes towards data distribution with desired properties \(s\). ## 5 Experiments In this section, we justify the advantages of GeoLDM with comprehensive experiments. We first introduce our experimental setup in Section 5.1. Then we report and analyze the evaluation results in Section 5.2 and Section 5.3, for unconditional and conditional generation respectively. We also provide further ablation studies in Appendix E to investigate the effect of several model designs. We leave more implementation details in Appendix D. ### Experiment Setup **Evaluation Task.** Following previous works on molecule generation in 3D (Gebauer et al., 2019; Luo and Ji, 2021; Satorras et al., 2021; Hoogeboom et al., 2022; Wu et al., 2022), we evaluate GeoLDM by comparing with the state-of-the-art approaches on three comprehensive tasks. _Molecular Modeling and Generation_ measures the model's capacity to learn the molecular data distribution and generate chemically valid and structurally diverse molecules. _Controllable Molecule Generation_ concentrates on generating target molecules with desired chemical properties. For this task, we retrain the conditional version GeoLDM on molecular data with corresponding property labels. **Datasets.** We first adopt _QM9_ dataset (Ramakrishnan et al., Figure 2: Molecules generated by GeoLDM trained on QM9 (left three) and DRUG (right four). 2014) for both unconditional and conditional molecule generation. QM9 is one of the most widely-used datasets for molecular machine learning research, which has also been adopted in previous 3D molecule generation studies (Gebauer et al., 2019, 2021). QM9 contains 3D structures together with several quantum properties for 130k small molecules, limited to 9 heavy atoms (29 atoms including hydrogens). Following (Anderson et al., 2019), we split the train, validation, and test partitions, with 100K, 18K, and 13K samples. For the molecule generation task, we also test GeoLDM on the _GEOM-DRUG_ (Geometric Ensemble Of Molecules) dataset. The DRUG dataset consists of much larger organic compounds, with up to 181 atoms and 44.2 atoms on average, in 5 different atom types. It covers 37 million molecular conformations for around 450,000 molecules, labeled with energy and statistical weight. 
We follow the common practice (Hoogeboom et al., 2022) to select the 30 lowest energy conformations of each molecule for training. ### Molecular Modeling and Generation **Evaluation Metrics.** We measure model performances by evaluating the chemical feasibility of generated molecules, indicating whether the model can learn chemical rules from data. Given molecular geometries, we first predict bond types (single, double, triple, or none) by pair-wise atomic distances and atom types. Then we calculate the _atom stability_ and _molecule stability_ of the predicted molecular graph. The first metric captures the proportion of atoms that have the right valency, while the latter is the proportion of generated molecules for which all atoms are stable. In addition, We report _validity_ and _uniqueness_ metrics, which are the percentages of valid (measured by RDKit) and unique molecules among all the generated compounds. **Baselines.** We compare GeoLDM to several competitive baseline models. _G-Schnet_(Gebauer et al., 2019) and Equivariant Normalizing Flows (_ENF_) (Satorras et al., 2021) are previous equivariant generative models for molecules, based on autoregressive and flow-based models respectively. Equivariant Graph Diffusion Models (_EDM_) with its non-equivariant variant (_GDM_) (Hoogeboom et al., 2022) are recent progress on diffusion models for molecule generation. Most recently, Wu et al. (2022) proposed an improved version of EDM (_EDM-Bridge_), which further boosts the performance with well-designed informative prior bridges. To yield a fair comparison, all the baseline models use the same parameterization and training configurations as described in Section 5.1. **Results and Analysis.** We generate \(10,000\) samples from each method to calculate the above metrics, and the results are reported in Table 1. As shown in the table, GeoLDM outperforms competitive baseline methods on all metrics with an obvious margin. It is worth noticing that, for the DRUG dataset, even ground-truth molecules have \(86.5\%\) atom-level and nearly \(0\%\) molecule-level stability. This is because the DRUG molecules contain larger and more complex structures, creating errors during bond type prediction based on pair-wise atom types and distances. Furthermore, as DRUG contains many more molecules with diverse compositions, we also observe that _unique_ metric is almost \(100\%\) for all methods. Therefore, we omit the _molecule stability_ and _unique_ metrics for the DRUG dataset. Overall, the superior performance demonstrates GeoLDM's higher capacity to model the molecular distribution and generate chemically realistic molecular geometries. 
We provide visualization of \begin{table} \begin{tabular}{l|c c c c|c c} \hline \hline & & \multicolumn{3}{c|}{**QM9**} & \multicolumn{2}{c}{**DRUG**} \\ \# Metrics & Atom Sta (\%) & Mol Sta (\%) & Valid (\%) & Valid \& Unique (\%) & Atom Sta (\%) & Valid (\%) \\ \hline Data & 99.0 & 95.2 & 97.7 & 97.7 & 86.5 & 99.9 \\ \hline ENF & 85.0 & 4.9 & 40.2 & 39.4 & - & - \\ G-Schnet & 95.7 & 68.1 & 85.5 & 80.3 & - & - \\ GDM & 97.0 & 63.2 & - & - & 75.0 & 90.8 \\ GDM-aug & 97.6 & 71.6 & 90.4 & 89.5 & 77.7 & 91.8 \\ EDM & 98.7 & 82.0 & 91.9 & 90.7 & 81.3 & 92.6 \\ EDM-Bridge & 98.8 & 84.6 & 92.0* & 90.7 & 82.4 & 92.8* \\ \hline **GraphLDM** & 97.2 & 70.5 & 83.6 & 82.7 & 76.2 & 97.2 \\ **GraphLDM-aug** & 97.9 & 78.7 & 90.5 & 89.5 & 79.6 & 98.0 \\ \hline **GEOLDM** & **98.9**\(\pm\) 0.1 & **89.4**\(\pm\) 0.5 & **93.8**\(\pm\) 0.4 & **92.7**\(\pm\) 0.5 & **84.4** & **99.3** \\ \hline \hline \end{tabular} *Results obtained by our own experiments. Other results are borrowed from recent studies (Hoogeboom et al., 2022; Wu et al., 2022). \end{table} Table 1: Results of atom stability, molecule stability, validity, and validity\(\times\)uniqueness. A higher number indicates a better generation quality. Metrics are calculated with 10000 samples generated from each model. On QM9, we run the evaluation for 3 times and report the derivation. Note that, for DRUG dataset, molecule stability and uniqueness metric are omitted since they are nearly \(0\%\) and \(100\%\) respectively for all the methods. Compared with previous methods, the latent space with both invariant and equivariant variables enables GeoLDM to achieve up to 7% improvement for the validity of large molecule generation. randomly generated molecules in Figure 2, and leave more visualizations in Appendix F. **Ablation Study.** Furthermore, to verify the benefits of incorporating equivariant latent features, we conduct ablation studies with only invariant variables in the latent space, called Graph Latent Diffusion Models (GraphLDM). We run GraphLDM with the same configuration as our method, except that all modules (_i.e._, encoder, decoder, and latent diffusion models) are instead equipped with typical non-equivariant graph networks. We also follow Hoogeboom et al. (2022) to test GDM-aug and GraphLDM-aug, where models are trained with data augmented by random rotations. Table 1 shows the empirical improvement of GeoLDM over these ablation settings, which verifies the effectiveness of our latent equivariance design. ### Controllable Molecule Generation **Evaluation Metrics.** In this task, we aim to conduct controllable molecule generation with the given desired properties. This can be useful in realistic settings of material and drug design where we are interested in discovering molecules with specific property preferences. We test our conditional version of GeoLDM on QM9 with 6 properties: polarizability \(\alpha\), orbital energies \(\varepsilon_{\mathrm{HOMO}}\), \(\varepsilon_{\mathrm{LUMO}}\) and their gap \(\Delta\varepsilon\), Dipole moment \(\mu\), and heat capacity \(C_{v}\). For evaluating the model's capacity to conduct property-conditioned generation, we follow Satorras et al. (2021) to first split the QM9 training set into two halves with \(50K\) samples in each. Then we train a property prediction network \(\omega\) on the first half, and train conditional models on the second half. 
Afterward, given a range of property values \(s\), we conditionally draw samples from the generative models and then use \(\omega\) to calculate their property values \(\hat{s}\). We report the _Mean Absolute Error (MAE)_ between \(s\) and \(\hat{s}\) to measure whether the generated molecules are close to their conditioned property. We also test the MAE of directly running \(\omega\) on the second half of QM9, named _QM9_ in Table 2, which measures the bias of \(\omega\). A smaller gap with the _QM9_ numbers indicates better property-conditioning performance.

**Baselines.** We use the existing EDM as our baseline model. In addition, we follow Hoogeboom et al. (2022) and list two baselines agnostic to the ground-truth property \(s\), named _Random_ and \(N_{\text{atoms}}\). _Random_ means we simply shuffle the property labels in the dataset at random and then evaluate \(\omega\) on it. This operation removes any relation between molecule and property, and can be viewed as an upper bound of the _MAE_ metric. \(N_{\text{atoms}}\) predicts the molecular properties using only the number of atoms in the molecule. Improvement over _Random_ verifies that a method is able to incorporate conditional property information into the generated molecules, and overcoming \(N_{\text{atoms}}\) further indicates that the model incorporates conditioning into molecular structures beyond the number of atoms.

**Results and Analysis.** We first provide a visualization of controlled molecule generation by GeoLDM in Figure 3, as a qualitative assessment. We interpolate the conditioning property with different polarizability values \(\alpha\) while keeping the reparameterization noise \(\epsilon\) fixed. Polarizability refers to the tendency of matter, when subjected to an electric field, to acquire an electric dipole moment in proportion to that applied field. Typically, less isometric molecular geometries lead to larger \(\alpha\) values, which is consistent with the phenomenon observed in Figure 3. We report the numerical results in Table 2. As shown in the table, GeoLDM significantly outperforms the baseline models, including the previous diffusion model operating on atomic features (EDM), on all property metrics. The results demonstrate that by modeling in the latent space, GeoLDM acquires a higher capacity to incorporate the given property information into the generation process.

\begin{table} \begin{tabular}{l|c c c c c c} \hline \hline Property & \(\alpha\) & \(\Delta\varepsilon\) & \(\varepsilon_{\mathrm{HOMO}}\) & \(\varepsilon_{\mathrm{LUMO}}\) & \(\mu\) & \(C_{v}\) \\ Units & Bohr\({}^{3}\) & meV & meV & meV & D & \(\frac{\mathrm{cal}}{\mathrm{mol\,K}}\) \\ \hline QM9* & 0.10 & 64 & 39 & 36 & 0.043 & 0.040 \\ \hline Random* & 9.01 & 1470 & 645 & 1457 & 1.616 & 6.857 \\ \(N_{\text{atoms}}\) & 3.86 & 866 & 426 & 813 & 1.053 & 1.971 \\ EDM & 2.76 & 655 & 356 & 584 & 1.111 & 1.101 \\ \hline GeoLDM & 2.37 & 587 & 340 & 522 & 1.108 & 1.025 \\ \hline \hline \end{tabular} * The results of _QM9_ and _Random_ can be viewed as lower and upper bounds of the MAE on all properties. \end{table}
Table 2: Mean Absolute Error for molecular property prediction. A lower number indicates a better controllable generation result. Results are predicted by a pretrained EGNN classifier \(\omega\) on molecular samples extracted from the individual methods.

Figure 3: Molecules generated by conditional GeoLDM. We conduct controllable generation by interpolating among different polarizability values \(\alpha\) with the same reparameterization noise \(\epsilon\). The given \(\alpha\) values are provided at the bottom.
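As a schematic illustration of this evaluation protocol (with `model.sample_conditional` and `omega` as hypothetical interfaces, not the paper's actual API), the MAE computation amounts to:

```python
# Hypothetical sketch of the conditional evaluation loop described above:
# sample a molecule at each target property s, score it with the pretrained
# predictor omega, and report the mean absolute error.
import numpy as np

def conditional_mae(model, omega, targets):
    preds = [omega(model.sample_conditional(s)) for s in targets]  # s_hat for each s
    return float(np.mean(np.abs(np.asarray(preds) - np.asarray(targets))))
```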
## 6 Conclusion and Future Work

We presented GeoLDM, a novel latent diffusion model for molecular geometry generation. While current models operate directly on high-dimensional, multi-modal atom features, GeoLDM overcomes their limitations by learning diffusion models over a continuous, lower-dimensional latent space. By building point-structured latent codes with both invariant scalars and equivariant tensors, GeoLDM is able to effectively learn latent representations while maintaining roto-translational equivariance. Experimental results demonstrate its significantly better capacity for modeling chemically realistic molecules. For future work, as a general and principled framework, GeoLDM can be extended to various 3D geometric generation applications, _e.g._, applying GeoLDM in more realistic drug discovery scenarios with given protein targets, or scaling it up to more challenging 3D geometries such as peptides and proteins.

## Acknowledgements

We thank Tailin Wu, Aaron Lou, Xiang Lisa Li, and Kexin Huang for discussions and for providing feedback on our manuscript. We also gratefully acknowledge the support of DARPA under Nos. HR00112190039 (TAMI), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), NIH under No. 3U54HG010426-04S1 (HuBMAP), Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Amazon, Docomo, GSK, Hitachi, Intel, JPMorgan Chase, Juniper Networks, KDDI, NEC, and Toshiba. We also gratefully acknowledge the support of NSF (#1651565), ARO (W911NF-21-1-0125), ONR (N00014-23-1-2159), CZ Biohub, Stanford HAI. We also gratefully acknowledge the support of Novo Nordisk A/S. Minkai Xu thanks the generous support of the Sequoia Capital Stanford Graduate Fellowship.
2303.13812
Rectangular matrix additions in low and high temperatures
We study the addition of two independent $M\times N$ rectangular random matrices with invariant distributions in two limit regimes, where the parameter beta (inverse temperature) goes to infinity and zero. In the low temperature regime the random singular values of the sum concentrate at deterministic points, while in the high temperature regime we obtain a Law of Large Numbers for the empirical measures. As a consequence, we deliver a duality between low and high temperatures. Our proof uses the type BC Bessel function as the characteristic function of rectangular matrices, and through the analysis of this function we introduce a new family of cumulants that linearize the addition in the high temperature limit and degenerate to the classical or free cumulants in special cases.
Jiaming Xu
2023-03-24T05:13:45Z
http://arxiv.org/abs/2303.13812v5
# Rectangular matrix additions in low and high temperatures

###### Abstract.

We study the addition of two independent \(M\times N\) rectangular random matrices with invariant distributions in two limit regimes, where the parameter beta (inverse temperature) goes to infinity and zero. In the low temperature regime the random singular values of the sum concentrate at deterministic points, while in the high temperature regime we obtain a Law of Large Numbers for the empirical measures. As a consequence, we deliver a duality between low and high temperatures. Our proof uses the type BC Bessel function as the characteristic function of rectangular matrices, and through the analysis of this function we introduce a new family of cumulants that linearize the addition in the high temperature limit and degenerate to the classical or free cumulants in special cases.

## 1. Introduction

### Overview

Addition is one of the most natural operations on matrices. For deterministic matrices, a classical question was posed by Weyl [W] in 1912: consider the eigenvalues \(c_{1}\leq...\leq c_{N}\) of \(C=A+B\), where \(A,B\) are two arbitrary self-adjoint \(N\times N\) matrices with fixed real eigenvalues \(a_{1}\leq...\leq a_{N}\) and \(b_{1}\leq...\leq b_{N}\), and describe all possible values of \(c_{1}\leq...\leq c_{N}\). The question was solved by the end of the \(XX^{\text{th}}\) century through the combined efforts of Horn, Klyachko, Knutson-Tao, and others; see e.g. [Ho], [Kl], [KT]. In random matrix theory, one usually assumes that the summands \(A\) and \(B\) are random, independent, and share certain symmetries, and the study of this type of question has significant connections with free probability theory. A well-known classical result connecting random matrix addition and free probability is due to Voiculescu [Vo]; it considers the addition of two independent real/complex/real quaternionic self-adjoint matrices and relates its asymptotic behavior, as the size of the matrix grows, to the notion of _free convolution_. There is also a classical result of similar flavor in the rectangular setting. Take \(\{A_{M}\}_{M=1}^{\infty}\), \(\{B_{M}\}_{M=1}^{\infty}\) to be two independent sequences of \(M\times N\) (\(M\leq N\)) matrices with real/complex/real quaternionic entries, chosen uniformly from the set of rectangular matrices with given singular values \(a_{M,1}\geq...\geq a_{M,M}\geq 0\) and \(b_{M,1}\geq...\geq b_{M,M}\geq 0\), and let \(C_{M}=A_{M}+B_{M}\) with random singular values \(c_{M,1}\geq...\geq c_{M,M}\geq 0\).

**Definition 1.1**.: _For an \(M\times N\) (\(M\leq N\)) matrix \(A\) with singular values \(a_{1},...,a_{M}\geq 0\), define its (symmetric) empirical measure to be_

\[\mu_{A}:=\frac{1}{2M}\sum_{i=1}^{M}(\delta_{a_{i}}+\delta_{-a_{i}}).\]

**Theorem 1.2**.: _[_B1_, Proposition 2.1]_ _Define \(\{A_{M}\}_{M=1}^{\infty}\), \(\{B_{M}\}_{M=1}^{\infty}\) as above.
Assume that \(M,N\to\infty\) in such a way that \(N(M)/M\to q\) for some constant \(q\geq 1\), and that there exist deterministic probability measures \(\mu_{A}\), \(\mu_{B}\) on \(\mathbb{R}\) such that_

\[\lim_{M\to\infty}\mu_{A_{M}}=\mu_{A},\quad\lim_{M\to\infty}\mu_{B_{M}}=\mu_{B}.\]

_Then the random empirical measure of \(C\), \(\mu_{C_{M}}=\frac{1}{2M}\sum_{i=1}^{M}(\delta_{c_{M,i}}+\delta_{-c_{M,i}})\), converges weakly in probability to a deterministic probability measure \(\mu_{C}\) on \(\mathbb{R}\)._

\(\mu_{C}=\mu_{A}\boxplus_{q}\mu_{B}\) _is called the rectangular free convolution of \(\mu_{A}\) and \(\mu_{B}\)._

The rectangular free convolution is a deterministic binary operation on measures on \(\mathbb{R}\) that does not itself rely on a random matrix structure, and it has been well studied in free probability theory from different aspects. In particular, for each measure \(\mu\) with finite moments, there exists a collection of _rectangular free cumulants_ \(\{c_{l}^{q}\}_{l=1}^{\infty}\) (see [B1, Section 3.1]) that are polynomials of the moments with explicit expressions, and these quantities linearize the rectangular free convolution, i.e., \(c_{l}^{q}(\mu_{A}\boxplus_{q}\mu_{B})=c_{l}^{q}(\mu_{A})+c_{l}^{q}(\mu_{B})\) for all \(l\). It turns out that the existence of such cumulants is a common feature of the various versions of convolutions in free probability theory, and each convolution is characterized by its corresponding cumulants.

On the other side, there have been many papers studying additions of \(\beta\)-ensembles of random matrix theory that generalize the above theory in different parameter regimes. The parameter \(\beta>0\) is interpreted in physics as inverse temperature, and the cases \(\beta=1,2,4\) correspond to matrices with real/complex/real quaternionic entries. There are two classes of matrix ensembles, the \(N\times N\) self-adjoint matrices and the \(M\times N\) rectangular matrices, whose most classical examples are the Gaussian ensembles and the Laguerre ensembles, respectively. For the first class, [GM] studies the limit behavior of the eigenvalues of \(C=A+B\) when \(N\) is fixed and \(\beta\to\infty\), [BCG] proves a Law of Large Numbers similar to Theorem 1.2 when \(N\to\infty,\theta\to 0,N\theta\to\gamma>0\), and [MSS], [AP], [GM] extend the theory of convolutions and cumulants to finite matrix additions for \(\beta>0\). However, extending Theorem 1.2 to general \(\beta>0\) has remained open. The second class is relatively less understood. [B1], [B2] study the so-called rectangular free convolution for \(\beta=1,2\), \(N,M\to\infty\) and \(N/M\to q\geq 1\), and [GrM], [Gri] study the finite free convolution and cumulants for rectangular matrix additions for \(\beta=1,2\).

The matrix ensemble considered in this text belongs to the second class, and we study the limiting behavior of the singular values of \(C=A+B\) in both the low and high temperature regimes, more precisely, when \(M,N\) are fixed and \(\theta\to\infty\), and when \(M,N\to\infty,\theta\to 0,M\theta\to\gamma>0,N\theta\to q\gamma\) for some \(q\geq 1\). Note that even defining the operation \(C=A+B\) for \(\beta\neq 1,2,4\) is non-trivial, and this is one of our tasks. Our approach is based on distributions of rectangular matrices in a version of characteristic function.
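For \(\beta=2\), the setting of Theorem 1.2 is easy to simulate. The following minimal numerical sketch (our illustration, not part of the original argument) samples two independent bi-unitarily invariant matrices with prescribed singular values and collects the symmetric empirical measure of the singular values of their sum:

```python
# Sketch of Theorem 1.2 for beta = 2: add two independent bi-unitarily invariant
# M x N matrices with prescribed singular values; the symmetric empirical measure
# of the singular values of the sum approximates the rectangular free convolution.
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # QR of a complex Ginibre matrix, with phase correction, yields a Haar unitary.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def invariant_matrix(sing_vals, M, N):
    lam = np.zeros((M, N))
    lam[:M, :M] = np.diag(sing_vals)          # the M x N "Lambda" of Definition 1.1's setting
    return haar_unitary(M) @ lam @ haar_unitary(N)

M, N = 200, 300
a = np.ones(M)                                # all singular values of A equal to 1
b = 2 * np.ones(M)                            # all singular values of B equal to 2
C = invariant_matrix(a, M, N) + invariant_matrix(b, M, N)
c = np.linalg.svd(C, compute_uv=False)
support = np.concatenate([c, -c])             # atoms at +-c_i, mass 1/(2M) each
print(support.mean(), (support**2).mean())    # first two symmetric moments
```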
The symmetry of self-adjoint/rectangular matrices with fixed eigenvalues/singular values is given by invariance under the actions of the classical Lie groups \(O(N)/U(N)/Sp(N)\), and when \(\beta=1,2,4\), the matrix characteristic functions have matrix integral representations with a representation-theoretic background. Such functions have a natural analytic continuation to all \(\beta>0\), and can be identified as eigenfunctions of certain differential operators. See the papers [BCG], [GM], [GS] for applications of this idea in random matrices, and also [BG1], [BG2], [H] for the study of more general \(N\)-particle systems using symmetric characteristic functions of a similar flavor. While the above works deal with self-adjoint matrices, or more generally \(N\)-particle systems corresponding to the root system of type A with a single root multiplicity \(\theta=\beta/2\), rectangular matrices correspond to the root system of type BC, which has two distinct root multiplicities parameterized by \(\beta\) in a more involved way. For more connections of type BC Lie-theoretic objects with probability, see e.g. [KVW], [V], [VW].

In this text, the randomness of an M-tuple of nonnegative real numbers (which should be thought of as the singular values of some \(M\times N\) random matrix) is characterized by a multivariate symmetric function, known as the type BC Bessel function in the special functions literature. It is also a special case of the symmetric Dunkl kernel, which generalizes the usual Fourier kernel to nontrivial root multiplicities; see [A] for a review. Motivated by the asymptotic behavior in the high temperature regime, we apply and further develop the philosophy that the limits of the partial derivatives of the logarithm of our characteristic function at \(0\) give a collection of cumulants, and that the existence of such cumulants is equivalent to the existence of limiting moments, which implies that the empirical measures of the random M-tuples satisfy a Law of Large Numbers. These new q-\(\gamma\) cumulants are designed to linearize the rectangular addition in the regime \(M\theta\to\gamma,N\theta\to q\gamma\), and the operation itself in this limit regime is called the q-\(\gamma\) convolution. Similar to classical and free cumulants, they have a nice combinatorial relation with moments. Finally, we point out that there is a surprising identification of the q-\(\gamma\) theory with the rectangular finite free probability theory, which was developed in [MSS], [GrM], [Gri] while studying finite rectangular matrix additions.

### Rectangular matrix addition

Throughout the text we always take \(\beta=2\theta>0\), and \(\beta=1,2,4\) (\(\theta=\frac{1}{2},1,2\)) correspond to the (skew) fields of real, complex and real quaternionic numbers (whose real dimensions are given by \(\beta\)). For \(M\leq N\), given two \(M\times N\) independent random matrices \(A\) and \(B\), we study the randomness of the sum

\[C=A+B.\]

Inspired by the classical theory of summing independent random variables \(X+Y\), namely, that the characteristic function satisfies

\[\phi_{X+Y}(t)=\phi_{X}(t)\cdot\phi_{Y}(t),\]

where \(t\in\mathbb{R}\) is the parameter variable, we have

**Proposition 1.3**.: _For \(\theta=\frac{1}{2},1,2\), let \(A\) and \(B\) be \(M\times N\) rectangular independent random matrices, let \(Z\) be an arbitrary deterministic \(N\times M\) matrix with real/complex/real quaternionic entries, and let \(C=A+B\).
We have_

\[\mathbb{E}\Big{[}\exp\Big{(}Re(Tr(CZ))\Big{)}\Big{]}=\mathbb{E}\Big{[}\exp\Big{(}Re(Tr(AZ))\Big{)}\Big{]}\cdot\mathbb{E}\Big{[}\exp\Big{(}Re(Tr(BZ))\Big{)}\Big{]}. \tag{1.1}\]

Proof.: \(Re(Tr(CZ))=Re(Tr(AZ))+Re(Tr(BZ))\), and since \(A,B\) are independent, the expectation of the exponential function factors.

Let us now rewrite (1.1) in terms of the singular values of \(A,B\) and \(C\); for simplicity, first take \(\theta=1\), i.e., we deal with complex matrices. In this text, we consider summands \(A\) and \(B\) whose distributions are invariant under left and right unitary actions, i.e.,

\[A\stackrel{{ d}}{{=}}UAV,\ B\stackrel{{ d}}{{=}}UBV, \tag{1.2}\]

where \(U\in U(M),V\in U(N)\) are arbitrary unitary matrices. One example is the real/complex/real quaternionic \(M\times N\) Wishart matrix, with i.i.d. mean-0 Gaussian entries. Note that if \(A,B\) satisfy (1.2), then so does \(C\). For simplicity, in the following discussion we usually stick to \(A\). By the singular value decomposition, it is useful to write \(A\) as \(U\Lambda V\), where

\[\Lambda=\begin{bmatrix}a_{1}&&&&0&...&0\\ &a_{2}&&&&0&...&0\\ &&...&&&&\\ &&&...&&&\\ &&&a_{M}&0&...&0\end{bmatrix}_{M\times N}, \tag{1.3}\]

\(\vec{a}=(a_{1},...,a_{M})\in\mathbb{R}_{\geq 0}^{M}\), and \(U\in U(M),V\in U(N)\) are random elements distributed according to the Haar measures on the corresponding unitary groups. For now, assume that the singular values of \(A\) are deterministic. One can consider rectangular matrices \(A\) with real/real quaternionic entries and define invariant distributions in exactly the same way, replacing the Haar distributed \(U,V\) by elements of the orthogonal/unitary symplectic groups \(O(M)/Sp(M)\), \(O(N)/Sp(N)\). Similarly for \(B\).

The eigenvectors of \(AA^{*},A^{*}A\), \(BB^{*},B^{*}B\) are distributed uniformly, and so are the eigenvectors of \(CC^{*},C^{*}C\). Hence the nontrivial randomness of \(C\) lies in its singular values. Also, because of the singular value decomposition in (1.2), we can replace the parameter matrix \(Z\) by the form in (1.5), where \(z_{1},...,z_{M}\) are its singular values. Therefore, we can rewrite the matrix Fourier transform of \(A\) in (1.1) as a function \(\mathbb{B}(\vec{a};z_{1},...,z_{M};\theta,N)\), such that

\[\mathbb{B}(\vec{a};z_{1},z_{2},...,z_{M};\theta,N)=\int dU\int dV\exp\left(\frac{1}{2}Tr(U\Lambda VZ+Z^{*}V^{*}\Lambda^{*}U^{*})\right), \tag{1.4}\]

where \(\Lambda\) is defined in the same way as in (1.3), and

\[Z=\begin{bmatrix}z_{1}&&&&\\ &z_{2}&&&&\\ &&...&&\\ &&&...&\\ &&&&z_{M}\\ 0&...&&...&0\\ &&...&&\\ 0&...&&...&0\end{bmatrix}_{N\times M}, \tag{1.5}\]

\(\theta\) takes the value \(\frac{1}{2},1,2\), \((z_{1},...,z_{M})\in\mathbb{R}^{M}\), and \(U\in O(M)/U(M)/Sp(M)\), \(V\in O(N)/U(N)/Sp(N)\) are integrated under the respective Haar measures. The function \(\mathbb{B}(\vec{a};z_{1},...,z_{M};\theta,N)\) is known as the multivariate type BC Bessel function in a general theoretical framework of special functions. We briefly review this framework in Appendix A, and give more information about the type BC Bessel function in Sections 2.2 and 2.3 based on that theory. We note that because of the symmetry of the Haar measure, \(\mathbb{B}(\vec{a};z_{1},...,z_{M};\theta,N)\) is symmetric in both \((a_{1},...,a_{M})\) and \((z_{1},...,z_{M})\), and without loss of generality we can always take \(a_{1}\geq a_{2}\geq...\geq a_{M}\). The same holds for the matrices \(B\) and \(C\). Proposition 1.3 can then be rewritten as the following result.
**Proposition 1.4**.: _For \(\theta=\frac{1}{2},1,2\), fix \(a_{1}\geq...\geq a_{M}\geq 0\), \(b_{1}\geq...\geq b_{M}\geq 0\), let \(A_{M\times N}\) and \(B_{M\times N}\) be real/complex/real quaternionic rectangular matrices with deterministic singular values \(\{a_{i}\}_{i=1}^{M},\{b_{i}\}_{i=1}^{M}\) and invariant distributions, as in (1.2), and let \(C=A+B\) have singular values \(\vec{c}=(c_{1}\geq...\geq c_{M}\geq 0)\). Then_

\[\mathbb{E}\left[\mathbb{B}(\vec{c};z_{1},...,z_{M};\theta,N)\right]=\mathbb{B}(\vec{a};z_{1},...,z_{M};\theta,N)\cdot\mathbb{B}(\vec{b};z_{1},...,z_{M};\theta,N),\quad(z_{1},...,z_{M})\in\mathbb{R}^{M}. \tag{1.6}\]

For general \(\beta>0\), there is no skew field of real dimension \(\beta\) and therefore there are no concrete \(\beta\)-rectangular matrices. Motivated by Proposition 1.4, we first identify an invariant \(M\times N\) matrix with uniform "singular vectors" and deterministic singular values \(a_{1},...,a_{M}\) with the M-tuple \(\vec{a}\). Moreover, it is known (see e.g. [F, Section 13.4.3], [GT]) that the multivariate Bessel functions admit a natural extrapolation from \(\theta=\frac{1}{2},1,2\) to arbitrary real \(\theta>0\). We still denote this function by \(\mathbb{B}(\vec{a};z_{1},...,z_{M};\theta,N)\) for all \(\theta>0\). Hence, the \(M\times N\) matrix addition (of independent summands) is extended to all \(\theta>0\) by generalizing Proposition 1.4.

**Definition 1.5**.: _Fix \(\theta>0\), \(M\leq N\), \(\vec{a}=(a_{1}\geq...\geq a_{M}\geq 0)\), \(\vec{b}=(b_{1}\geq...\geq b_{M}\geq 0)\), and let \(\vec{c}\) be a symmetric random vector in \(\mathbb{R}_{\geq 0}^{M}\) such that_

\[\mathbb{E}\Big{[}\mathbb{B}(\vec{c};z_{1},...,z_{M};\theta,N)\Big{]}=\mathbb{B}(\vec{a};z_{1},...,z_{M};\theta,N)\cdot\mathbb{B}(\vec{b};z_{1},...,z_{M};\theta,N),\quad(z_{1},...,z_{M})\in\mathbb{R}^{M}. \tag{1.7}\]

_We write_

\[\vec{c}=\vec{a}\,\boxplus_{M,N}^{\theta}\,\vec{b}.\]

From the probabilistic point of view, \(\vec{c}\) is identified as the singular values of the "virtual" random \(M\times N\) matrix \(C=A\,\boxplus_{M,N}^{\theta}\,B\), and \(\mathbb{B}(\vec{c};z_{1},...,z_{M};\theta,N)\) serves as the characteristic function of \(C\). From the analytic point of view, the operation in (1.7) has been studied previously in the context of the Dunkl kernel and Dunkl translation; see [A, Section 3.6]. The expectation on the left of (1.7) is understood in the sense that there exists a unique generalized function1 \(\mathfrak{m}\) on \(\mathbb{R}_{\geq 0}^{M}\), depending on \(\vec{a}\) and \(\vec{b}\), such that for any \((z_{1},...,z_{M})\in\mathbb{R}^{M}\), testing on \(\mathbb{B}(\cdot;z_{1},...,z_{M};\theta,N)\) yields the right side of (1.7); in particular, taking \(z_{1}=...=z_{M}=0\) gives \(\mathfrak{m}(1)=1\). Note that \(\mathfrak{m}\) is symmetric in the sense that, for any proper test function \(f\) and any permutation \(\sigma\),

\[\langle\mathfrak{m},f(c_{1},...,c_{M})\rangle=\langle\mathfrak{m},f(c_{\sigma(1)},...,c_{\sigma(M)})\rangle,\]

where \(\langle\mathfrak{m},f\rangle\) denotes the value of the functional \(\mathfrak{m}\) tested on \(f\). Moreover, by [A, Lemma 3.23], \(\mathfrak{m}\) is compactly supported.

Footnote 1: Throughout this text, we use the term "generalized function" instead of "distribution" to denote a linear functional on smooth functions, in order to avoid confusion with the probability distribution.
The rectangular addition \(\vec{a}\,\boxplus_{M,N}^{\theta}\,\vec{b}\) can also be naturally generalized to independent random M-tuples \(\vec{a},\vec{b}\), by first conditioning on the event that \(\vec{a},\vec{b}\) take some fixed values, and then applying Definition 1.5. Formally, for a random M-tuple \(\vec{a}\) we replace the type BC Bessel function by

\[G_{N;\theta}(z_{1},...,z_{M}):=\mathbb{E}\Big{[}\mathbb{B}(\vec{a},z_{1},...,z_{M};N,\theta)\Big{]}, \tag{1.8}\]

the type BC Bessel generating function of \(\vec{a}\), and we assume the randomness of \(\vec{a}\) to be reasonable, in the sense that the right side of (1.8) is finite and well-behaved as an analytic function of \((z_{1},...,z_{M})\in\mathbb{R}^{M}\). See Section 2.5 for more details.

### Low and high temperature behavior

Viewing \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) as the random M-tuple of singular values of some \(M\times N\) virtual rectangular matrix with invariant distribution, it is natural to study the behavior of \(\vec{c}\) from a random matrix point of view. The distribution of \(\vec{c}\) depends on the summands \(\vec{a},\vec{b}\) and the parameters \(M\leq N,\theta>0\). In various regimes of the parameters, one can pose the following questions:

1. For fixed \(M,N,\theta\), is the distribution of \(\vec{c}\) given by a probability measure on \(\mathbb{R}_{\geq 0}^{M}\), and how do \(\vec{a},\vec{b}\) explicitly determine this measure?
2. What is the "low temperature" behavior of \(\vec{c}\), i.e., when \(M,N\) are fixed and \(\theta\to\infty\)?
3. What is the "fixed temperature" behavior of \(\vec{c}\), i.e., when \(\theta\) is fixed and \(M,N\to\infty\)?
4. What is the "high temperature" behavior of \(\vec{c}\), i.e., when \(\theta\to 0\) and \(M,N\to\infty\), growing at potentially different speeds?

This text answers questions 2 and 4. For question 1, it is widely believed (but still open) that the generalized function underlying \(\vec{c}\) is indeed a probability measure, see e.g. [A, Section 3.5]; this is related to the positivity conjecture of Littlewood-Richardson coefficients in the theory of symmetric functions, see [St, Conjecture 8.3], [Ro2] or [GM, Conjecture 2.1]. We do not rely on this conjecture and instead analyze moments of the distribution of \(\vec{c}\), which can be defined whether or not the positivity conjecture holds. See Proposition 2.26 for the precise statement.

In the low temperature regime, we observe that the random M-tuples become "frozen" at deterministic positions. More precisely, we have the following statement. Let \(1\leq M\leq N\), let \(z\) be a formal variable, let \((a_{1},...,a_{M}),(b_{1},...,b_{M})\in\mathbb{R}_{\geq 0}^{M}\), and define the polynomial \(P_{M,N}(z)\) by

\[P_{M,N}(z)=\sum_{l=0}^{M}(-1)^{l}\Bigg{(}\sum_{i,j\geq 0,\,i+j=l}\frac{(M-i)!(M-j)!}{M!(M-l)!}\,\frac{(N-i)!(N-j)!}{N!(N-l)!}\,e_{i}(a_{1}^{2},...,a_{M}^{2})e_{j}(b_{1}^{2},...,b_{M}^{2})\Bigg{)}z^{M-l}. \tag{1.9}\]
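The polynomial (1.9) is straightforward to evaluate numerically. The following sketch (our illustration, directly transcribing the stated formula) computes its roots from the squared singular values of the summands, anticipating Theorem 1.6 below:

```python
# Minimal sketch of (1.9): build P_{M,N}(z) from the squared singular values of
# the summands and find its roots, which by Theorem 1.6 are the theta -> infinity
# limits of the c_i^2 (all real and nonnegative by Remark 1.7).
from itertools import combinations
from math import factorial, prod
import numpy as np

def esym(k, xs):
    """Elementary symmetric polynomial e_k(x_1, ..., x_M)."""
    return 1.0 if k == 0 else float(sum(prod(c) for c in combinations(xs, k)))

def P_MN_roots(a, b, N):
    M = len(a)
    a2, b2 = [x * x for x in a], [x * x for x in b]
    coeffs = []                       # coefficient of z^(M-l) for l = 0, ..., M
    for l in range(M + 1):
        total = 0.0
        for i in range(l + 1):
            j = l - i
            total += (factorial(M - i) * factorial(M - j) / (factorial(M) * factorial(M - l))
                      * factorial(N - i) * factorial(N - j) / (factorial(N) * factorial(N - l))
                      * esym(i, a2) * esym(j, b2))
        coeffs.append((-1) ** l * total)
    return np.roots(coeffs)           # np.roots takes coefficients, highest power first

print(P_MN_roots([2.0, 1.0], [1.0, 0.5], N=3))
```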
**Theorem 1.6**.: _Fix \(M\leq N\). Given \(\vec{a}\) and \(\vec{b}\), let \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\). Then as \(\theta\to\infty\), the distribution of \(\vec{c}^{\,2}=(c_{1}^{2},...,c_{M}^{2})\) converges on polynomial test functions to the \(\delta\)-measure at the roots of \(P_{M,N}(z)\)._

_Remark 1.7_.: The polynomial \(P_{M,N}(z)\) has previously appeared in [GrM], and it is shown in [GrM, Theorem 2.3], using the theory of stable polynomials, that all roots of \(P_{M,N}(z)\) are real and nonnegative, given that \(a_{1}^{2},...,a_{M}^{2},b_{1}^{2},...,b_{M}^{2}\) are all real and nonnegative.

In the fixed temperature regime, it was shown in [B1] (see Theorem 1.2) that for \(\theta=\frac{1}{2},1\), as \(M,N\to\infty\) in such a way that \(\frac{N}{M}\to q\), we get the rectangular free convolution. We believe the same result holds for any fixed \(\theta>0\).

In the high temperature regime, when \(\theta\to 0\), \(N\to\infty\) and \(M\), the number of singular values, is fixed, the type BC multivariate Bessel function becomes a simple symmetric combination of exponentials:

\[\mathbb{B}(\vec{a},N\theta z_{1},...,N\theta z_{M};\theta,N)\longrightarrow\frac{1}{M!}\sum_{\sigma\in S_{M}}\prod_{i=1}^{M}e^{a_{i}^{2}z_{\sigma(i)}^{2}}. \tag{1.10}\]

See Appendix D for more details. This limit expression has a clear probabilistic interpretation. Given deterministic M-tuples \(\vec{a}\) and \(\vec{b}\) as before, let \(\vec{c}=(c_{1},...,c_{M}\geq 0)\) be obtained by choosing an element \(\sigma\in S_{M}\) uniformly at random and taking

\[(c_{1}^{2},...,c_{M}^{2})=(a_{1}^{2}+b_{\sigma(1)}^{2},...,a_{M}^{2}+b_{\sigma(M)}^{2}).\]

Moreover, if we subsequently take \(M\rightarrow\infty\) and assume that the empirical measures

\[\frac{1}{M}\sum_{i=1}^{M}\delta_{x_{i}^{2}}\ (x_{i}=a_{i,M}\ \text{or}\ b_{i,M})\]

of \(\{\vec{a}_{M}\}_{M=1}^{\infty}\), \(\{\vec{b}_{M}\}_{M=1}^{\infty}\) converge weakly to some probability measures \(\mu_{a},\mu_{b}\) on \(\mathbb{R}_{\geq 0}\), then so do those of \(\{\vec{c}_{M}\}_{M=1}^{\infty}\), and the limiting empirical measure is

\[\mu_{c}=\mu_{a}*\mu_{b},\]

where \(*\) denotes the usual convolution of measures on \(\mathbb{R}\).
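This permutation description is immediate to simulate. The following sketch (our illustration) checks on samples that the first two moments of the empirical measure of \(\vec{c}^{\,2}\) behave additively, as they must under the classical convolution \(\mu_{a}*\mu_{b}\):

```python
# Simulation of the theta -> 0, fixed-M limit: c_i^2 = a_i^2 + b_{sigma(i)}^2 for a
# uniformly random permutation sigma; for large M the empirical measure of the c_i^2
# approximates the convolution of the laws of a_i^2 and b_i^2.
import numpy as np

rng = np.random.default_rng(1)
M = 10_000
a2 = rng.uniform(0.0, 1.0, size=M)     # squared singular values of A
b2 = rng.exponential(1.0, size=M)      # squared singular values of B
c2 = a2 + rng.permutation(b2)          # random matching, as in the display above
print(c2.mean(), a2.mean() + b2.mean())  # means add under convolution
print(c2.var(), a2.var() + b2.var())     # variances add (independent matching)
```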
We thus see two different limiting behaviors of \(\vec{a}\,\boxplus_{M,N}^{\theta}\,\vec{b}\) as \(M\rightarrow\infty\): for \(\theta=0\) we get the usual convolution, and for fixed \(\theta>0\) we get the rectangular free convolution. This motivates us to look at the intermediate regime between the above two settings, in which \(M\rightarrow\infty,N\rightarrow\infty,\theta\to 0,M\theta\rightarrow\gamma>0\), \(N\theta\to q\gamma\) for some \(q\geq 1\). We are interested in the sequence of (random) virtual singular values \(\{\vec{c}_{M}=(c_{M,1}\geq...\geq c_{M,M}\geq 0)\}_{M=1}^{\infty}\), and we study the limiting behavior of the symmetric empirical measures of \(\vec{c}_{M}\).

**Definition 1.8**.: _Let \(\{\vec{a}_{M}\}_{M=1}^{\infty}\) be a sequence of random M-tuples such that \(\vec{a}_{M}=(a_{M,1}\geq...\geq a_{M,M}\geq 0)\). Denote_

\[p_{k}^{M}=\frac{1}{2M}\sum_{i=1}^{M}\left[a_{M,i}^{k}+(-a_{M,i})^{k}\right].\]

_We say that \(\{\vec{a}_{M}\}\) converges in moments if there exist deterministic nonnegative real numbers \(\{m_{k}\}_{k=1}^{\infty}\) such that for any \(s=1,2,...\) and any \(k_{1},...,k_{s}\in\mathbb{Z}_{\geq 1}\), we have_

\[\lim_{M\rightarrow\infty}\mathbb{E}\left[\prod_{i=1}^{s}p_{k_{i}}^{M}\right]=\prod_{i=1}^{s}m_{k_{i}}. \tag{1.11}\]

_We write_

\[\vec{a}_{M}\xrightarrow[M\rightarrow\infty]{m}\{m_{k}\}_{k=1}^{\infty}. \tag{1.12}\]

_Remark 1.9_.: Definition 1.8 states that the empirical measure of \((a_{M,1},...,a_{M,M})\) converges weakly to some deterministic probability measure with moments \(\{m_{k}\}_{k=1}^{\infty}\), as long as the moment problem of \(\{m_{k}\}_{k=1}^{\infty}\) has a unique solution. By definition \(p_{k}^{M}=0\) for all odd \(k\), and therefore one immediately sees that \(m_{k}=0\) for all odd \(k\). The reason we use the symmetric empirical measure is that there is no canonical choice of the signs of the singular values.

_Remark 1.10_.: The convergence is well-posed as long as the randomness of the \(\vec{a}_{M}\)'s is given by compactly supported generalized functions, where the expectation \(\mathbb{E}\) denotes testing the generalized function on the polynomial function \(p_{k}^{M}\) of \(\vec{a}_{M}\).

We prove a Law of Large Numbers for the symmetric empirical measure of \(\vec{c}_{M}\), which is interpreted as the empirical measure of the \(M\times N\) matrix \(C\) with singular values \(c_{M,1},...,c_{M,M}\). We assume that the distribution of each \(\vec{a}_{M}\), \(\vec{b}_{M}\) is given by some real-valued compactly supported generalized function or exponentially decaying measure. For the precise meaning of the latter notion and more details on this technicality, see Section 2.5.

**Theorem 1.11**.: _Fix \(\gamma>0,q\geq 1\). For \(M=1,2,...\), let \(N(M)\geq M\), \(\theta(M)>0\) be two sequences satisfying \(N\to\infty\), \(\theta\to 0\), \(M\theta\to\gamma\), \(N\theta\to q\gamma\) as \(M\to\infty\). Suppose that for two sequences of random tuples \(\{\vec{a}_{M}\}_{M=1}^{\infty},\{\vec{b}_{M}\}_{M=1}^{\infty}\),_

\[\vec{a}_{M}\xrightarrow[M\to\infty]{m}\{m_{k}^{a}\}_{k=1}^{\infty},\quad\vec{b}_{M}\xrightarrow[M\to\infty]{m}\{m_{k}^{b}\}_{k=1}^{\infty}.\]

_Then_

\[\vec{a}_{M}\boxplus^{\theta}_{M,N}\vec{b}_{M}\xrightarrow[M\to\infty]{m}\{m_{k}^{c}\}_{k=1}^{\infty},\]

_where \(\{m_{k}^{c}\}_{k=1}^{\infty}\) is a sequence of deterministic nonnegative real numbers._

_We say that \(\{m_{k}^{c}\}_{k=1}^{\infty}\) is the q-\(\gamma\) convolution of \(\{m_{k}^{a}\}_{k=1}^{\infty}\) and \(\{m_{k}^{b}\}_{k=1}^{\infty}\), written as_

\[\{m_{k}^{c}\}_{k=1}^{\infty}=\{m_{k}^{a}\}_{k=1}^{\infty}\boxplus_{q,\gamma}\{m_{k}^{b}\}_{k=1}^{\infty}.\]

We provide more properties of the q-\(\gamma\) convolution in the following two theorems.

**Theorem 1.12**.: _There exists an invertible map \(\Upsilon_{m\to k}^{q,\gamma}:\mathbb{R}^{\infty}\to\mathbb{R}^{\infty}\) that associates to each \(\{m_{2k}\}_{k=1}^{\infty}\) a collection of q-\(\gamma\) cumulants \(\{k_{l}\}_{l=1}^{\infty}\), i.e., \(\{k_{l}\}_{l=1}^{\infty}=\Upsilon_{m\to k}^{q,\gamma}(\{m_{2k}\}_{k=1}^{\infty})\). The \(q\)-\(\gamma\) cumulants linearize the q-\(\gamma\) convolution: for \(l=1,2,...\),_

\[k_{l}(\{m_{2k}^{c}\}_{k=1}^{\infty})=k_{l}(\{m_{2k}^{a}\}_{k=1}^{\infty})+k_{l}(\{m_{2k}^{b}\}_{k=1}^{\infty}).\]

_Moreover, \(k_{l}=0\) for all odd \(l\), and \(m_{k}^{a}\), \(m_{k}^{b}\), \(m_{k}^{c}\) are 0 for all odd \(k\)._

_Treating each \(k_{l}\) as a variable of degree \(l\), each \(m_{2k}\) is a homogeneous polynomial in the \(k_{l}\)'s of degree \(2k\), whose coefficients are polynomials in \(q,\gamma\) with an explicit combinatorial description.
Conversely, treating \(m_{2k}\) as a variable of degree \(2k\), each even q-\(\gamma\) cumulant \(k_{2l}\) is a homogeneous polynomial in the \(m_{2k}\)'s of degree \(2l\)._

**Theorem 1.13**.: _When \(\gamma\to 0,q\gamma\to\infty\), the q-\(\gamma\) convolution of \(\{m_{k}^{a}\}_{k=1}^{\infty}\) and \(\{m_{k}^{b}\}_{k=1}^{\infty}\) turns into the usual convolution of the two corresponding independent random variables, and the q-\(\gamma\) cumulants of \(\{m_{2k}^{a}\}_{k=1}^{\infty}\), \(\{m_{2k}^{b}\}_{k=1}^{\infty}\) turn into the usual cumulants after proper rescaling. Similarly, when \(q\) is fixed and \(\gamma\to\infty\), the q-\(\gamma\) convolution turns into the rectangular free convolution, and the q-\(\gamma\) cumulants turn into the rectangular free cumulants after proper rescaling._

Theorem 1.11 is proved in Section 4. Theorem 1.12 summarizes the results of Sections 5.1 and 5.2: the combinatorial moment-cumulant formula is given in Theorem 5.5, and the relation between the moment generating function and the cumulant generating function is given in Theorem 5.7. Theorem 1.13 summarizes the connections of the q-\(\gamma\) convolution to the classical and free convolutions, which are given in Theorem 5.11 and Theorem 5.14 respectively. We also provide a limit transition of our \(q\)-\(\gamma\) convolution to the \(\gamma\)-convolution defined in [BCG] in Theorem 5.9, which is related to the asymptotic behavior of self-adjoint matrix additions in the high temperature regime.

### Duality between low and high temperatures

It has been observed in \(\beta\)-random matrix theory that there is a duality between the parameters \(\beta\) and \(\frac{4}{\beta}\) (or \(\theta\) and \(\frac{1}{\theta}\)). For example, in [De] the author gives an equality of average products of characteristic polynomials of Gaussian/Chiral \(\beta\)-ensembles at \(\beta\) and \(\frac{4}{\beta}\). Similarly, in [F2] it is shown that for the Gaussian/Laguerre/Jacobi \(\beta\)-ensembles, the one-point or higher-point functions that describe the linear statistics of eigenvalues at low and high temperature can be identified with each other. The phenomenon is not yet fully understood, and one analog exists in the theory of symmetric polynomials, where there is an automorphism that sends a Jack polynomial to its dual by taking the transpose of its labelling Young diagram and inverting the parameter \(\theta\) at the same time; see [S, Section 3] or [M, (10.17)] for the precise statement.

Since in this text we consider the low and high temperature regimes at the same time, the duality indicates some connection between the two regimes. When \(M,N\) are fixed and \(\theta\to\infty\), \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) concentrates at the roots of \(P_{M,N}(z)\), which are identified as the rectangular finite free convolution of \(\vec{a}\) and \(\vec{b}\) defined in [MSS], [GrM]. When \(M,N\to\infty\), \(\theta\to 0\), \(M\theta\to\gamma,\ N\theta\to q\gamma\), \(\vec{c}\) converges in moments to the q-\(\gamma\) convolution of \(\vec{a}\), \(\vec{b}\). We find that the \((M,N)\)-rectangular finite free convolution and the \(q\)-\(\gamma\) convolution match each other under a certain identification of parameters. More precisely, [Gri] introduces a degree M polynomial, the so-called rectangular R-transform, which linearizes the rectangular finite free convolution.
We treat the coefficients of the rectangular R-transform as rectangular finite cumulants, and show that if one identifies \(M\) in the rectangular finite convolution with \(-\gamma\) in the q-\(\gamma\) convolution, the moment-cumulant relations of the rectangular finite convolution and the q-\(\gamma\) convolution match perfectly. In addition, since both \(M\) and \(\gamma\) are positive, these two operations are analytic continuations of each other, and together they extend the moment-cumulant relation to \(\gamma\in\mathbb{R}_{\geq 0}\bigcup\mathbb{Z}_{\leq-1}\). See Section 6 for more details.

We also note that a similar identification of the low and high temperature regimes holds for self-adjoint matrix additions. [BCG] studies the addition of two \(N\times N\) self-adjoint matrices in the high temperature regime \(N\to\infty,\theta\to 0,N\theta\to\gamma>0\), and introduces the so-called \(\gamma\)-convolution and \(\gamma\)-cumulants. On the other hand, [AP] introduces a family of \(d\times d\) free cumulants from finite self-adjoint matrix additions in low temperature, and the authors of these two papers discovered that their moment-cumulant relations can also be matched by identifying \(d\) with \(-\gamma\). We believe that such a matching, appearing in both self-adjoint and rectangular matrix additions, should not be a mere coincidence.

### Techniques and difficulties

Unlike many other classes of \(\beta\)-ensembles, we do not have a density function for our object \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\), and because of the openness of the positivity conjecture we cannot even guarantee that such a density exists. Therefore, the proofs of the main results in the low and high temperature regimes, Theorem 1.6 and Theorem 1.11, both rely heavily on moment calculations. We characterize the distribution of \(\vec{c}\) using the type BC Bessel generating function \(G_{N;\theta}(z_{1},...,z_{M};\vec{c})\), which is a new object in the random matrix literature, and apply two different approaches in the low and high temperature regimes respectively to extract the moment information of \(\vec{c}\).

In order to apply such approaches, it is necessary to identify the correct notion of the Bessel function \(\mathbb{B}(\vec{c};z_{1},...,z_{M};\theta,N)\) for rectangular matrices. On one hand, we start from the cases \(\theta=\frac{1}{2},1,2\) and define \(\mathbb{B}(\vec{c};z_{1},...,z_{M};\theta,N)\) as the matrix integral in (1.4), based on the probabilistic intuition of rectangular random matrices. On the other hand, for arbitrary \(\theta>0\), we define our type BC Bessel function to be a symmetric Dunkl kernel, which is known to be a joint eigenfunction of the corresponding type BC Dunkl operators, with eigenvalues given by the symmetric moments of \(\vec{c}\). While there are infinitely many versions of Dunkl kernels, we choose the root multiplicities \(m_{\pm e_{i}}\), \(m_{\pm e_{i}\pm e_{j}}\) in the unique way such that:

1. for \(\theta=\frac{1}{2},1,2\), it coincides with (1.4);
2. for general \(\theta>0\), it has a nice explicit power series expansion that naturally extrapolates from \(\theta=\frac{1}{2},1,2\).

We find such root multiplicities and verify the analytic and combinatorial properties of \(\mathbb{B}(\cdot;z_{1},...,z_{M};\theta,N)\) in Section 2, by applying the general theory of special functions and symmetric spaces under random matrix motivations. In the low temperature regime, we use the explicit expansion of the Bessel generating function to calculate the limiting distribution of \(\vec{c}\).
In the high temperature regime, we study the asymptotic behavior of the action of Dunkl operators on \(G_{N;\theta}(z_{1},...,z_{M};\vec{c})\), which extracts the moment information. More precisely, in Theorem 4.8 we establish an equivalence of the following two conditions on a sequence of random M-tuples \(\vec{c}_{M}=(c_{M,1},...,c_{M,M})\in\mathbb{R}_{\geq 0}^{M}\), \(M=1,2,...\), in the regime \(M,N\rightarrow\infty,\theta\to 0,M\theta\rightarrow\gamma,N\theta\to q\gamma\):

1. \(\{\vec{c}_{M}\}_{M=1}^{\infty}\) converges in moments as in Definition 1.8;
2. the \(l^{th}\) order partial derivative in \(z_{1}\) of \(\ln\left(G_{N;\theta}(z_{1},...,z_{M};\vec{c})\right)\) at \(0\) converges to some real number for all \(l=1,2,...\), and the partial derivatives in more than one of the variables \(z_{1},...,z_{M}\) at \(0\) all converge to \(0\).

The nontrivial limit of the \(l^{th}\) order derivative in condition 2 gives the q-\(\gamma\) cumulant \(k_{l}\) of \(\{\vec{c}_{M}\}_{M=1}^{\infty}\), up to a constant. Note that this equivalence itself is independent of the addition operation, and can be applied to a single sequence of (virtual) rectangular matrices; see Section 5.5 for an example.

Compared to previous studies of rectangular additions, which mostly deal with real/complex matrices, our text defines and considers general \(\beta\)-additions that do not rely on a concrete matrix structure. Compared to the study of self-adjoint additions, some extra technicalities arise in this text. Firstly, there are two matrix size parameters \(M,N\), and we allow \(M\) and \(N\) to grow at different speeds. More importantly, because of the more involved root multiplicities, the type BC Bessel generating functions and type BC Dunkl operators have more complicated expressions, and this makes the combinatorics in the asymptotic analysis more complicated as well. Because of the above two issues, and because rectangular matrices are relatively less studied in the literature, it takes more effort to properly define the rectangular versions of empirical measures, moments, cumulants, etc., and to identify the limit regime in which nontrivial behavior and connections to known objects occur. The reader will also see a more complicated moment-cumulant relation for our q-\(\gamma\) convolution, which can degenerate to the usual, free, rectangular free and \(\gamma\)-convolutions, operations that characterize several other random matrix additions.

### Further Studies

We point out several possible directions for further studies of rectangular matrix addition. In the regime \(\theta\rightarrow\infty\) with \(M,N\) fixed, we believe that the fluctuations of \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) around the roots of \(P_{M,N}(z)\) converge in distribution to some Gaussian vector, since a similar limiting behavior holds for the Laguerre \(\beta\)-ensemble. We are only able to prove this for the single row matrix, i.e., when \(M=1\), and the general case remains an open problem.

This text does not consider the fixed temperature regime, where \(M,N\to\infty\), \(N/M\to q\geq 1\) and \(\theta\) is fixed. As mentioned previously, we believe that for general \(\theta>0\) we will get the rectangular free convolution of the empirical measures in the limit.
In the regime \(M,N\to\infty,\theta\to 0,M\theta\to\gamma,N\theta\to q\gamma\), we prove a Law of Large Numbers for the empirical measure of the rectangular matrix addition, and it might be of interest to go further and prove a Central Limit Theorem for it under proper assumptions on the summands: for a class of well-behaved test functions \(\phi\), testing the empirical measure with \(\phi\) always gives a Gaussian random variable in the limit. We refer to [GuM] for results of this flavor on a collection of \(\beta\)-ensembles.

We also note that we only consider the global behavior of the limiting empirical measure in this text. However, our approach of using Dunkl operators to extract moment information might be applicable to the study of the bulk or edge limits of certain matrix ensembles, including but not limited to rectangular matrix addition.

The paper is organized as follows. In Section 2 we introduce the type BC Bessel function and the Bessel generating function, which play the role of the characteristic function for rectangular matrices. In Section 3 we study the low temperature behavior. In Section 4 we prove the main theorem in the high temperature regime, and introduce the q-\(\gamma\) cumulants in an analytic way. We then study the moment-cumulant relation of the q-\(\gamma\) convolution in more detail, provide an explicit combinatorial description, and point out its connection with classical free probability theory in Section 5. Finally, in Section 6, we check the quantitative connection between the low and high temperature regimes.

### Acknowledgements

The author is grateful to Vadim Gorin for many stimulating discussions and for his useful suggestions on the presentation of this text. We thank Grigori Olshanski for pointing out a useful reference, Simon Marshall for explaining some basics of symmetric spaces, and Margit Roesler for the clarification of a technical issue in her lecture notes.

## 2. Bessel functions and Dunkl operators

### Symmetric polynomials

Symmetric polynomials are common objects appearing in combinatorics, representation theory, and random matrices. This section recalls the basic definitions of several objects in this subject that we will use in this text. For a detailed introduction to classical results on symmetric polynomials, see e.g. [M].

**Definition 2.1**.: _A partition \(\lambda\) is an M-tuple of nonnegative integers \((\lambda_{1}\geq\lambda_{2}\geq...\geq\lambda_{M}\geq 0)\). We identify \((\lambda_{1},...,\lambda_{M})\) with \((\lambda_{1},...,\lambda_{M},0,...,0)\), and denote the length of \(\lambda\) by \(l(\lambda)\in\mathbb{Z}_{\geq 1}\), which is the number of strictly positive \(\lambda_{i}\)'s. We say a partition is even if \(\lambda_{1},...,\lambda_{l(\lambda)}\) are all even._

_Let \(|\lambda|=\sum_{i=1}^{l(\lambda)}\lambda_{i}\). For two partitions \(\lambda,\mu\) such that \(|\lambda|=|\mu|\), there is a lexicographical order between them: \(\lambda>\mu\) if and only if for some \(j\in\mathbb{Z}_{\geq 1}\),_

\[\lambda_{1}=\mu_{1},...,\lambda_{j-1}=\mu_{j-1}\text{ and }\lambda_{j}>\mu_{j}.\]

The combinatorial expressions of symmetric polynomials are often given by sums over partitions, for which we introduce the following notions.

**Definition 2.2**.: _A Young diagram is a graphical representation of a partition. Given a partition \(\lambda\), view it as a collection of \(|\lambda|\) boxes, with \(\lambda_{i}\) boxes in the \(i^{\text{th}}\) row. In this text we do not distinguish between a partition and its corresponding Young diagram.
Let \(s=(i,j)\in\lambda\) denote the coordinates of the box in the \(j^{\text{th}}\) column and the \(i^{\text{th}}\) row of \(\lambda\). Moreover, let \(\lambda_{j}^{{}^{\prime}}\) be the number of boxes in the \(j^{\text{th}}\) column of \(\lambda\), and_

\[a(s)=a(i,j)=\lambda_{i}-j,\quad l(s)=l(i,j)=\lambda_{j}^{{}^{\prime}}-i,\quad\lambda^{{}^{\prime}}=(\lambda_{1}^{{}^{\prime}},...,\lambda_{\lambda_{1}}^{{}^{\prime}}).\]

**Definition 2.3**.: _For \(M\in\mathbb{Z}_{\geq 1}\), a symmetric polynomial \(f(z_{1},...,z_{M})\) is a multivariate polynomial in the variables \(z_{1},...,z_{M}\) with complex coefficients, such that for any \(\sigma\in S_{M}\), the symmetric group on M elements, we have_

\[f(z_{1},...,z_{M})=f(z_{\sigma(1)},...,z_{\sigma(M)}).\]

_We denote the space of all symmetric polynomials in M variables by \(\Lambda_{M}\), which has the structure of a (complex) algebra._

We introduce several classical symmetric polynomials as elements of \(\Lambda_{M}\).

**Definition 2.4**.: _The monomial symmetric polynomials \(m_{\lambda}\) are a collection of elements of \(\Lambda_{M}\) indexed by partitions \(\lambda\), such that for \(l(\lambda)\leq M\),_

\[m_{\lambda}(\vec{z})=\sum_{(k_{1},...,k_{M})}z_{1}^{k_{1}}z_{2}^{k_{2}}\cdots z_{M}^{k_{M}},\]

_where \((k_{1},...,k_{M})\) runs over all distinct rearrangements of \((\lambda_{1},...,\lambda_{M})\). We also set \(m_{\lambda}(\vec{z})=0\) for \(l(\lambda)>M\)._

_The elementary symmetric polynomials \(\{e_{k}\}_{k=1}^{M}\) are_

\[e_{k}(\vec{z})=\sum_{1\leq i_{1}<i_{2}<...<i_{k}\leq M}z_{i_{1}}z_{i_{2}}\cdots z_{i_{k}}.\]

_By definition \(e_{k}=m_{1^{k}}\), where \(1^{k}\) denotes the partition \((1,1,...,1)\) of length k._

_The power sums \(\{p_{k}\}_{k=1}^{\infty}\) are_

\[p_{k}(\vec{z})=z_{1}^{k}+z_{2}^{k}+...+z_{M}^{k}.\]

_By definition \(p_{k}=m_{(k)}\), where \((k)\) denotes the length 1 partition \(\lambda\) with \(\lambda_{1}=k\)._

_Remark 2.5_.: It is clear from the definition that the \(m_{\lambda}\) form a linear basis of \(\Lambda_{M}\). Another important fact is that \(\{e_{k}\}_{k=1}^{M}\) and \(\{p_{k}\}_{k=1}^{\infty}\) are two sets of algebraic generators of \(\Lambda_{M}\); see [M, Chapter 1].
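For concreteness (and since \(e_{k}\) also enters the coefficients of (1.9) above), here is a minimal numerical sketch of \(e_{k}\) and \(p_{k}\); it is an illustration of Definition 2.4, not code from the paper.

```python
# Minimal sketch of the elementary symmetric polynomials e_k and power sums p_k
# from Definition 2.4, evaluated at a numeric point.
from itertools import combinations
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k(x_1, ..., x_M)."""
    if k == 0:
        return 1.0
    return sum(prod(c) for c in combinations(xs, k))

def p(k, xs):
    """Power sum p_k(x_1, ..., x_M)."""
    return sum(x ** k for x in xs)

xs = [1.0, 2.0, 3.0]
print(e(2, xs))  # 1*2 + 1*3 + 2*3 = 11
print(p(2, xs))  # 1 + 4 + 9 = 14
```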
The Jack polynomials play a central role in this text. Let \(\vec{z}\) denote \((z_{1},...,z_{M})\) for some \(M\geq 1\), fix \(\theta>0\), and let \(X\) be a formal auxiliary variable, \(\partial_{i},i=1,2,...,M\), the partial derivative operator in \(z_{i}\), and \(V(\vec{z})=\prod_{1\leq i<j\leq M}(z_{i}-z_{j})\) the Vandermonde determinant.

**Definition 2.6**.: _[_M_, Chapter VI]_ _Let \(D_{M}(X;\theta)\) be the differential operator_

\[D_{M}(X;\theta)=V(\vec{z})^{-1}\det\left[z_{i}^{M-j}\left(z_{i}\frac{\partial}{\partial z_{i}}+(M-j)\theta+X\right)\,\right]_{1\leq i,j\leq M}. \tag{2.1}\]

\(D_{M}(X;\theta)\) _is a generating function (in the variable X) of linear differential operators \(D_{M}^{1},...,D_{M}^{M}\) acting on \(\Lambda_{M}\), such that_

\[D_{M}(X;\theta)=\sum_{r=0}^{M}D_{M}^{r}X^{M-r}.\]

_The Jack polynomials in M variables are a collection of elements \(P_{\lambda}(\vec{z})\) of \(\Lambda_{M}\), indexed by partitions \(\lambda\) such that \(l(\lambda)\leq M\). Each \(P_{\lambda}(\vec{z})\) is uniquely determined by the following two properties:_

\[P_{\lambda}(\vec{z})=m_{\lambda}(\vec{z})+\sum_{\mu<\lambda}u_{\mu}^{\lambda}(\theta)m_{\mu}(\vec{z}), \tag{2.2}\]

_where \(u_{\mu}^{\lambda}(\theta)\in\mathbb{R}\) are parameterized by \(\theta\), and_

\[D_{M}(X;\theta)P_{\lambda}(\vec{z})=c_{\lambda}^{\lambda}(\theta)P_{\lambda}(\vec{z}), \tag{2.3}\]

_where \(c_{\lambda}^{\lambda}(\theta)=\prod_{i=1}^{M}(X+\theta^{-1}\lambda_{i}+M-i)\)._

**Proposition 2.7**.: _[_M_, Chapter VI]_ _For \(M\geq l(\mu)\), \(u_{\mu}^{\lambda}(\theta)\) in (2.2) is independent of \(M\)._

Because of the last proposition, we write \(P_{\lambda}(\cdot;\theta)=m_{\lambda}(\cdot)+\sum_{\mu<\lambda}u_{\mu}^{\lambda}(\theta)m_{\mu}(\cdot)\), where \(\cdot\) denotes \((z_{1},...,z_{M})\) for arbitrary \(M\geq l(\lambda)\), which does not affect the combinatorial expansion in the \(m_{\mu}\)'s. We also introduce another version of the Jack polynomial.

**Definition 2.8**.: _The dual Jack polynomial \(Q_{\lambda}(\cdot;\theta)\) is defined as_

\[Q_{\lambda}(\cdot;\theta)=b_{\lambda}(\theta)\cdot P_{\lambda}(\cdot;\theta),\]

_where \(b_{\lambda}(\theta)=\prod_{s\in\lambda}\frac{a(s)+\theta l(s)+\theta}{a(s)+\theta l(s)+1}\)._

_Remark 2.9_.: It is a nontrivial fact that Jack polynomials satisfying the two defining properties exist. The differential operator \(D_{M}(X;\theta)\) was discovered by Sekiguchi in [S].

Given two Jack polynomials \(P_{v}(\cdot;\theta)\) and \(P_{\mu}(\cdot;\theta)\), their product \(P_{v}(\cdot;\theta)\cdot P_{\mu}(\cdot;\theta)\) is again a symmetric polynomial, and hence can be written as a unique linear combination of Jack polynomials. Namely, we have the following equality, where \(C_{\lambda}^{v,\mu}(\theta)\) is the coefficient of \(P_{\lambda}(\cdot;\theta)\) in the expansion:

\[P_{v}(\cdot;\theta)P_{\mu}(\cdot;\theta)=\sum_{\lambda}C_{\lambda}^{v,\mu}(\theta)P_{\lambda}(\cdot;\theta). \tag{2.4}\]

We note that \(C_{\lambda}^{v,\mu}(\theta)\) is also independent of \(M\) because of Proposition 2.7.

### Type BC Bessel functions

For positive integers \(M\leq N\), take an M-tuple of nonnegative real numbers \(\vec{a}=(a_{1}\geq a_{2}\geq...\geq a_{M})\) as the given data. The type BC Bessel function \(\mathbb{B}(\vec{a};z_{1},...,z_{M};\theta,N)\) is a version of a multivariate symmetric Fourier kernel, with certain nontrivial root multiplicities given by the parameter \(\theta>0\). In the special functions literature it is a special case of the so-called symmetric Dunkl kernel; see Section 2.3.

**Definition 2.10**.: _For \(\theta=\frac{1}{2},1,2\), \(M\leq N\), the type BC multivariate Bessel functions are defined with parameter \(\theta\) and an M-tuple of ordered real labels \(a=(a_{1}\geq a_{2}\geq\cdots\geq a_{M})\), by_

\[\mathbb{B}(\vec{a};z_{1},z_{2},...,z_{M};\theta,N)=\int dU\int dV\ \exp\left(\frac{1}{2}Tr(U\Lambda VZ+Z^{*}V^{*}\Lambda^{*}U^{*})\right),\]

_where_

\[\Lambda=\begin{bmatrix}a_{1}&&&&0&...&0\\ &a_{2}&&&&0&...&0\\ &&...&&&&\\ &&&...&&&\\ &&&a_{M}&0&...&0\end{bmatrix}_{M\times N}, \tag{2.5}\]

\[Z=\begin{bmatrix}z_{1}&&&&\\ &z_{2}&&&&\\ &&...&&\\ &&&...&\\ &&&&z_{M}\\ 0&...&&...&0\\ &&...&&\\ 0&...&&...&0\end{bmatrix}_{N\times M}, \tag{2.6}\]

\(U\in O(M)/U(M)/Sp(M)\)_, \(V\in O(N)/U(N)/Sp(N)\) are integrated under Haar measures._

Definition 2.10 provides an explicit connection with rectangular matrices, where the integral has the form of a "Fourier transform"/characteristic function of \(A=U\Lambda V\).
However, since there is no (skew) field with real dimension \(\beta\) for general \(\beta>0\), one needs to define the Bessel functions in an alternative way that does not rely on an explicit matrix structure. For this purpose, we introduce the notion of the type BC Jacobi polynomial, which is the multivariate Jacobi polynomial of Appendix A with a specified root multiplicity function parametrized by \(\theta>0\); it was studied in [OO2]. For \(M\in\mathbb{Z}_{\geq 1}\), let \(W\) denote the \(BC_{M}\) Weyl group \(W=S_{M}\ltimes\mathbb{Z}_{2}^{M}\), which acts on functions of \(\vec{x}=(x_{1},...,x_{M})\): the \(S_{M}\) part permutes \(x_{1},...,x_{M}\), and the \(\mathbb{Z}_{2}^{M}\) part acts by \(f(\vec{x})\mapsto f(x_{1}^{\pm 1},...,x_{M}^{\pm 1})\).

**Definition 2.11**.: _[_OO2_]_ _Take three parameters \(\theta>0,a,b>-1\). The type BC Jacobi polynomials are a collection of functions \(J_{\lambda}(\vec{x};\theta,a,b)\) on the M-dimensional torus_

\[\mathbb{T}=\{(x_{1},...,x_{M})\subset\mathbb{C}^{M},|x_{1}|=...=|x_{M}|=1\},\]

_indexed by partitions \(\lambda\) and determined by the following:_

_(1) \(J_{\lambda}(\vec{x};\theta,a,b)=x_{1}^{\lambda_{1}}\cdots x_{M}^{\lambda_{M}}+...\), where the dots stand for lower monomials in the lexicographic order of Definition 2.1, and \(J_{\lambda}\) is \(W\)-invariant;
**Definition 2.15**.: _For a partition \(\mu\), \(t\in\mathbb{R}\), \(\theta>0\), let_ \[H(\mu)=\prod_{s\in\mu}[a(s)+1+\theta l(s)], \tag{2.7}\] \[H^{{}^{\prime}}(\mu)=\prod_{s\in\mu}[a(s)+\theta+\theta l(s)], \tag{2.8}\] _and_ \[(t)_{\mu}=\prod_{s\in\mu}[(t+j-1-\theta(i-1)]. \tag{2.9}\] **Proposition 2.16**.: _The limit in Definition 2.13 exists, and_ \[\mathbb{B}(\vec{a},z_{1},...,z_{M};\theta,N)\] \[= \sum_{\mu}\prod_{i=1}^{M}\frac{\Gamma(\theta N-\theta(i-1))}{ \Gamma(\theta N-\theta(i-1)+\mu_{i})}\frac{1}{H(\mu)}2^{-2|\mu|}\frac{P_{\mu} (a_{1}^{2},\cdots,a_{M}^{2};\theta)P_{\mu}(z_{1}^{2},\cdots,z_{M}^{2};\theta) }{P_{\mu}(1^{M};\theta)}\] \[= \sum_{\mu}\prod_{i=1}^{M}\frac{\Gamma(\theta N-\theta(i-1))}{ \Gamma(\theta N-\theta(i-1)+\mu_{i})}\frac{\Gamma(\theta M-\theta(i-1))}{ \Gamma(\theta M-\theta(i-1)+\mu_{i})}\frac{H^{{}^{\prime}}(\mu)}{H(\mu)}2^{ -2|\mu|}P_{\mu}(a_{1}^{2},\cdots,a_{M}^{2};\theta)P_{\mu}(z_{1}^{2},\cdots,z_{M }^{2};\theta), \tag{2.10}\] _where \(\mu\) is summed over all partitions of length at most \(M\)._ Proof.: The existence of the limit is guaranteed by Proposition 7.11, Theorem 7.14, Remark 2.12 and 2.14. More precisely, by [Ro, Theorem 2.32], the multivariate Bessel function is meromorphic on \(m\), the root multiplicity function, and the pole set \(K^{sing}\) of \(m\) is explicitly given in [DJO]. One can check that for all \(\theta>0\), \(m\notin K^{sing}\). Hence we can do an analytically continuation of (2.10) from nonnegative root multiplicities to all \(\theta>0\). We do a concrete calculation for the explicit expression on the right of (2.10). By [OO2, Proposition 2.3], \[\Phi_{\lfloor\frac{\vec{a}}{2\epsilon}-\frac{\rho}{2}\rfloor}(e^{ 2\epsilon z_{1}i},...,e^{2\epsilon z_{M}i};\theta,a,b)\] \[= \sum_{\mu\leq\lfloor\frac{\vec{a}}{2\epsilon}-\frac{\rho}{2} \rfloor}\frac{I_{\mu}(\lfloor\frac{\vec{a}}{2\epsilon}-\frac{\rho}{2}\rfloor ;\theta;\sigma+M)P_{\mu}(2cos(2\epsilon z_{j})-2;\theta)}{C(M,\mu;\theta;a,b)},\] where \(I_{\mu}(x_{1},...,x_{M};\theta,h)\) is defined in [OO2, Proposition 2.2], \(\sigma=(a+b+1)/2\), and \[C(M,\mu;\theta,a,b)=I_{\mu}(\mu;\theta,\sigma+\theta M)J_{\mu}(1^{M};\theta,a,b).\] By comparing [OO2, (2.3) and (2.4)], we see that as an inhomogeneous polynomial of \(x_{1},...,x_{M}\), \(I_{\mu}(x_{1},...,x_{M};\theta;h)\) has highest degree term \(P_{\mu}(x_{1}^{2},...,x_{M}^{2};\theta)\). Therefore asymptotically \[I_{\mu}(\lfloor\frac{\vec{a}}{2\epsilon}-\frac{\rho}{2}\rfloor;\theta;\sigma+ M)\approx P_{\mu}(a_{1}^{2},...,a_{M}^{2};\theta)2^{-2|\mu|}\epsilon^{-2|\mu|}.\] On the other hand, when \(\epsilon\) is small, \[P_{\mu}(2cos(2\epsilon z_{j})-2;\theta)\approx P_{\mu}(-(2\epsilon z_{j})^{2} ;\theta)=P_{\mu}(z_{j}^{2};\theta)(-4)^{|\mu|}\epsilon^{2|\mu|},\] so it remains to match the coefficients. This follows by (see [M, (10,20)]) \[P_{\mu}(1^{M};\theta)=\frac{(M\theta)_{\mu}}{H^{\prime}(\mu)}, \tag{2.11}\] and (see [OO2, Remark 2.5]) \[C(M,\mu;\theta,\theta(N-M+1)-1,\theta-1)\] \[= 4^{\mu}\cdot\frac{H(\mu)}{H^{\prime}(\mu)}\prod_{i=1}^{M}\frac{ \Gamma(\mu_{i}+(M-i+1)\theta)}{\Gamma((M-i+1)\theta)}\frac{\Gamma(\mu_{i}+(N -i+1)\theta)}{\Gamma((N-i+1)\theta)}.\qed\] The following example gives a connection of \(\mathbb{B}(\cdot,z_{1},...,z_{M};\theta,N)\) with the usual single variable Bessel function. 
**Example 2.17**.: _When \(M=1\),_ \[\mathbb{B}(a,iz;\theta,N)=\Gamma(N\theta)\cdot(\frac{az}{2})^{-(N\theta-1)}B _{N\theta-1}(az),\] _where \(B_{\alpha}\) is the Bessel function of the first kind._ Definition 2.13 generalizes the notion of the type BC Bessel function to any \(\theta>0\). In particular, when \(\theta=\frac{1}{2},1,2\), Definition 2.13 provides an explicit power series expansion of the matrix integral in Definition 2.10. There are more than one way to show the equivalence of these two expressions, and the one we present below relies on the representation theory lying behind the concrete objects. **Theorem 2.18**.: _For \(\theta=\frac{1}{2},1,2\),_ \[\int dU\int dV\ exp(i\cdot Tr(U\Lambda VZ+Z^{*}V^{*}\Lambda^{*}U^{*}))\] \[= \lim_{\epsilon\to 0}\Phi_{\lfloor\frac{\theta}{2}-\frac{\rho}{2} \rfloor}(e^{2\epsilon z_{1}i},...,e^{2\epsilon z_{M}i};a,b,\theta)\] _where the matrix integral on the left is defined in the same way as in Definition 2.10, only differs by a constant \(2i\) in the exponent._ Proof.: \(\Phi_{\lfloor\frac{\theta}{2}-\frac{\rho}{2}\rfloor}(e^{2z_{1}i},...,e^{2z_{M} i};a,b,\theta)\) is identified with spherical function of \(O(M+N)/O(M)\times O(N),U(M+N)/U(M)\times U(N),Sp(M+N)/Sp(M)\times Sp(N)\) respectively according to Theorem 7.16, and the root multiplicity list in Appendix B. After limit transition in Proposition 7.11 or Remark 7.17, it suffices to identify the matrix integral with the corresponding Euclidean spherical function, which we again refer to [11], [12]. _Remark 2.19_.: The expression of matrix integral in Definition 2.10 as power series in Definition 2.13 is not new, and could be found in [10, Section 13.4.3] with a different proof. See the Appendix C for more information and yet another short proof of this result. ### Type BC Dunkl operators As a special class of differential operators, Dunkl operators were introduced in [D], and can be thought as a generalization of the usual partial derivatives on multivariate analytic functions, that take Fourier kernels as eigenfunction. We briefly review the basic general theory of Dunkl opertors in Appendix A, and in this section, we specify to a special class of rational Dunkl operators under root system of type BC, which is parametrized by a single variable \(\theta>0\) and plays a central role in Section 3. For the convenience of readers, we redefine this operator in a more concrete and straightforward way. **Definition 2.20**.: _For \(N\geq M\geq 2,\theta>0\), let \(D_{i}\) be a differential operator acting on analytic functions on \(\mathbb{C}^{M}\) with variables \(z_{1},..,z_{M}\), that_ \[D_{i}=\partial_{i}+\Big{[}\theta(N-M+1)-\frac{1}{2}\Big{]}\frac{1-\sigma_{i}} {z_{i}}+\theta\sum_{j\neq i}\Big{[}\frac{1-\sigma_{ij}}{z_{i}-z_{j}}+\frac{1- \tau_{ij}}{z_{i}+z_{j}}\Big{]}, \tag{2.12}\] _where \(\sigma_{i}\) interchanges \(z_{i}\) and \(-z_{i}\), \(\sigma_{ij}\) interchanges \(z_{i}\) and \(z_{j}\), and \(\tau_{ij}\) interchanges \(z_{i}\) and \(-z_{j}\)._ _Remark 2.21_.: \(D_{i}\)'s are special cases of the rational Dunkl operator in Definition 7.9, such that the reflections \(s_{\alpha}\) for \(\alpha\in R\) are specified as following: \[\sigma_{i}=s_{e_{i}},\ \ \sigma_{ij}=s_{e_{i}-e_{j}},\ \ \tau_{ij}=s_{e_{i}+e_{j}}.\] Moreover, the root multiplicity function is given by \(m_{\pm e_{i}}=2\theta(N-M),m_{\pm 2e_{i}}=2\theta-1,m_{\pm e_{i}\pm e_{j}}=2\theta\), the same as type BC Bessel function in Section 2.2. 
**Proposition 2.22**.: [D] _The Dunkl operators of same root multiplicities commute, i.e,_ \[D_{i}D_{j}=D_{j}D_{i}\] _for any \(1\leq i,j\leq M\)._ The following result provides connection of type BC multivariate Bessel functions and Dunkl operators, namely, the former are eigenfunctions of the latter. **Definition 2.23**.: _Fix \(M\geq 1\). For \(k=1,2,...\), denote_ \[\mathrm{P}_{k}=D_{1}^{k}+...+D_{M}^{k}.\] **Theorem 2.24**.: _Given \(\vec{a}=(a_{1}\geq...\geq a_{M})\) for each \(k=1,2,...\),_ \[\mathrm{P}_{2k}\mathbb{B}(\vec{a},z_{1},..,z_{M};\theta)=\left(\sum_{i=1}^{M}( a_{i})^{2k}\right)\cdot\mathbb{B}(\vec{a},z_{1},...,z_{M};\theta). \tag{2.13}\] Proof.: This is a special case of Definition 7.10. _Remark 2.25_.: From Proposition 2.16, one can see that \(\mathbb{B}(\vec{a},z_{1},...,z_{M};\theta)\) is symmetric under actions of Weyl group of root system \(BC_{M}\), namely, invariant by interchanging \(z_{i}\) with \(z_{j}\) and replacing \(z_{i}\) by \(-z_{i}\). Similarly, it's necessary to take symmetric power sum of \(D_{i}^{\prime}s\) with even power, which satisfies the same symmetry. ### Matrix addition and moments For \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\), we assume in this section that \(\vec{a},\vec{b}\) are deterministic, and recall from Definition 1.5 that the distribution \(\mathfrak{m}\) of \(\vec{c}\) is given by testing on type BC Bessel function. Note that polynomials are bounded and smooth on compact sets, and therefore are legitimate test functions of \(\mathfrak{m}\). Moreover, by Proposition 1.4 Bessel function is analytic and symmetric on \(\mathbb{C}^{M}\), so we can view it as a generating function of symmetric polynomials of M variables \(c_{1},...,c_{M}\). More precisely, by expanding Bessel functions on both sides of (1.7) using (2.10), we have the following: **Proposition 2.26**.: _For each partition \(\lambda\) with \(l(\lambda)\leq M\), let \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\), then_ \[\mathbb{E}\left[P_{\lambda}(c_{1}^{2},...,c_{M}^{2};\theta)\right] =\sum_{|v|+|\mu|=|\lambda|}\frac{H(\lambda)}{H(v)H(\mu)}\frac{ \prod_{i=1}^{M}\Gamma(\theta(N-i+1))\Gamma(\theta(N-i+1)+\lambda_{i})}{\prod_{ i=1}^{M}\Gamma(\theta(N-i+1)+v_{i})\Gamma(\theta(N-i+1)+\mu_{i})}\] \[\frac{P_{\lambda}(1^{M};\theta)}{P_{v}(1^{M};\theta)P_{\mu}(1^{M };\theta)}C_{\lambda}^{v,\mu}(\theta)P_{v}(a_{1}^{2},...,a_{M}^{2};\theta)P_{ \mu}(b_{1}^{2},...,b_{M}^{2};\theta) \tag{2.14}\] _where \(v,\mu\) are two partitions of length at most \(M\)._ Proposition 2.26 provides explicit data of the distribution of random singular values \(\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) in terms of moments. It is believed, but not yet proved that \(\mathfrak{m}\) (with such moments) is indeed a (symmetric) positive probability measure on \(\mathbb{R}^{M}\). For \(\beta=1,2,4\) this holds automatically because the probability measure is constructed explicitly by the matrix structure, while for general \(\beta>0\), the randomness of \(\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) holds and is studied in this text in the weaker sense given by (2.14). ### Type BC Bessel generating functions For \(\vec{a}=(a_{1}\geq...\geq a_{M}\geq 0)\), we assume that \(\vec{a}\) is random, and its distribution is given by a symmetric generalized function \(\mathfrak{m}\), testing on smooth functions and in particular polynomials on \(\mathbb{R}_{\geq 0}^{M}\). **Definition 2.27**.: _Fix \(M\leq N,\theta>0\). 
Given a compactly supported symmetric generalized function \(\mathfrak{m}\) on \(\mathbb{R}_{\geq 0}^{M}\) defined as above, let the Bessel generating function of \(\mathfrak{m}\) be a function of \(z_{1},...,z_{M}\) given by_ \[G_{N,\theta}(z_{1},...,z_{M};\mathfrak{m}):=\langle\mathfrak{m},\mathbb{B}( \vec{a},z_{1},...,z_{M};N,\theta)\rangle, \tag{2.15}\] _where the bracket denotes testing \(\mathfrak{m}\) by \(\mathbb{B}(\vec{a},z_{1},...,z_{M};\theta,N)\), in which \(\vec{a}\) are the variables and \(z_{1},...,z_{M}\) are parameters._ We also define the Bessel generating function for a class of fast decaying probability measures, for potential applications of our theory (see e.g Section 5.5). As preparation, we state a uniform upper bound of multivariate Bessel functions. **Proposition 2.28**.: _For any \(\theta>0\), \(M\leq N\), \(\vec{a}=(a_{1}\geq...\geq a_{M})\in\mathbb{R}_{\geq 0}^{M}\), \(z=(z_{1},...,z_{M})\in\mathbb{R}^{M}\), we have_ \[0\leq\mathbb{B}(\vec{a},z_{1},...,z_{M};\theta,N)\leq\left[1+\frac{1}{\theta} \left(\frac{a_{1}|z|}{2}\right)^{2}e^{\frac{a_{1}|z|}{2}}\right]^{M}, \tag{2.16}\] _and for any \(k_{1},...,k_{s}\in\mathbb{Z}_{\geq 1}\),_ \[\left|\left(\prod_{i=1}^{s}P_{2k_{i}}\right)\mathbb{B}(\vec{a},z_{1},...,z_{M} ;\theta,N)\right|\leq\prod_{i=1}^{s}\left(\sum_{j=1}^{M}a_{i}^{2k_{i}}\right) \left[1+\frac{1}{\theta}\left(\frac{a_{1}|z|}{2}\right)^{2}e^{\frac{a_{1}|z|}{ 2}}\right]^{M}. \tag{2.17}\] Proof.: From Proposition 1.4, it's clear that \(\mathbb{B}(\vec{a},z_{1},...,z_{M};\theta,N)\geq 0\), and since \(P_{\mu}(1^{M};\theta)=\frac{(M\theta)_{\mu}}{H^{\prime}(\mu)}\), \[\mathbb{B}(\vec{a},z_{1},...,z_{M};\theta,N)\] \[= \sum_{\mu}\prod_{i=1}^{M}\frac{\Gamma(\theta N-\theta(i-1))}{ \Gamma(\theta N-\theta(i-1)+\mu_{i})}\frac{(M\theta)_{\mu}}{H(\mu)H^{\prime} (\mu)}2^{-2|\mu|}\frac{P_{\mu}(a_{1}^{2},\cdots,a_{M}^{2};\theta)P_{\mu}(z_{1 }^{2},\cdots,z_{M}^{2};\theta)}{P_{\mu}(1^{M};\theta)^{2}}\] \[\leq \sum_{\mu}\prod_{i=1}^{M}\frac{\Gamma(\theta N-\theta(i-1))}{ \Gamma(\theta N-\theta(i-1)+\mu_{i})}\frac{(M\theta)_{\mu}}{H(\mu)H^{\prime} (\mu)}2^{-2|\mu|}a_{1}^{2|\mu|}z_{1}^{2|\mu|}\] \[\leq \sum_{\mu}\prod_{i=1}^{M}\left[\frac{\Gamma(\theta N-\theta(i-1)) }{\Gamma(\theta N-\theta(i-1)+\mu_{i})}\frac{\Gamma(\theta M-\theta(i-1)+\mu_ {i})}{\Gamma(\theta M-\theta(i-1))}\right]\,\frac{1}{\prod_{i=1}^{M}\mu_{i}!} \frac{1}{\prod_{i=1}^{M}\prod_{j=0}^{\mu_{i}-1}(\theta+j)}2^{-2|\mu|}a_{1}^{2| \mu|}z_{1}^{2|\mu|}\] \[\leq \sum_{\mu_{1}\geq...\geq\mu_{M}\geq 0}\frac{1}{\prod_{i=1}^{M} \mu_{i}!}\frac{1}{\prod_{i=1}^{M}\prod_{j=0}^{\mu_{i}-1}(\theta+j)}\left( \frac{a_{1}z_{1}}{2}\right)^{2|\mu|}\] \[\leq \prod_{i=1}^{M}\left(\sum_{\mu_{i}=0}^{\infty}\frac{1}{\mu_{i}! \prod_{j=0}^{\mu_{i}-1}(\theta+j)}\left(\frac{a_{1}z_{1}}{2}\right)^{2\mu_{i} }\right)\leq\prod_{i=1}^{M}\left(1+\sum_{\mu_{i}=1}^{\infty}\frac{1}{\theta} \frac{1}{[(\mu_{i}-1)!]^{2}}\left(\frac{a_{1}|z|}{2}\right)^{2\mu_{i}}\right)\] \[\leq \prod_{i=1}^{M}\left(1+\frac{1}{\theta}\left(\frac{a_{1}|z|}{2} \right)^{2}e^{\frac{a_{1}|z|}{2}}\right)=\left[1+\frac{1}{\theta}\left(\frac{ a_{1}|z|}{2}\right)^{2}e^{\frac{a_{1}|z|}{2}}\right]^{M}. \tag{2.18}\] This verifies (2.16). (2.17) follows from (2.16) and Theorem 2.24. 
**Definition 2.29**.: _We say a measure \(\mathfrak{m}\) on M-tuples \(a_{1}\geq...\geq a_{M}\geq 0\) is exponentially decaying with exponent \(R>0\), if_ \[\int e^{MRa_{1}}\mu(da_{1},...,da_{M})<\infty.\] By Proposition 2.28 and Definition 2.29, the Bessel generating function of \(\mathfrak{m}\), where \(\mathfrak{m}\) is a compactly supported generalized function or exponentially decaying measure, is well-defined on a domain near \(0\). Moreover, we will take \(\mathfrak{m}\) to be of total mass \(1\), which means \(\langle\mathfrak{m},1\rangle=1\), where \(1\) is the constant function \(1\). So we have \[G_{N,\theta}(0,...,0;\mathfrak{m})=1.\] Now we generalize the addition to random vectors \(\vec{a}\) and \(\vec{b}\) following Definition 1.5. **Definition 2.30**.: _Given \(\theta>0\), \(M\leq N\), let \(\vec{a}=(a_{1}\geq...\geq a_{M}\geq 0)\), \(\vec{b}=(b_{1}\geq...\geq b_{M}\geq 0)\) be two random M-tuples whose distribution are given by generalized functions \(\mathfrak{m}_{a}\) and \(\mathfrak{m}_{b}\) on \(\mathbb{R}^{M}_{\geq 0}\). Let \(\vec{c}\) be a symmetric random vector in \(\mathbb{R}^{M}_{\geq 0}\) whose distribution is given by generalized function \(\mathfrak{m}_{c}\), such that_ \[G_{N,\theta}(z_{1},...,z_{M};\mathfrak{m}_{c})=G_{N,\theta}(z_{1},...,z_{M}; \mathfrak{m}_{a})\cdot G_{N,\theta}(z_{1},...,z_{M};\mathfrak{m}_{b}). \tag{2.19}\] _We write_ \[\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}.\] Since \(\mathbb{B}(z_{1},...,z_{M};\theta,N)\) is behaving nice enough in the analytic sense, one can interchange the differentiation over \(z_{1},...,z_{M}\) and the pairing with \(\mathfrak{m}\), and therefore Theorem 2.24 generalizes to the following. **Theorem 2.31**.: _Let \(\mathfrak{m}\) be a symmetric compactly supported generalized function on \(\mathbb{R}^{M}\), or a exponential decaying measure as in Definition 2.29 with exponent \(R\). Let \(k_{1},...,k_{s}\in\mathbb{Z}_{\geq 1}\). Then \(G_{N,\theta}(z_{1},..,z_{M};\mathfrak{m})\) is analytic as a function of \((z_{1},...,z_{M})\) (in the domain \(\{z\in\mathbb{R}^{M}:|z|<R\}\) in the second case). Moreover,_ \[\left(\prod_{i=1}^{s}\mathrm{P}_{2k_{i}}\right)G_{N,\theta}(z_{1},..,z_{M}; \mathfrak{m})\Big{|}_{z_{1}=...z_{M}=0}=\bigg{\langle}\mathfrak{m},\prod_{i=1 }^{s}\Big{(}\sum_{j=1}^{M}(a_{j})^{2k_{i}}\Big{)}\bigg{\rangle}. \tag{2.20}\] _The above properties also hold for_ \[G_{N,\theta}(z_{1},...,z_{M};\mathfrak{m}_{c})=G_{N,\theta}(z_{1},...,z_{M}; \mathfrak{m}_{a})\cdot G_{N,\theta}(z_{1},...,z_{M};\mathfrak{m}_{b}),\] _where \(\mathfrak{m}_{a}\), \(\mathfrak{m}_{b}\) are of the above two types._ Proof.: This follows from dominated convergence theorem, where the uniform upper bounds of \(\mathbb{B}(\cdot,z_{1},...,z_{M};\theta,N)\) and its derivatives are given by Proposition 2.28. ## 3. Concentration in low temperature In this section, we fix the size of matrices \(M\),\(N\) and the input as deterministic input \(\vec{a}\), \(\vec{b}\), and study the behavior of \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) as \(\theta\to\infty\). According to the statistical physic interpretation, when \(\theta\to\infty\) the temperature is going down to \(0\), and hence the random vector \(\vec{c}\) will freeze at some deterministic M-tuples. ### Finite Law of Large Numbers Before taking the limit, we consider the expected characteristic polynomial of \(CC^{*}\) for each \(\theta<\infty\). It turns out that the expression does not really depend on \(\theta\). The following lemma will be used later in the proof. 
Let \(C^{v,\mu}_{\lambda}(\theta)\) be the coefficient defined in 2.4. **Lemma 3.1**.: _When \(\lambda=1^{l}\), \(C^{v,\mu}_{\lambda}(\theta)\neq 0\) only when \(v=1^{i}\), \(\mu=1^{j}\), and \(i+j=l\). Moreover,_ \[C^{1^{i},1^{j}}_{1^{l}}(\theta)=\frac{\prod_{m=1}^{l}(\frac{[\theta-m\theta+1 ]}{l\theta-m\theta+1})}{\prod_{m=1}^{i}(\frac{i\theta-m\theta+\theta}{i\theta- m\theta+1})\prod_{m=1}^{j}(\frac{j\theta-m\theta+\theta}{j\theta-m\theta+1})}. \tag{3.1}\] Proof.: This is studied in [GM], and for the convenience of the readers we reproduce the proof. Applying the automorphism \(\omega_{\theta}\) of the algebra of symmetric functions (see [M, Chapter VI, Section 10], which acts on Jack polynomials in the following way: \[\omega_{\theta}(P_{\lambda}(\cdot;\theta))=Q_{\lambda^{\prime}}(\cdot;\theta^ {-1}), \tag{3.2}\] (2.4) becomes \[Q_{(i,0,...)}(\cdot;\theta^{-1})\cdot Q_{(j,0,...)}(\cdot;\theta^{-1})=\sum_{ \mu}C^{1^{i},1^{j}}_{\mu}(\theta)\cdot Q_{\mu^{\prime}}(\cdot;\theta^{-1}). \tag{3.3}\] Recall that \(Q_{\lambda}(\cdot;\theta)=b_{\lambda}(\theta)P_{\lambda}(\cdot;\theta)=\frac{ H(\lambda)}{H^{\prime}(\lambda)}P_{\lambda}(\cdot;\theta)\). By comparing the coefficient of the leading monomial \(z_{1}^{l}\), we have \[C^{1^{i},1^{j}}_{1^{l}}(\theta)=\frac{b_{1^{i}}(\theta^{-1})b_{1^{j}}(\theta^ {-1})}{b_{1^{l}}(\theta^{-1})}, \tag{3.4}\] and \(C^{v,\mu}_{1^{l}}=0\) if \(v\) or \(\mu\) has more than one column. **Theorem 3.2**.: _Fix \(M\leq N\), given \(\vec{a}\) and \(\vec{b}\), let \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\). Take \(z\) as a formal variable, and let_ \[P^{\theta}_{M,N}(z)=\mathbb{E}\left[\prod_{i=1}^{M}(z-c_{i}^{2})\right]. \tag{3.5}\] _Then the explicit expression of \(P^{\theta}_{M,N}(z)\) is \(\theta\)-independent, and_ \[P^{\theta}_{M,N}(z)=P_{M,N}(z)\] _for all \(\theta>0\), where \(P_{M,N}(z)\) is defined in (1.9)._ Proof.: Rewrite the product on the right side of (3.5) as \[\prod_{i=1}^{M}(z-c_{i}^{2})=\sum_{l=0}^{M}(-1)^{l}e_{l}(c_{1}^{2},...,c_{M}^{ 2})z^{M-l},\] it turns out that \(P^{\theta}_{M,N}(z)\) is given by the moments of \(\{c_{i}^{2}\}_{i=1}^{M}\) only in terms of elementary symmetric polynomials. Taking the partition \(\lambda=(1^{j},0^{M-j})\), \(P_{\lambda}(x;\theta)=e_{j}(x)\) for any \(\theta>0\). We use Proposition 2.26 and it remains to specify the coefficients. From Lemma 3.1 we get \[\frac{H(1^{l})}{H(1^{i})H(1^{j})}C_{1^{l}}^{1^{i},1^{j}}(\theta)=\frac{H^{{}^{ \prime}}(1^{l})}{H^{{}^{\prime}}(1^{i})H^{{}^{\prime}}(1^{j})}=\frac{l!}{i!j!}.\] Moreover, direct calculation yields \[\frac{e_{l}(1^{M})}{e_{i}(1^{M})e_{j}(1^{M})}=\frac{i!(M-i)!j!(M-j)!}{M!l!(M-l)!},\] and when \(\lambda=1^{l},v=1^{i},\mu=1^{j}\), \[\frac{\prod_{i=1}^{M}\Gamma(\theta(N-i+1))\Gamma(\theta(N-i+1)+\lambda_{i})}{ \prod_{i=1}^{M}\Gamma(\theta(N-i+1)+v_{i})\Gamma(\theta(N-i+1)+\mu_{i})}=\frac {(N-i)!}{N!}\frac{(N-j)!}{(N-l)!}.\] Combine all these together finishes the proof. We highlight the connection of our result with the so-called finite free probability, which was initiated in recent years by Marcus, Spielman and Srivastava and studies convolution of polynomials. 
Given two polynomials \(p(z)=\sum_{i=0}^{M}z^{M-i}a_{i},q(z)=\sum_{i=0}^{M}z^{M-i}b_{i}\) with degree at most \(M\), [MSS] defines the rectangular additive convolution for two \(M\times M\) matrices, and [GrM] generalizes it to arbitrary rectangular matrices, such that the \((M,N)^{th}\) rectangular additive convolution of \(p(z)\) and \(q(z)\) is defined as \[p(z)\boxplus\boxplus_{M}^{N}q(z)=\sum_{l=0}^{M}z^{M-l}(-1)^{l}\Bigg{(}\frac{(M -i)!(M-j)!}{M!(M-l)!}\frac{(N-i)!(N-j)!}{N!(N-l)!}\Bigg{)}a_{i}b_{j}.\] In [GrM], it is shown that taking \(p(z)=\chi_{z}(AA^{*})\), \(q(z)=\chi_{z}(BB^{*})\), where \(A\) and \(B\) are two \(M\times N\) real/complex matrices, taking \(\chi_{z}(\cdot)\) to be the characteristic polynomial \(\det(zI-\cdot)\), and let \(U_{M\times M},V_{N\times N}\) are independent Haar orthogonal/unitary, then \(p(z)\boxplus\boxplus_{M}^{N}q(z)=\mathbb{E}\left[\chi_{z}((A+UBV)(A+UBV)^{*})\right]\). Theorem 3.2 generalizes this operation from \(\beta=1,2\) to arbitrary \(\beta>0\), with a different approach not relying on the concrete matrix structure. In particular it shows that the rectangular additive convolution is \(\beta-\)independent. Our next result is the law of large number of \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) in the regime \(\theta\rightarrow\infty\). As preparation we state a combinatorial result. Given partitions \(v,\mu,\lambda\) such that \(v_{1},\mu_{1}\leq\lambda_{1}\), \(l(v),\ l(\mu),\ l(\lambda)\leq M\), let \(\{k_{l}\}\) be an index set that \(l=1,2,...,\lambda_{k}-\lambda_{k+1},k=1,2,...,M\), and \(\{i_{k_{l}}\},\{j_{k_{l}}\}\) be two collections of nonnegative integers. We do not distinguish \(\{i_{k_{l}}\}_{l=1}^{\lambda_{k}-\lambda_{k+1}}\) with \(\{i_{k_{\sigma(l)}}\}_{l=1}^{\lambda_{k}-\lambda_{k+1}}\), where \(\sigma\in S_{\lambda_{k}-\lambda_{k+1}}\) is an arbitrary permutation, and same for \(\{j_{k_{l}}\}_{l=1}^{\lambda_{k}-\lambda_{k+1}}\). **Proposition 3.3**.: _Let \(C_{\lambda}^{v,\mu}\) be the coefficient of \(m_{\lambda}(\cdot)\) in the expansion_ \[m_{v}(\cdot)\cdot m_{\mu}(\cdot)=\sum_{\lambda}C_{\lambda}^{v,\mu}m_{\lambda} (\cdot).\] _Then_ \[C_{\lambda^{{}^{\prime}}}^{v^{\prime},\mu^{{}^{\prime}}}=\text{\# ways to choose }\{i_{k_{l}}\},\{j_{k_{l}}\}\text{ such that for }m=1,2,...,M,\] \[\begin{cases}v_{m}=\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}-\lambda_{k+1}}I_{i_{k_{l} }\geq m};\\ \mu_{m}=\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}-\lambda_{k+1}}I_{j_{k_{l}}\geq m}. \end{cases} \tag{3.6}\] Proof.: We choose \(\{i_{k_{l}}\},\{j_{k_{l}}\}\) in an explicit way. By definition of \(C_{\lambda}^{v,\mu}\), we are combining column \(v_{l_{1}}^{{}^{\prime}}\) with column \(\mu_{l_{2}}^{{}^{\prime}}\) to get a column \(\lambda_{l_{3}}^{{}^{\prime}}\), where \(l_{1},l_{2},l_{3}\) are chosen among \(1,2,...,\lambda_{1}\), and \(v_{l_{1}}^{{}^{\prime}},\mu_{l_{2}}^{{}^{\prime}}\) might be of length \(0\). Inspired by this, let \(\{i_{k_{l}}\}_{l=1}^{\lambda_{k}-\lambda_{k+1}}\) be the length of (distinct) columns of \(v\), \(\{j_{k_{l}}\}_{l=1}^{\lambda_{k}-\lambda_{k+1}}\) be the length of (distinct) columns of \(\mu\), which are chosen to contribute to \(\lambda_{\lambda_{k+1}+1}^{{}^{\prime}},...,\lambda_{\lambda_{k}}^{{}^{\prime}}\). We immediately see that the above way to choose \(\{i_{k_{l}}\},\{j_{k_{l}}\}\) satisfy (3.6), whose total number is equal to \(C_{\lambda^{{}^{\prime}}}^{v^{{}^{\prime}},\mu^{{}^{\prime}}}\). It remains to check each way of choosing \(\{i_{k_{l}}\},\{j_{k_{l}}\}\) can be interpreted in this way. 
Given a sequence of nonnegative integers \(\{i_{k_{l}}\},\{j_{k_{l}}\}\) satisfying (3.6), we have for \(m=1,2,...,M\), \[\begin{cases}v_{m}-v_{m+1}=\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}-\lambda_{k+1} }I_{i_{k_{l}}=m};\\ \mu_{m}-\mu_{m+1}=\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}-\lambda_{k+1}}I_{j_{k_ {l}}=m}.\end{cases} \tag{3.7}\] Then one can split \(\{i_{k_{l}}\}_{l=1}^{\lambda_{k}-\lambda_{k+1}}(k=1,2,...,M)\) into disjoint groups, such that the number of elements in group \(m\) is exactly \(v_{m}-v_{m+1}\), which is equal to the number of length \(m\) columns in \(v\). Vice versa for \(\{j_{k_{l}}\}_{l=1}^{\lambda_{k}-\lambda_{k+1}}(k=1,2,...,M)\). Proof of Theorem 1.6:.: The weak convergence to a delta function on polynomial test functions is equivalent to the statement that, given any arbitrary collection of polynomials \(f_{1},...,f_{n}\) of M variables, we have \[\lim_{\theta\to\infty}\mathbb{E}\left[\prod_{i=1}^{n}f_{i}(\vec{c^{2}})\right] =\lim_{\theta\to\infty}\prod_{i=1}^{n}\left[\mathbb{E}\left[f_{i}(\vec{c^{2}}) \right]\right]. \tag{3.8}\] Since \(\vec{c}\) is symmetric in distribution, if suffices to consider symmetric polynomials in \(\Lambda_{M}\), which can be generated (in the sense of algebra) by elementary symmetric functions \(e_{1},...,e_{M}\). Since (3.8) is multilinear in \(f_{i}^{\prime}s\), we reduce to showing for any positive integers \(k_{1},...,k_{M}\), \[\lim_{\theta\to\infty}\mathbb{E}\left[\prod_{i=1}^{M}e_{i}(\vec{c^{2}})^{k_{i} }\right]\stackrel{{?}}{{=}}\lim_{\theta\to\theta}\prod_{i=1}^{M} \left[\mathbb{E}\left[e_{i}(\vec{c^{2}}\right]\right]^{k_{i}}. \tag{3.9}\] Once we show this, the deterministic limit of \(\vec{c}\) will be a M-tuples \(\vec{\lambda}\), such that \(\mathbb{E}\Big{[}e_{i}(\vec{c})\Big{]}=e_{i}(\vec{\lambda})\) for all \(i=1,2,...,M\). Then Theorem 3.2 identifies \(\vec{\lambda}\) with roots of \(P_{M,N}^{\theta}(z)\). We connect the left side of (3.9) with Jack polynomials, using the following result ([St, Proposition 7.6]): \[\lim_{\theta\to\infty}P_{\lambda}(z_{1},...,z_{M};\theta)=\prod_{i=1}^{M}[e_{i}( z_{1},...,z_{M})]^{\lambda_{i}-\lambda_{i+1}}, \tag{3.10}\] for any partition \(\lambda\). Then, let \(\lambda_{i}=k_{i}+...+k_{M}\), the left side of (3.9) becomes the limit of \(\mathbb{E}\left[P_{\lambda}(\vec{c^{2}};\theta)\right]\), since each product of \(e_{i}(\vec{c^{2}})^{\prime}s\) has bounded expectation for \(\theta>0\) due to the fact that \(\vec{c}\) is bounded supported. Again by Proposition 2.26, \[\mathbb{E}\left[P_{\lambda}(c_{1}^{2},...,c_{M}^{2};\theta)\right] =\sum_{|v|+|\mu|=|\lambda|}\frac{H(\lambda)}{H(v)H(\mu)}\frac{ \prod_{i=1}^{M}\Gamma(\theta(N-i+1))\Gamma(\theta(N-i+1)+\lambda_{i})}{\prod_{ i=1}^{M}\Gamma(\theta(N-i+1)+v_{i})\Gamma(\theta(N-i+1)+\mu_{i})}\] \[\frac{P_{\lambda}(1^{M};\theta)}{P_{v}(1^{M};\theta)P_{\mu}(1^{M };\theta)}C_{\lambda}^{v,\mu}(\theta)P_{v}(a_{1}^{2},...,a_{M}^{2};\theta)P_{ \mu}(b_{1}^{2},...,b_{M}^{2};\theta). \tag{3.11}\] Taking \(\theta\to\infty\), \[\frac{\prod_{i=1}^{M}\Gamma(\theta(N-i+1))\Gamma(\theta(N-i+1)+\lambda_{i})}{ \prod_{i=1}^{M}\Gamma(\theta(N-i+1)+v_{i})\Gamma(\theta(N-i+1)+\mu_{i})} \longrightarrow\prod_{m=1}^{M}(N-m+1)^{\lambda_{m}-v_{m}-\mu_{m}},\] and since by definition \[P_{v}(\cdot;\theta)P_{\mu}(\cdot;\theta)=\sum_{\lambda}C_{\lambda}^{v,\mu}( \theta)P_{\lambda}(\cdot;\theta),\] applying \(\omega_{\theta}\) on both sides (c.f. 
the proof of Lemma 3.1), and use the fact that (see [St, Proposition 7.6]) \[\lim_{\theta\to 0}P_{\lambda}(z_{1},...,z_{M};\theta)=m_{\lambda}(z_{1},...,z_{M}), \tag{3.12}\] we have \[C_{\lambda}^{v,\mu}(\theta)\frac{H(\lambda)}{H(v)H(\mu)} \longrightarrow C_{\lambda^{\prime}}^{v^{\prime}},^{\mu^{\prime}} \cdot\frac{\lim_{\theta\to\infty}H^{{}^{\prime}}(\lambda)}{\lim_{\theta\to \infty}H^{{}^{\prime}}(v)\cdot\lim_{\theta\to\infty}H^{{}^{\prime}}(\mu)}\] \[=C_{\lambda^{\prime}}^{v^{\prime}},^{\mu^{\prime}}\cdot\frac{\prod _{s\in\lambda}(l(s)+1)}{\prod_{s\in v}(l(s)+1)\cdot\prod_{s\in\mu}(l(s)+1)}. \tag{3.13}\] Moreover, applying (3.10) again on \(\frac{P_{\lambda}(1^{M};\theta)}{P_{v}(1^{M};\theta)P_{\mu}(1^{M};\theta)}\), \(P_{v}(a_{1}^{2},...,a_{M}^{2};\theta)P_{\mu}(b_{1}^{2},...,b_{M}^{2};\theta)\), the right side of (3.11) goes to \[\sum_{|v|+|\mu|=|\lambda|}C_{\lambda^{\prime}}^{v^{\prime}},^{\mu ^{\prime}} \frac{\prod_{s\in\lambda}(l(s)+1)}{\prod_{s\in v}(l(s)+1)\cdot\prod_{s\in \mu}(l(s)+1)}\frac{\prod_{i=1}^{M}\binom{M}{i}^{\lambda_{i}-\lambda_{i+1}}}{ \prod_{i=1}^{M}\binom{M}{i}^{v_{i}-v_{i+1}}\prod_{i=1}^{M}\binom{M}{i}^{\mu_{i }-\mu_{i+1}}}\] \[\prod_{m=1}^{M}(N-m+1)^{\lambda_{i}-v_{i}-\mu_{i}}\prod_{i=1}^{M} \left[e_{i}(a_{1}^{2},...,a_{M}^{2})\right]^{v_{i}-v_{i+1}}\prod_{i=1}^{M} \left[e_{i}(b_{1}^{2},...,b_{M}^{2})\right]^{\mu_{i}-\mu_{i+1}}. \tag{3.14}\] On the other hand, by Theorem 3.2, the right side of (3.9) is equal to \[\begin{split}&\prod_{k=1}^{M}\left[\mathbb{E}\big{[}e_{i}(\tilde{c }^{2})\big{]}\right]^{\lambda_{k}-\lambda_{k+1}}\\ =&\prod_{k=1}^{M}\left[\sum_{i+j=k}\frac{(M-i)!(M-j)! }{M!(M-k)!}\frac{(N-i)!(N-j)!}{N!(N-k)!}e_{i}(a_{1}^{2},...,a_{M}^{2})e_{j}(b_{ 1}^{2},...,e_{M}^{2})\right]^{\lambda_{k}-\lambda_{k+1}}.\end{split} \tag{3.15}\] It remains to check that (3.14) is equal to (3.15). We open the bracket in (3.15), and identify each term in the sum with a unique collection of nonnegative integer valued indices \(\{k_{l}\}_{l=1}^{\lambda_{k}-\lambda_{k+1}}(k=1,2,...,M)\), such that \(i_{k_{l}}+j_{k_{l}}=k\) for each \(l\). Moreover, such term is a multiple of \(\prod_{i=1}^{M}\left[e_{i}(a_{1}^{2},...,a_{M}^{2})\right]^{v_{i}-v_{i+1}} \prod_{i=1}^{M}\left[e_{i}(b_{1}^{2},...,b_{M}^{2})\right]^{\mu_{i}-\mu_{i+1}}\), where for \(i=1,2,...,M\), \[\begin{split} v_{i}-v_{i+1}&=\sum_{k=1}^{M}\sum_{l =1}^{\lambda_{k}-\lambda_{k+1}}I_{i_{k_{l}}=m},\\ \mu_{i}-\mu_{i+1}&=\sum_{k=1}^{M}\sum_{l=1}^{\lambda _{k}-\lambda_{k+1}}I_{j_{k_{l}}=m},\\ \lambda_{i}-\lambda_{i+1}&=\sum_{k=1}^{M}\sum_{l=1} ^{\lambda_{k}-\lambda_{k+1}}I_{k=m},\end{split} \tag{3.16}\] which matches (3.7). 
Hence it remains to match the coefficients, i.e, to show that \[\begin{split}&\sum_{i_{k_{l}}+j_{k_{l}}=k}\prod_{k=1}^{M}\prod_{l =1}^{\lambda_{k}-\lambda_{k+1}}\left[\frac{(M-i_{k_{l}})!(M-j_{k_{l}})!}{M!(M -k)!}\frac{(N-i_{k_{l}})!(N-j_{k_{l}})!}{N!(N-k)!}\right]\\ \stackrel{{?}}{{=}}& C_{\lambda^{\prime} }^{v^{\prime},\mu^{\prime}}\frac{\prod_{s\in\lambda}[l(s)+1]}{\prod_{s\in v} [l(s)+1]\cdot\prod_{s\in\mu}[l(s)+1]}\\ &\cdot\frac{\prod_{i=1}^{M}{M\choose i}^{\lambda_{i}-\lambda_{i+ 1}}}{\prod_{i=1}^{M}{M\choose i}^{v_{i}-v_{i+1}}\prod_{i=1}^{M}{M\choose i}^{ \mu_{i}-\mu_{i+1}}}\prod_{m=1}^{M}(N-m+1)^{\lambda_{m}-v_{m}-\mu_{m}}.\end{split} \tag{3.17}\] We first rewrite the left side: \[\frac{(M-i_{k_{l}})!(M-j_{k_{l}})!}{(M-k)!M!}=\prod_{m=1}^{M}(M-m+1)^{\sum_{k =1}^{M}(I_{m\leq k}-I_{m\leq i_{k_{l}}}-I_{m\leq j_{k_{l}}})}, \tag{3.18}\] hence \[\begin{split}&\prod_{k=1}^{M}\prod_{l=1}^{\lambda_{k}-\lambda_{k+1 }}\left[\frac{(M-i_{k_{l}})!(M-j_{k_{l}})!}{(M-k)!M!}\frac{(N-i_{k_{l}})!(N-j_{ k_{l}})!}{(N-k)!N!}\right]\\ &=\prod_{m=1}^{M}(M-m+1)^{\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}- \lambda_{k+1}}\left[I_{m\leq k}-I_{m\leq i_{k_{l}}}-I_{m\leq j_{k_{l}}}\right] }\\ &\cdot\prod_{m=1}^{M}(N-m+1)^{\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k} -\lambda_{k+1}}\left[I_{m\leq k}-I_{m\leq i_{k_{l}}}-I_{m\leq j_{k_{l}}}\right] }.\end{split} \tag{3.19}\] On the right side, \[\prod_{m=1}^{M}(N-m+1)^{\lambda_{m}-v_{m}-\mu_{m}}=\prod_{m=1}^{M}(N-m+1)^{\sum_{k =1}^{M}\sum_{l=1}^{\lambda_{k}-\lambda_{k+1}}(I_{m\leq k-I_{m\leq i_{k_{l}}}-I_{ m\leq j_{k_{l}}}})}. \tag{3.20}\] And \[\binom{M}{i}=\prod_{m=1}^{M}(M-m+1)^{1-I_{m\geq i+1}-I_{m\geq M-i+1}}, \tag{3.21}\] hence \[\frac{\prod_{i=1}^{M}\binom{M}{i}^{\lambda_{i}-\lambda_{i+1}}}{ \prod_{i=1}^{M}\binom{M}{i}^{v_{i}-v_{i+1}}\prod_{i=1}^{M}\binom{M}{i}^{\mu_{i }-\mu_{i+1}}}\] \[= \prod_{i=1}^{M}\binom{M}{i}^{\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{ k}-\lambda_{k+1}}\left[I_{k=i}-I_{i_{k_{l}}-i}-I_{j_{k_{l}}=i}\right]}\] \[= \prod_{m=1}^{M}(M-m+1)^{\left(\sum_{i=1}^{M}\left[1-I_{m\geq i+1 }-I_{m\geq M-i+1}\right]\right)\left(\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}- \lambda_{k+1}}\left[I_{k=i}-I_{i_{k_{l}}-i}-I_{j_{k_{l}}-i}\right]\right)}\] \[= \prod_{m=1}^{M}(M-m+1)^{\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}- \lambda_{k+1}}\left[I_{m\leq k-I_{m\leq i_{k_{l}}}-I_{m\leq j_{k_{l}}}}\right] +\left[I_{m\geq M-i_{k_{l}}+1}+I_{m\geq M-j_{k_{l}}+1}-I_{m\geq M-k+1}\right]}. \tag{3.22}\] Finally, \[\prod_{m=1}^{M}(M-m+1)^{\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}- \lambda_{k+1}}\left[-\left(I_{m\geq M-i_{k_{l}}+1}+I_{m\geq M-j_{k_{l}}+1}-I_{ m\geq M-k+1}\right)\right]}\] \[= \prod_{m=1}^{M}(M-m+1)^{\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}- \lambda_{k+1}}\left[-\left(I_{i_{k_{l}}\geq M-m+1}-I_{j_{k_{l}}\geq M-m+1}+I_{ k\geq M-m+1}\right)\right]}\] \[= \prod_{m=1}^{M}m^{\sum_{k=1}^{M}\sum_{l=1}^{\lambda_{k}-\lambda_ {k+1}}\left[I_{m\leq k-I_{m\leq i_{k_{l}}}-I_{m\leq j_{k_{l}}}}\right]}\] \[= \frac{\prod_{j=1}^{\lambda_{1}}\lambda_{j}^{\prime}!}{\prod_{j=1 }^{\lambda_{1}}v_{j}^{\prime}!\prod_{j=1}^{\lambda_{1}}\mu_{j}^{\prime}!}= \frac{\prod_{s\in\lambda}[l(s)+1]}{\prod_{s\in v}[l(s)+1]\prod_{s\in\mu}[l(s)+1 ]}. \tag{3.23}\] (3.17) follows from (3.20), (3.22), (3.23) and Proposition 3.3. ### Gaussian Fluctuation for \(1\times N\) matrix Take \(M=1\), so that \(A\) and \(B\) are two \(1\times N\) matrices with singular values \(a_{1},b_{1}\geq 0\), and let \(c_{1}=a_{1}\boxplus_{1,N}^{\theta}b_{1}\). 
When taking \(\theta\to\infty\), Theorem 1.6 shows \[c_{1}^{2}\longrightarrow\lambda_{1}^{2}\] in moments, where \[\mathbb{E}\left[c_{1}^{2}\right]=\mathbb{E}\left[e_{1}(c^{2})\right]=a_{1}^{2} +b_{1}^{2}=\lambda_{1}^{2}. \tag{3.24}\] Based on this result, we consider further the fluctuation of \(c_{1}\) around \(\lambda_{1}\) in \(\theta\to\infty\) regime, which turns out to be a Gaussian random variable under proper rescaling. **Theorem 3.4**.: _For \(a_{1},b_{1}\geq 0\), let \(\lambda_{1}^{2}=a_{1}^{2}+b_{1}^{2}\), and \(c_{1}=a_{1}\boxplus_{1,N}^{\theta}b_{1}\). As \(\theta\rightarrow\infty\), we have:_ \[\sqrt{\theta}(c_{1}^{2}-\lambda_{1}^{2})\overset{d}{\longrightarrow}Z, \tag{3.25}\] _where \(Z\sim\mathscr{N}(0,\frac{2}{N}a_{1}^{2}b_{1}^{2})\)._ _Remark 3.5_.: We expect the Gaussian fluctuation behavior of \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) when \(\theta\rightarrow\infty\), for general \(M>1\), and we leave the generalization of Theorem 3.4 as an open problem. Proof.: We first show that the convergence holds in the sense of moments. By Proposition 2.26, for all \(l=1,2,...\) \[\mathbb{E}\left[c_{1}^{2l}\right]=\sum_{k_{1}+k_{2}=l}\frac{l!}{k_{1}!k_{2}!} \frac{\Gamma(\theta N)\Gamma(\theta N+l)}{\Gamma(\theta N+k_{1})\Gamma(\theta N +k_{2})}a_{1}^{2k_{1}}b_{1}^{2k_{2}}, \tag{3.26}\] since in (2.14) \(\lambda=(l,0,...)\), \(v=(k_{1},0,...)\), \(\mu=(k_{2},0,...)\) and \(C_{\lambda}^{v,\mu}(\theta)\equiv 1\). Then for \(m\in\mathbb{Z}_{\geq 0}\), \[\mathbb{E}\left[(c_{1}^{2}-\lambda_{1}^{2})^{m}\right]\] \[= \sum_{k_{1}+k_{2}+k=m}\frac{(-1)^{k}m!}{(k_{1}+k_{2})!k!}\frac{(k_ {1}+k_{2})!}{k_{1}!k_{2}!}\frac{\Gamma(\theta N)\Gamma(\theta N+k_{1}+k_{2})} {\Gamma(\theta N+k_{1})\Gamma(\theta N+k_{2})}a_{1}^{2k_{1}}b_{1}^{2k_{2}}(a _{1}^{2}+b_{1}^{2})^{k}\] \[= \sum_{k_{1}+k_{2}+k_{3}+k_{4}=m}(-1)^{k}\frac{m!}{k_{1}!k_{2}!k_{ 3}!k_{4}!}\frac{\Gamma(\theta N)\Gamma(\theta N+k_{1}+k_{2})}{\Gamma(\theta N+ k_{1})\Gamma(\theta N+k_{2})}a_{1}^{2k_{1}+2k_{3}}b_{1}^{2k_{2}+2k_{4}}.\] This implies for fixed \(l_{1}+l_{2}=m\geq 0\), coefficient of monomial \(a_{1}^{2l_{1}}b_{1}^{2l_{2}}\) in \(\mathbb{E}\left[[\sqrt{\theta}(c_{1}^{2}-\lambda_{1}^{2})]^{m}\right]\) is \[\sqrt{\theta}^{l_{1}+l_{2}}m!\sum_{k_{3}=0}^{l_{1}}\sum_{k_{4}=0} ^{l_{2}}\frac{(-1)^{k_{3}}}{(l_{1}-k_{3})!k_{3}!}\frac{(-1)^{k_{4}}}{(l_{2}-k_ {4})!k_{4}!}\frac{\Gamma(\theta N)\Gamma(\theta N+l_{1}+l_{2}-k_{3}-k_{4})}{ \Gamma(\theta N+l_{1}-k_{3})\Gamma(\theta N+l_{2}-k_{4})} \tag{3.28}\] \[= \sqrt{\theta}^{l_{1}+l_{2}}m!\sum_{k_{3}=0}^{l_{1}}\sum_{k_{4}=0} ^{l_{2}}\frac{(-1)^{l_{1}-k_{3}}}{(l_{1}-k_{3})!k_{3}!}\frac{(-1)^{l_{2}-k_{4} }}{(l_{2}-k_{4})!k_{4}!}\frac{\Gamma(\theta N)\Gamma(\theta N++k_{3}+k_{4})}{ \Gamma(\theta N+k_{3})\Gamma(\theta N+k_{4})}. \tag{3.27}\] It remains to match the above expression with moments of \(Z\). We use the following lemma, whose prove is postponed. 
**Lemma 3.6**.: _For any \(l=1,2,...\), with \(z\) as a formal variable,_ _(a)._ \[\sum_{p=0}^{l}\frac{(-1)^{(l-p)}}{(l-p)!p!}(z+p)(z+p+1)...(z+p+q-1)=0 \tag{3.29}\] _if \(q=0,1,2,...,l-1\)._ _(b)._ \[\sum_{p=0}^{l}\frac{(-1)^{(l-p)}}{(l-p)!p!}(z+p)(z+p+1)...(z+p+q-1)=1 \tag{3.30}\] _if \(q=l\)._ Without loss of generality, assume that \(l_{1}\geq l_{2}\) in (3.28), and we rewrite (3.28) as \[\sqrt{\theta}^{l_{1}+l_{2}}m!\sum_{k_{4}=0}^{l_{2}}\frac{(-1)^{l_{2}-k_{4}}}{(l_{ 2}-k_{4})!k_{4}!}\frac{\Gamma(\theta N)}{\Gamma(\theta N+k_{4})}\Bigg{[}\sum_{ k_{3}=0}^{l_{1}}\frac{(-1)^{l_{1}-k_{3}}}{(l_{1}-k_{3})!k_{3}!}(\theta N+k_{3})( \theta N+k_{3}+1)...(\theta N+k_{3}+k_{4}-1)\Bigg{]}.\] By Lemma 3.6, the sum in the bracket is nonzero only when \(k_{4}=l_{1}\), which implies \(l_{1}=l_{2}\), \(m=l_{1}+l_{2}=2l_{1}\), and (3.28) becomes \[\sqrt{\theta}^{2l_{1}}\frac{(2l_{1})!}{l_{1}!}\frac{1}{\theta N(\theta N+1)... (\theta N+l_{1}-1)}.\] Therefore, the odd moments of \(\sqrt{\theta}(c_{1}^{2}-\lambda_{1}^{2})\) are all zero, and the \(2k^{th}\) moment of \(\sqrt{\theta}(c_{1}^{2}-\lambda_{1}^{2})\) is equal to \[\sqrt{\theta}^{2k}\frac{(2k)!}{k!}\frac{1}{\theta N(\theta N+1)...(\theta N+k -1)}a_{1}^{2k}b_{1}^{2k},\] which converges to \[\frac{(2k)!}{k!}\frac{1}{N^{k}}a_{1}^{2k}b_{1}^{2k}=(2k-1)!!(\frac{2}{N})^{k} a_{1}^{2k}b_{1}^{2k}\] as \(\theta\to\infty\). This coincides with the moments of \(Z\sim\mathscr{N}(0,\frac{2}{N}a_{1}^{2}b_{1}^{2})\). By Example 2.17 and [MP, Theorem 2], which states that products of two usual Bessel functions can be written as a convex combination of Bessel functions, we have that for each \(\theta>0\), \(c_{1}^{2}\) is supported by a legitimate probability measure \(\mu_{\theta}\). The convergence of second moment when \(\theta\to\infty\) implies that \(\{\mu_{\theta}\}_{\theta>0}\) are tight, hence (3.25) follows from the moment convergence. Proof of Lemma 3.6.: (a). Expand the polynomial of \(z\). For each coefficient of \(z^{0},z^{1},z^{2},...,z^{q}\), it can be written as a polynomial of \(p\) with degree at most \(q\) with integer coefficients, and hence an integral linear combination of \(p,p(p-1),p(p-1)(p-2),...,p(p-1)\cdots(p-q+1)\), so the left side of (3.29) becomes an integral linear combination of binomial sums of \[\sum_{p=q}^{l}\frac{(-1)^{l-p}}{(l-p)!(p-q)!}=(1+(-1))^{l-q},\] which equals to \(0\) when \(q\leq l-1\). (b). Following the same idea as (a), the only nonvanishing term is the coefficient of \(z^{0}\), which equals to \[\sum_{p=0}^{l}\frac{(-1)^{l-p}}{(l-p)!p!}p(p-1)\cdots(p-l+1)=\sum_{p=l}^{l} \frac{(-1)^{l-p}}{(l-p)!(p-l)!}=1.\qed\] ## 4. Law of large number in high temperature In this section, fix two parameters \(\gamma>0,q\geq 1\). we explore the behavior of empirical measures of a \(M\times N\) random matrix \(C\), in the regime that taking \(M,N\to\infty\), \(\theta\to 0\), \(M\theta\to\gamma\), \(N\theta\to q\gamma\). To simplify the notation, sometimes we only write \(M\theta\to\infty,N\theta\to q\gamma\) to denote the same regime. ### Main results Consider M-tuples of real numbers \(\vec{c}=(c_{1}\geq...\geq c_{M}\geq 0)\), which should be thought as singular values of some (virtual) rectangular matrix. Suppose that there is a sequence of random M-tuples \(\{\vec{c}_{M}\}_{M=1}^{\infty}\) where \(\vec{c}_{M}=(c_{M,1}\geq...\geq c_{M,M}\geq 0)\), and the distribution of is given in the sense as in Theorem 2.31. Denote its empirical measure by \(\mu_{M}=\frac{1}{M}\sum_{i=1}^{M}(\delta_{c_{M,i}}+\delta_{-c_{M,i}})\). 
We set up a condition in terms of moments, that under some mild technical assumption, is equivalent to the weak convergence, in probability, of the random empirical measures \(\{\mu_{M}\}\) to some limiting probability measure \(\mu\) when \(M\to\infty\). The moments of \(\mu\) are all finite and given by \(m_{k}\)'s. **Definition 4.1**.: _(LLN) Let \(\{\vec{c}_{M}\}_{M=1}^{\infty}\) be a sequence of random M-tuples defined as above. For \(k=1,2,...,\) denote_ \[p_{k}^{M}=\frac{1}{M}\sum_{i=1}^{2M}[c_{M,i}^{k}+(-c_{M,i})^{k}].\] _We say \(\{\vec{c}_{M}\}\) satisfies a law of large numbers, if there exists deterministic real numbers \(\{m_{k}\}_{k=1}^{\infty}\) such that for any s=1,2,... and any \(k_{1},...,k_{s}\in\mathbb{Z}_{\geq 1}\), we have_ \[\lim_{M\to\infty}\mathbb{E}\left[\prod_{i=1}^{s}p_{k_{i}}^{M}\right]=\prod_{i =1}^{s}m_{k_{i}}. \tag{4.1}\] Denote the Bessel generating function of \(\vec{c}_{M}\) by \[G_{M,N;\theta}(z_{1},...,z_{M}):=G_{N,\theta}(z_{1},...,z_{M};\mathfrak{m}_{ \vec{c}_{M}}).\] Recall from the Section 2.5 that \(G_{M,N;\theta}(0,...,0)=1\), and \(G_{M,N;\theta}(z_{1},...,z_{M})\) is analytic on a domain near \((0,...,0)\). Under these conditions, \(\ln(G_{M,N;\theta}(z_{1},...,z_{M}))\) is analytic near \((0,...,0)\), and \(\ln(G_{M,N;\theta}(0,...,0))=0\). Next, we introduce a condition of the partial derivatives of \(\ln(G_{M,N;\theta}(z_{1},...,z_{M}))\) at \(0\), as \(M\to\infty\). **Definition 4.2**.: _(\(q\)-\(\gamma\)-LLN-appropriateness) Given the sequence \(\{\vec{c}_{M}\}_{M=1}^{\infty}\), if for a sequence of real numbers \(\{k_{l}\}_{l=1}^{\infty}\), the following limits hold:_ 1. \(\lim_{M\theta\to\gamma,N\theta\to q\gamma}\frac{\partial^{l}}{ \partial z_{i}^{l}}ln(G_{M,N;\theta})\Big{|}_{z_{1}=,...,z_{M}=0}=(l-1)!\cdot k _{l},\text{ for all }l,i\in\mathbb{Z}_{\geq 1}.\)__ 2. \(\lim_{M\theta\to\gamma,N\theta\to q\gamma}\frac{\partial}{ \partial z_{i_{1}}}...\frac{\partial}{\partial z_{i_{r}}}ln(G_{M,N;\theta}) \Big{|}_{z_{1},..,z_{M}=0}=0,\text{ for all }r\geq 2,\text{ and }i_{1},...,i_{r}\in\mathbb{Z}_{\geq 1}\) _such that the set_ \(\{i_{1},...,i_{r}\}\)_is of cardinality at least two._ _We say \(\{k_{l}\}_{l=1}^{\infty}\) are the limiting \(q\)-\(\gamma\) cumulants of \(\{\vec{c}_{M}\}\)._ _Remark 4.3_.: By Proposition 2.26, \(k_{l}\) are always \(0\) for all odd l's. _Remark 4.4_.: Writing \[g^{M,N,\theta}(z)=\frac{\partial}{\partial z}ln(G_{M,N;\theta}(z,0,...,0))= \sum_{l=1}^{\infty}k_{l}^{M,N,\theta}z^{l-1},\] we have \[k_{l}^{M,N,\theta}\longrightarrow k_{l}\] as \(M\theta\to\gamma,N\theta\to q\gamma\). Our main theorem connects Definition 4.1, 4.2 and gives a quantitative relation between moments and q-\(\gamma\) cumulants of the limiting empirical measure of \(\{\mu_{M}\}_{M=1}^{\infty}\), which is stated using generating function. Consider \(\mathbb{R}[[z]]\), the space of all formal power series of variable \(z\) with real coefficients. **Definition 4.5**.: _Let \(a(z)\) be an element in \(\mathbb{R}[[z]]\). 
We define four linear operators acting on \(\mathbb{R}[[z]]\) to itself, such that for any \(n=0,1,2,...\)_ \[(1).\ \partial(z^{n}) :=n\cdot z^{n-1}\] \[(2).\ d(z^{n}) :=\begin{cases}0&n=0;\\ z^{n-1}&n\geq 1,\end{cases}\] \[(3).\ d^{{}^{\prime}}(z^{n}) :=\begin{cases}0&n\ \ \text{is\ even};\\ 2z^{n-1}&n\ \ \text{is\ odd},\end{cases}\] \[(4).\ \ast_{a}(z^{n}) :=a(z)\cdot z^{n}.\] **Definition 4.6**.: _Let \(\mathrm{T}_{\mathrm{k}\to m}^{\mathrm{q},\gamma}:\mathbb{R}^{\infty}\to \mathbb{R}^{\infty}\) be an operation sending a countable sequence \(\{k_{l}\}_{l=1}^{\infty}\) to another sequence \(\{m_{2k}\}_{k=1}^{\infty}\), such that for each \(k=1,2,...\)_ \[m_{2k}=[z^{0}]\left(\partial+2\gamma d+((q-1)\gamma-\frac{1}{2})d^{{}^{\prime} }+\ast_{g}\right)^{2k-1}g(z), \tag{4.2}\] _where \([z^{0}]\) takes the constant term of the formal power series in \(\mathbb{R}[[z]]\), and_ \[g(z)=\sum_{l=1}^{\infty}k_{l}z^{l-1}.\] _Remark 4.7_.: Note by a simple induction on \(k=1,2,...\) that (4.2) implies each \(m_{2k}\) is given by a positive constant time \[k_{2k}+\text{a polynomial of }k_{2},k_{4},...,k_{2k-2}.\] Hence, \(\mathrm{T}_{\mathrm{k}\to m}^{\mathrm{q},\gamma}\) is an invertible map, such that given a sequence of real numbers \(\{m_{2k}\}_{k=1}^{\infty}\), there exists a unique real sequence \(\{k_{l}\}_{l=1}^{\infty}\) with \(k_{l}=0\) for all odd l's, and \(\mathrm{T}_{k\to m}^{q,\gamma}\left(\{k_{l}\}_{l=1}^{\infty}\right)=\{m_{2k} \}_{k=1}^{\infty}\). More precisely, \(\{m_{2j}\}_{j=1}^{k}\) are corresponding to \(\{k_{l}\}_{l=1}^{2k}\). We denote the inverse map by \[\{k_{l}\}=\mathrm{T}_{\mathrm{m}\to\mathrm{k}}^{\mathrm{q},\gamma}(\{m_{2k} \}).\] In Section 5 we provide various points of views on the maps \(\mathrm{T}_{k\to m}^{q,\gamma}\) and \(\mathrm{T}_{m\to k}^{q,\gamma}\). We are ready to present the main result now. **Theorem 4.8**.: _(Convergence of empirical measure in high temperature) The sequence of random M-tuples \(\{\vec{c}_{M}\}_{M=1}^{\infty}\) satisfies LLN, if and only if it is q-\(\gamma\)-LLN-appropriate._ _If this occurs, we have_ \[\{m_{2k}\}_{k=1}^{\infty}=\mathrm{T}_{\mathrm{k}\to m}^{q,\gamma}(\{k_{l}\}_{ l=1}^{\infty}), \tag{4.3}\] _where \(\{k_{l}\}_{l=1}^{\infty}\) are the q-\(\gamma\) cumulants corresponding to \(\{m_{2k}\}_{k=1}^{\infty}\)._ ### Asymptotic expression under Dunkl actions The proof of Theorem 4.8 is relying on the actions of Dunkl operator introduced in Section 2.3 on Bessel generating functions. Before proceeding to the proof, we first study the explicit expression of this action in detail. Consider a symmetric function \(F(z_{1},..,z_{M})\) which is analytic on a complex domain near \(0\). Then the Talor expansion of \(F\) of \(k^{th}\) order is \[F(z_{1},...,z_{M})=\sum_{\lambda:|\lambda|\leq k,\ l(\lambda)\leq M}c_{F}^{ \lambda}\cdot m_{\lambda}(\vec{z})+O(||z||^{k+1}), \tag{4.4}\] where \(m_{\lambda}(\vec{z})\) is the monomial symmetric polynomial indexed by \(\lambda\). If we further assume \(F\) to be a symmetric function in \(z_{1}^{2},...,z_{M}^{2}\), then \[c_{F}^{\lambda}\ \text{ is nonzero only if }\lambda\text{ is even.} \tag{4.5}\] Fix \(M\geq 1\). Recall we denote \[P_{k}=D_{1}^{k}+...+D_{M}^{k},\] where \(D_{i}\) is defined in Section 2.3. The following theorem is a technical result on the explicit expansion of \(\exp(F(z_{1},...,z_{M}))\) under the action of \(P_{k}\)'s, and it serves as a stepping stone to the proof of Theorem 4.8. **Theorem 4.9**.: _Fix \(k=2,4,...\) and a even partition \(\lambda\) and \(|\lambda|=2k\). 
Let \(F(z_{1},...,z_{M})\) be a symmetric function on \(\mathbb{R}^{M}\) satisfying (4.5), analytic on a domain near \((0,...,0)\) and \(F(0,...,0)=0\). Then_ \[M^{-l(\lambda)}\left[\prod_{i=1}^{l(\lambda)}P_{\lambda_{i}} \right]\exp(F(z_{1},...,z_{M})\Big{|}_{z_{1}=...z_{M}=0}=b_{\lambda}^{ \lambda}\cdot c_{F}^{\lambda}+\sum_{\mu:|\mu|=k,l(\mu)>l(\lambda)}b_{\mu}^{ \lambda}\cdot c_{F}^{\mu}\] \[\qquad\qquad\qquad+L(c_{F}^{(i)},1\leq i\leq 2k-1)+R_{1}(c_{F}^{v}, |v|<2k)+M^{-1}R_{2}(c_{F}^{v},|v|\leq 2k), \tag{4.6}\] _where \(b_{\mu}^{\lambda}\) are coefficients that are uniformly bounded in the limit regime \(M\rightarrow\infty,N\rightarrow\infty,\theta\to 0,M\theta\rightarrow\gamma,N \theta\to q\gamma\), and the notation \((i)\) denotes the partition \((i,0,...,0)\). In particular,_ \[\lim_{M\theta\rightarrow,N\theta\to q\gamma}b_{\lambda}^{ \lambda}=\prod_{i=1}^{l(\lambda)}\Bigl{[} \lambda_{i}(\lambda_{i}-2+2q\gamma)(\lambda_{i}-2+2q\gamma)(\lambda_{i}-4-2q \gamma)(\lambda_{i}-4+2\gamma)\] \[...(2+2q\gamma)(2+2\gamma)2q\gamma\Bigr{]}, \tag{4.7}\] _and_ \[L(c_{F}^{(i)},1\leq i\leq 2k-1)= \prod_{i=1}^{l(\lambda)}\left([z^{0}](\partial+2\gamma d+((q-1) \gamma-\frac{1}{2})d^{{}^{\prime}}+*_{g})^{\lambda_{i}-1}g(z)\right)\] \[- 2k(2k-2+2q\gamma)(2k-2+2\gamma)(2k-4-2q\gamma)(2k-4+2\gamma)\] \[...(2+2q\gamma)(2+2\gamma)2q\gamma\cdot c_{F}^{(k)}\cdot\text{I} _{l(\lambda)=1}. \tag{4.8}\] _The operator \(\partial,d,d^{{}^{\prime}}\) and \(*_{g}\) are defined in the same way as in Definition 4.6, and_ \[g(z):=\sum_{i=1}^{\infty}nc_{F}^{(n)}z^{n-1}.\] _Here, \(L\), \(R_{1}\) and \(R_{2}\) are all polynomials of \(c_{F}^{v}\)'s, whose corresponding variables are given in the parenthesis, and the coefficient of each monomial is uniformly bounded in the limit regime. Moreover, each monomial in \(R_{1}\) contains at least one \(C_{F}^{v}\) where \(l(v)\geq 2\). If we assign \(c_{F}^{v}\) with degree \(|v|\), each summand on the right of (4.6) is homogeneous of degree \(k\)._ We postpone the proof of Theorem 4.9 to next section, and using its result, we are able to prove Theorem 4.8. **Proof of Theorem 4.8**. We first assume the sequence \(\{\vec{c}_{M}\}_{M=1}^{\infty}\) is q-\(\gamma\)-appropriate, with limiting q-\(\gamma\)-cumulants \(\{k_{l}\}_{l=1}^{\infty}\). We need to show \(\{\vec{c}_{M}\}_{M=1}^{\infty}\) is satisfying LLN with moments \(\{m_{2k}\}_{k=1}^{\infty}=\mathrm{T}_{k\to m}^{q,\gamma}(\{k_{l}\}_{l=1}^{ \infty})\). Denote the type-BC Bessel generating function of \(\vec{c}_{M}\) by \(G_{M,N;\theta}(z_{1},...,z_{M})\). By Theorem 2.31, the left side of (4.1) before taking the limit is given by \[M^{-s}\Bigg{(}\prod_{i=1}^{s}P_{2k_{i}}\Bigg{)}G_{M,N;\theta}(z_{1},...,z_{M}) \Big{|}_{z_{1}=...=z_{M}=0}. \tag{4.9}\] For each \(M=1,2,...\), without loss of generality assume \(k_{1}\geq k_{2}\geq...\geq k_{s}\) and identify \((2k_{1},...,2k_{s})\) with a partition \(\lambda\). Also since \(G_{M,N;\theta}\) is analytic on a domain near \(0\) and \(G_{M,N;\theta}(0,...,0)=1\), there is a function \(F_{M,N;\theta}(z_{1},...,z_{M})\) analytic near \(0\) and \(\exp(F_{M,N;\theta}(z_{1},...,z_{M}))=G_{M,N;\theta}(z_{1},...,z_{M})\), \(F_{M,N;\theta}(0,...,0)=0\). We write \(F_{M,N;\theta}\) in terms of its \(k^{th}\) order Talor polynomial \[F_{M,N;\theta}(z_{1},...,z_{M})=\sum_{\mu:|\mu|\leq k,\ l(\mu)\leq M}c_{F_{M,N ;\theta}}^{\lambda}\cdot m_{\mu}(\vec{z})+O(||z^{k+1}||).\] After the above identifications (4.9) satisfies the condition of Theorem 4.9. 
Then we turn it into the expression on the right of (4.6), and take the limit \(M\theta\to\infty\),\(N\theta\to q\gamma\). By q-\(\gamma\)-appropriateness, \[\lim_{M\theta\to\infty,\ N\theta\to q\gamma}c_{F_{M,N;\theta}}^{(n)}=\frac{k_ {n}}{n},\quad\lim_{M\theta\to\infty,\ N\theta\to q\gamma}c_{F_{M,N;\theta}}^{(n )}=0,\ \text{if}\ l(\mu)>1.\] Hence \(\sum_{\mu:|\mu|=k,l(\mu)>l(\lambda)}b_{\mu}^{\lambda}\cdot c_{F_{M,N;\theta}} ^{v}\) turns to \(0\), since each summand contains some term converging to \(0\), and \[b_{\lambda}^{\lambda}\cdot c_{F_{M,N;\theta}}^{\lambda}\longrightarrow\begin{cases} 0&\text{if}\ s>1\\ (2k_{1}-2+2q\gamma)(2k_{1}-2+2\gamma)(2k_{1}-4-2q\gamma)(2k_{1}-4+2\gamma)&\\...(2+2q\gamma)(2+2\gamma)2q\gamma\cdot c_{F}^{(2k_{1})}k_{2k_{1}}&\text{if} \ s=1.\end{cases}\] The polynomial L converges to \[\prod_{i=1}^{s}\left([z^{0}](\partial+2\gamma d+((q-1)\gamma- \frac{1}{2})d^{{}^{\prime}}+*_{g})^{2k_{i}-1}g(z)\right)\] \[-(2k_{1}-2+2q\gamma)(2k_{1}-2+2\gamma)(2k_{1}-4-2q\gamma)(2k_{1}- 4+2\gamma)...(2+2q\gamma)(2+2\gamma)2q\gamma\cdot k_{2k_{1}}\cdot\mathrm{I}_{s=1},\] where \(g(z)=\sum_{n=1}k_{n}z^{n-1}\), since \(\sum_{n=1}^{\infty}nc_{F_{M,N;\theta}}^{(n)}z^{n-1}\) converges coefficient-wise to \(g(z)\). The polynomial \(R_{1}\) converges to \(0\), also because each summand contains some factor \(c_{F_{M,N;\theta}}^{v}\) with \(l(v)>1\), that vanishes in the limit regime. The polynomial \(M^{-1}R_{2}\) vanishes as well in the limit since all its coefficients converge to \(0\). Combining all the results above gives \[\lim_{M\to\infty}\mathbb{E}\left[\prod_{i=1}^{s}p_{2k_{i}}^{M}\right]=\prod_{i =1}^{s}\left([z^{0}](\partial+2\gamma d+((q-1)\gamma-\frac{1}{2})d^{{}^{ \prime}}+*_{g})^{2k_{i}-1}g(z)\right),\] which is equal to \(\prod_{i=1}^{s}m_{2k_{i}}\) that \(\{m_{2k}\}_{k=1}^{\infty}=\mathrm{T}_{k\to m}^{q,\gamma}(\{k_{l}\}_{l=1}^{ \infty})\). Hence the LLN condition of \(\{\vec{c}_{M}\}_{M=1}^{\infty}\) is proved. Now we go in the opposite direction, that assuming \(\{\vec{c}_{M}\}_{M=1}^{\infty}\) satisfies LLN for some \(\{m_{k}\}_{k=1}^{\infty}\), i.e, for all even partition \(\lambda\), \[M^{-l(\lambda)}\left.\left[\prod_{i=1}^{l(\lambda)}P_{\lambda_{i}}\right]\exp \Big{(}F_{M,N;\theta}(z_{1},...,z_{M})\Big{)}\right|_{z_{1}=...=z_{M}=0}\frac{N \theta\to q\gamma}{M\theta\to\infty}\prod_{i=1}^{l(\lambda)}m_{\lambda_{i}},\] where \(F_{M,N;\theta}(z_{1},...,z_{M})=\sum_{\mu:l(\mu)\leq M}c_{F_{M,N;\theta}}^{v} \cdot m_{\mu}(\vec{z})\) is an analytic function near \(0\) satisfying \(\exp(F_{M,N;\theta})=G_{M,N;\theta}\). We need to show: \[c_{F_{M,N;\theta}}^{\lambda}\longrightarrow\begin{cases}0&\text{$l(\lambda)>1 $ or $\lambda$ is not even}\\ \frac{k_{2k}}{2k}&\text{$\lambda=(2k)$}\end{cases} \tag{4.10}\] in the limit regime \(M\theta\to\gamma,N\theta\to q\gamma\), where \(\{k_{l}\}_{l=1}^{\infty}=\text{T}_{\text{m}\to\text{k}}^{\text{q},\gamma}(\{m _{2k}\}_{k=1}^{\infty})\). Note that we only need to consider the case \(|\lambda|\) is even, and \(c_{F_{M,N;\theta}}^{v}=0\) for all \(v\) not even, since the type BC Bessel function of each \(\vec{c}_{M}\) is a symmetric function in \(z_{1}^{2},...,z_{M}^{2}\), by Definition 2.27 and Proposition 2.26. We proceed by induction on \(|\lambda|\). For \(|\lambda|=0\) there's nothing to show. Suppose the result holds for all \(|\lambda|\leq 2k-2\), we now consider the partition that \(|\lambda|=2k\). 
By Theorem 4.9, for each \(M,N,\theta\), we have a (finite) system of linear equations of \[b_{\lambda}^{\lambda}\cdot c_{F_{M,N;\theta}}^{\lambda}+\sum_{v: \ |v|=2k,\ l(v)>l(\lambda),\ \mu\text{ is even}}b_{v}^{\lambda}\cdot c_{F_{M,N;\theta}}^{v}\] \[= M^{-l(\lambda)}\left[\prod_{i=1}^{l(\lambda)}P_{\lambda_{i}} \right]\exp\Big{(}F_{M,N;\theta}(z_{1},...,z_{M})\Big{)}\Big{|}_{z_{1}=...z_{M }=0}-L(c_{F_{M,N;\theta}}^{(i)},1\leq i\leq 2k-1)\] \[\qquad-R_{1}(c_{F_{M,N;\theta}}^{v},|v|<2k)-M^{-1}R_{2}(c_{F_{M,N ;\theta}}^{v},|v|\leq 2k). \tag{4.11}\] We observe that if we write it in the matrix form in the lexicographical order of \(v\)'s introduced in Section 2.1, the above system is upper triangular, and again by Theorem 4.9, its diagonal entries \(b_{\mu}^{\mu}\)'s all converge to some nonzero constant in the limit regime, and the off-diagonal entries are uniformly bounded. Hence the matrix is invertible asymptotically, and its inverse has uniformly bounded entries. **Claim:** If \(\lambda\neq(2k)\), the right side of (4.11) converges to \(0\) in the limit regime. _Proof of the claim:_\(R_{1}\to 0\) by induction hypothesis (recall that each of its term involves some partition \(v\) with \(l(v)\geq 2\)), and \(R_{2}\to 0\) since the coefficients all vanish in the limit. By the LLN condition, \[M^{-l(\lambda)}\left[\prod_{i=1}^{l(\lambda)}P_{\lambda_{i}}\right]\exp(F(z_{1},...,z_{M})\Big{|}_{z_{1}=...z_{M}=0}\longrightarrow\prod_{i=1}^{l(\lambda)}m _{\lambda_{i}},\] and by Theorem 4.9 and Definition 4.6, when \(\lambda\neq(2k)\), each \(\lambda_{i}<2k\) and \[L(c^{(i)}_{F_{M,N;\theta}},1\leq i\leq 2k-1)=\prod_{i=1}^{l(\lambda)}m_{\lambda_{i} }^{M,N;\theta},\] where \(\{m_{k}^{M,N;\theta}\}_{k=1}^{\infty}=\mathrm{T}_{k\to m}^{q,\gamma}(\{l\cdot c ^{(l)}_{F_{M,N;\theta}}\}_{l=1}^{\infty}).\) By induction hypothesis \(\{l\cdot c^{(l)}_{F_{M,N;\theta}}\}_{l=1}^{\infty}\longrightarrow\{k_{l}\}_ {l=1}^{\infty}\) pointwisely for \(l<2k\), and hence \(m_{j}^{M,N;\theta}\to m_{j}\) pointwisely for \(j<k\), and \[L(c^{(i)}_{F_{M,N;\theta}},1\leq i\leq 2k-1)\longrightarrow\prod_{i=1}^{l( \lambda)}m_{\lambda_{i}}\] as well. Because of this claim, we conclude that when \(M\theta\to\gamma,N\theta\to q\gamma\), the solutions of the linear system converge to the zero vector, in particular, \[c^{\lambda}_{F_{M,N;\theta}}\longrightarrow 0\text{ for all }|\lambda|=2k,\lambda\neq(2k). \tag{4.12}\] It remains to consider \(\lambda=(2k)\). This time we write down a single identity \[\begin{split} b^{(2k)}_{(2k)}\cdot c^{(2k)}_{F_{M,N;\theta}}+& \sum_{v:\ |v|=2k,\ l(v)>1,\ \mu\text{ is even}}b^{(2k)}_{v}\cdot c^{v}_{F_{M,N;\theta}}\\ =& M^{-1}P_{2k}\left[\exp(F_{M,N;\theta}(z_{1},...,z_ {M})]\right|_{z_{1}=...z_{M}=0}-L(c^{(i)}_{F_{M,N;\theta}},1\leq i\leq 2k-1) \right.\\ &\left.-R_{1}(c^{v}_{F_{M,N;\theta}},|v|<2k)-M^{-1}R_{2}(c^{v}_{F _{M,N;\theta}},|v|\leq 2k).\right.\end{split} \tag{4.13}\] We have that \[\text{LHS}=b^{(2k)}_{(2k)}\cdot c^{(2k)}_{F_{M,N;\theta}}+o(1),\] where \(g_{M,N;\theta}(z)=\sum_{n=1}^{\infty}nc^{(n)}_{F_{M,N;\theta}}z^{n-1}\), because of Theorem 4.9 and (4.12). And \[\text{RHS}=m_{2k}-[z^{0}]\Big{(}\partial+2\gamma d+((q-1)\gamma-\frac{1}{2}) d^{{}^{\prime}}+*_{g_{M,N;\theta}}\Big{)}^{2k-1}g_{M,N;\theta}(z)+b^{(2k)}_{(2k )}\cdot c^{(2k)}_{F_{M,N;\theta}}+o(1)\] because of Theorem 4.9 and the LLN assumption. 
Hence, when \(M\theta\to\gamma,N\theta\to q\gamma\), \[[z^{0}]\Big{(}\partial+2\gamma d+((q-1)\gamma-\frac{1}{2})d^{{}^{\prime}}+*_{g _{M,N;\theta}}\Big{)}^{2k-1}g_{M,N;\theta}(z)\longrightarrow m_{2k}.\] By Definition 4.6, the invertibility of \(\mathrm{T}_{k\to m}^{q,\gamma}\) and the induction hypothesis, this is equivalent to \[(2k)\cdot c^{(2k)}_{F_{M,N;\theta}}\longrightarrow k_{2k}\] in the limit regime, that \(k_{2k}\) is in the image of \(\mathrm{T}_{m\to k}^{q,\gamma}(\{m_{2j}\}_{j=1}^{\infty})\). This finishes the induction step and therefore the proof. ### Proof of Theorem 4.9 We start by reducing \(F(z_{1},...,z_{M})\) from a (locally) analytic function to its \(2k^{th}\) Talor polynomial. **Lemma 4.10**.: _For \(F(z_{1},...,z_{M})\) of the form (4.4), denote \(F^{\prime}(z_{1},...,z_{M})=\sum_{\lambda:|\lambda|\leq 2k,\ l(\lambda)\leq M }c^{\lambda}_{F}\cdot m_{\lambda}(\vec{z})\). Then for a partition \(\lambda\) with \(|\lambda|=2k\), we have_ \[\left[\prod_{i=1}^{l(\lambda)}P_{\lambda_{i}}\right]\exp(F^{{}^{\prime}}(z_{1 },...,z_{M}))\Big{|}_{z_{1}=...=z_{M}=0}=\left[\prod_{i=1}^{l(\lambda)}P_{ \lambda_{i}}\right]\exp(F(z_{1},...,z_{M}))\Big{|}_{z_{1},...,z_{M}=0}.\] Proof.: Since \(F\) is analytic near \(0\), write \(\exp(F(z_{1},...,z_{M}))\) and \(\exp(F^{{}^{\prime}}(z_{1},...,z_{M}))\) as symmetric power series. Their difference \(R(\vec{z})\) is a power series of order \(O(||z||^{k+1})\). Since \(\prod_{i=1}^{l(\lambda)}P_{\lambda_{i}}\) is a homogeneous polynomial of \(D_{i}\)'s and each \(D_{i}\) reduces the total power of a monomial by \(1\), \[\Bigg{[}\prod_{i=1}^{l(\lambda)}P_{\lambda_{i}}\Bigg{]}R(\vec{z})=0.\qed\] By Lemma 4.10, in the remaining of this section we take \[F(z_{1},...,z_{M})=\sum_{\lambda:|\lambda|\leq 2k,\ l(\lambda)\leq M,\lambda\ \text{even}}c_{F}^{\lambda}\cdot m_{\lambda}(\vec{z}).\] \(\prod_{i=1}^{l(\lambda)}P_{\lambda_{i}}\) is a sum of products of \(D_{i}\)'s (\(i=1,2,...,M\)). For each product of the form \(D_{1}^{n_{1}}...D_{M}^{n_{M}}\) acting on \(\exp(F(z_{1},...,z_{M}))\), by (2.22) the order does not matter. Recall \[D_{i}=\partial_{i}+\left[\theta(N-M+1)-\frac{1}{2}\right]\frac{1-\sigma_{i}}{ z_{i}}+\theta\sum_{j\neq i}\left[\frac{1-\sigma_{ij}}{z_{i}-z_{j}}+\frac{1-\tau_{ ij}}{z_{i}+z_{j}}\right].\] Observe that \(D_{i}[\exp(F(z_{1},...,z_{M}))]\) is of the form \(H(z_{1},...,z_{M})\exp(F(z_{1},...,z_{M}))\), where \(H(z_{1},...,z_{M})\) is a polynomial of \(z_{1},...,z_{M}\), and for any i, \(D_{i}[H(z_{1},...,z_{M})\exp(F(z_{1},...,z_{M}))]\) is still \(\exp(F(z_{1},...,z_{M}))\) multiplied by a polynomial. More precisely, \[\partial_{i}[H(z_{1},...,z_{M})\exp(F(z_{1},...,z_{M}))]\] \[\quad=\Big{(}\partial_{i}H(z_{1},...,z_{M})+H(z_{1},...,z_{M}) \partial_{i}F(z_{1},...,z_{M})\Big{)}\cdot\exp(F(z_{1},...,z_{M})), \tag{4.15}\] \[\frac{1-\sigma_{i}}{z_{i}}[H(z_{1},...,z_{M})\exp(F(z_{1},...,z_{M}))]\] \[\quad\quad\quad=\Big{(}\frac{1-\sigma_{i}}{z_{i}}H(z_{1},...,z_{M })\Big{)}\cdot\exp(F(z_{1},...,z_{M})),\] \[\frac{1-\sigma_{ij}}{z_{i}-z_{j}}[H(z_{1},...,z_{M})\exp(F(z_{1},...,z_{M}))]\] (4.16) \[\quad\quad\quad=\Big{(}\frac{1-\sigma_{ij}}{z_{i}-z_{j}}H(z_{1},...,z_{M})\Big{)}\cdot\exp(F(z_{1},...,z_{M})),\] \[\frac{1-\tau_{ij}}{z_{i}+z_{j}}[H(z_{1},...,z_{M})\exp(F(z_{1},..., z_{M}))]\] \[\quad\quad\quad=\Big{(}\frac{1-\tau_{ij}}{z_{i}+z_{j}}H(z_{1},...,z_{M})\Big{)}\cdot\exp(F(z_{1},...,z_{M})). 
\tag{4.14}\] We see that \(\prod_{i=1}^{l(\lambda)}[D_{1}^{n_{1}}...D_{M}^{n_{M}}]\exp(F(z_{1},...,z_{M})) \Big{|}_{z_{1},...,z_{M}=0}\) is obtained by acting a polynomial of \(\partial_{i},\frac{1-\sigma_{i}}{z_{i}},\frac{1-\sigma_{ij}}{z_{i}-z_{j}}, \frac{1-\tau_{ij}}{z_{i}+z_{j}}\) on \(F(z_{1},...,z_{M})\), then take the constant term. Then we have the following basic observation. **Proposition 4.11**.: _For any M-tuples of nonnegative integers \(n_{1},...,n_{M}\),_ \[\Big{[}D_{1}^{n_{1}}\cdots D_{M}^{n_{M}}\Big{]}\exp(F(z_{1},...,z_{M}))\Big{|}_ {z_{1},...,z_{M}=0} \tag{4.18}\] _is a homogeneous polynomial in \(c_{F}^{v}\)'s of degree \(\sum_{i=1}^{M}n_{i}\), if taking \(c_{F}^{v}\) to be of degree \(|v|\). Moreover, the coefficients of this polynomial are all uniformly bounded in the limit regime \(M\theta\to\gamma\), \(N\theta\to q\gamma\)._ Proof.: Each of \(\partial_{i}\), \(\frac{1-\sigma_{i}}{z_{i}},\frac{1-\sigma_{ij}}{z_{i}-z_{j}},\frac{1-\tau_{ij}}{z_ {i}+z_{j}}\) reduces the degree of a monomial by \(1\), the constant term of \(\left[D_{1}^{n_{1}}...D_{M}^{n_{M}}\right]\exp(F(z_{1},...,z_{M}))\) is then obtained from some monomials of \(z_{1},...,z_{M}\) of degree \(\sum_{i=1}^{M}n_{i}\). Since \(c_{F}^{v}\) is the coefficient of \(m_{v}(\vec{z})\) which is of degree \(|v|\), by assigning \(c_{F}^{v}\) with degree \(|v|\) one can pass the degree of the original monomials to their resulting constant terms. Each \(D_{i}\) is a sum of 2M single operators \(\partial_{i},\frac{1-\sigma_{i}}{z_{i}},\frac{1-\sigma_{ij}}{z_{i}-z_{j}}\) and \(\frac{1-\tau_{ij}}{z_{i}+z_{j}}\), in which \(2M-2\) terms involves a factor \(\theta\), and hence \(D_{1}^{n_{1}}...D_{M}^{n_{M}}\) is a sum of \(2M^{\sum_{i=1}^{M}n_{i}}\) products of single operators. The constant term of each of these products acting on \(\exp(F(z_{1},...,z_{M}))\) is changing with \(M,N,\theta\) as a muliple of \(\theta^{\sum_{i=1}^{M}n_{i}-\#\partial_{i}\text{'s in the product}}\), and the number of such products is of order \(O(M^{\sum_{i=1}^{M}n_{i}-\#\partial_{i}\text{'s in the product}})\). Hence as \(M\theta\to\gamma\) the coefficient is uniformly bounded. **Proposition 4.12**.: _For a partition \(\lambda\), we have_ \[\begin{split} M^{-l(\lambda)}\left[\prod_{i=1}^{l(\lambda)}P_{ \lambda_{i}}\right]\exp(F(z_{1},...,z_{M}))\Big{|}_{z_{1},...,z_{M}=0}\\ =\left[\prod_{i=1}^{l(\lambda)}(D_{i})^{\lambda_{i}}\right]\exp (F(z_{1},...,z_{M}))\Big{|}_{z_{1},...,z_{M}=0}+O(\frac{1}{M}),\end{split} \tag{4.19}\] _in the limit regime \(M\theta\to\gamma,N\theta\to q\gamma\), where \(O(\frac{1}{M})\) is a homogeneous polynomial of \(c_{F}^{v}\)'s (taking \(c_{F}^{v}\) to be of degree \(|v|\)) whose coefficients are of order \(O(\frac{1}{M})\)._ Proof.: Each \(P_{\lambda_{i}}\) is a sum of M terms \(D_{j}^{\lambda_{i}}(j=1,2,...,M)\), hence \(\prod_{i=1}^{l(\lambda)}P_{\lambda_{i}}\) is a sum of \(M^{l(\lambda)}\) such terms, in which \(O(M^{l(\lambda)-1})\) terms have not all distinct indices \(j\)'s. By Proposition 4.11 each of these terms has uniformly bounded coefficient, hence they together contribute \(O(\frac{1}{M})\). As for the remaining terms with all distinct indices, by symmetry of \(F(z_{1},...,z_{M})\), their action are all the same as \(\prod_{i=1}^{l(\lambda)}(D_{i})^{\lambda_{i}}\). 
After all the reductions above, it remains to study \[\left[\prod_{i=1}^{l(\lambda)}(D_{i})^{\lambda_{i}}\right]\exp(F(z_{1},...,z_{M}))\Big{|}_{z_{1},...,z_{M}=0}\] for an arbitrary partition \(\lambda\), whose expression should match the right side of (4.6).

The expression on the right side of (4.6) can be split into three parts: the linear polynomials of \(c_{F}^{v}\)'s, the terms involving only \(c_{F}^{v}\)'s where the \(v\)'s are partitions of length \(1\), and all the other remaining terms. In the next two propositions, we deal with the first two cases separately. Before that we present several lemmas that will be used in the proof.

Consider the action of \(\prod_{i=1}^{l(\lambda)}D_{i}^{\lambda_{i}}\) on \(m_{\mu}(\vec{z})\). Each \(D_{i}\) is a combination of \(\partial_{i}+[\theta(N-M+1)-\frac{1}{2}]\frac{1-\sigma_{i}}{z_{i}}\) and \(\theta\left[\frac{1-\sigma_{ij}}{z_{i}-z_{j}}+\frac{1-\tau_{ij}}{z_{i}+z_{j}}\right]\) with \(M-1\) choices of \(j\neq i\), hence \(\prod_{i=1}^{l(\lambda)}D_{i}^{\lambda_{i}}\) will lead to a large sum, whose summands are products of these two kinds of terms.

**Lemma 4.13**.: _For arbitrary partitions \(\lambda\) and \(\mu\), the constant term of \(\prod_{i=1}^{l(\lambda)}D_{i}^{\lambda_{i}}m_{\mu}(\vec{z})\) has a generic part, which is contributed by the summands of \(\prod_{i=1}^{l(\lambda)}D_{i}^{\lambda_{i}}\) in which all the indices \(j\) are distinct, and all bigger than \(l(\lambda)\). The remaining part is of order \(O(\frac{1}{M})\) in the limit regime \(M\theta\rightarrow\gamma,N\theta\to q\gamma\)._

Proof.: For \(k=0,1,...,|\lambda|\), the number of summands in the remaining part (meaning that there exists a pair of coinciding indices) with \(k\) components of \(\theta\left[\frac{1-\sigma_{ij}}{z_{i}-z_{j}}+\frac{1-\tau_{ij}}{z_{i}+z_{j}}\right]\) is of order \(O(M^{k-1})\), and the power of \(\theta\) in these summands is \(k\). Since \(M\theta\rightarrow\gamma>0\), the remaining part is a finite sum of order \(O(M^{k-1}\theta^{k})\), which is \(O(\frac{1}{M})\).

Because of this, we only consider the limit of the generic part of the expression. For simplicity we write \(l=l(\lambda)\).

**Lemma 4.14**.: _The generic part of the constant term of \(\prod_{i=1}^{l}D_{i}^{\lambda_{i}}m_{\mu}(\vec{z})\) is given by_ \[\Big{[}D_{l}^{\lambda_{l}-1}\partial_{l}\Big{]}\cdots\Big{[}D_{2}^{\lambda_{2}-1}\partial_{2}\Big{]}\cdot\Big{[}D_{1}^{\lambda_{1}-1}\partial_{1}\Big{]}m_{\mu}(\vec{z}). \tag{4.20}\]

Proof.: For \(m=1\), because of the symmetry, \(m_{\mu}(\vec{z})\) is invariant under the action of \(\sigma_{i}\), \(\sigma_{ij}\) and \(\tau_{ij}\), and hence \(D_{1}m_{\mu}(\vec{z})\) is equal to \(\partial_{1}m_{\mu}(\vec{z})\). For \(m=2,3,...,l\), after acting \(\Big{[}D_{m-1}^{\lambda_{m-1}-1}\partial_{m-1}\Big{]}\cdots\Big{[}D_{1}^{\lambda_{1}-1}\partial_{1}\Big{]}\) on \(m_{\mu}(\vec{z})\), since the generic part has distinct indices \(j\) bigger than \(l(\lambda)\), we get some polynomial \(H(z_{1},...,z_{M})\), where the operators act on the variables \(z_{1},...,z_{m-1}\) and the \(z_{j}\)'s \((j>l)\). Hence \(H(z_{1},...,z_{M})\) is still symmetric as a function of \(z_{m}^{2}\) and \(z_{j^{\prime}}^{2}\) for any other index \(j^{{}^{\prime}}\) appearing in the first \(D_{m}\), and is again invariant under the action of \(\sigma_{i}\), \(\sigma_{ij}\) and \(\tau_{ij}\).
We conclude that \[D_{m}\Big{[}D_{m-1}^{\lambda_{m-1}-1}\partial_{m-1}\Big{]}\cdots\Big{[}D_{1}^{\lambda_{1}-1}\partial_{1}\Big{]}m_{\mu}(\vec{z})=\partial_{m}\Big{[}D_{m-1}^{\lambda_{m-1}-1}\partial_{m-1}\Big{]}\cdots\Big{[}D_{1}^{\lambda_{1}-1}\partial_{1}\Big{]}m_{\mu}(\vec{z}).\]

_Remark 4.15_.: One can replace \(m_{\mu}(\vec{z})\) by \(F(z_{1},...,z_{M})\) or \(\exp(F(z_{1},...,z_{M}))\) in the last lemma, since these functions satisfy the same symmetry.

The next lemma considers the concrete action of \(D_{i}\) on a polynomial of \(z_{1},...,z_{l}\).

**Lemma 4.16**.: _For an arbitrary \(l\)-tuple \((n_{1},...,n_{l})\in\mathbb{Z}_{\geq 0}^{l}\) and arbitrary \(i=1,2,...,l\), we have that for the generic part of \(D_{i}\),_ \[D_{i}[z_{1}^{n_{1}}...z_{l}^{n_{l}}]=\Big{(}\partial_{i}+\left[\theta(N-M+1)-\frac{1}{2}\right]d_{i}^{{}^{\prime}}+2\theta(M-1)d_{i}\Big{)}[z_{1}^{n_{1}}\cdots z_{l}^{n_{l}}]+\sum_{j\neq i}(z_{j}p_{1}^{j}+z_{j}p_{2}^{j}), \tag{4.21}\] _where \(p_{1}^{j},p_{2}^{j}\) are some polynomials of \(z_{1},...,z_{l}\) depending on \((n_{1},...,n_{l})\), and \(d_{i}\), \(d_{i}^{{}^{\prime}}\) are linear operators on polynomials of \(z_{1},...,z_{M}\) such that_ \[d_{i}(z_{i}^{n})=\begin{cases}0&\text{$n=0$};\\ 2z_{i}^{n-1}&\text{$n>0$},\end{cases}\qquad d_{i}^{{}^{\prime}}(z_{i}^{n})=\begin{cases}0&\text{$n$ is even;}\\ 2z_{i}^{n-1}&\text{$n$ is odd.}\end{cases} \tag{4.22}\] _Note that the action depends on whether the power of \(z_{i}\) is odd or even._

Proof.: This follows directly from the definition. More precisely, for \(j>l\), \[\theta\frac{1-\sigma_{ij}}{z_{i}-z_{j}}[z_{1}^{n_{1}}\cdots z_{l}^{n_{l}}]=d_{i}[z_{1}^{n_{1}}\cdots z_{l}^{n_{l}}]+z_{j}p_{1}^{j},\] \[\theta\frac{1-\tau_{ij}}{z_{i}+z_{j}}[z_{1}^{n_{1}}\cdots z_{l}^{n_{l}}]=d_{i}[z_{1}^{n_{1}}\cdots z_{l}^{n_{l}}]+z_{j}p_{2}^{j},\] and \[\frac{1-\sigma_{i}}{z_{i}}[z_{1}^{n_{1}}\cdots z_{l}^{n_{l}}]=d_{i}^{\prime}[z_{1}^{n_{1}}\cdots z_{l}^{n_{l}}].\]

**Proposition 4.17**.: _For even partition \(\lambda\) with \(|\lambda|=2k\), we have_ \[\left[\prod_{i=1}^{l(\lambda)}(D_{i})^{\lambda_{i}}\right]\exp(F(z_{1},...,z_{M}))\Big{|}_{z_{1},...,z_{M}=0}=b_{\lambda}^{\lambda}\cdot c_{F}^{\lambda}+\sum_{\mu:|\mu|=2k,\ l(\mu)>l(\lambda)}b_{\mu}^{\lambda}\cdot c_{F}^{\mu}+R+O(\frac{1}{M}).\] _In particular,_ \[\lim_{M\theta\rightarrow\gamma,N\theta\to q\gamma}b_{\lambda}^{\lambda}=\prod_{i=1}^{l(\lambda)}\Big{[}\lambda_{i}(\lambda_{i}-2+2q\gamma)(\lambda_{i}-2+2\gamma)(\lambda_{i}-4+2q\gamma)(\lambda_{i}-4+2\gamma)\cdots(2+2q\gamma)(2+2\gamma)2q\gamma\Big{]}.\] _The summand \(R\) is a polynomial of \(c_{F}^{v}\)'s with \(|v|<2k\). And \(O(\frac{1}{M})\) denotes a linear polynomial of \(c_{F}^{v}\)'s with \(|v|=2k\), whose coefficients are of order \(O(\frac{1}{M})\) in the limit regime of this section._

Proof.: Since the expression on the right is homogeneous of degree \(2k\), all the nonlinear terms are collected as \(R\), and it suffices to consider the linear terms, which is \[\sum_{\mu:|\mu|=2k,\ \mu\ \text{is even}}b_{\mu}^{\lambda}\cdot c_{F}^{\mu}.\] We classify all the even partitions \(\mu\) with \(|\mu|=2k\) in terms of their length. When \(l(\mu)>l(\lambda)\), there is nothing to show. When \(l(\mu)\leq l(\lambda)\), we want to show that when \(\mu\neq\lambda\), \(b_{\mu}^{\lambda}\) is of order \(O(\frac{1}{M})\). Write \(\exp(F(z_{1},...,z_{M}))\) as a power series in \(F(z_{1},...,z_{M})\).
Since each term in \(D_{i}\) reduces the total power of a monomial by \(1\), and \(F(z_{1},...,z_{M})=\sum_{\mu:|\mu|\leq 2k}c_{F}^{\mu}\cdot m_{\mu}(\vec{z})\) where each \(m_{\mu}(\vec{z})\) is homogeneous of degree \(|\mu|\), we see that \(b_{\mu}^{\lambda}\) is obtained from the action of \(\prod_{i=1}^{l(\lambda)}D_{i}^{\lambda_{i}}\) on the single symmetric monomial \(m_{\mu}(\vec{z})\). Again let \(l=l(\lambda)\). By Lemmas 4.13 and 4.14, we first consider the generic part of \(\prod_{i=1}^{l(\lambda)}D_{i}^{\lambda_{i}}m_{\mu}(\vec{z})\). When \(l(\mu)<l\), each monomial of \(m_{\mu}(\vec{z})\) is missing some variable among \(z_{1},...,z_{M}\), say \(z_{m}\). Then when the \(\partial_{m}\) in (4.20) acts, we get \(0\), since \(\left[D_{m-1}^{\lambda_{m-1}-1}\partial_{m-1}\right]...\left[D_{1}^{\lambda_{1}-1}\partial_{1}\right]\) does not produce any power of \(z_{m}\) in \(m_{\mu}(\vec{z})\). And the remaining part of the action gives \(O(\frac{1}{M})\).

What remains is to consider the case \(l(\mu)=l(\lambda)\), and we calculate the limit of \(b_{\lambda}^{\lambda}\). Again \(b_{\lambda}^{\lambda}\) is obtained from the action of \(\prod_{i=1}^{l(\lambda)}D_{i}^{\lambda_{i}}\) on \(m_{\lambda}(\vec{z})\). By Lemmas 4.13 and 4.14, we consider only the generic part of the Dunkl product, and act it on the monomials of \(m_{\lambda}(\vec{z})\) separately. The monomials with variables other than \(z_{1},...,z_{l}\) are missing some variable, say \(z_{m}\) (\(1\leq m\leq l\)). Then (4.20) again tells us that these monomials only contribute \(O(\frac{1}{M})\). Now we consider the monomials formed by \(z_{1},...,z_{l}\). For an arbitrary \(l\)-tuple \((n_{1},...,n_{l})\) and arbitrary \(i=1,2,...,l\), by Lemma 4.16 we have \[D_{i}[z_{1}^{n_{1}}...z_{l}^{n_{l}}]=\Big{(}\partial_{i}+\Big{[}\theta(N-M+1)-\frac{1}{2}\Big{]}\big{[}1-(-1)^{n_{i}}\big{]}d_{i}+2\theta(M-1)d_{i}\Big{)}[z_{1}^{n_{1}}\cdots z_{l}^{n_{l}}]+\sum_{j\neq i}(z_{j}p_{1}^{j}+z_{j}p_{2}^{j}). \tag{4.23}\]

One can see from the above expression that, after a single action of \(D_{i}\), \(z_{1}^{n_{1}}\cdots z_{l}^{n_{l}}\) splits into two parts. The \(z_{i}\)-power of the first part decreases by \(1\). The second part has a common factor \(z_{j}\), and its \(z_{i}\)-power decreases as well while the powers of the other variables are unchanged. For the action of \(\prod_{i=1}^{l}D_{i}^{\lambda_{i}}\), we repeat the above action another \(|\lambda|-1\) times. Since all indices \(j\) are distinct, the second part has no chance to become a constant. Hence we only keep the first part each time we apply one more single \(D_{i}\), and \(\prod_{i=1}^{l}D_{i}^{\lambda_{i}}\) results in reducing the power of \(z_{i}\) by \(\lambda_{i}\). Among the monomials of the \(m_{\mu}(\vec{z})\)'s with \(|\mu|=|\lambda|\), only \(z_{1}^{\lambda_{1}}\cdots z_{l}^{\lambda_{l}}\) survives as a nonzero constant.
More precisely (we use \(\approx\) to omit the \(O(\frac{1}{M})\) part), \[b_{\lambda}^{\lambda}\approx [z^{0}]\prod_{i=1}^{l}D_{i}^{\lambda_{i}}m_{\lambda}(\vec{z})\approx[z^{0}]\prod_{i=1}^{l}D_{i}^{\lambda_{i}}z_{1}^{\lambda_{1}}\cdots z_{l}^{\lambda_{l}}\] \[\approx [z^{0}]\Big{[}\partial_{l}+\Big{(}2(M-1)\theta+2(N-M+1)\theta-1\Big{)}d_{l}\Big{]}^{\frac{\lambda_{l}}{2}}\Big{[}\partial_{l}+2(M-1)\theta d_{l}\Big{]}^{\frac{\lambda_{l}}{2}-1}\partial_{l}\] \[\cdots\Big{[}\partial_{1}+\Big{(}2(M-1)\theta+2(N-M+1)\theta-1\Big{)}d_{1}\Big{]}^{\frac{\lambda_{1}}{2}}\Big{[}\partial_{1}+2(M-1)\theta d_{1}\Big{]}^{\frac{\lambda_{1}}{2}-1}\partial_{1}\Big{[}z_{1}^{\lambda_{1}}\cdots z_{l}^{\lambda_{l}}\Big{]}\] \[= \Big{[}\partial_{l}+(2N\theta-1)d_{l}\Big{]}^{\frac{\lambda_{l}}{2}}\Big{[}\partial_{l}+2(M-1)\theta d_{l}\Big{]}^{\frac{\lambda_{l}}{2}-1}\partial_{l}\] \[\cdots\Big{[}\partial_{1}+(2N\theta-1)d_{1}\Big{]}^{\frac{\lambda_{1}}{2}}\Big{[}\partial_{1}+2(M-1)\theta d_{1}\Big{]}^{\frac{\lambda_{1}}{2}-1}\partial_{1}\Big{[}z_{1}^{\lambda_{1}}\cdots z_{l}^{\lambda_{l}}\Big{]}\] \[\longrightarrow \Big{[}\partial_{l}+(2q\gamma-1)d_{l}\Big{]}^{\frac{\lambda_{l}}{2}}\Big{[}\partial_{l}+2\gamma d_{l}\Big{]}^{\frac{\lambda_{l}}{2}-1}\partial_{l}\] \[\cdots\Big{[}\partial_{1}+(2q\gamma-1)d_{1}\Big{]}^{\frac{\lambda_{1}}{2}}\Big{[}\partial_{1}+2\gamma d_{1}\Big{]}^{\frac{\lambda_{1}}{2}-1}\partial_{1}\Big{[}z_{1}^{\lambda_{1}}\cdots z_{l}^{\lambda_{l}}\Big{]}\] \[= \prod_{i=1}^{l}\lambda_{i}(\lambda_{i}-1+2q\gamma-1)(\lambda_{i}-2+2\gamma)(\lambda_{i}-3+2q\gamma-1)\cdots(2+2\gamma)(1+2q\gamma-1)\] \[= \prod_{i=1}^{l}\lambda_{i}(\lambda_{i}-2+2q\gamma)(\lambda_{i}-2+2\gamma)(\lambda_{i}-4+2q\gamma)(\lambda_{i}-4+2\gamma)\cdots(2+2q\gamma)(2+2\gamma)2q\gamma,\] where the arrow denotes the limit in the regime \(M\theta\to\gamma,N\theta\to q\gamma\).

The above argument also implies that for \(\mu\neq\lambda\) with \(|\mu|=|\lambda|\), \(\prod_{i=1}^{l(\lambda)}D_{i}^{\lambda_{i}}m_{\mu}(\vec{z})\) is \(O(\frac{1}{M})\), and so is \(b_{\mu}^{\lambda}\).

The next proposition deals with the terms involving only length \(1\) partitions, and identifies them with \(L\) in (4.6).

**Proposition 4.18**.: _For even partition \(\lambda\) with \(|\lambda|=2k\), we have_ \[\begin{split}\left[\prod_{i=1}^{l(\lambda)}(D_{i})^{\lambda_{i}}\right]&\exp(F(z_{1},...,z_{M}))\Big{|}_{z_{1}=...=z_{M}=0}\\ &=\prod_{i=1}^{l(\lambda)}\Big{(}[z^{0}](\partial+2\gamma d+((q-1)\gamma-\frac{1}{2})d^{{}^{\prime}}+*_{g})^{\lambda_{i}-1}g(z)\Big{)}+R+O(\frac{1}{M}),\end{split} \tag{4.24}\] _where \(g(z)=\sum_{n=1}^{\infty}nc_{F}^{(n)}z^{n-1}\), and \(\partial\), \(d\), \(d^{{}^{\prime}}\) and \(*_{g}\) are defined in Definition 4.6. Moreover, \(R\) is a homogeneous polynomial of \(c_{F}^{v}\)'s with \(|v|\leq 2k\), each of whose monomials contains at least one \(c_{F}^{v}\) with \(l(v)>1\), and \(O(\frac{1}{M})\) is a homogeneous polynomial of \(c_{F}^{v}\)'s whose coefficients are of order \(O(\frac{1}{M})\) in the limit regime \(M\theta\rightarrow\gamma,N\theta\to q\gamma\)._

Proof.: Again by Lemmas 4.13 and 4.14, we only take the generic part of the action of the Dunkl operators, namely, all indices \(j\) involved are distinct and bigger than \(l(\lambda)\), and the remaining part becomes \(O(\frac{1}{M})\) in (4.24). Moreover, we only consider the polynomials involving only \(c_{F}^{(n)}\)'s (\(n=2,4,...,2k\)), which correspond to \(m_{(n)}(\vec{z})\), and all other terms are collected in \(R\) and \(O(\frac{1}{M})\).
Hence, we only look at the action \[\begin{split}\left[\prod_{i=1}^{l(\lambda)}(D_{i})^{\lambda_{i}}\right]&\exp(\sum_{n=2}^{2k}c_{F}^{(n)}m_{(n)}(\vec{z}))\Big{|}_{z_{1},...,z_{M}=0}\\ &=\left[\prod_{i=1}^{l(\lambda)}(D_{i})^{\lambda_{i}}\right]\prod_{t=1}^{M}\exp(\sum_{n=2}^{2k}c_{F}^{(n)}(z_{t}^{n}))\Big{|}_{z_{1},...,z_{M}=0}.\end{split} \tag{4.25}\]

**Claim:** \[\begin{split}\left[\prod_{i=1}^{l(\lambda)}(D_{i})^{\lambda_{i}}\right]&\prod_{t=1}^{M}\exp(\sum_{n=2}^{2k}c_{F}^{(n)}(z_{t}^{n}))\Big{|}_{z_{1}=...=z_{M}=0}\\ &=\prod_{i=1}^{l(\lambda)}\Bigg{(}(D_{i})^{\lambda_{i}-1}\partial_{i}\Big{[}\prod_{t=1}^{M}\exp(\sum_{n=2}^{2k}c_{F}^{(n)}(z_{t}^{n}))\Big{]}\Big{|}_{z_{i}=0}\Bigg{)}.\end{split} \tag{4.26}\]

Proof of the Claim.: Since the indices \(j\) of the \(D_{i}\)'s are distinct and bigger than \(l(\lambda)\), for \(i_{1}\neq i_{2}\), \(D_{i_{1}}^{\lambda_{i_{1}}}\) and \(D_{i_{2}}^{\lambda_{i_{2}}}\) act on two groups of disjoint variables. Hence the action of each \((D_{i})^{\lambda_{i}}\) factors. Moreover, the first \(D_{i}\) acts as \(\partial_{i}\) for the same reason as in the proof of Lemma 4.14. Without loss of generality consider \(i=1\). \[\partial_{1}\Big{[}\prod_{t=1}^{M}\exp\Big{(}\sum_{n=2}^{2k}c_{F}^{(n)}(z_{t}^{n})\Big{)}\Big{]}=g(z_{1})\prod_{t=1}^{M}\exp\Big{(}\sum_{n=2}^{2k}c_{F}^{(n)}(z_{t}^{n})\Big{)}.\] By (4.14)-(4.17), it suffices to consider the explicit action of \(D_{1}\) on \(H(z_{1})\prod_{t=1}^{M}\exp\Big{(}\sum_{n=2}^{2k}c_{F}^{(n)}(z_{t}^{n})\Big{)}\), where \(H(z_{1})\) is a polynomial of \(z_{1}\), and \[D_{1}\Big{[}H(z_{1})\prod_{t=1}^{M}\exp\left(\sum_{n=2}^{2k}c_{F}^{(n)}(z_{t}^{n})\right)\Big{]}=\Big{[}D_{1}H(z_{1})+g(z_{1})H(z_{1})\Big{]}\prod_{t=1}^{M}\exp\left(\sum_{n=2}^{2k}c_{F}^{(n)}(z_{t}^{n})\right).\]

Hence we have for \(i=1,2,...,l(\lambda)\), \[D_{i}^{\lambda_{i}-1}\partial_{i}\left[\prod_{t=1}^{M}\exp\Big{(}\sum_{n=2}^{2k}c_{F}^{(n)}(z_{t}^{n})\Big{)}\right]\Big{|}_{z_{1}=...=z_{M}=0}=(D_{i}+*_{g})^{\lambda_{i}-1}g(z_{i})\Big{|}_{z_{i}=0}.\]

Again by Lemmas 4.13, 4.14 and 4.16, up to an \(O(\frac{1}{M})\) error, \[\begin{split}(D_{i}+*_{g})^{\lambda_{i}-1}g(z_{i})\Big{|}_{z_{i}=0}\approx&\Bigg{(}\partial_{i}+2(M-1)\theta d_{i}+\Big{[}\theta(N-M+1)-\frac{1}{2}\Big{]}d^{{}^{\prime}}_{i}+*_{g}\Bigg{)}^{\lambda_{i}-1}g(z_{i})\Big{|}_{z_{i}=0}\\ &\xrightarrow[M\theta\to\gamma]{N\theta\to q\gamma}\Bigg{(}\partial_{i}+2\gamma d_{i}+\Big{[}(q-1)\gamma-\frac{1}{2}\Big{]}d^{{}^{\prime}}_{i}+*_{g}\Bigg{)}^{\lambda_{i}-1}g(z_{i})\Big{|}_{z_{i}=0},\end{split} \tag{4.27}\] and plugging this back into (4.26) gives \[\begin{split}\left[\prod_{i=1}^{l(\lambda)}(D_{i})^{\lambda_{i}}\right]&\prod_{t=1}^{M}\exp\Big{(}\sum_{n=2}^{2k}c^{(n)}_{F}(z^{n}_{t})\Big{)}\Big{|}_{z_{1}=...=z_{M}=0}\\ =&\prod_{i=1}^{l(\lambda)}\left([z^{0}]\Big{(}\partial+2\gamma d+\Big{[}(q-1)\gamma-\frac{1}{2}\Big{]}d^{{}^{\prime}}+*_{g}\Big{)}^{\lambda_{i}-1}g(z)\right)+O(\frac{1}{M}).\end{split} \tag{4.28}\] Proposition 4.18 then follows.

Combining all the results above in this section, we arrive at the expansion (4.6) representing the action of Dunkl operators on \(\exp(F(z_{1},...,z_{M}))\).

**Proof of Theorem 4.9:** By Propositions 4.11 and 4.12 the left side of (4.6) is a homogeneous polynomial of \(c^{v}_{F}\)'s of degree \(2k\) with uniformly bounded coefficients in the limit regime.
The right side of (4.6) is a combination of Proposition 4.17 (which gives \(b^{\lambda}_{\lambda}\cdot c^{\lambda}_{F}+\sum_{\mu:|\mu|=2k,\ l(\mu)>l(\lambda)}b^{\lambda}_{\mu}\cdot c^{\mu}_{F}\)) and Proposition 4.18 (which gives the polynomial \(L\)), and note that the only possible overlap of the linear terms and the terms involving only length \(1\) partitions is when \(\lambda\) itself is \((2k)\), which gives the term subtracted in (4.8).

### q-\(\gamma\) convolution

After stating the equivalence in Theorem 4.8, Theorem 1.11 follows as a direct consequence.

**Proof of Theorem 1.11.** For each \(M\leq N,\theta>0\), let \(G^{a}_{M,N,\theta}\), \(G^{b}_{M,N,\theta}\), \(G^{c}_{M,N,\theta}\) denote the type BC Bessel generating functions of \(\vec{a}_{M},\vec{b}_{M}\) and \(\vec{c}_{M}=\vec{a}_{M}\boxplus_{M,N}^{\theta}\vec{b}_{M}\). Then \[G^{c}_{M,N,\theta}(z_{1},...,z_{M})=G^{a}_{M,N,\theta}(z_{1},...,z_{M})\cdot G^{b}_{M,N,\theta}(z_{1},...,z_{M}),\] and hence the partial derivatives of \(\ln(G^{c}_{M,N,\theta})\) are equal to the sum of those of \(\ln(G^{a}_{M,N,\theta})\) and \(\ln(G^{b}_{M,N,\theta})\). By the assumption of the theorem, \(\{\vec{a}_{M}\}\) and \(\{\vec{b}_{M}\}\) satisfy the LLN condition, so by Theorem 4.8 they are q-\(\gamma\)-LLN appropriate. Hence by Definition 4.2 \(\{\vec{c}_{M}\}\) is also q-\(\gamma\)-LLN appropriate. By Theorem 4.8 again, \(\{\vec{c}_{M}\}\) satisfies the LLN.

## 5. \(q\)-\(\gamma\) cumulants and moments

Fix \(q\geq 1,\gamma>0\); in this section we continue with the limit regime \(M,N\to\infty\), \(\theta\to 0\), \(M\theta\to\gamma\), \(N\theta\to q\gamma\). Definition 4.6 introduces a map \(\mathrm{T}^{q,\gamma}_{k\to m}\) in terms of operators, which sends the real sequence \(\{k_{l}\}_{l=1}^{\infty}\) to another real sequence \(\{m_{k}\}_{k=1}^{\infty}\). We keep the interpretation from Theorem 4.8, that is, we call \(\{k_{l}\}_{l=1}^{\infty}\) the q-\(\gamma\) cumulants and take \(k_{l}=0\) for all odd \(l\)'s. In this section we give a more combinatorial description of \(\mathrm{T}^{q,\gamma}_{k\to m}\). After that, we also provide an explicit relation for \(\mathrm{T}^{q,\gamma}_{m\to k}\) in terms of generating functions, and by taking \(q,\gamma\) to some extreme values, we establish the connections of our \(q\)-\(\gamma\) cumulants to the usual cumulants and (rectangular) free cumulants in free probability theory, and also to the \(\gamma\)-cumulants defined in [BCG], which arise in the high temperature regime of self-adjoint matrix addition.

### From q-\(\gamma\) cumulants to moments

We start by introducing some basic notions of set partitions, which are necessary for the statement of the main theorem. For \(k\in\mathbb{Z}_{\geq 1}\), a _set partition_ \(\pi\) of \([k]\) is a way to write \([k]:=\{1,2,...,k\}\) as a disjoint union of sets \(B_{1},...,B_{m}\) for some \(m\). We write \(\pi=B_{1}\sqcup B_{2}\sqcup...\sqcup B_{m}\), and denote the space of all set partitions of \([k]\) by \(P(k)\). Given a set partition \(\pi\), for each \(B_{i}\) let \(\min(B_{i})\) and \(\max(B_{i})\) denote the minimal and maximal numbers in the subset \(B_{i}\) of \([k]\), and for simplicity, we label \(B_{1},...,B_{m}\) by \(\min(B_{i})\) in increasing order. In this text, we are in particular interested in the non-crossing partitions.
**Definition 5.1**.: _Fix \(k\in\mathbb{Z}_{\geq 1}\), a set partition \(\pi=B_{1}\sqcup...\sqcup B_{m}\) of \([k]\) is non-crossing if for any \(l=2,...,m\), and any \(j=1,2,...,l-1\), the elements in \(B_{j}\) are either bigger than \(\max(B_{l})\) or smaller than \(\min(B_{l})\). See Figure 1. Denote the set of all non-crossing partitions of \([k]\) by \(NC(k)\)._

Each set partition can be realized visually as a collection of blocks \(B_{1},...,B_{m}\) with \(k\) legs in total, and the block \(B_{i}\) has \(|B_{i}|\) legs, which is the number of elements in \(B_{i}\). See Figure 1. From this point of view, \(\pi\) is non-crossing if and only if the legs of one block do not cross those of any other block.

Figure 1. The graph on the left represents a noncrossing partition \(\pi\) of [6], where \(B_{1}=\{1,4,6\}\), \(B_{2}=\{2,3\}\), \(B_{3}=\{5\}\), and the graph on the right represents a crossing partition \(\pi^{{}^{\prime}}\) of [6], where \(B_{1}=\{1,3,6\}\), \(B_{2}=\{2,4\}\), \(B_{3}=\{5\}\).

Next we define a quantity associated with the non-crossing set partition \(\pi\).

**Definition 5.2**.: _Given \(\pi=B_{1}\sqcup...\sqcup B_{m}\in NC(k)\), for \(i=1,2,...,m\), let \(P_{i}=\#\) of elements in \(B_{1},...,B_{i}\) bigger than \(\min(B_{i})\), and \(Q_{i}=\#\) of elements in \(B_{1},...,B_{i}\) bigger than \(\max(B_{i})\) (note that \(P_{i}-Q_{i}=|B_{i}|-1\) by definition). Let \(C_{1},C_{2},...\) be the countable sequence of constants_ \[2q\gamma,\ 2\gamma+2,\ 2q\gamma+2,\ 2\gamma+4,\ 2q\gamma+4,\ 2\gamma+6,\ 2q\gamma+6,\ ...\] _respectively. Then we define_ \[W(\pi)=\prod_{i=1}^{m}\Big{[}C_{Q_{i}+1}C_{Q_{i}+2}\cdots C_{P_{i}}\Big{]}. \tag{5.1}\]

**Example 5.3**.: _In Figure 2, \(\pi\) is a non-crossing partition of [14], such that \(B_{1}=\{1,7,8\}\), \(B_{2}=\{2,3,6\}\), \(B_{3}=\{4,5\}\), \(B_{4}=\{9,10,13,14\}\), and \(B_{5}=\{11,12\}\). Moreover, \(P_{1}=2\), \(Q_{1}=0\), \(P_{2}=4\), \(Q_{2}=2\), \(P_{3}=4\), \(Q_{3}=3\), \(P_{4}=3\), \(Q_{4}=0\), \(P_{5}=1\), \(Q_{5}=0\), so \(W(\pi)=C_{1}\cdot C_{3}C_{4}\cdot C_{4}\cdot C_{1}C_{2}C_{3}\cdot C_{1}=C_{1}^{3}C_{2}C_{3}^{2}C_{4}^{2}=(2q\gamma)^{3}(2\gamma+2)(2q\gamma+2)^{2}(2\gamma+4)^{2}\)._

We also introduce a notion of _even partition_ that will be used later.

**Definition 5.4**.: _We say \(\pi\) is even if \(|B_{1}|,...,|B_{m}|\) are all even, and denote the collection of all non-crossing even set partitions of \([2k]\) by \(\mathfrak{NC}(2k)\), for \(k\in\mathbb{Z}_{\geq 1}\)._

The following main theorem of this section gives the combinatorial expression of moments as polynomials of q-\(\gamma\) cumulants, whose coefficients are given by \(W(\pi)\).

**Theorem 5.5**.: _(q-\(\gamma\) cumulants to moments formula) Let \(\{k_{l}\}_{l=1}^{\infty}\), \(\{m_{k}\}_{k=1}^{\infty}\) be two real sequences such that \(k_{l}=0\) for all odd \(l\)'s, and \(\{m_{2k}\}_{k=1}^{\infty}=\mathrm{T}_{k\to m}^{q,\gamma}(\{k_{l}\}_{l=1}^{\infty})\). Then for any \(k=1,2,...\),_ \[m_{2k-1}=0,\quad m_{2k}=\sum_{\pi\in\mathfrak{NC}(2k)}W(\pi)\prod_{B_{i}\in\pi}k_{|B_{i}|}. \tag{5.2}\]
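For readers who wish to experiment with the combinatorics above, the following is a minimal computational sketch of Definition 5.2 (assuming Python with the sympy library; the function names `C` and `W` are ours and not part of the text). It transcribes the sequence \(C_{1},C_{2},...\) and the weight \(W(\pi)\) literally, and on the non-crossing even partitions of \([4]\) it reproduces the coefficients appearing in Example 5.6 below.

```python
# A direct transcription of Definition 5.2 (illustrative sketch only).
import sympy as sp

q, ga = sp.symbols('q gamma', positive=True)

def C(i):
    # C_1, C_2, C_3, C_4, ... = 2*q*gamma, 2*gamma+2, 2*q*gamma+2, 2*gamma+4, ...
    if i == 1:
        return 2*q*ga
    return 2*ga + i if i % 2 == 0 else 2*q*ga + i - 1

def W(blocks):
    # blocks: the blocks B_1, ..., B_m of a non-crossing partition,
    # labeled by their minima in increasing order
    blocks = sorted(blocks, key=min)
    w = sp.Integer(1)
    for i, B in enumerate(blocks):
        seen = set().union(*blocks[:i + 1])       # elements of B_1, ..., B_i
        P = sum(1 for x in seen if x > min(B))    # P_i of Definition 5.2
        Q = sum(1 for x in seen if x > max(B))    # Q_i of Definition 5.2
        for j in range(Q + 1, P + 1):
            w *= C(j)                             # C_{Q_i+1} ... C_{P_i}
    return sp.expand(w)

print(W([{1, 2}, {3, 4}]))   # (2*q*gamma)**2
print(W([{1, 4}, {2, 3}]))   # 2*q*gamma*(2*gamma + 2)
print(W([{1, 2, 3, 4}]))     # 2*q*gamma*(2*gamma + 2)*(2*q*gamma + 2)
```

Summing the first two weights times \(k_{2}^{2}\) and the third times \(k_{4}\) gives exactly the expression for \(m_{4}\) in Example 5.6.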
**Example 5.6**.: _By manipulating (5.2), we have the explicit expression of the first few moments in terms of q-\(\gamma\) cumulants:_ \[\begin{split} m_{2}=& 2q\gamma k_{2},\\ m_{4}=& 2q\gamma(2\gamma+2)(2q\gamma+2)k_{4}+[(2q\gamma)^{2}+2q\gamma(2\gamma+2)]k_{2}^{2},\\ m_{6}=&\Big{[}2q\gamma(2\gamma+2)(2q\gamma+2)(2\gamma+4)(2q\gamma+4)\Big{]}k_{6}\\ +&\Big{[}2q\gamma(2\gamma+2)(2q\gamma+2)(3\times 2q\gamma+2\gamma+2+2q\gamma+2+2\gamma+4+2q\gamma+4)\Big{]}k_{4}k_{2}\\ +&\Big{[}(2q\gamma)^{3}+2(2q\gamma)^{2}(2\gamma+2)+2q\gamma(2\gamma+2)(2q\gamma+2)\Big{]}k_{2}^{3}.\end{split} \tag{5.3}\]

Proof of Theorem 5.5.: Recall from Definition 4.6 that \[m_{2k}=[z^{0}]\left(\partial+2\gamma d+\left[(q-1)\gamma-\frac{1}{2}\right]d^{{}^{\prime}}+*_{g}\right)^{2k-1}g(z)\] where \(g(z)=\sum_{l=1}^{\infty}k_{l}z^{l-1}\), i.e., we act the operator \(D:=\partial+2\gamma d+((q-1)\gamma-\frac{1}{2})d^{{}^{\prime}}+*_{a}\) on \(g(z)\) \(2k-1\) times, then take the constant term of the resulting expression. Here \(a(z)=\sum_{l=1}^{\infty}a_{l}z^{l-1}\) such that \(a_{l}=k_{l}\), and the resulting polynomial \(D^{p-1}g(z)\), before \(D\) acts for the \(p^{th}\) time, contains only odd powers of \(z\) when \(p\) is odd, and contains only even powers of \(z\) when \(p\) is even.

Figure 2. The graphical representation of the non-crossing partition in Example 5.3.

Because of this, we claim that \(D^{2k-1}\) acting on \(g(z)\) is equivalent to the alternate product of two operators \[D^{{}^{\prime}}\circ(D^{{}^{\prime\prime}}\circ D^{{}^{\prime}})^{k-1}. \tag{5.4}\] More precisely, \(D^{{}^{\prime\prime}}\circ D^{{}^{\prime}}\) has the following explicit effect: \[...\longrightarrow\sum_{l=1}^{\infty}b_{l}z^{l-1}\xrightarrow{D^{{}^{\prime}}}\sum_{l=1}^{\infty}c_{l}z^{l-1}\xrightarrow{D^{{}^{\prime\prime}}}\sum_{l=1}^{\infty}d_{l}z^{l-1}\longrightarrow...,\] where \(D^{{}^{\prime}}\) acts on the polynomial \(b(z)=\sum_{l=1}b_{l}z^{l-1}\) which contains only odd powers, with output \(c(z)=\sum_{l=1}c_{l}z^{l-1}\) such that \[c_{l}=\begin{cases}0&\text{$l$ is even;}\\ (2q\gamma+l-1)b_{l+1}+\sum_{j=1}^{l}a_{j}b_{l+1-j}&\text{$l$ is odd,}\end{cases} \tag{5.5}\] and then \(D^{{}^{\prime\prime}}\) acts on \(c(z)\) which contains only even powers, with output \(d(z)=\sum_{l=1}^{\infty}d_{l}z^{l-1}\) such that \[d_{l}=\begin{cases}0&\text{$l$ is odd;}\\ (2\gamma+l)c_{l+1}+\sum_{j=1}^{l}a_{j}c_{l+1-j}&\text{$l$ is even.}\end{cases} \tag{5.6}\]

We verify the expression (5.2) inductively on \(k\), by visualizing the action of \(D^{{}^{\prime}}\) or \(D^{{}^{\prime}}\circ D^{{}^{\prime\prime}}\) under the graphical representation of set partitions. When \(k=1\), \[\begin{split} m_{2}=&[z^{0}]\Big{(}D^{{}^{\prime}}g(z)\Big{)}\\ =&(2q\gamma)k_{2}+\sum_{j=1}^{1}a_{j}k_{1+1-j}=(2q\gamma)k_{2}+a_{1}k_{1}\end{split} \tag{5.7}\] by (5.5). Correspondingly, we realize this expression by the following concrete operations:

(0). Start with an empty set partition, corresponding to a blank graph with no leg.

(1). Draw a single leg, and label it with the monomial \(k_{1}\).

(2). Add one more leg on the right of the first leg. Now we have two options: connect this leg with the first leg to form a block of size 2, and label the new block by \(C_{1}\cdot k_{2}=2q\gamma k_{2}\), or keep them separate as two single-leg blocks, and label the second block by \(a_{1}\).
After step 2 the first configuration corresponds to the monomial \(k_{2}\) with weight \(2q\gamma\), and the second one corresponds to the monomial \(k_{1}\cdot a_{1}\) with weight \(1\) (which is the product of the labels of the two blocks). Adding them together gives \(m_{2}\). To match (5.2), note that \(W(\{1,2\})=C_{1}=2q\gamma\), and \(k_{1}=a_{1}=0\) so the second term vanishes.

To obtain \(m_{2k}\), suppose \(m_{2},...,m_{2k-2}\), obtained from steps \(0,1,2,...,2k-3,2k-2\), all match (5.2). We give the operations in steps \(2k-1\), \(2k\):

\((2k-1)\). Given a configuration \(\pi=B_{1}\sqcup...\sqcup B_{n}\) of \(2k-2\) legs in total, with a corresponding weight \(\tilde{W}(\pi)\), insert one more leg right after the first leg into \(B_{1}\), so that we get a new configuration \(\pi^{{}^{\prime}}=B_{1}^{{}^{\prime}}\sqcup...\sqcup B_{n}^{{}^{\prime}}\), where \(B_{1}^{{}^{\prime}}=\{1\}\cup\{j+1:j\in B_{1}\}\), and \(B_{i}^{{}^{\prime}}=\{j+1:j\in B_{i}\}\). Then one has \(|B_{1}^{{}^{\prime}}|\) options: either keep everything unchanged and let \(B_{i}^{{}^{\prime\prime}}=B_{i}^{{}^{\prime}}\) for \(i=1,2,...,n\), \[\tilde{W}(\pi^{{}^{\prime\prime}})=\begin{cases}\tilde{W}(\pi)\cdot(2\gamma+|B_{1}^{{}^{\prime\prime}}|-1)&\text{ if }|B_{1}^{{}^{\prime\prime}}|\text{ is odd;}\\ \tilde{W}(\pi)\cdot(2q\gamma+|B_{1}^{{}^{\prime\prime}}|-2)&\text{ if }|B_{1}^{{}^{\prime\prime}}|\text{ is even,}\end{cases}\] or split \(B_{1}^{{}^{\prime}}\) into two non-crossing blocks \(B_{1}^{{}^{\prime\prime}}\sqcup B_{2}^{{}^{\prime\prime}}\), where \(\min(B_{1}^{{}^{\prime\prime}})=1\), \(\min(B_{2}^{{}^{\prime\prime}})=2\), and let \(B_{i}^{{}^{\prime\prime}}=B_{i-1}^{{}^{\prime}}\) for \(i=3,4,...,n+1\), \(\tilde{W}(\pi^{{}^{\prime\prime}})=\tilde{W}(\pi)\).

\((2k)\). Repeat the operations in step \(2k-1\) one more time, and still denote the output configuration with \(2k\) legs by \(\pi^{{}^{\prime\prime}}=B_{1}^{{}^{\prime\prime}}\sqcup...\sqcup B_{n^{{}^{\prime\prime}}}^{{}^{\prime\prime}}\) (\(n^{{}^{\prime\prime}}\) denotes the number of blocks) with weight \(W^{{}^{\prime\prime}}(\pi^{{}^{\prime\prime}})\). Then delete all configurations with at least one odd block. For each remaining configuration \(\pi^{{}^{\prime\prime}}\), assign it the monomial \[W^{{}^{\prime\prime}}(\pi^{{}^{\prime\prime}})\,k_{|B_{1}^{{}^{\prime\prime}}|}\cdot a_{|B_{2}^{{}^{\prime\prime}}|}\cdots a_{|B_{n^{{}^{\prime\prime}}}^{{}^{\prime\prime}}|}. \tag{5.8}\]

We claim that the above steps represent the action of (5.4). Indeed, both steps 0-2 and \(D^{{}^{\prime}}\) generate \((2q\gamma)k_{2}+a_{1}k_{1}\), and for \(l\geq 3\), step \(l\) corresponds to the \((l-1)^{th}\) term (from left to right) in the product. More precisely, the expression of \(d_{l}\) (\(c_{l}\) resp.) in (5.6) ((5.5) resp.) records the \(l+1\) options one can choose on a configuration whose first block is of size \(l\): choosing to enlarge the first block by 1 gives one extra factor \((2\gamma+l)\) (\(2q\gamma+l-1\) resp.), and splitting \(d_{l}\) (\(c_{l}\) resp.) into \(a_{j}\) and \(c_{l+1-j}\) corresponds to splitting the first block into two new blocks of size \(l+1-j\) and \(j\) respectively. Therefore, acting (5.4) on \(g(z)\) and taking the constant term is equivalent to a chain of compositions of (5.5) and (5.6).
Compared to (5.8), the output is also a large sum of monomials in the \(k_{l}\)'s (coefficients of \(g(z)\)) and the \(a_{l}\)'s (coefficients of \(a(z)\)), such that each non-vanishing monomial corresponds to a unique non-crossing even partition \(\pi\) (recall \(k_{l}=a_{l}=0\) for odd \(l\)'s), and the unique \(k_{l}\) it contains gives the size of the first block of \(\pi\). To see that each non-crossing even set partition can be realized in this way, we do induction on \(k\) and assume this holds for the set partitions of size up to \(2k-2\). For \(\pi=B_{1}\sqcup...\sqcup B_{n}\in\mathfrak{NC}(2k)\), just combine \(B_{1}\) with \(B_{2}\) and \(B_{3}\) (both might be \(\varnothing\)) as a single block, then remove two legs from this new block. What we get is an element \(\tilde{\pi}\) in \(\mathfrak{NC}(2k-2)\), which can be realized by the induction hypothesis, and one can construct \(\pi\) from \(\tilde{\pi}\) using the steps \(2k-1\) and \(2k\) above.

Since \(a_{l}=k_{l}\) for all \(l=1,2,...\), to match \(m_{2k}\) with the right side of (5.2), it remains to match the coefficient of each monomial. Given a configuration \(\pi\) obtained after step \(2k\) (step \(2k-1\) resp.), define its degeneration \(\tilde{\pi}=\tilde{B}_{1}\sqcup...\sqcup\tilde{B}_{n}\), obtained after step \(2k-1\) (step \(2k-2\) resp.), by taking \(\tilde{B}_{1}\) as the combination of the first two blocks removing 1 leg, or simply removing 1 leg from \(B_{1}\), similarly as in the last paragraph. Then compared to \(W(\tilde{\pi})\), we only replace \(C_{\tilde{Q}_{1}+1}...C_{\tilde{P}_{1}}\) by \(C_{Q_{2}+1}...C_{P_{2}}\cdot C_{Q_{1}+1}...C_{P_{1}}\) in \(W(\pi)\) when we choose to split the first block, and one can check that \(W(\pi)=W(\tilde{\pi})\) after this change; when we choose to enlarge the first block by \(1\), \(W(\pi)\) has one more factor \(C_{P_{1}}=C_{|\tilde{B}_{1}|}\) than \(W(\tilde{\pi})\).

On the other hand, for \(l=1,2,...,2k\), as pointed out in steps \(2k-1\) and \(2k\), since \(a_{l}=k_{l}=0\) for \(l\) odd, in order to get a non-vanishing term after \(2k\) steps, one can only enlarge \(|\tilde{B}_{1}|\) by \(1\) in even steps, when \(|\tilde{B}_{1}|\) is odd, then multiply the new factor \(2q\gamma+|\tilde{B}_{1}|-1\) to the monomial, and enlarge \(|\tilde{B}_{1}|\) by \(1\) in odd steps, when \(|\tilde{B}_{1}|\) is even, then multiply the new factor \(2\gamma+|\tilde{B}_{1}|\). In both cases this factor matches \(C_{|\tilde{B}_{1}|}\). This finishes the proof.

### From moments to q-\(\gamma\) cumulants

Recall that \(\mathrm{T}_{k\to m}^{q,\gamma}\) is invertible, and for each \(l=1,2,...\), \(k_{2l}\) is a polynomial of \(m_{2},m_{4},...,m_{2l}\) with leading term a multiple of \(m_{2l}\). For example, by reversing (5.3), we have \[k_{2}= \frac{1}{2q\gamma}m_{2},\] \[k_{4}= \frac{1}{2q\gamma(2\gamma+2)(2q\gamma+2)}\Big{[}m_{4}-(1+\frac{\gamma+1}{q\gamma})m_{2}^{2}\Big{]},\] \[k_{6}= \frac{1}{2q\gamma(2\gamma+2)(2q\gamma+2)(2\gamma+4)(2q\gamma+4)}\] \[\cdot \Bigg{(}m_{6}-\Big{[}(3\times 2q\gamma+2\gamma+2+2q\gamma+2+2\gamma+4+2q\gamma+4)\cdot\frac{1}{2q\gamma}\Big{]}\Big{[}m_{4}-(1+\frac{\gamma+1}{q\gamma})m_{2}^{2}\Big{]}m_{2}\] \[-\Big{[}1+\frac{\gamma+1}{q\gamma}+\frac{(\gamma+1)(q\gamma+1)}{(q\gamma)^{2}}\Big{]}m_{2}^{3}\Bigg{)}. \tag{5.9}\]

For the more general cases, we express the generating function of q-\(\gamma\) cumulants in terms of the generating function of moments.
**Theorem 5.7**.: _Let \(\{m_{2k}\}_{k=1}^{\infty}\), \(\{k_{l}\}_{l=1}^{\infty}\) be two real sequences such that \(\{k_{l}\}_{l=1}^{\infty}=\mathrm{T}_{m\to k}^{q,\gamma}(\{m_{2k}\}_{k=1}^{\infty})\). Then \(k_{l}=0\) for all odd \(l\)'s, and_ \[\begin{cases}\exp\Big{[}\gamma\sum_{k=1}^{\infty}\frac{m_{2k}}{k}y^{2k}\Big{]}=\sum_{n=0}^{\infty}c_{n}\cdot y^{2n},\\ \exp\Big{[}\sum_{l=1}^{\infty}\frac{k_{2l}}{2l}y^{2l}\Big{]}=\sum_{n=0}^{\infty}\frac{c_{n}}{(q\gamma)_{n}(\gamma)_{n}}2^{-2n}y^{2n}\end{cases} \tag{5.10}\] _for some auxiliary sequence \(\{c_{n}\}_{n=0}^{\infty}\). Here we use the Pochhammer symbol notation_ \[(x)_{n}:=\begin{cases}x(x+1)\cdots(x+n-1),&\text{ if }n\in\mathbb{Z}_{\geq 1},\\ 1,&\text{ if }n=0.\end{cases}\] _Alternatively one has the more compact expression_ \[\exp\Big{[}\sum_{l=1}^{\infty}\frac{k_{2l}}{2l}y^{2l}\Big{]}=[z^{0}]\Bigg{\{}\sum_{n=0}^{\infty}\frac{(yz)^{2n}}{(q\gamma)_{n}(\gamma)_{n}}2^{-2n}\cdot\exp\Big{[}\gamma\sum_{k=1}^{\infty}\frac{m_{2k}}{k}z^{-2k}\Big{]}\Bigg{\}}. \tag{5.11}\]

Before giving the proof, we first present two technical results that will be used.

**Lemma 5.8**.: _(a). The following Taylor series expansion holds:_ \[\sum_{k=0}^{\infty}Q_{(k)}(a_{1}^{2},...,a_{M}^{2};\theta)y^{2k}=\prod_{i=1}^{M}(1-a_{i}^{2}y^{2})^{-\theta}. \tag{5.12}\] _(b). For \(\theta>0\), \(y\in\mathbb{C}\) and \(\vec{a}=(a_{1}\geq...\geq a_{M}\geq 0)\),_ \[\mathbb{B}(\vec{a},y,0^{M-1};\theta)=\sum_{k=0}^{\infty}\frac{1}{(N\theta)_{k}(M\theta)_{k}}2^{-2k}Q_{(k)}(a_{1}^{2},...,a_{M}^{2};\theta)y^{2k}, \tag{5.13}\] _where \(Q_{(k)}\) is defined in Definition 2.8, and \((k)\) denotes the partition \((k,0,...,0)\in\Lambda_{M}\). Moreover, the power series converges uniformly in a domain near 0._

Proof.: (a) is a well known result that can be found in [M, p378 and p380]. (b) follows from Proposition 2.16 and (2.3), after specifying all but one variable to 0.

Proof of Theorem 5.7.: First we note that (5.10) and (5.11) are equivalent by comparing the coefficients of \(y^{2n}\) for each \(n=0,1,2,...\), and we will prove (5.10). For now, we assume that there exists a probability measure \(\mu\) supported on \([a,b]\subset\mathbb{R}_{\geq 0}\), such that for \(k=1,2,...\) \[m_{2k}=\int x^{k}d\mu.\] We take a sequence of deterministic M-tuples \(\{\vec{a}_{M}\}_{M=1}^{\infty}\) such that \(\vec{a}_{M}=(a_{1,M},...,a_{M,M})\in[-\sqrt{b},\sqrt{b}]^{M}\), and define \(\mu_{M}=\frac{1}{M}\sum_{i=1}^{M}\delta_{a_{i,M}^{2}}\). We choose \(\{\vec{a}_{M}\}\) in a way that \(\mu_{M}\rightarrow\mu\) weakly as \(M\rightarrow\infty\). This implies that the moments of \(\mu_{M}\) also converge pointwise to the corresponding moments of \(\mu\), i.e., \[\frac{1}{M}\sum_{i=1}^{M}a_{i,M}^{2k}\longrightarrow m_{2k}.\] In other words, \(\{\vec{a}_{M}\}_{M=1}^{\infty}\) satisfies the LLN condition, and by Theorem 4.8 \(\{\vec{a}_{M}\}_{M=1}^{\infty}\) is q-\(\gamma\)-LLN-appropriate. By Lemma 5.8 (a), \[\begin{split}&\sum_{k=0}^{\infty}Q_{(k)}(a_{i,M}^{2};\theta)y^{2k}=\prod_{i=1}^{M}(1-a_{i,M}^{2}y^{2})^{-\theta}\\ =&\exp\bigg{[}-\theta\sum_{i=1}^{M}\ln\Big{(}1-a_{i,M}^{2}y^{2}\Big{)}\bigg{]}=\exp\bigg{[}\theta M\sum_{k=1}^{\infty}\frac{y^{2k}}{k}\frac{1}{M}\sum_{i=1}^{M}(a_{i,M})^{2k}\bigg{]}\end{split} \tag{5.14}\] as a formal power series.
Taking \(M\rightarrow\infty,\theta\to 0,M\theta\rightarrow\gamma\), the above equality becomes \[\sum_{k=0}^{\infty}c_{k}\cdot y^{2k}=\exp\Big{[}\gamma\sum_{k=1}^{\infty}\frac{m_{2k}}{k}y^{2k}\Big{]}, \tag{5.15}\] where \(c_{k}\) is the pointwise limit of \(Q_{(k)}(a_{i,M}^{2};\theta)\) in the above limit regime. This defines \(\{c_{k}\}_{k=0}^{\infty}\) in terms of \(\{m_{2k}\}_{k=1}^{\infty}\).

On the other hand, since \(\mu_{M}\) is deterministic, its type BC Bessel generating function is equal to its Bessel function. And we have for \(l=1,2,...\) \[(\frac{\partial}{\partial y})^{2l}\ln[\mathbb{B}(\vec{a}_{M},y,0^{M-1};\theta)]\Big{|}_{y=0}\xrightarrow[M\theta\to\gamma]{N\theta\to q\gamma}(2l-1)!\cdot k_{2l}. \tag{5.16}\] By Lemma 5.8 (b), the above equation is equivalent to \[(\frac{\partial}{\partial y})^{2l}\ln\Big{[}\sum_{k=0}^{\infty}\frac{1}{(N\theta)_{k}(M\theta)_{k}}2^{-2k}Q_{(k)}(a_{i,M}^{2};\theta)y^{2k}\Big{]}\Big{|}_{y=0}\xrightarrow[M\theta\to\gamma]{N\theta\to q\gamma}(2l-1)!\cdot k_{2l}. \tag{5.17}\] Also, since the type BC Bessel function is analytic over \(z_{1},...,z_{M}\), so is its logarithm near \(0\), and by Taylor expanding \(\ln\Big{[}\sum_{k=0}^{\infty}\frac{1}{(N\theta)_{k}(M\theta)_{k}}2^{-2k}Q_{(k)}(a_{i,M}^{2};\theta)y^{2k}\Big{]}\) we see that each \(k_{2l}\) is a polynomial of finitely many terms \(\frac{1}{(N\theta)_{k}(M\theta)_{k}}2^{-2k}Q_{(k)}(a_{i,M}^{2};\theta)\), each of which converges to \(\frac{1}{(q\gamma)_{k}(\gamma)_{k}}2^{-2k}c_{k}\).

We claim that as \(M\theta\to\gamma,N\theta\to q\gamma\), \(\sum_{k=0}^{\infty}\frac{1}{(N\theta)_{k}(M\theta)_{k}}2^{-2k}Q_{(k)}(a_{i,M}^{2};\theta)y^{2k}\) converges uniformly on a domain near \(0\). Indeed, the pointwise convergence of the coefficients is already given above, and to obtain a tail bound of the power series, first note that we have assumed the \(a_{i,M}\)'s are uniformly bounded; then by writing each \(Q_{(k)}(a_{1,M}^{2},...,a_{M,M}^{2};\theta)\) as a contour integral of the right side of (5.12) on the circle \(\{z:|z|=r\}\) for some \(r\) small enough, we see that the \(Q_{(k)}(a_{1,M}^{2},...,a_{M,M}^{2};\theta)\) are uniformly bounded by \(C\cdot r^{-2k}\) for some constants \(C\) and \(r\). By (5.14), (5.15), the limit is \[\sum_{k=0}^{\infty}\frac{c_{k}}{(q\gamma)_{k}(\gamma)_{k}}2^{-2k}y^{2k},\] but since the uniform convergence of analytic functions implies convergence of derivatives, it is also equal to \(\exp\Big{[}\sum_{l=1}^{\infty}\frac{k_{2l}}{2l}y^{2l}\Big{]}\) by (5.17). Hence these two functions are equal.

It remains to generalize to the case where \(m_{2},m_{4},...\) form an arbitrary real sequence. For each \(l=1,2,...\), \(k_{2l}\) is a polynomial \(h_{l}(m_{2},m_{4},...,m_{2l})\) of degree at most \(l\), where the expression of \(h_{l}\) is given by (5.11), while on the other hand, for each \(k_{2l}\), (5.2) gives another polynomial of degree at most \(l\), such that \(k_{2l}=h_{l}^{{}^{\prime}}(m_{2},m_{4},...,m_{2l})\). What we need to show is that \(h_{l}=h_{l}^{{}^{\prime}}\) for all \(l\)'s. Fix \(l\geq 1\). We have already shown that \(h_{l}(m_{2},m_{4},...,m_{2l})=h_{l}^{{}^{\prime}}(m_{2},m_{4},...,m_{2l})\) when \(m_{2},m_{4},...,m_{2l}\) are the first \(l\) moments of some compactly supported probability measure \(\mu\) on \(\mathbb{R}_{\geq 0}\). Clearly there exist more than \(l\) such choices of \(m_{2},m_{4},...,m_{2l}\), and therefore by the fundamental theorem of algebra these two polynomials coincide. The same argument holds for arbitrary \(l\geq 1\).
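As a concrete sanity check of Theorem 5.7 against the explicit inversions (5.9), one can truncate both generating functions in (5.10) at order \(y^{4}\) and solve for \(k_{2},k_{4}\) symbolically. The following sketch (assuming Python with the sympy library; none of these names come from the text) recovers \(k_{2}=\frac{m_{2}}{2q\gamma}\) and verifies the stated formula for \(k_{4}\).

```python
# Truncated version of (5.10): moments -> q-gamma cumulants (sketch).
import sympy as sp

y = sp.symbols('y')
q, ga = sp.symbols('q gamma', positive=True)
m2, m4 = sp.symbols('m2 m4')

# First formula of (5.10): exp(gamma*(m2*y^2 + m4*y^4/2 + ...)) = sum c_n y^(2n)
F = ga*(m2*y**2 + m4*y**4/sp.Integer(2))
expF = sp.expand(1 + F + F**2/2)                    # enough terms modulo y^6
c = [expF.coeff(y, 2*n) for n in range(3)]          # c_0, c_1, c_2

# Second formula of (5.10): exp(sum k_{2l}/(2l) y^(2l))
#   = sum c_n / ((q*gamma)_n (gamma)_n) * 2^(-2n) * y^(2n)
G = sum(c[n]/(sp.rf(q*ga, n)*sp.rf(ga, n)*4**n)*y**(2*n) for n in range(3))
logG = sp.expand((G - 1) - (G - 1)**2/2)            # log(1+x) modulo y^6

k2 = sp.simplify(2*logG.coeff(y, 2))
k4 = sp.simplify(4*logG.coeff(y, 4))

print(k2)                                           # m2/(2*q*gamma)
target = (m4 - (1 + (ga + 1)/(q*ga))*m2**2)/(2*q*ga*(2*ga + 2)*(2*q*ga + 2))
print(sp.simplify(k4 - target))                     # 0, matching (5.9)
```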
### Connections to self-adjoint additions

Let \(A\), \(B\) be two independent \(N\times N\) matrices, uniformly chosen from the sets of self-adjoint matrices with deterministic eigenvalues \(a_{1}\geq...\geq a_{N}\) and \(b_{1}\geq...\geq b_{N}\) respectively. The study of the eigenvalues of \(C=A+B\) dates back to [Vo], which considers the empirical measure of \(C\) in the fixed temperature regime. In the high temperature regime, it was proved in [BCG] that when \(N\to\infty\), \(\theta\to 0\) and \(N\theta\to\gamma\), assuming the empirical measures of \(A\), \(B\) converge to some deterministic probability measures \(\mu_{A}\), \(\mu_{B}\) on \(\mathbb{R}\), the empirical measure of \(C\) converges to some deterministic probability measure \(\mu_{C}\), which is called the \(\gamma\)-convolution of \(\mu_{A}\) and \(\mu_{B}\). There is a collection of quantities \(\{k^{\gamma}_{l}\}_{l=1}^{\infty}\) introduced in [BCG], such that \[k^{\gamma}_{l}(\mu_{C})=k^{\gamma}_{l}(\mu_{A})+k^{\gamma}_{l}(\mu_{B})\] for each \(l\geq 1\). We write \(\{m^{{}^{\prime}}_{k}\}_{k=1}^{\infty}=\mathrm{T}^{\gamma}_{k\to m}(\{k^{\gamma}_{l}\}_{l=1}^{\infty})\), where \(m^{{}^{\prime}}_{k}\in\mathbb{R}\) denotes the \(k^{th}\) moment of the limiting empirical measure, and \(\mathrm{T}^{\gamma}_{k\to m}\) is the map that gives the moment-cumulant relation of the \(\gamma\)-convolution.

While in this text we consider the addition of a different type of matrices, we find a limit transition in the high temperature regime from rectangular addition to self-adjoint addition, which is stated in terms of cumulants.

**Theorem 5.9**.: _Given a real sequence \(\{k_{l}\}_{l=1}^{\infty}\) such that \(k_{l}=0\) for all odd \(l\)'s, let \(\{m_{2k}\}_{k=1}^{\infty}=\mathrm{T}^{q,\gamma}_{k\to m}(\{k_{l}\}_{l=1}^{\infty})\), \(m^{{}^{\prime}}_{k}=\frac{m_{2k}}{(q\gamma)^{k}}\), \(k^{{}^{\prime}}_{l}=2^{2l-1}k_{2l}\) for \(l=1,2,...\). Then_ \[\lim_{q\to\infty}\{m^{{}^{\prime}}_{k}\}_{k=1}^{\infty}=\mathrm{T}^{\gamma}_{k\to m}(\{k^{{}^{\prime}}_{l}\})_{l=1}^{\infty}. \tag{5.18}\]

Proof.: This follows from a straightforward limit transition of (5.10) under the assigned rescaling, and the moment-cumulant relation of the \(\gamma\)-convolution in [BCG, Theorem 3.11].

Moreover, we point out that the combinatorial moment-cumulant formula of the \(\gamma\)-convolution, given in [BCG, Theorem 3.10], can be expressed in an alternate way similar to our Theorem 5.5.

**Proposition 5.10**.: _Let \(\{m^{{}^{\prime}}_{k}\}_{k=1}^{\infty}=\mathrm{T}^{\gamma}_{k\to m}(\{k^{\gamma}_{l}\}_{l=1}^{\infty})\). Then for each \(k=1,2,...\),_ \[m^{{}^{\prime}}_{k}=\sum_{\pi\in NC(k)}W(\pi)\prod_{B_{i}\in\pi}k^{\gamma}_{|B_{i}|}, \tag{5.19}\] _where \(W(\pi)\) is defined in the same way as in Definition 5.2, after replacing the values of \(C_{1}\), \(C_{2},...\) with \(\gamma+1\), \(\gamma+2,...\)._

Proof.: By [BCG, Definition 3.7 and Theorem 3.8], \[m_{k}=[z^{0}](\partial+\gamma d+*_{g})^{k-1}g(z),\ k=1,2,...\] The statement then follows from the same argument as in the proof of Theorem 5.5.

### Connections to the classical convolutions

Recall from previous sections that the limit of \(\mathbb{H}^{\theta}_{M,N}\) gives the q-\(\gamma\) convolution \(\boxplus_{q,\gamma}\) of two (virtual) probability measures on \(\mathbb{R}\), which is linearized by the q-\(\gamma\) cumulants.
We show in this section that, under certain limit transitions of the parameters \(q,\gamma\), \(\boxplus_{q,\gamma}\) converges to the usual convolution, the classical free convolution, and the rectangular free convolution respectively.

We first provide the connection of q-\(\gamma\) cumulants to the usual cumulants. For this we recall the combinatorial classical moment-cumulant formula: for \(k=1,2,...\) \[m_{k}=\sum_{\pi^{{}^{\prime}}=B_{1}\sqcup...\sqcup B_{m}\in P(k)}\prod_{i=1}^{m}k^{{}^{\prime}}_{|B_{i}|}\] where \(\{k^{{}^{\prime}}_{l}\}_{l=1}^{\infty}\) stands for the usual cumulants, and \(P(k)\) is the set of all set partitions of \([k]\). We denote the map that sends \(\{m_{k}\}_{k=1}^{\infty}\) to \(\{k^{{}^{\prime}}_{l}\}_{l=1}^{\infty}\) by \(\mathrm{T}^{0}_{m\to k}\).

**Theorem 5.11**.: _Given a real sequence \(\{m_{2k}\}_{k=1}^{\infty}\), let \(\{k_{l}\}_{l=1}^{\infty}=\mathrm{T}_{m\to k}^{q,\gamma}(\{m_{2k}\}_{k=1}^{\infty})\), \(k_{l}^{{}^{\prime}}=(q\gamma)^{l}2^{2l-1}(l-1)!k_{2l}\), then_ \[\lim_{\gamma\to 0,\ q\gamma\to\infty}\{k_{l}^{{}^{\prime}}\}_{l=1}^{\infty}=\mathrm{T}_{m\to k}^{0}(\{m_{2k}\}_{k=1}^{\infty}). \tag{5.20}\]

Proof.: By Theorem 5.5, after rescaling by \((q\gamma)^{k}\), the coefficient \(W(\pi)\) does not vanish asymptotically only if, for each \(i=1,2,...,m\), writing \(2l_{i}-1:=P_{i}-Q_{i}\), there are \(l_{i}\) terms in \(C_{Q_{i}+1}\cdots C_{P_{i}}\) that contain \(q\gamma\). Hence each \(Q_{i}\) must be even. Recall that \(NC(k)\) denotes the space of all (not necessarily even) non-crossing partitions. We say a non-crossing even partition \(\pi\) of \([2k]\) is _equivalent to \(\pi^{{}^{\prime}}\in NC(k)\)_, if there exists some \(\pi^{{}^{\prime}}=B_{1}^{{}^{\prime}}\sqcup...\sqcup B_{m}^{{}^{\prime}}\in NC(k)\), such that by replacing each element \(j\in B_{i}^{{}^{\prime}}\) by \(\{2j-1,2j\}\), we get the set \(B_{i}\), for any \(i=1,2,...,m\).

**Claim:** For \(\pi=B_{1}\sqcup...\sqcup B_{m}\in\mathfrak{NC}(2k)\), each \(Q_{i}\) is even if and only if \(\pi\) is equivalent to some \(\tilde{\pi}\in NC(k)\).

Proof of the claim.: The "if" part is clear. For the "only if" part, just notice that when \(\pi\) is even, non-crossing and each \(Q_{i}\) is even, the \(\max(B_{i})\)'s turn out to be all even. The statement then follows by going over all the legs in the graphical representation of \(\pi\) from right to left.

Set \(\tilde{C}_{i}=i\), then after taking the limit, \[\frac{C_{Q_{i}+1}\cdots C_{P_{i}}}{(q\gamma)^{l_{i}}}\xrightarrow[\gamma\to 0]{q\gamma\to\infty}2^{2l_{i}-1}\tilde{C}_{Q_{i}+1}\cdots\tilde{C}_{P_{i}}.\] In other words, \[\begin{split}& m_{2k}=\sum_{\pi=B_{1}\sqcup...\sqcup B_{m}\in\mathfrak{NC}(2k)}W(\pi)\prod_{i=1}^{m}k_{|B_{i}|}\\ &\xrightarrow[\gamma\to 0]{q\gamma\to\infty}\sum_{\tilde{\pi}=\tilde{B}_{1}\sqcup...\sqcup\tilde{B}_{m}\in NC(k)}\prod_{i=1}^{m}\Big{[}(Q_{i}+1)\cdots(P_{i})\cdot k_{|\tilde{B}_{i}|}^{{}^{\prime}}\Big{]}\\ &=\lim_{\gamma\to 0}\sum_{\tilde{\pi}=\tilde{B}_{1}\sqcup...\sqcup\tilde{B}_{m}\in NC(k)}\prod_{i=1}^{m}\Big{[}(\gamma+Q_{i}+1)\cdots(\gamma+P_{i})\cdot k_{|\tilde{B}_{i}|}^{{}^{\prime}}\Big{]}\\ &=\lim_{\gamma\to 0}\mathrm{T}_{k\to m}^{\gamma}(\{k_{l}^{{}^{\prime}}\}_{l=1}^{\infty})_{k}=\mathrm{T}_{k\to m}^{0}(\{k_{l}^{{}^{\prime}}\}_{l=1}^{\infty})_{k}\\ &=\sum_{\pi^{{}^{\prime}}=B_{1}^{{}^{\prime}}\sqcup...\sqcup B_{m}^{{}^{\prime}}\in P(k)}\prod_{i=1}^{m}k_{|B_{i}^{{}^{\prime}}|}^{{}^{\prime}}.\end{split} \tag{5.21}\] The two equalities in the second to last row hold by Proposition 5.10 and [BCG, Theorem 8.2] respectively.
Then (5.20) follows from acting \(\mathrm{T}_{m\to k}^{0}\) on both sides.

**Corollary 5.12**.: _For two real sequences \(\{m_{2k}^{a}\}_{k=1}^{\infty}\), \(\{m_{2k}^{b}\}_{k=1}^{\infty}\), set \(m_{2k-1}^{a}=m_{2k-1}^{b}=0\) for \(k=1,2,...\), and define_ \[\{m_{k}^{c}\}_{k=1}^{\infty}:=\lim_{\gamma\to 0,\ q\gamma\to\infty}\Big{[}\{m_{k}^{a}\}_{k=1}^{\infty}\boxplus_{q,\gamma}\{m_{k}^{b}\}_{k=1}^{\infty}\Big{]}. \tag{5.22}\] _Then \(m^{c}_{2k-1}=0\) for \(k=1,2,...\), and the usual cumulants of \(\{m^{c}_{2k}\}_{k=1}^{\infty}\) are given by the sum of the corresponding usual cumulants of \(\{m^{a}_{2k}\}_{k=1}^{\infty}\) and \(\{m^{b}_{2k}\}_{k=1}^{\infty}\), i.e.,_ \[\mathrm{T}^{0}_{m\to k}\Big{(}\{m^{c}_{2k}\}_{k=1}^{\infty}\Big{)}=\mathrm{T}^{0}_{m\to k}\Big{(}\{m^{a}_{2k}\}_{k=1}^{\infty}\Big{)}+\mathrm{T}^{0}_{m\to k}\Big{(}\{m^{b}_{2k}\}_{k=1}^{\infty}\Big{)}. \tag{5.23}\]

_Remark 5.13_.: Suppose \(\mu_{a},\mu_{b}\) are two probability measures on \(\mathbb{R}_{\geq 0}\) such that for \(k=1,2,...\) \[m^{a}_{2k}=\int x^{k}d\mu_{a},\;m^{b}_{2k}=\int x^{k}d\mu_{b},\] then \(\{m^{c}_{2k}\}_{k=1}^{\infty}\) are the moments of the usual convolution of \(\mu_{a}\) and \(\mu_{b}\).

Next, we consider \(\boxplus_{q,\gamma}\) and match its asymptotic behavior with the classical and rectangular free convolutions. Before that we recall the definitions of their corresponding cumulants. For \(k\in\mathbb{Z}_{\geq 1}\), let \(\mathrm{T}^{{}^{\prime}}_{r\to m}\) denote the map sending the real sequence \(\{r_{l}\}_{l=1}^{\infty}\) of classical free cumulants to the sequence \(\{m_{k}\}_{k=1}^{\infty}\) of moments. Then there is a moment-cumulant formula: \[m_{k}=\sum_{\pi=B_{1}\sqcup...\sqcup B_{m}\in NC(k)}\prod_{i=1}^{m}r_{|B_{i}|} \tag{5.24}\] for \(\{m_{k}\}_{k=1}^{\infty}=\mathrm{T}^{{}^{\prime}}_{r\to m}(\{r_{l}\}_{l=1}^{\infty})\). See e.g. [No] for a reference. Similarly, as defined in [B1, Section 3.1], the rectangular free cumulants are a real sequence \(\{c^{q}_{l}\}_{l=1}^{\infty}\) parametrized by \(q\geq 1\), such that for \(l=1,2,...\), \(c^{q}_{2l-1}=0\), and the \(c^{q}_{2l}\) are related with the moments \(\{m_{k}\}_{k=1}^{\infty}\) by the following identities: \[m_{2k}=\sum_{\pi\in\mathfrak{NC}(2k)}q^{-e(\pi)}\prod_{B_{i}\in\pi}c^{q}_{|B_{i}|}, \tag{5.25}\] where \(e(\pi)=\#\) of blocks \(B_{i}\) with even \(\min(B_{i})\), and \(m_{2k-1}=0\) for \(k=1,2,...\) Denote the map sending even moments to rectangular free cumulants by \(\mathrm{T}^{\infty}_{m\to k}\), i.e., \(\mathrm{T}^{\infty}_{m\to k}(\{m_{2k}\}_{k=1}^{\infty})=\{c_{l}\}_{l=1}^{\infty}\).

**Theorem 5.14**.: _Given a real sequence \(\{m_{2k}\}_{k=1}^{\infty}\), \(q\geq 1\), let_ \[\{k_{l}\}_{l=1}^{\infty}=\mathrm{T}^{q,\gamma}_{m\to k}(\{m_{2k}\}_{k=1}^{\infty}),\quad r_{l}^{\gamma}=(2q\gamma)^{l-1}\cdot k_{l}.\] _Then we have the following._

_(a)._ \[\lim_{\gamma\to\infty}\{r_{l}^{\gamma}\}_{l=1}^{\infty}=\mathrm{T}^{\infty}_{m\to k}(\{m_{2k}\}_{k=1}^{\infty}).\]

_(b)._ \[\lim_{\gamma\to\infty}\{r_{l}^{\gamma}\}_{l=1}^{\infty}=\mathrm{T}^{{}^{\prime}}_{m\to r}(\{m_{2k}\}_{k=1}^{\infty})\] _when \(q=1\)._

_Remark 5.15_.: (b) is a special case of (a) when \(q=1\). Such a connection between the rectangular free convolution and the classical free convolution was first pointed out in [B1, Remark 2.2].

Proof.: It suffices to prove (a). By Theorem 5.5, \[m_{2k}=\sum_{\pi\in\mathfrak{NC}(2k)}W(\pi)\prod_{B_{i}\in\pi}k_{|B_{i}|},\] where \(C_{i}\ (i=1,2,...)\) are \(2q\gamma,\ 2\gamma+2,\ 2q\gamma+2,\ 2\gamma+4,\ 2q\gamma+4,...\).
Hence by taking \(\gamma\to\infty\), the right side above becomes \[\sum_{\pi\in\mathfrak{NC}(2k)}q^{-c(\pi)}\prod_{B_{i}\in\pi}c_{|B_{i}|}, \tag{5.26}\] where \(c(\pi):=\#\) of blocks \(B_{i}\) such that \(Q_{i}\) is odd. Since \(Q_{i}\) is odd \(\iff\) there is an odd number of elements of \(B_{1},...,B_{i-1}\) bigger than \(\max(B_{i})\) \(\iff\) there is an odd number of elements of \(B_{1},...,B_{i-1}\) smaller than \(\min(B_{i})\) \(\iff\) \(\min(B_{i})\) is even, we have \(c(\pi)=e(\pi)\), and (5.26) is equal to the right side of (5.25).

Recall also that, similar to the q-\(\gamma\) convolution, the free convolution and the rectangular free convolution are both binary operations of two probability measures linearized by their free cumulants. Therefore Theorem 5.14 implies the following.

**Corollary 5.16**.: _Given \(q\geq 1\), for two real sequences \(\{m^{a}_{2k}\}_{k=1}^{\infty}\), \(\{m^{b}_{2k}\}_{k=1}^{\infty}\), set \(m^{a}_{2k-1}=m^{b}_{2k-1}=0\) for \(k=1,2,...\), and define_ \[\{m^{c}_{k}\}_{k=1}^{\infty}:=\lim_{\gamma\to\infty}\Big{[}\{m^{a}_{k}\}_{k=1}^{\infty}\boxplus_{q,\gamma}\{m^{b}_{k}\}_{k=1}^{\infty}\Big{]}. \tag{5.27}\] _Then the rectangular free cumulants of \(\{m^{c}_{k}\}\) are given by the sum of the corresponding rectangular free cumulants of \(\{m^{a}_{k}\}_{k=1}^{\infty}\) and \(\{m^{b}_{k}\}_{k=1}^{\infty}\), i.e.,_ \[\mathrm{T}^{\infty}_{m\to k}\Big{(}\{m^{c}_{k}\}_{k=1}^{\infty}\Big{)}=\mathrm{T}^{\infty}_{m\to k}\Big{(}\{m^{a}_{k}\}_{k=1}^{\infty}\Big{)}+\mathrm{T}^{\infty}_{m\to k}\Big{(}\{m^{b}_{k}\}_{k=1}^{\infty}\Big{)}. \tag{5.28}\]

_Remark 5.17_.: Suppose \(\mu_{a},\mu_{b}\) are two symmetric probability measures on \(\mathbb{R}\) such that for \(k=1,2,...\) \[m^{a}_{k}=\int x^{k}d\mu_{a},\;m^{b}_{k}=\int x^{k}d\mu_{b},\] then \(\{m^{c}_{k}\}_{k=1}^{\infty}\) are the moments of the rectangular free convolution of \(\mu_{a}\) and \(\mu_{b}\).

_Remark 5.18_.: Similar results hold for the classical free convolution when \(q=1\).

### Law of large numbers of Laguerre \(\beta\) ensembles

For \(M\leq N\) and \(\theta=\frac{1}{2},1,2\), an \(M\times N\) Wishart matrix \(X\) is a rectangular random matrix, whose entries are real/complex/real quaternionic i.i.d. Gaussian random variables \(\mathscr{N}(0,1)\)/\(\mathscr{N}(0,1)+i\mathscr{N}(0,1)\)/\(\mathscr{N}(0,1)+i\mathscr{N}(0,1)+j\mathscr{N}(0,1)+k\mathscr{N}(0,1)\). One can check directly that \(X\) satisfies the same invariance property given in Section 2.4, with \(M\) random singular values \(\vec{x}_{M}=(x_{1,M}\geq...\geq x_{M,M}\geq 0)\). The density of \(\vec{x}_{M}\) is (see e.g. [F, Chapter 3]) \[f(\vec{x}_{M};M,N,\theta)=\frac{1}{Z_{M,N,\theta}}\prod_{i=1}^{M}\Big{[}x_{M,i}^{2\theta(N-M+1)-1}\exp(-\frac{1}{2}x_{M,i}^{2})\Big{]}\prod_{1\leq j<k\leq M}(x_{M,j}^{2}-x_{M,k}^{2})^{2\theta}, \tag{5.29}\] where \(Z_{M,N,\theta}\) is the normalizing constant. While for general \(\theta>0\) there is again no skew field of real dimension \(2\theta\), \(f(\vec{x}_{M};M,N,\theta)\) continues to make sense, and is defined as the so-called Laguerre \(\beta\) ensemble.

_Remark 5.19_.: It is easy to check that \(f(\vec{x}_{M};M,N,\theta)\) is an exponentially decaying measure as defined in Definition 2.29, and therefore by Theorem 2.31, its type BC Bessel generating function is defined and well-behaved under the action of the type BC Dunkl operators.
**Proposition 5.20**.: _Let \(G^{L}_{M,N,\theta}(z_{1},...,z_{M})\) denote the type BC Bessel generating function_ \[\int_{x_{M,1}\geq...\geq x_{M,M}\geq 0}\mathbb{B}(\vec{x}_{M},z_{1},...,z_{M};\theta,N)f(\vec{x}_{M};M,N,\theta)dx_{M,1}\cdots dx_{M,M},\] _then_ \[G^{L}_{M,N,\theta}(z_{1},...,z_{M})=\exp\Big{[}\frac{1}{2}(z_{1}^{2}+...+z_{M}^{2})\Big{]}.\]

Proof.: For \(\theta=\frac{1}{2},1,2\), one can use Definition 2.10 and check this by hand. For general \(\theta>0\), this is a special case of [Ro, Proposition 2.37.(2)], where in that identity \(y\) is set to be \(0\), and our \(\mathbb{B}(\vec{a},z_{1},...,z_{M};\theta,N)\) is a symmetric version of \(E_{k}(x,z)\); see [Ro, Definition 2.35].

For each \(M,N,\theta\), denote the random empirical measure of \(f(\vec{x}_{M};M,N,\theta)\) by \(\mu_{M,N,\theta}:=\frac{1}{M}\sum_{i=1}^{M}\delta_{x_{M,i}^{2}}\).

**Theorem 5.21**.: _As \(M\to\infty,N\to\infty,\theta\to 0,M\theta\to\gamma,N\theta\to q\gamma\),_ \[\mu_{M,N,\theta}\longrightarrow\mu_{q,\gamma}\] _weakly in moments, where \(\mu_{q,\gamma}\) is a probability measure on \(\mathbb{R}_{\geq 0}\), which is uniquely determined by its moments:_ \[m_{k}^{{}^{\prime}}=\int_{\mathbb{R}_{\geq 0}}x^{k}d\mu_{q,\gamma}=\sum_{\pi}\prod_{i=1}^{k}C_{P_{i}},\text{ for }k=1,2,... \tag{5.30}\] _where the \(C_{l}\)'s and \(P_{i}\)'s are defined in the same way as in Section 5.1, and \(\pi\) goes over all non-crossing perfect matchings of \([2k]\)._

_Remark 5.22_.: With a bit more effort (e.g., a tightness argument), it is likely that one can show the convergence of the empirical measure holds weakly in probability.

Proof.: By taking the logarithm and partial derivatives of \(G^{L}_{M,N,\theta}\), we have that \(\{\vec{x}_{M}\}\) is \(q\)-\(\gamma\)-LLN appropriate with q-\(\gamma\) cumulants \(k_{2}=1\), \(k_{l}=0\) for \(l\neq 2\). By Theorem 5.5, only the set partitions that are formed by blocks of size two survive, and in this case \(P_{i}=Q_{i}+1\). (5.30) is then specified from (5.2).

It remains to show that the moments in (5.30) do correspond to a unique probability measure. This is the so-called Stieltjes moment problem, since the (potential) corresponding measure lies on \([0,\infty)\); see e.g. [Ak]. We need to check \[\sum_{k=1}^{\infty}(m_{k}^{{}^{\prime}})^{-\frac{1}{2k}}=\infty.\] Again by (5.30), \(m_{k}^{{}^{\prime}}\) is a sum of \(\prod_{i=1}^{k}C_{P_{i}}\)'s. Among these summands the biggest term corresponds to \(P_{i}=i\) for \(i=1,2,...,k\), and \[\prod_{i=1}^{k}C_{P_{i}}\leq C(C+2)(C+4)\cdots(C+2k-2)=2^{k}\frac{\Gamma(\frac{C}{2}+k)}{\Gamma(\frac{C}{2})},\] where \(C:=\max\{2q\gamma-2,2\gamma\}\). The number of non-crossing perfect matchings of \([2k]\) is \(Cat(k)=\frac{1}{k+1}\binom{2k}{k}\), the \(k^{th}\) Catalan number. Multiplying these two gives an upper bound of \(m_{k}^{{}^{\prime}}\). By the Stirling approximation, it turns out that \[(m_{k}^{{}^{\prime}})^{\frac{1}{2k}}\leq C_{1}\cdot\sqrt{k}\] for some positive constant \(C_{1}\). Hence the series diverges.

The limiting measure \(\mu_{q,\gamma}\) is a q-\(\gamma\) analog of the Gaussian and semicircle laws, in the sense that their only nonvanishing (q-\(\gamma\)/classical/free) cumulant is \(k_{2}=1\). Moreover, the connections to the usual and free convolutions in Section 5.4 continue to hold in this special case.
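Theorem 5.21 is easy to probe by brute force: one can enumerate all non-crossing perfect matchings of \([2k]\), evaluate \(\prod_{i}C_{P_{i}}\) for each, and confirm that the number of matchings is \(Cat(k)\). Below is such a sketch (assuming Python with the sympy library; `nc_matchings` and `moment` are our illustrative names), which reproduces \(m_{1}^{\prime}=2q\gamma\) and \(m_{2}^{\prime}=(2q\gamma)^{2}+2q\gamma(2\gamma+2)\), i.e., the values of \(m_{2}\) and \(m_{4}\) from Example 5.6 with \(k_{2}=1\), \(k_{4}=0\).

```python
# Brute-force check of (5.30) (illustrative sketch only).
import sympy as sp
from math import comb

q, ga = sp.symbols('q gamma', positive=True)

def C(i):
    if i == 1:
        return 2*q*ga
    return 2*ga + i if i % 2 == 0 else 2*q*ga + i - 1

def nc_matchings(pts):
    # all non-crossing perfect matchings of the ordered points pts
    if not pts:
        yield []
        return
    for j in range(1, len(pts), 2):                 # partner of pts[0]
        for m1 in nc_matchings(pts[1:j]):           # points under the arc
            for m2 in nc_matchings(pts[j+1:]):      # points outside the arc
                yield [(pts[0], pts[j])] + m1 + m2

def moment(k):
    total = sp.Integer(0)
    for m in nc_matchings(tuple(range(1, 2*k + 1))):
        blocks = sorted(m, key=min)
        w = sp.Integer(1)
        for i, B in enumerate(blocks):
            seen = [x for blk in blocks[:i + 1] for x in blk]
            P = sum(1 for x in seen if x > min(B))  # P_i of Definition 5.2
            w *= C(P)                               # Q_i = P_i - 1 for matchings
        total += w
    return sp.expand(total)

for k in (1, 2, 3, 4):                              # counts match Cat(k)
    assert sum(1 for _ in nc_matchings(tuple(range(1, 2*k + 1)))) == comb(2*k, k)//(k + 1)
print(moment(1))   # 2*q*gamma
print(moment(2))   # (2*q*gamma)**2 + 2*q*gamma*(2*gamma + 2), expanded
```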
Indeed, one can show from (5.30) that \[\frac{m_{k}^{{}^{\prime}}}{(q\gamma)^{k}}\xrightarrow[q\gamma\to\infty]{\gamma\to 0}\sum_{\pi}\prod_{i=1}^{k}(2),\] where \(\pi\) goes over all set partitions of \([k]\) into \(k\) blocks (there is exactly one such partition), since any other non-crossing perfect matching has a coefficient containing \(C_{2}=2\gamma+2\), and therefore after rescaling by \((q\gamma)^{k}\) this term vanishes in the limit. The sum on the right is equal to \(2^{k}\), which means \(\frac{m_{k}^{{}^{\prime}}}{(q\gamma)^{k}}\to 2^{k}\), and hence, after rescaling by \(q\gamma\), \[\mu_{q,\gamma}\longrightarrow\delta_{2}\] weakly when \(q\gamma\to\infty,\gamma\to 0\).

_Remark 5.23_.: One can obtain the same result by performing the limit transition on the density of the Laguerre ensemble \(f(\vec{x}_{M};M,N,\theta)\) after a change of variables \(\lambda_{i}=x_{i}^{2}\), in the regime \(M\to\infty,N\to\infty,\theta\to 0,M\theta\to 0,N\theta\to\infty\).

On the other hand, by taking \(q=1\), \(\gamma\to\infty\), (5.30) becomes \[\frac{m_{k}^{{}^{\prime}}}{(2\gamma)^{k}}\longrightarrow\#\text{ of non-crossing perfect matchings of }[2k]=Cat(k). \tag{5.31}\] \(Cat(k)\) is exactly the \(2k^{th}\) moment of the semicircle law.

## 6. Duality between convolutions in high and low temperature

After studying the behavior of rectangular matrix additions in both high and low temperatures, we present a quantitative connection between these two regimes. Recall that in the low temperature regime, given two deterministic M-tuples \(\vec{a}\), \(\vec{b}\), the limit of \(\vec{c}=\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) is a deterministic M-tuple \(\vec{\lambda}\), where \(\vec{\lambda}\) is the \((M,N)\)-rectangular finite convolution of \(\vec{a}\) and \(\vec{b}\). For an M-tuple \(\vec{a}=(a_{1},...,a_{M})\), let \(r_{i}=a_{i}^{2}\) for \(i=1,2,...,M\). Let \(m_{k}^{{}^{\prime}}=\frac{1}{M}(r_{1}^{k}+...+r_{M}^{k})\) be the finite version of the moments, for \(k=1,2,...,M\). Then the \((M,N)\)-rectangular finite convolution of \(\vec{a}\) and \(\vec{b}\) can be thought of as a deterministic binary operation of \(\{m_{k}^{{}^{\prime}}(\vec{a})\}_{k=1}^{M}\) and \(\{m_{k}^{{}^{\prime}}(\vec{b})\}_{k=1}^{M}\). Similarly, we view the q-\(\gamma\) convolution as a deterministic binary operation of \(\{m_{k}^{a}\}_{k=1}^{\infty}\) and \(\{m_{k}^{b}\}_{k=1}^{\infty}\).

**Theorem 6.1**.: _By identifying \(M\) with \(-\gamma\), \(\frac{N}{M}\) with \(q\), and \(m_{2k}\) with \(m_{k}^{{}^{\prime}}(-N)^{k}\) for \(k=1,2,...,M\), the \((M,N)\)-rectangular convolution matches the \(q\)-\(\gamma\) convolution as a binary operation of the first \(M\) nontrivial moments._

The theorem claims that under the above identification, the moment-cumulant formulas of these two convolutions are the same, and therefore we need to introduce a version of cumulants for the rectangular finite convolution. For this we refer to [Gri], which considers the sum of two invariant \(M\times N\) (\(M=N\lambda,\ \lambda\in[0,1]\)) rectangular matrices as in Section 3, and defines the rectangular finite R-transform as the analog of the R-transform in (classical) free probability theory, in the sense that it linearizes the finite rectangular addition.
**Definition 6.2**.: _[_Gri, Definition 3.7_]__\(R_{S_{p_{A}}}^{M,\lambda}(z)\) is the unique polynomial of degree \(M\) verifying_ \[R_{S_{p_{A}}}^{M,\lambda}(z)\equiv\frac{-1}{M}z\frac{d}{dz}\ln\left(\mathbb{E}\left[e^{-T_{S_{p_{A}}}^{(N,M)}zNM}\right]\right)\ \ \ \text{mod}\ [z^{M+1}], \tag{6.1}\] _where \(T_{S_{p_{A}}}^{(N,M)}\) is a random variable. By [Gri, p13], for \(i=1,2,...,M\),_ \[\mathbb{E}\left[(T_{S_{p_{A}}}^{(N,M)})^{i}\right]=\frac{i!(m-i)!}{m!}\frac{(d-i)!}{d!}a_{i}, \tag{6.2}\] _where \(a_{i}=e_{i}(\vec{r})\), and \(d=M\), \(m=N\) in the notation of [Gri]._ Inspired by the fact that the (classical) R-transform is the generating function of the free cumulants, we define the rectangular finite cumulants by \[R_{S_{p_{A}}}^{M,\lambda}(z)=\sum_{l=1}^{M}k_{l}^{N,M}z^{l}. \tag{6.3}\] \(k_{1}^{N,M},...,k_{M}^{N,M}\) uniquely determine \(r_{1},...,r_{M}\). Proof of Theorem 6.1.: We prove that under the following identification of parameters \[\begin{split} k_{l}^{N,M}&\longleftrightarrow\frac{k_{2l}}{2}\gamma^{l-1}\text{ for }l=1,2,...,M\\ M&\longleftrightarrow-\gamma\\ \frac{N}{M}&\longleftrightarrow q\\ N^{n}a_{n}&\longleftrightarrow c_{n}\\ m_{2k}&\longleftrightarrow m_{k}^{{}^{\prime}}\cdot(-N)^{k},\end{split} \tag{6.4}\] the moment-cumulant relation in the rectangular finite convolution and (5.10) match exactly. We match the second formula of (5.10) with (6.1), which play the role of the cumulant generating function in their respective settings. Let \(y^{2}=(-M)z\); then the second formula of (5.10) becomes \[\begin{split}&\exp\Big{(}\sum_{l=1}^{\infty}\frac{k_{2l}}{2l}\gamma^{l}z^{l}\Big{)}=\sum_{k=0}^{\infty}\frac{c_{k}}{(q\gamma)_{k}(\gamma)_{k}}(-M)^{k}z^{k}\\ \Longrightarrow&\sum_{l=1}^{\infty}k_{2l}\gamma^{l-1}z^{l}=-\frac{1}{M}z\frac{d}{dz}\ln\Big{(}\sum_{k=0}^{\infty}\frac{c_{k}}{(q\gamma)_{k}(\gamma)_{k}}(-M)^{k}z^{k}\Big{)}.\end{split} \tag{6.5}\] It remains to match the right side of (6.5) with (6.1), i.e., to match \[\mathbb{E}\left[e^{-T_{S_{p_{A}}}^{(N,M)}zNM}\right]\qquad\text{with}\qquad\sum_{k=0}^{\infty}\frac{c_{k}}{(q\gamma)_{k}(\gamma)_{k}}(-M)^{k}z^{k}\] for \(k=1,2,...,M\). This follows by Taylor-expanding \(e^{-T_{S_{p_{A}}}^{(N,M)}zNM}\) and using (6.2) and (6.4). Then we identify the first formula of (5.10) with the moment generating function in the rectangular finite convolution. In the latter setting, recall that \(a_{n}=e_{n}(\vec{r})\) for \(n=1,2,...,M\), and \(m_{k}^{{}^{\prime}}=\frac{1}{M}p_{k}(\vec{r})\) for \(k=1,2,...\). Moreover, take \(r_{i}=0\) for all \(i>M\), and identify \(a_{n}\) with \(e_{n}(\vec{r})\) formally for \(n>M\) (both have value \(0\)) as well. Then on the rectangular finite addition side, \[\begin{split}&\sum_{n=0}^{\infty}N^{n}a_{n}y^{2n}=\sum_{n=0}^{\infty}e_{n}(\vec{r})(Ny^{2})^{n}\\ =&\prod_{n=1}^{\infty}(1+r_{n}Ny^{2})=\exp\Big{(}-\sum_{k=1}^{\infty}\frac{p_{k}(\vec{r})(-N)^{k}y^{2k}}{k}\Big{)}=\exp\Big{(}-M\sum_{k=1}^{\infty}\frac{m_{k}^{{}^{\prime}}(-N)^{k}y^{2k}}{k}\Big{)}.\end{split} \tag{6.6}\] This matches the first formula of (5.10) under the identification of parameters. _Remark 6.3_.: After identifying \(k_{1}^{N,M},...,k_{M}^{N,M}\) with the first \(M\) even q-\(\gamma\) cumulants, one can define \(k_{l}^{N,M}\) for \(l\geq M+1\) for the rectangular finite convolution, by the moment-cumulant relation of the q-\(\gamma\) convolution under the same parameter identification in (6.4). _Remark 6.4_.: Note that in (6.4), both \(M\) and \(\gamma\) are positive, hence there is no choice of parameters for which the finite rectangular cumulants coincide with the q-\(\gamma\) cumulants.
Instead, one can combine the domains of these two groups of parameters, and treat the result as an extension of the moment-cumulant relation to, say, \(\gamma\in\mathbb{R}_{>0}\bigcup\mathbb{Z}_{\leq-1}\). ## 7. Appendix A: Dunkl operators and hypergeometric functions In this appendix, we give a brief review of the basic setting of multivariate hypergeometric functions defined by abstract root systems, the differential operators acting on them, their connection to symmetric spaces and their spherical functions, and the limit transition to multivariate Bessel functions. The purpose is to provide a theoretical background for the particular objects appearing in this text, and to explain the connections between them. In Sections 2.2 to 2.5, we specialize from the general theory to the special cases and provide the more concrete formulas that we work with in Sections 3-6. A large part of our presentation is a simplification of [RR, Sections 2 and 3], which gives a brief and clear review of the theory with more explanations of the concepts. For a more detailed exposition of Dunkl theory, see [Ro] and [A]. For any \(M\geq 1\), consider the Euclidean space \(\mathbb{R}^{M}\) with the standard scalar product \(\langle x,y\rangle=\sum_{i=1}^{M}x_{i}y_{i}\). For \(\alpha\in\mathbb{R}^{M}\setminus\{0\}\), denote the reflection of a point \(x\) about the hyperplane \(\langle\alpha\rangle^{\perp}\) by \(\sigma_{\alpha}\), so that \[\sigma_{\alpha}(x)=x-2\frac{\langle\alpha,x\rangle}{\langle\alpha,\alpha\rangle}\alpha.\] Clearly each \(\sigma_{\alpha}\) is an element of the orthogonal group \(O(M)\). **Definition 7.1**.: _A root system \(R\) is a finite set of vectors in \(\mathbb{R}^{M}\setminus\{0\}\), such that \(\sigma_{\alpha}(R)=R\) for all \(\alpha\in R\). We say \(R\) is irreducible if it cannot be decomposed into two disjoint subsets whose elements are mutually orthogonal. \(R\) is crystallographic if for any \(\alpha,\beta\in R\) we have_ \[\frac{2\langle\alpha,\beta\rangle}{\langle\beta,\beta\rangle}\in\mathbb{Z}.\] \(\{\sigma_{\alpha}\}_{\alpha\in R}\) _generates a subgroup of \(O(M)\), which is called the Weyl group of the root system \(R\) and records the symmetries of \(R\)._ _Each root system can be written as a disjoint union \(R=R_{+}\bigcup(-R_{+})\), such that \(R_{+}\) and \(-R_{+}\) are separated by some hyperplane through the origin. We call \(R_{+}\) the positive part of \(R\)._ _Remark 7.2_.: The choice of \(R_{+}\) is not unique, but any two choices are related by a linear transformation. _Remark 7.3_.: In this text, we do not require \(R\) to be reduced, i.e., that \(R\bigcap\mathbb{R}\alpha=\{\pm\alpha\}\) for all \(\alpha\in R\). **Example 7.4**.: _In practice, one cares mostly about the classical root systems of types A-D. The following two crystallographic root systems appear in random matrices: the root system \(A_{M}\), \(M=1,2,...\), that is,_ \[R=\{e_{i}-e_{j},1\leq i<j\leq M\},\] _and the root system \(BC_{M}\), \(M=1,2,...\), that is,_ \[R=\{\pm e_{i},1\leq i\leq M,\pm 2e_{i},1\leq i\leq M,\pm e_{i}\pm e_{j},1\leq i<j\leq M\},\] _where \(e_{i}\), \(i=1,2,...,M\), denotes the \(i^{th}\) standard basis vector in \(\mathbb{R}^{M}\)._ After recalling the notion of a root system, we are now able to give the definition of the hypergeometric and Bessel functions, which were introduced in a series of works by Heckman and Opdam; see [HS], [O1], [O2] for details.
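As a quick numerical check of Definition 7.1 and Example 7.4 (our illustration, not part of the original text), the following Python sketch builds the root system \(BC_{M}\) and verifies that every reflection \(\sigma_{\alpha}\) permutes \(R\):

```python
import itertools
import numpy as np

def bc_roots(M):
    """Return the BC_M root system from Example 7.4 as a list of vectors."""
    roots = []
    for i in range(M):
        e_i = np.eye(M)[i]
        roots += [e_i, -e_i, 2 * e_i, -2 * e_i]
    for i, j in itertools.combinations(range(M), 2):
        e_i, e_j = np.eye(M)[i], np.eye(M)[j]
        for s, t in itertools.product([1, -1], repeat=2):
            roots.append(s * e_i + t * e_j)
    return roots

def reflect(alpha, x):
    """sigma_alpha(x) = x - 2 <alpha, x> / <alpha, alpha> * alpha."""
    return x - 2 * np.dot(alpha, x) / np.dot(alpha, alpha) * alpha

M = 3
R = bc_roots(M)
R_set = {tuple(np.round(v, 10)) for v in R}
# Definition 7.1: each reflection sigma_alpha must map the root system to itself.
for alpha in R:
    assert {tuple(np.round(reflect(alpha, beta), 10)) for beta in R} == R_set
print(f"BC_{M} has {len(R)} roots and is stable under all its reflections.")
```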
Fix \(M\in\mathbb{Z}_{\geq 1}\), take \(\mathfrak{A}\) to be an \(M\)-dimensional Euclidean space, let \(\mathfrak{A}_{\mathbb{C}}\) be the complexification of \(\mathfrak{A}\), which is isomorphic to \(\mathbb{C}^{M}\), and let \(R\) be a crystallographic root system on \(\mathfrak{A}\) with Weyl group \(W\), let \(R^{+}\) be the positive part of \(R\), and let \(P^{+}\) be the set of dominant weights associated with \(R^{+}\), i.e., \[P^{+}:=\Big{\{}\lambda\in\mathfrak{A}:\frac{\langle\lambda,\alpha\rangle}{\langle\alpha,\alpha\rangle}\in\mathbb{Z}^{+}\text{ for all }\alpha\in R^{+}\Big{\}}.\] _Remark 7.5_.: View a partition \((\lambda_{1},...,\lambda_{M})\) as a vector with nonnegative integer entries. For the root system of type A, \(P^{+}\) can be identified with the set of all partitions of length at most \(M\), and for the root system of type BC, \(P^{+}\) can be identified with the set of all even partitions of length at most \(M\). For \(\mu,\lambda\in P^{+}\), we write \(\mu\leq\lambda\) if \(\mu_{i}\leq\lambda_{i}\) for all \(i=1,2,...,M\). A root multiplicity function \(m:\alpha\mapsto m_{\alpha}\) on \(R\) is a \(W\)-invariant map which assigns a real number to each root in \(R\). For \(\alpha\in R\), let \[\rho=\rho(m):=\frac{1}{2}\sum_{\alpha\in R^{+}}m_{\alpha}\alpha, \tag{7.1}\] and let \(\alpha_{i},\rho_{i}\) be the \(i^{th}\) components of \(\alpha,\rho\) respectively. Let \(s_{\alpha}\) be the reflection operator about \(\{\alpha\}^{\perp}\), acting on functions by \(s_{\alpha}f(x):=f(\sigma_{\alpha}(x))\). The following differential-reflection operators act on smooth functions on \(\mathfrak{A}\). **Definition 7.6**.: _For \(i=1,2,...,M\), given a root multiplicity function \(m_{\alpha}\), the trigonometric Dunkl operator associated with \(R\) and \(m\) is_ \[T_{i}=\partial_{i}+\sum_{\alpha\in R^{+}}m_{\alpha}\frac{\alpha_{i}}{1-e^{-2\langle\alpha,\cdot\rangle}}(1-s_{\alpha})-\rho_{i}. \tag{7.2}\] Let \(S(\mathfrak{A}_{\mathbb{C}})\) denote the space of complex polynomials \(p\) in \(M\) variables such that, when the \(i^{th}\) variable is identified with the \(i^{th}\) standard basis vector \(e_{i}\) in \(\mathfrak{A}\), \(p\in S(\mathfrak{A}_{\mathbb{C}})\) is invariant under the action of \(W\). **Definition 7.7**.: _For \(\lambda\in\mathfrak{A}_{\mathbb{C}}\), the hypergeometric function associated with \(R\) is an analytic \(W\)-invariant function \(F_{\lambda}(x;m)\) on \(\mathfrak{A}\), such that for each \(p\in S(\mathfrak{A}_{\mathbb{C}})\),_ \[p(T)F_{\lambda}=p(\lambda)F_{\lambda}. \tag{7.3}\] _We set \(F_{\lambda}(0;m)=1\)._ **Theorem 7.8**.: _[_HS_]_ _There exists an open set of root multiplicity functions \(M_{reg}\), which contains all nonnegative \(m\)'s, such that if \(m\in M_{reg}\), then for each \(\lambda\in\mathfrak{A}_{\mathbb{C}}\) there exists a unique function \(F_{\lambda}(z;m)\) satisfying Definition 7.7. Moreover, \(F:\mathfrak{A}_{\mathbb{C}}\times M_{reg}\times\mathfrak{A}\to\mathbb{C}\) is analytic._ Similar to the hypergeometric functions, the multivariate Bessel functions are also defined as \(W\)-invariant eigenfunctions of certain differential operators, the rational Dunkl operators.
**Definition 7.9**.: _For \(i=1,2,...,M\), given a root multiplicity function \(m_{\alpha}\), the rational Dunkl operator associated with \(R\) and \(m\) is_ \[D_{i}=\partial_{i}+\sum_{\alpha\in R^{+}}m_{\alpha}\frac{\alpha_{i}}{2\langle\alpha,\cdot\rangle}(1-s_{\alpha}). \tag{7.4}\] **Definition 7.10**.: _For \(\lambda\in\mathfrak{A}_{\mathbb{C}}\), \(m\geq 0\), the Bessel function associated with \(R\) is an analytic \(W\)-invariant function \(f_{\lambda}(x;m)\) on \(\mathfrak{A}\), such that for each \(p\in S(\mathfrak{A}_{\mathbb{C}})\),_ \[p(D)f_{\lambda}=p(\lambda)f_{\lambda}. \tag{7.5}\] Note that both \(F_{\lambda}\) and \(f_{\lambda}\) are \(W\)-invariant in both \(\lambda\) and \(z\). Indeed, Bessel functions can be obtained from hypergeometric functions by a limit transition. This is simply because, under the same limit transition, the trigonometric Dunkl operator \(T_{i}\) converges to the corresponding rational Dunkl operator \(D_{i}\). **Proposition 7.11**.: _[_10_]__, [_A, Section 4.4_]_ _For \(\lambda\in\mathfrak{A}_{\mathbb{C}}\), \(m\geq 0\),_ \[f_{\lambda}(z;m)=\lim_{\epsilon\to 0}F_{\epsilon^{-1}\lambda}(\epsilon z;m). \tag{7.6}\] From now on, consider only \(\lambda\in P^{+}\). Let \(M_{\lambda}:=\sum_{\mu\in W\lambda}e^{i\langle\mu,\cdot\rangle}\) be the trigonometric symmetric monomial on \(\mathbb{T}\) indexed by \(\lambda\), where \(\mathbb{T}\) is a torus obtained as a quotient space of \(\mathfrak{A}\), on which \(e^{i\langle\mu,\cdot\rangle}\) is periodic. For simplicity take \(m\) to be a nonnegative root multiplicity function (which is in \(M_{reg}\) by Theorem 7.8). Let \[w_{m}(z):=\prod_{\alpha\in R^{+}}\Bigl{|}e^{i\langle\alpha,z\rangle}-e^{-i\langle\alpha,z\rangle}\Bigr{|}^{m_{\alpha}}. \tag{7.7}\] **Definition 7.12**.: _The Jacobi polynomials (Heckman-Opdam polynomials) associated with \(R\) and \(m\geq 0\) are a collection of functions \(\mathfrak{J}_{\lambda}\) on \(\mathbb{T}\) indexed by \(\lambda\in P^{+}\), where_ \[\mathfrak{J}_{\lambda}(\cdot;m)=\sum_{\mu\in P^{+},\ \mu\leq\lambda}c_{\lambda\mu}(m)M_{\mu},\] _and the coefficients \(c_{\lambda\mu}(m)\) are uniquely determined by_ 1. \(c_{\lambda\lambda}(m)=1\)__ 2. \(\mathfrak{J}_{\lambda}\)_'s are mutually orthogonal in_ \(L^{2}(\mathbb{T};w_{m})\)_._ Note that the \(\mathfrak{J}_{\lambda}\)'s form an orthogonal basis of \(L^{2}(\mathbb{T};w_{m})^{W}\), the subspace of \(W\)-invariant elements in \(L^{2}(\mathbb{T};w_{m})\). **Example 7.13**.: _When taking \(R\) to be the type A root system, the Jacobi polynomials of type A are the Jack polynomials \(P_{\lambda}(\vec{x};\theta)\in\Lambda_{M}\), where \(x_{i}\) is identified with \(e^{iz_{i}}\)._ The following important result identifies each hypergeometric function with a Jacobi polynomial when the corresponding weight \(\lambda\) lies in \(P^{+}\). **Theorem 7.14**.: _[_18_, Section 4.4]_ _For all \(\lambda\in P^{+}\), \(m\geq 0\), the function \(F_{\lambda+\rho}(x;m)\) extends holomorphically to \(\mathfrak{A}_{\mathbb{C}}\), and_ \[F_{\lambda+\rho}(iz;m)=c(\lambda+\rho,m)\mathfrak{J}_{\lambda}(z;m), \tag{7.8}\] _where \(c(\lambda+\rho,m)\) is a constant depending on \(\lambda\) and \(m\)._ _Remark 7.15_.: For some multiplicity functions \(m\) that are not nonnegative, as long as the \(L^{2}\) kernel \(w_{m}(x)\) is integrable on \(\mathbb{T}\), the \(\mathfrak{J}_{\lambda}\)'s are still well-defined and Theorem 7.14 holds for such \(m\). See Section 2.2.
When the root multiplicity function \(m\) takes certain special values and \(\lambda\in P^{+}\), the Jacobi polynomials \(\mathfrak{J}_{\lambda}\) can be identified with the spherical functions on one of the classical symmetric spaces. The theory of symmetric spaces is classical, and the standard references are [Hel1], [Hel2], which discuss the classification problem, representation theory, and analytic properties of symmetric spaces. In short, a Riemannian symmetric space is a certain quotient of classical Lie groups \(G/K\) or \(U/K\), where \(G\) is noncompact, \(U\) is compact, and \(K\) is a compact subgroup. \(G/K\) and \(U/K\) are of the so-called noncompact type and compact type, respectively, and there is a duality between each noncompact symmetric space and a compact symmetric space. Let \(G/K\) and \(U/K\) be dual to each other, let \(g,u,l\) denote the Lie algebras of \(G,U,K\) respectively, and let \(\mathfrak{A}\) denote the maximal abelian subspace in \(g/l\); then by duality \(i\mathfrak{A}\) is the maximal abelian subspace in \(u/l\). The restricted root system of \(G/K\) (or \(U/K\)) lives on \(\mathfrak{A}\) (or \(i\mathfrak{A}\)), and the two share the same root multiplicities. In Appendix B we list the (restricted) root multiplicity functions of several compact symmetric spaces that are connected to random matrices. For a noncompact symmetric space \(G/K\), its spherical functions are defined as the nonzero \(K\)-biinvariant functions \(\phi_{\lambda}:G\to\mathbb{C}\), indexed by \(\lambda\in\mathfrak{A}_{\mathbb{C}}\), that are eigenfunctions of every so-called invariant differential operator on \(G/K\). For a compact symmetric space \(U/K\) with restricted root system \(R\), its spherical functions \(\psi:U\to\mathbb{C}\) are indexed by the highest weights of the unitary irreducible \(K\)-spherical representations \(\pi_{\lambda}\) of \(U\) (those representations with a \(1\)-dimensional invariant subspace \(V^{K}_{\lambda}\)), where the set of all highest weights is identified with \(P^{+}(R)\), and each spherical function is given by \[\psi_{\lambda}(u)=\langle\pi_{\lambda}(u)e_{\lambda},e_{\lambda}\rangle,\] where \(e_{\lambda}\) is the unique unit vector in \(V^{K}_{\lambda}\). For a spherical function \(\phi\) of \(G/K\), by the Cartan decomposition and \(K\)-biinvariance, \(\phi\) is determined by its values on \(\mathfrak{A}\), and the same holds for \(\psi\) of \(U/K\). Moreover we have the following result. **Theorem 7.16**.: _For \(x\in\mathfrak{A}\) and any \(\lambda\in P^{+}\),_ \[\psi_{\lambda}(\exp(ix))=\phi_{\lambda+\rho}(\exp(ix))=F_{\lambda+\rho}(ix;m)=c(\lambda+\rho,m)\mathfrak{J}_{\lambda}(x;m). \tag{7.9}\] The first equality is given in [Hel2], and the second equality is [HS, Theorem 5.2.2]. Because of this identification, we no longer distinguish the spherical functions (indexed by \(\lambda\in P^{+}\)) of noncompact and compact symmetric spaces, and treat them simply as analytic functions on the \(M\)-dimensional Euclidean space \(\mathfrak{A}\). _Remark 7.17_.: The Heckman-Opdam Laplacian is an analog of the usual Laplace operator on \(\mathbb{R}^{M}\), given by \(p_{2}(T)=\sum_{i=1}^{M}T_{i}^{2}\). In the case of Theorem 7.16, \[p_{2}(T)=\Delta+\langle\rho,\rho\rangle,\] where \(\Delta\) is the Laplace-Beltrami operator on \(G/K\) (or \(U/K\)). Moreover, the limit transition in Proposition 7.11 specializes to a contraction of a Riemannian symmetric space to the corresponding Euclidean symmetric space, and to the convergence of the spherical functions of the former to those of the latter.
See [SO2, Theorem 3.4]. The spherical functions of Riemannian and Euclidean symmetric spaces can both be written as the so-called Harish-Chandra integrals; see [Hel2, Chapter IV] or [SO2, Section 2] for their explicit forms. We give calculations of the Harish-Chandra integrals corresponding to several Euclidean symmetric spaces in Appendix C. ## 8. Appendix B: Root multiplicities Let \(U\) be a classical compact Lie group, and let \(K\) be a Lie subgroup of \(U\). We list several examples in which \(U/K\) is a compact Riemannian symmetric space of rank \(M\), and give its root multiplicities. For a complete list of the classification of irreducible Riemannian symmetric spaces, see [Hel1, Chapter X]. Let \(O(M),U(M),Sp(M)\) denote the \(M\times M\) orthogonal/unitary/compact symplectic group. Let \(M\leq N\), \(1\leq i\neq j\leq M\), and let \(\{e_{i}\}_{i=1}^{M}\) be the standard basis of \(\mathfrak{A}=\mathbb{R}^{M}\). \begin{tabular}{|l||l|l|l|l|} \hline Compact symmetric space & \(m_{e_{i}-e_{j}}\) & \(m_{\pm e_{i}\pm e_{j}}\) & \(m_{\pm e_{i}}\) & \(m_{\pm 2e_{i}}\) \\ \hline \(U(M+1)/O(M+1)\) & \(1\) & \(0\) & \(0\) & \(0\) \\ \(U(M+1)\) & \(2\) & \(0\) & \(0\) & \(0\) \\ \(U(2M+2)/Sp(M+1)\) & \(4\) & \(0\) & \(0\) & \(0\) \\ \(O(N+M)/O(N)\times O(M)\) & \(0\) & \(1\) & \(N-M\) & \(0\) \\ \(U(N+M)/U(N)\times U(M)\) & \(0\) & \(2\) & \(2(N-M)\) & \(1\) \\ \(Sp(N+M)/Sp(N)\times Sp(M)\) & \(0\) & \(4\) & \(4(N-M)\) & \(3\) \\ \hline \end{tabular} Now let \(m\) be a root multiplicity function such that \(m_{\pm e_{i}\pm e_{j}}=2\theta\), \(m_{\pm e_{i}}=2\theta(N-M)\), \(m_{\pm 2e_{i}}=2\theta-1\); the \(\theta=\frac{1}{2},1,2\) cases correspond to the last three rows in the above table. On the other hand, \(m_{e_{i}-e_{j}}=2\theta\) corresponds to the first three rows when \(\theta=\frac{1}{2},1,2\); the Bessel functions with these root multiplicities are of type A and were studied in [BCG]. ## 9. Appendix C: Power series expressions of Harish-Chandra integrals In this appendix we consider matrix integrals which appear as spherical functions of certain Euclidean-type symmetric spaces whose root systems are of type A or type BC. These so-called Harish-Chandra integrals originated in representation theory, have gained independent interest in physics and in the theory of special functions, and have been studied intensively. One goal is to obtain explicit expressions for these integrals. There is a large body of literature on this problem, but different authors study Harish-Chandra integrals of different forms, and the explicit expressions they provide, as well as their level of explicitness, vary considerably. In the remaining pages, we try to give a relatively systematic summary of explicit expressions of Harish-Chandra integrals related to several classical symmetric spaces, highlight their connections with some classical random matrix ensembles, and give the expressions in terms of symmetric polynomials. While the results and proof ingredients here are well known to experts, parts of them may not have been formally published and may be helpful to readers. Fix \(N\in\mathbb{Z}_{>0}\). Let \(U\) denote the compact orthogonal/unitary/unitary symplectic Lie group \(O(N)/U(N)/Sp(N)\), and let \(dU\) be the corresponding Haar measure, with the parameter \(\theta=\frac{1}{2},1,2\) in the respective cases.
**Proposition 9.1**.: _[_F_, Proposition 13.4.1]_ _For \(N\geq 2\), \(\theta=\frac{1}{2},1,2\), let \(A=diag(a_{1},...,a_{N}),Z=diag(z_{1},...,z_{N})\); then_ \[\int\exp(Tr(ZUAU^{-1}))dU=\sum_{\mu}\frac{1}{H(\mu)}\frac{P_{\mu}(a_{1},...,a_{N};\theta)P_{\mu}(z_{1},...,z_{N};\theta)}{P_{\mu}(1^{N};\theta)}. \tag{9.1}\] For \(\theta=\frac{1}{2},1,2\), (9.1) corresponds, via a limit transition, to the spherical functions of the compact symmetric spaces \(U(N)/O(N),U(N),U(2N)/Sp(N)\) of type \(A_{N-1}\), where \(\theta\) is the (restricted) root multiplicity \(k_{e_{i}-e_{j}}=\frac{1}{2}m_{e_{i}-e_{j}}\) (\(1\leq i\neq j\leq N\)). See [OO1, Section 4]. In a probabilistic context, (9.1) arises as the "characteristic function" of \(N\times N\) real/complex/real quaternionic self-adjoint random matrices whose distribution is invariant under unitary conjugations. The typical examples are GOE/GUE/GSE. See [GM], [BCG] for more details. [OV, Section 4] gives a proof of Proposition 9.1 for the case \(\theta=1\). Inspired by their approach, we provide another proof of Theorem 2.18 along the same lines. Fix \(M\leq N\), \(\theta=\frac{1}{2},1,2\), and define \[\Lambda=\begin{bmatrix}a_{1}&&&&0&...&0\\ &a_{2}&&&&0&...&0\\ &&...&&&&\\ &&&...&&&\\ &&&a_{M}&0&...&0\end{bmatrix}_{M\times N},\] \[Z=\begin{bmatrix}z_{1}&&&&\\ &z_{2}&&&&\\ &&...&&&\\ &&&...&\\ &&&z_{M}\\ 0&...&0\\ &&...&\\ 0&...&0\end{bmatrix}_{N\times M},\] where \(U\in O(M)/U(M)/Sp(M)\), \(V\in O(N)/U(N)/Sp(N)\) are integrated under the Haar measures. **Lemma 9.2**.: _[_M_, Chapter I, (7.8)]_ _For \(m\in\mathbb{Z}_{\geq 1}\), expand \(p_{1}^{m}\) in terms of Schur polynomials, i.e.,_ \[p_{1}^{m}=\sum_{|\lambda|=m}C_{m}^{\lambda}S_{\lambda}. \tag{9.2}\] _Then_ \[C_{m}^{\lambda}=\frac{m!}{\prod_{s\in\lambda}[a(s)+l(s)+1]}. \tag{9.3}\] _Remark 9.3_.: \(C_{m}^{\lambda}\) can be interpreted in terms of both the representation theory of the symmetric group and combinatorics: \(C_{m}^{\lambda}=\chi_{\lambda}(1^{m})=dim_{S_{m}}(\lambda)\), the value of the character of \(S_{m}\) at the identity, which is equal to the number of standard Young tableaux of shape \(\lambda\). **Proposition 9.4**.: _[_F_, Proposition 13.4.1]_ _For \(\theta=\frac{1}{2},1,2\),_ \[\int dU\int dV\ \exp(Tr(U\Lambda VZ+Z^{*}V^{*}\Lambda^{*}U^{*}))\] \[= \sum_{\mu}\prod_{j=1}^{M}\frac{\Gamma(\theta N-\theta(j-1))}{\Gamma(\theta N-\theta(j-1)+\mu_{j})}\frac{1}{H(\mu)}\frac{P_{\mu}(a_{1}^{2},\cdots,a_{M}^{2};\theta)P_{\mu}(z_{1}^{2},\cdots,z_{M}^{2};\theta)}{P_{\mu}(1^{M};\theta)}, \tag{9.4}\] _where \(H(\mu)\) is defined in (2.7)._ _Remark 9.5_.: _[_F_, Chapter 13] provides a different, self-contained proof of Propositions 9.1 and 9.4._ Proof.: Throughout this proof, for a symmetric polynomial \(f(x_{1},...,x_{M})\) and an \(M\times M\) matrix \(X\), let \(f(X)\) be the value of \(f\) evaluated at the eigenvalues of \(X\). For \(\theta=1\) this integral and its various generalizations were well studied in the physics literature, e.g., in [GT], [SW], [GW]. [GT] gives the same power series expansion as the right side, which degenerates to \[\sum_{\mu}\frac{\prod_{i=1}^{M}(N-i)!}{\prod_{i=1}^{M}(N-i+\mu_{i})!}\frac{\prod_{i=1}^{M}(M-i)!}{\prod_{i=1}^{M}(M-i+\mu_{i})!}S_{\mu}(a_{1}^{2},...,a_{M}^{2})S_{\mu}(b_{1}^{2},...,b_{M}^{2}),\] and its proof relies on the well-known fact that the Schur polynomials \(s_{\mu}(x_{1},...,x_{N})\) are the characters of \(U(N)\) and \(SL_{N}(\mathbb{R})\).
For \(\theta=\frac{1}{2}\), \[\exp(Tr(U\Lambda VZ+Z^{*}V^{*}\Lambda^{*}U^{*}))=\exp(Tr(2U\Lambda VZ))=\sum_{l(\mu)\leq M}\frac{C_{|\mu|}^{\mu}}{|\mu|!}S_{\mu}(2U\Lambda VZ).\] By [M, Chapter VII, Section 3, (2.23)] and [M, Chapter VI, (10.22)], \[\int S_{\mu}(U\Lambda VZ)dU=\begin{cases}0&\mu\text{ is not even;}\\ \Omega_{\mu}(\Lambda VZ)=\frac{P_{\lambda}(Z^{T}V^{T}\Lambda^{T}\Lambda VZ;\frac{1}{2})}{P_{\mu}(1^{M};\frac{1}{2})}&\mu\text{ is even, }\mu=2\lambda.\end{cases} \tag{9.5}\] \[\int P_{\lambda}(Z^{T}V^{T}\Lambda^{T}\Lambda VZ;\frac{1}{2})dV= \int P_{\lambda}(ZZ^{T}V^{T}\Lambda^{T}\Lambda V;\frac{1}{2})dV\] \[= \frac{P_{\lambda}(ZZ^{T};\frac{1}{2})P_{\lambda}(\Lambda^{T}\Lambda;\frac{1}{2})}{P_{\lambda}(1^{N};\frac{1}{2})}=\frac{P_{\lambda}(Z^{T}Z;\frac{1}{2})P_{\lambda}(\Lambda\Lambda^{T};\frac{1}{2})}{P_{\lambda}(1^{N};\frac{1}{2})}, \tag{9.6}\] where the second equality holds by [M, VII.4.2]. Then \[\int\int\exp(Tr(U\Lambda VZ+Z^{*}V^{*}\Lambda^{*}U^{*}))dUdV\] \[= \sum_{\lambda}\frac{C_{2|\lambda|}^{2\lambda}}{|\lambda|!}\frac{4^{|\lambda|}}{P_{\lambda}(1^{M};\frac{1}{2})P_{\lambda}(1^{N};\frac{1}{2})}P_{\lambda}(Z^{T}Z;\frac{1}{2})P_{\lambda}(\Lambda\Lambda^{T};\frac{1}{2})\] \[= \sum_{\lambda}\frac{C_{2|\lambda|}^{2\lambda}}{|\lambda|!}\frac{4^{|\lambda|}}{P_{\lambda}(1^{N};\frac{1}{2})}\frac{1}{P_{\lambda}(1^{M};\frac{1}{2})}P_{\lambda}(a_{1}^{2},...,a_{M}^{2};\frac{1}{2})P_{\lambda}(z_{1}^{2},...,z_{M}^{2};\frac{1}{2})\] \[= \sum_{\lambda}\frac{4^{|\lambda|}}{\prod_{s\in 2\lambda}[a(s)+l(s)+1]}\frac{\prod_{s\in\lambda}[a(s)+\frac{1}{2}l(s)+\frac{1}{2}]}{\prod_{s\in\lambda}[\frac{N}{2}+j-1-\frac{1}{2}(i-1)]}\frac{1}{P_{\lambda}(1^{M};\frac{1}{2})}P_{\lambda}(a_{1}^{2},...,a_{M}^{2};\frac{1}{2})P_{\lambda}(z_{1}^{2},...,z_{M}^{2};\frac{1}{2})\] \[= \sum_{\lambda}\prod_{i=1}^{M}\frac{\Gamma(\frac{N}{2}-\frac{1}{2}(i-1))}{\Gamma(\frac{1}{2}N-\frac{1}{2}(i-1)+\lambda_{i})}\frac{1}{\prod_{s\in\lambda}[a(s)+1+\frac{1}{2}l(s)]}\frac{P_{\lambda}(a_{1}^{2},\cdots,a_{M}^{2};\frac{1}{2})P_{\lambda}(z_{1}^{2},\cdots,z_{M}^{2};\frac{1}{2})}{P_{\lambda}(1^{M};\frac{1}{2})}\] where the third equality follows from Lemma 9.2 and (2.11). For \(\theta=2\), let \(\eta\) denote the map embedding the real quaternions into \(M_{2\times 2}(\mathbb{C})\), such that for \(x=a+bi+cj+dk\), \(a,b,c,d\in\mathbb{R}\), \[\eta(x)=\begin{bmatrix}a+bi&c+di\\ -c+di&a-bi\end{bmatrix}. \tag{9.7}\] Similarly, \(\eta\) embeds \(GL_{M}(Sp)\) into \(GL_{2M}(\mathbb{C})\) by \[\eta(X)=[\eta(X_{ij})]_{1\leq i,j\leq M}. \tag{9.8}\] \[\exp(Tr(U\Lambda VZ+Z^{*}V^{*}\Lambda^{*}U^{*}))=\exp(Tr(\eta(U\Lambda VZ)))=\sum_{l(\mu)\leq M}\frac{C^{\mu}_{|\mu|}}{|\mu|!}S_{\mu}(\eta(U\Lambda VZ)).\] By [M, Chapter VII, (6.13), (6.14), (6.20), Exercise 2.7 and Chapter VI, (10.22)], \[\int S_{\mu}(\eta(U\Lambda VZ))dU=\begin{cases}0&\mu^{\prime}\text{ is not even;}\\ \Omega_{\mu}(\Lambda VZ)=\frac{P_{\lambda}(Z^{*}V^{*}\Lambda^{*}\Lambda VZ;2)}{P_{\mu}(1^{M};2)}&\mu^{\prime}\text{ is even, }\mu=\lambda\cup\lambda,\end{cases} \tag{9.9}\] where \(*\) denotes the conjugate transpose of quaternionic matrices. \[\begin{split}&\int P_{\lambda}(Z^{*}V^{*}\Lambda^{*}\Lambda VZ;2)dV=\int P_{\lambda}(ZZ^{*}V^{*}\Lambda^{*}\Lambda V;2)dV\\ =&\frac{P_{\lambda}(ZZ^{*};2)P_{\lambda}(\Lambda^{*}\Lambda;2)}{P_{\lambda}(1^{N};2)}=\frac{P_{\lambda}(Z^{*}Z;2)P_{\lambda}(\Lambda\Lambda^{*};2)}{P_{\lambda}(1^{N};2)},\end{split} \tag{9.10}\] where the second equality holds by [M, Chapter VII, Exercise 6.4].
Then \[\begin{split}&\int\int\exp(Tr(U\Lambda VZ+Z^{*}V^{*}\Lambda^{*}U^{*}))dUdV\\ =&\sum_{\lambda}\frac{C^{\lambda\cup\lambda}_{|\lambda\cup\lambda|}}{|\lambda\cup\lambda|!}\frac{1}{P_{\lambda}(1^{M};2)P_{\lambda}(1^{N};2)}P_{\lambda}(Z^{*}Z;2)P_{\lambda}(\Lambda\Lambda^{*};2)\\ =&\sum_{\lambda}\frac{C^{\lambda\cup\lambda}_{|\lambda\cup\lambda|}}{|\lambda\cup\lambda|!}\frac{1}{P_{\lambda}(1^{N};2)}\frac{1}{P_{\lambda}(1^{M};2)}P_{\lambda}(a_{1}^{2},...,a_{M}^{2};2)P_{\lambda}(z_{1}^{2},...,z_{M}^{2};2)\\ =&\sum_{\lambda}\frac{1}{\prod_{s\in\lambda\cup\lambda}[a(s)+l(s)+1]}\frac{\prod_{s\in\lambda}[a(s)+2l(s)+2]}{\prod_{s\in\lambda}[2N+j-1-2(i-1)]}\frac{1}{P_{\lambda}(1^{M};2)}P_{\lambda}(a_{1}^{2},...,a_{M}^{2};2)P_{\lambda}(z_{1}^{2},...,z_{M}^{2};2)\\ =&\sum_{\lambda}\prod_{i=1}^{M}\frac{\Gamma(2N-2(i-1))}{\Gamma(2N-2(i-1)+\lambda_{i})}\frac{1}{\prod_{s\in\lambda}[a(s)+1+2l(s)]}\frac{P_{\lambda}(a_{1}^{2},\cdots,a_{M}^{2};2)P_{\lambda}(z_{1}^{2},\cdots,z_{M}^{2};2)}{P_{\lambda}(1^{M};2)},\end{split}\] where the third equality again follows from Lemma 9.2 and (2.11). \(\square\) ## 10. Appendix D: Limit transition of type BC Bessel functions In this section we provide a limit transition of \(\mathbb{B}(\vec{a},z_{1},...,z_{M};\theta,N)\) to a simple symmetric combination of exponentials. This transition implies that in the \(\theta\to 0\) regime, the rectangular addition \(\vec{a}\boxplus_{M,N}^{\theta}\vec{b}\) becomes the usual convolution of the empirical measures \(\frac{1}{M}\sum_{i=1}^{M}\delta_{a_{i}^{2}}\) and \(\frac{1}{M}\sum_{i=1}^{M}\delta_{b_{i}^{2}}\). **Proposition 10.1**.: _Given \(\vec{a}=(a_{1}\geq a_{2}\geq...\geq a_{M})\), take \(M\) to be fixed, \(N\to\infty,\theta\to 0,N\theta\to\infty\); then_ \[\mathbb{B}(\vec{a},N\theta z_{1},...,N\theta z_{M};\theta,N)\longrightarrow\frac{1}{M!}\sum_{\sigma\in S_{M}}\prod_{i=1}^{M}e^{a_{i}^{2}z_{\sigma(i)}^{2}}. \tag{10.1}\] Proof.: This follows from a straightforward calculation. Indeed, by Proposition 2.16, \[\mathbb{B}(\vec{a},N\theta z_{1},...,N\theta z_{M};\theta,N)=\sum_{\mu\in\operatorname{YD}}\prod_{j=1}^{l(\mu)}\frac{(N\theta)^{\mu_{j}}}{[\theta(N-j+1)]\cdots[\theta(N-j+1)+\mu_{j}-1]}\] \[\cdot\frac{\prod_{s\in\mu}\Big{[}a(s)+\theta l(s)+\theta\Big{]}}{\prod_{s\in\mu}\Big{[}M\theta+(j-1)-\theta(i-1)\Big{]}}\cdot\frac{1}{\prod_{s\in\mu}\Big{[}a(s)+1+\theta l(s)\Big{]}}P_{\mu}(a_{1}^{2},...,a_{M}^{2};\theta)P_{\mu}(z_{1}^{2},...,z_{M}^{2};\theta). \tag{10.2}\] When taking the limit in the above way, \[\prod_{j=1}^{l(\mu)}\frac{(N\theta)^{\mu_{j}}}{[\theta(N-j+1)]\cdots[\theta(N-j+1)+\mu_{j}-1]}\longrightarrow 1,\] and \[\frac{1}{\prod_{s\in\mu}\Big{[}a(s)+1+\theta l(s)\Big{]}}\longrightarrow\prod_{j=1}^{l(\mu)}\frac{1}{\mu_{j}!}.\] Also note that \(a(s)+\theta l(s)+\theta\) goes to \(0\) only when \(a(s)=0\), and \(M\theta+(j-1)-\theta(i-1)\) goes to \(0\) only when \(j=1\). These terms contribute \[\prod_{i\geq 1}k_{i}!\cdot\frac{\Big{(}M-l(\mu)\Big{)}!}{M!}=\prod_{i\geq 0}k_{i}!\cdot\frac{1}{M!},\] where \(k_{i}\) denotes the number of rows in \(\mu\) of length \(i\). The remaining part of \[\frac{\prod_{s\in\mu}\Big{[}a(s)+\theta l(s)+\theta\Big{]}}{\prod_{s\in\mu}\Big{[}M\theta+(j-1)-\theta(i-1)\Big{]}}\] converges to \(1\).
Together with (3.12), we conclude that the limit equals \[\sum_{\mu}\frac{\prod_{i\geq 0}k_{i}!}{M!}\frac{1}{\prod_{j=1}^{l(\mu)}\mu_{j}!}m_{\mu}(a_{1}^{2},...,a_{M}^{2})m_{\mu}(z_{1}^{2},...,z_{M}^{2}),\] which is the Taylor expansion of \[\frac{1}{M!}\sum_{\sigma\in S_{M}}\prod_{i=1}^{M}e^{a_{i}^{2}z_{\sigma(i)}^{2}}.\]
2306.08352
Bayesian Non-linear Latent Variable Modeling via Random Fourier Features
The Gaussian process latent variable model (GPLVM) is a popular probabilistic method used for nonlinear dimension reduction, matrix factorization, and state-space modeling. Inference for GPLVMs is computationally tractable only when the data likelihood is Gaussian. Moreover, inference for GPLVMs has typically been restricted to obtaining maximum a posteriori point estimates, which can lead to overfitting, or variational approximations, which mischaracterize the posterior uncertainty. Here, we present a method to perform Markov chain Monte Carlo (MCMC) inference for generalized Bayesian nonlinear latent variable modeling. The crucial insight necessary to generalize GPLVMs to arbitrary observation models is that we approximate the kernel function in the Gaussian process mappings with random Fourier features; this allows us to compute the gradient of the posterior in closed form with respect to the latent variables. We show that we can generalize GPLVMs to non-Gaussian observations, such as Poisson, negative binomial, and multinomial distributions, using our random feature latent variable model (RFLVM). Our generalized RFLVMs perform on par with state-of-the-art latent variable models on a wide range of applications, including motion capture, images, and text data for the purpose of estimating the latent structure and imputing the missing data of these complex data sets.
Michael Minyi Zhang, Gregory W. Gundersen, Barbara E. Engelhardt
2023-06-14T08:42:10Z
http://arxiv.org/abs/2306.08352v1
# Bayesian Non-linear Latent Variable Modeling via Random Fourier Features ###### Abstract The Gaussian process latent variable model (GPLVM) is a popular probabilistic method used for nonlinear dimension reduction, matrix factorization, and state-space modeling. Inference for GPLVMs is computationally tractable only when the data likelihood is Gaussian. Moreover, inference for GPLVMs has typically been restricted to obtaining maximum a posteriori point estimates, which can lead to overfitting, or variational approximations, which mischaracterize the posterior uncertainty. Here, we present a method to perform Markov chain Monte Carlo (MCMC) inference for generalized Bayesian nonlinear latent variable modeling. The crucial insight necessary to generalize GPLVMs to arbitrary observation models is that we approximate the kernel function in the Gaussian process mappings with random Fourier features; this allows us to compute the gradient of the posterior in closed form with respect to the latent variables. We show that we can generalize GPLVMs to non-Gaussian observations, such as Poisson, negative binomial, and multinomial distributions, using our _random feature latent variable model_ (RFLVM). Our generalized RFLVMs perform on par with state-of-the-art latent variable models on a wide range of applications, including motion capture, images, and text data for the purpose of estimating the latent structure and imputing the missing data of these complex data sets. Footnote †: CC-BY 4.0, see [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/). _Keywords:_ Latent variable modeling, Gaussian processes, probabilistic modeling. ## 1 Introduction A broad category of commonly used machine-learning techniques can be viewed as latent variable models. These methods model hidden structure in data via unobserved or "latent" variables. Thus, latent variable models naturally lend themselves to dimension-reduction tasks, since the latent variables can localize the observations in lower-dimensional spaces. Examples of this include matrix factorization techniques, dimension reduction, autoencoders, and state-space models. From a computational perspective, the simplest scenario for fitting a probabilistic latent variable model is when the observations are assumed to be Gaussian distributed and when the mapping between the latent and observed variables is assumed to be linear. Many methods fit into this linear-Gaussian framework, such as factor analysis, probabilistic principal component analysis, canonical correlation analysis, and Kalman filters (Roweis and Ghahramani, 1999). With these linear-Gaussian assumptions, we have closed-form expressions to estimate the statistical parameters and latent variables. However, inference becomes more challenging as we depart from assumptions of normality and linearity, which afford many nice mathematical properties with respect to integration. Non-Gaussian and nonlinear models generally do not have closed-form estimates of the latent space. In this paper, we introduce the _random feature latent variable model_ (RFLVM), in which we generalize the Gaussian process latent variable model to a wide variety of non-Gaussian likelihoods by using a random Fourier feature approximation to obtain a computationally tractable inference procedure.
Using the random features, we obtain a fairly simple and generalizable Markov chain Monte Carlo (MCMC) sampler for this model, which allows us to perform asymptotically exact Bayesian inference in generalized Gaussian process latent variable models. We show that we can easily derive a Gibbs sampler for binomial, multinomial, and negative binomial observations using the Polya-gamma augmentation scheme and can uncover the latent manifold structure of count data. Furthermore, we extend our earlier work in our conference paper by developing a dynamic state-space model based on the RFLVM (Gundersen et al., 2021). We show that, due to the computational tractability from the random features, we can easily extend our RFLVM to model a dynamic latent space, similar to a non-linear state-space model. As a motivating example, consider the problem of modeling neural spike train time series data. In this setting, sets of neurons are recorded as analog signals, which are thresholded into bits and then binned by discretizing time. Thus, these data can be viewed as count data with smooth underlying latent dynamics. A Gaussian assumption is clearly inappropriate, and we must incorporate the time-dependent nature of the underlying manifold in the latent variable model as well. Our framework allows for exact posterior exploration of these data using a non-Gaussian, non-linear state-space model. We begin in Section 2 with an overview of previous work in probabilistic latent variable models, with a specific focus on Gaussian process-based latent variable models. We then introduce our _random feature latent variable model_ in Section 3 and detail the MCMC sampling scheme for exact posterior inference. Next, in Section 4 we evaluate our method on a wide range of synthetic and empirical data sets, particularly neural spike train data sets, to show that we can uncover the latent structure of high-dimensional data. Lastly, we conclude the paper in Section 5 with a discussion of future directions for our random feature latent variable model. ## 2 Background In this paper, we focus on a simple but broad class of latent variable models taking the following form: our observed data is an \(N\times J\) matrix \(\mathbf{Y}\) with \(N\) observations and \(J\) features; the latent variables are an \(N\times D\) matrix \(\mathbf{X}\) where \(D\ll J\); and these two variables are related through some function of \(\mathbf{X}\), \[\mathbf{Y}=f(\mathbf{X}). \tag{1}\] Many latent variable models assume \(f(\mathbf{X})\) is a linear function, \[\mathbf{Y}=\mathbf{X}\mathbf{W}, \tag{2}\] where \(f(\mathbf{X})=\mathbf{X}\mathbf{W}\) for a \(D\times J\) projection matrix \(\mathbf{W}\). Principal components analysis (PCA), for example, can be viewed through Equation (2). In PCA, \(\mathbf{W}\) is the solution to an optimization problem that seeks a linear projection of \(\mathbf{Y}\) to a lower-dimensional representation \(\mathbf{X}\) retaining the maximum amount of variance from the original data (Hotelling, 1933; Jolliffe, 2002). Due to linearity, the optimization problem that PCA solves has a simple closed-form solution: the eigenvectors of the sample covariance matrix \(\mathbf{Y}^{\top}\mathbf{Y}/N\) corresponding to its \(D\) largest eigenvalues. Non-negative matrix factorization (NNMF) is another latent variable model that fits in the framework of Equation (2). In NNMF, the observations, latent variables, and projection matrix are constrained to be nonnegative (Lee and Seung, 1999).
Again, due to the linear relationship between the observations and the latent variables, inference in NNMF is fairly simple, as we can update the values of \(\mathbf{W}\) and \(\mathbf{X}\) using alternating least-squares steps. However, these models are not probabilistic; they make no statistical assumptions about the latent space or the data-generating process. We can reinterpret Equation (1) as a probabilistic model by adding a noise term \(\mathbf{E}\) to our basic model, \[\mathbf{Y}=f(\mathbf{X})+\mathbf{E}, \tag{3}\] and assuming a distribution on \(\mathbf{X}\). For example, Tipping and Bishop (1999) formulate PCA as a probabilistic model with Gaussian-distributed latent variables and independent Gaussian-distributed noise, \(\mathbf{e}_{i}\sim\mathcal{N}(0,\sigma_{y}^{2}\mathbf{I})\). Since the normal distribution is closed under affine transformations, we can write probabilistic PCA (PPCA) as \[\mathbf{y}_{i}\sim\mathcal{N}(\mathbf{W}\mathbf{x}_{i},\sigma_{y}^{2}\mathbf{I}),\quad\mathbf{x}_{i}\sim\mathcal{N}(\mathbf{0},\mathbf{I}). \tag{4}\] In PPCA, the latent space, \(\mathbf{X}\), can be marginalized out in closed form: \[\mathbf{y}_{i}\sim\mathcal{N}(\mathbf{0},\mathbf{W}\mathbf{W}^{\top}+\sigma_{y}^{2}\mathbf{I}). \tag{5}\] This linear-Gaussian latent variable model has an analytical solution for the maximum likelihood estimate of \(\mathbf{W}\), which corresponds to the standard eigendecomposition-based solution of PCA in the zero-noise limit (Tipping, 2001; Roweis, 1998). PPCA is similar to perhaps the oldest and simplest latent variable model, factor analysis (FA) (Spearman, 1904; Cattell, 1945), which is equivalent to PPCA but with non-isotropic noise, i.e., \(\sigma_{y}^{2}\mathbf{I}\) is replaced with a diagonal matrix \(\mathbf{\Psi}\) in Equation (4). Additionally, we may place a prior on the mapping weights \(\mathbf{w}_{d}\sim\mathcal{N}(\mathbf{0},\sigma_{w}^{2})\) and, again, can integrate out the weights and obtain a closed-form representation of the marginal likelihood: \[\mathbf{y}_{j}\sim\mathcal{N}(\mathbf{0},\sigma_{w}^{2}\mathbf{X}\mathbf{X}^{\top}+\sigma_{y}^{2}\mathbf{I}). \tag{6}\] In fact, Lawrence (2005) shows that marginalizing the weights and optimizing the latent variables with respect to the marginal likelihood results in a maximum a posteriori (MAP) estimate that is equivalent to the MLE eigenvector solution from PCA, under a particular choice of prior. This formulation of probabilistic PCA under Equation (6) is crucial to extending the linear model of PPCA to the non-linear kernel PCA (Scholkopf et al., 1997), where the outer product \(\mathbf{X}\mathbf{X}^{\top}\) is replaced by a kernel matrix \(\mathbf{K}_{x}\). ### Gaussian processes In the Bayesian setting, a popular choice of prior distribution on the mapping function is the Gaussian process (Williams and Rasmussen, 2006, GP), which puts a prior on arbitrary smooth non-linear functions. A GP-distributed function mapping \(\mathbf{x}_{i}\) to \(\mathbf{y}_{i}\), \[f\sim\mathcal{GP}(\mu(\cdot),k(\cdot,\cdot)), \tag{7}\] is defined by its mean function, \(\mu(\mathbf{x}_{i})\), and covariance function, \(k(\mathbf{x}_{i},\mathbf{x}_{j})\). The defining property of GPs is that a GP-distributed function evaluated at a finite set of points is distributed as a multivariate Gaussian, \[f(\mathbf{x}_{i})\sim\mathcal{N}(\boldsymbol{\mu}_{x},\mathbf{K}_{xx}).
\tag{8}\] Here, \(\boldsymbol{\mu}_{x}\) and \(\mathbf{K}_{xx}\) denote the mean vector and covariance matrix induced by the mean and covariance functions. This property allows for tractable inference when GPs are used as a prior in non-linear Bayesian generative models, as conditional and marginal distributions of these variables can be written in closed form when the observations have a multivariate Gaussian distribution. ### Gaussian process latent variable models The Gaussian process latent variable model (GPLVM) provides a Bayesian, probabilistic variant of non-linear latent variable modeling (Lawrence, 2004, 2005), where we take the mean function to be zero and the observations \(\mathbf{Y}\) to be Gaussian distributed: \[\mathbf{y}_{j}\sim\mathcal{N}(f_{j}(\mathbf{x}),\sigma_{j}^{2}\mathbf{I}),\quad f_{j}(\mathbf{x})\sim\mathcal{N}(\mathbf{0},\mathbf{K}_{xx}),\quad\mathbf{x}_{i}\sim\mathcal{N}(0,\sigma_{x}^{2}\mathbf{I}), \tag{9}\] where \(\mathbf{K}_{xx}\) is an \(N\times N\) covariance matrix defined by a positive definite kernel function \(k(\mathbf{x},\mathbf{x}^{\prime})\) and where \(f_{j}(\mathbf{X})=[f_{j}(\mathbf{x}_{1})\dots f_{j}(\mathbf{x}_{N})]^{\top}\). Due to the conjugacy between the GP prior on \(f_{j}\) and the Gaussian likelihood on \(\mathbf{y}_{j}\), we can integrate out \(f_{j}\) in closed form. The resulting marginal likelihood for \(\mathbf{y}_{j}\) is \[\mathbf{y}_{j}\sim\mathcal{N}(\mathbf{0},\mathbf{K}_{xx}+\sigma_{j}^{2}\mathbf{I}). \tag{10}\] From here, we can observe that the marginal likelihood of the GPLVM is identical to the objective function of kernel PCA. Hence, we can interpret kernel PCA as the MLE solution for the GPLVM (Lawrence, 2004). The ability to analytically marginalize the GP-distributed mapping in Equation (10) allows for computationally tractable posterior inference. Thus, the GPLVM is a popular probabilistic dimension-reduction method because it combines flexible nonlinear modeling with computational tractability. We cannot find the optimal \(\mathbf{X}\) analytically in the GPLVM, but various approximations have been proposed. We can obtain a MAP estimate by integrating out the GP-distributed maps and then optimizing \(\mathbf{X}\) with respect to the posterior using scaled conjugate gradients (Lawrence, 2004, 2005), where computation scales as \(\mathcal{O}(N^{3})\). To scale inference, we may use sparse inducing-point methods where the computational complexity is \(\mathcal{O}(NC^{2})\), for \(C\ll N\) inducing points (Lawrence, 2007). However, these methods only obtain a MAP estimate of the latent space, which is prone to overfitting (Damianou et al., 2016). Instead, we may adopt a variational Bayes approximation of the posterior and minimize the Kullback-Leibler divergence between the variational approximation and the posterior, with the latent variables \(\mathbf{X}\) marginalized out, in order to infer a posterior distribution instead of the point estimates of previous work. This variational approach is called a Bayesian GPLVM (Titsias and Lawrence, 2010; Damianou et al., 2016). However, integrating out \(\mathbf{X}\) in the approximate marginal likelihood is only tractable when we assume that we have Gaussian observations and when we use an RBF kernel with automatic relevance determination, which limits its flexibility. If the observations are no longer assumed to be generated from a Gaussian distribution, we lose the aforementioned tractability, since we are no longer convolving a Gaussian likelihood with a Gaussian prior.
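To make Equation (10) concrete, here is a minimal Python sketch (ours, assuming an RBF kernel; all function names are our own) that evaluates the GPLVM log marginal likelihood by summing the Gaussian log density of each column \(\mathbf{y}_{j}\):

```python
import numpy as np
from scipy.stats import multivariate_normal

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix K_xx evaluated at the rows of X."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gplvm_log_marginal(Y, X, noise_var=0.1):
    """Sum over columns y_j of log N(y_j | 0, K_xx + sigma^2 I), as in (10)."""
    N, J = Y.shape
    cov = rbf_kernel(X) + noise_var * np.eye(N)
    return sum(multivariate_normal.logpdf(Y[:, j], mean=np.zeros(N), cov=cov)
               for j in range(J))

rng = np.random.default_rng(1)
N, J, D = 50, 5, 2
X = rng.normal(size=(N, D))
# Simulate Y from the model itself, then score it under the marginal likelihood.
Y = rng.multivariate_normal(np.zeros(N), rbf_kernel(X) + 0.1 * np.eye(N), size=J).T
print(gplvm_log_marginal(Y, X))
```

In a MAP procedure, this quantity (plus the log prior on \(\mathbf{X}\)) is the objective optimized with respect to the latent variables.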
Previous work in non-Gaussian GPLVMs was only able to obtain tractability by forming approximations to the posterior distribution, either in the form of a variational or a Laplace approximation. Otherwise, posterior inference in this setting is vulnerable to becoming trapped in poor local modes. One of the primary reasons why computation is intractable with non-Gaussian likelihoods is that we cannot calculate closed-form gradients of the objective function (the posterior density) with respect to the latent variables. Further extensions to the basic GPLVM framework have been developed. In the original GPLVM paper, the prior on the latent space, \(\mathbf{X}\), is a zero-mean Gaussian prior with identity covariance. We can further imbue the latent space prior with structure to better reflect the underlying manifold used to model the data. For example, Lawrence and Quinonero Candela (2006) and van der Maaten (2009) modify the prior on the latent space so that observations close together in observation space are close together in latent space. From a similar perspective, Urtasun et al. (2008) model the latent space with respect to the topological structure of the underlying latent manifold. Urtasun and Darrell (2007); Martens et al. (2019) use label information and covariate data to inform the latent space structure. Beyond additional structure of the latent space, previous work has also incorporated hierarchical components in the generative process to share information across different but associated data sets (Lawrence and Moore, 2007; Kazlauskaite et al., 2019; Eleftheriadis et al., 2013). GPLVMs have proven to be a useful statistical model in numerous applications, such as single-cell RNA sequencing (Verma and Engelhardt, 2020; Ahmed et al., 2018), where researchers are interested in studying gene expression patterns in lower dimensions. Previous work has also used GPLVMs successfully in modeling motion capture data and human poses (Ek et al., 2007; Wang et al., 2007). Moreover, GPLVMs have been used to model multi-neuron spike train data (Wu et al., 2017; She and Wu, 2020), where neuroscientists are interested in modeling a non-linear lower-dimensional representation of the spiking activity. This type of data usually arrives in the form of positive integer-valued counts, for which the typical Gaussian assumption is inappropriate. Dropping the Gaussian assumption results in a loss of computational tractability in the model, as the GP-distributed maps are no longer marginalizable in closed form. However, previous work in extending GPLVMs to multinomial and Poisson-distributed data used only approximations to the posterior, and these approximations are distribution-specific (Gal et al., 2015; Wu et al., 2017). Developing a generalized method for extending GPLVMs to arbitrary data likelihoods thus remains a difficult task. ### Non-linear latent variable models Other non-linear dimensionality reduction techniques like stochastic neighbor embedding (SNE) and \(t\)-SNE represent data with latent variables by minimizing the Kullback-Leibler divergence between the normalized kernel distances of the observed data and the normalized kernel distances of the latent variables (Hinton and Roweis, 2002; van der Maaten and Hinton, 2008).
Uniform manifold approximation and projection (UMAP) preserves the similarity of the observation space and the latent space with respect to the geodesic distances between points, as opposed to the Euclidean distances used in SNE and \(t\)-SNE (McInnes et al., 2018). Similarly, locally linear embedding (LLE) uses the same intuition, that data that are similar in observation space should be close together in the latent space (Saul and Roweis, 2003). LLE reconstructs each observation as a linear combination of its neighbors and then finds a lower-dimensional embedding that preserves these local linear relationships. Another variation that uses neighboring data to inform the non-linear dimension reduction is Isomap, in which a distance kernel is calculated for each pair of observations and the data are embedded into the latent space by taking the eigenvectors corresponding to the top \(D\) eigenvalues of the pairwise distance matrix (Tenenbaum et al., 2000). While neural network-based approaches are effective for learning lower-dimensional representations, they only provide point estimates without uncertainty quantification. For downstream tasks like prediction, having uncertainty quantification is crucial for correct decision making. In engineering tasks (like tracking a moving target) we need to properly account for a noisy environment so that we may ensure applications of machine learning like autonomous vehicles can correctly identify the position of other vehicles or pedestrians and safely adjust their behavior accordingly. Bayesian models can naturally quantify uncertainty through the posterior distribution. The variational Bayesian autoencoder assumes that the prior distribution for the latent variables is a standard normal and uses a neural network to model the mapping function between the observed space and the latent space. Inference is tractable in this setting as we can optimize with respect to a variational approximation of the marginal likelihood by taking advantage of the _reparameterization trick_, which allows us to construct a gradient-based estimator of the evidence lower bound (ELBO) (Kingma and Welling, 2013). Recent work, like diffusion probabilistic models, assumes that the latent variables are Gaussian noise propagated through a Markov chain (Ho et al., 2020; Song et al., 2021), rather than adopting the standard normal assumption on the latent space in the VAE, and has yielded state-of-the-art results on a wide range of machine learning benchmark data sets. ### Latent dynamic variable models The basic dynamic latent variable model is the linear state-space model, where the \(J\)-dimensional observation \(\mathbf{y}_{i}\in\mathbb{R}^{J}\) at time index \(i\in[1,\ldots,N]\) is generated by a \(D\)-dimensional state vector, or latent variable, \(\mathbf{x}_{i}\in\mathbb{R}^{D}\), where the latent dimension \(D\) should be much less than the observation dimension \(J\). This vector autoregressive linear system is represented as \[\mathbf{x}_{i}=\mathbf{A}\mathbf{x}_{i-1}+\mathbf{r}_{i},\quad\mathbf{y}_{i}=\mathbf{B}\mathbf{x}_{i}+\mathbf{s}_{i}, \tag{11}\] where \(\mathbf{A}\in\mathbb{R}^{D\times D},\mathbf{B}\in\mathbb{R}^{J\times D}\) are the state and output transition matrices, and \(\mathbf{r}_{i}\in\mathbb{R}^{D},\mathbf{s}_{i}\in\mathbb{R}^{J}\) are state and output Gaussian noise vectors with mean zero and covariance matrices \(\mathbf{R}\) and \(\mathbf{S}\), respectively.
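The generative process in Equation (11) is straightforward to simulate. Below is a minimal numpy sketch (our illustration, with arbitrary parameter choices; note that \(\mathbf{B}\) is \(J\times D\)):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, J = 200, 2, 10

# A rotation keeps the latent dynamics stable and smooth.
angle = 0.1
A = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
B = rng.normal(size=(J, D))            # output matrix, J x D
R = 0.01 * np.eye(D)                   # state noise covariance
S = 0.10 * np.eye(J)                   # output noise covariance

X = np.zeros((N, D))
Y = np.zeros((N, J))
x = rng.normal(size=D)
for i in range(N):
    x = A @ x + rng.multivariate_normal(np.zeros(D), R)   # x_i = A x_{i-1} + r_i
    X[i] = x
    Y[i] = B @ x + rng.multivariate_normal(np.zeros(J), S)  # y_i = B x_i + s_i
```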
If the observations and parameters in the linear state-space model are assumed to be Gaussian distributed, we can obtain closed-form estimates of the unknown parameters, \((\mathbf{A},\mathbf{B},\mathbf{R},\mathbf{S})\), using an EM algorithm (Ghahramani and Hinton, 1996), or, in the special case of the Kalman filter, marginalize the parameters and obtain the optimal estimate of the latent space conditioned on the previous observations in closed form (Kalman, 1960). However, if the dynamic properties of the process change over time, then we may assume that the state-space model follows a switching linear dynamical system (SLDS, Oh et al., 2008): \[\mathbf{x}_{i}=\mathbf{A}_{z_{i}}\mathbf{x}_{i-1}+\mathbf{r}_{i}^{(z_{i})},\quad\mathbf{y}_{i}=\mathbf{B}\mathbf{x}_{i}+\mathbf{s}_{i},\quad z_{i}\mid z_{i-1}\sim\mathbf{P}_{z_{i-1}}, \tag{12}\] where \(z_{i}=k\), for \(k\in\{1,\ldots,K\}\), represents the state indicator variable with latent state switching matrix \(\mathbf{P}\in[0,1]^{K\times K}\). Here, the latent transition matrix and the noise parameters \((\mathbf{A}_{z_{i}},\mathbf{r}_{i}^{(z_{i})})\) depend on the current hidden state \(z_{i}\) at time \(i\). Furthermore, the SLDS model can be extended to the infinite state-space model, where the hierarchical Dirichlet process (Teh et al., 2006) is used as the prior over the SLDS parameters and the number of hidden states is estimated _a posteriori_ (Fox et al., 2011). ### Non-linear dynamic models A dynamic model in which the latent state space and the observation-generating process follow a nonlinear dynamic process may be more realistic for many applications because the underlying data-generating process is nonlinear. Functions like the Gaussian process are capable of capturing local dependencies between the latent space and the observations through the kernel function, whereas linear models assume a constant relationship in the dynamic model. GPLVMs have been extended to time series data as well, by modifying the prior on the latent variables, \(\mathbf{X}\), to incorporate dynamic behavior. The Gaussian process dynamical model (GPDM) assumes a non-linear state-space model (Wang et al., 2007): \[\mathbf{x}_{i}=\mathbf{A}\Phi(\mathbf{x}_{i-1})+\mathbf{r}_{i},\quad\mathbf{y}_{i}=\mathbf{B}\Phi(\mathbf{x}_{i})+\mathbf{s}_{i}, \tag{13}\] where \(\Phi(\cdot)\) represents a basis function. In this model too, if \((\mathbf{A},\mathbf{B})\) are assumed to be Gaussian, then we may analytically integrate out these parameters. The resulting marginal likelihood is a GPLVM where the prior on \(\mathbf{X}\) is now an autoregressive Gaussian process. Alternatively, we can model the latent dynamic process as a GP regression of the time indices, \(i\in\{1,\ldots,N\}\), onto the latent space (Lawrence and Moore, 2007; Damianou et al., 2011): \[p(\mathbf{X})=\prod_{d=1}^{D}p(\mathbf{x}_{d}),\quad\mathbf{x}_{d}(i)\sim\mathcal{N}(0,\mathbf{K}_{N}),\quad f_{j}(\mathbf{x})\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{X}),\quad\mathbf{y}_{j}\sim\mathcal{N}(f_{j}(\mathbf{x}),\sigma_{j}^{2}\mathbf{I}). \tag{14}\] Aside from GP-based models, many other models are commonly used for modeling nonlinear latent dynamic systems, such as the unscented extension of the original Kalman filter (Wan and van der Merwe, 2000), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks, which are extensions of RNNs (Hochreiter and Schmidhuber, 1997).
The basic architecture of the recurrent neural network consists of an input \(\mathbf{X}_{i}\), a latent hidden layer \(\mathbf{h}_{i}\), model weights \(\mathbf{W}\), and an output \(\mathbf{Y}_{i}\): \[\mathbf{Y}_{i}=g(\mathbf{W}_{yh}\mathbf{h}_{i}),\quad\mathbf{h}_{i}=f(\mathbf{W}_{xh}\mathbf{X}_{i}+\mathbf{W}_{hh}\mathbf{h}_{i-1}), \tag{15}\] where \(f\) is the hidden layer activation function and \(g\) is the output layer activation function. Recurrent neural networks can model dynamic structure in the data, but the one-layer RNN is often limited in its expressiveness when approximating different functions. Analogous to how we combine multiple layers of perceptrons to form a deep neural network, we can also stack multiple layers of recurrent neural networks to form a deep RNN (Pascanu et al., 2013). With a deep RNN architecture, we are able to capture highly nonlinear relationships between the observed data and the hidden layers, as well as between the hidden layers themselves. ### Random Fourier features Kernel-based methods, like the GPLVM, are popular machine learning models because the kernel models nonlinear functions. However, kernel methods scale \(\mathcal{O}(N^{3})\) in terms of computational complexity and \(\mathcal{O}(N^{2})\) in terms of storage for \(N\) observations. One method to approximate the kernel function is to use random Fourier features (RFFs), sending the input space through a randomized feature map into an \(M\)-dimensional space, which reduces the computation cost to \(\mathcal{O}(NM^{2})\) and the storage cost to \(\mathcal{O}(NM)\) (Rahimi and Recht, 2008). Mercer's theorem states that any positive definite kernel function \(k(\cdot,\cdot)\) can be equivalently computed as an inner product of a feature mapping, \(\phi(\cdot)\), between a pair of points so that (Mercer, 1909): \[k(\mathbf{x},\mathbf{x}^{\prime})=\left\langle\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})\right\rangle,\quad\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{R}^{D}. \tag{16}\] If we approximate \(\phi(\cdot)\) with a low-dimensional mapping \(\varphi(\cdot)\) such that \(\varphi(\mathbf{x}_{i})^{\top}\varphi(\mathbf{x}_{j})\approx\left\langle\phi(\mathbf{x}_{i}),\phi(\mathbf{x}_{j})\right\rangle\), then we can substantially reduce the computational burden of using kernel methods, as the memory requirements only scale \(O(NM)\) and the computational complexity scales \(O(NM^{2})\). Bochner's theorem states that any continuous shift-invariant kernel function \(k(\mathbf{x},\mathbf{x}^{\prime})=k(\mathbf{x}-\mathbf{x}^{\prime})\) on \(\mathbb{R}^{D}\) is positive definite if and only if \(k(\mathbf{x}-\mathbf{x}^{\prime})\) is the Fourier transform of a nonnegative measure \(p(\mathbf{w})\). If the kernel is properly scaled, the kernel's Fourier transform \(p(\mathbf{w})\) is guaranteed to be a density (Bochner, 1959). Let \(h(\mathbf{x})\triangleq\exp(i\mathbf{w}^{\top}\mathbf{x})\) be a randomized function that depends on the inputs \(\mathbf{x}\) and the random features \(\mathbf{w}\), and let \(h(\mathbf{x})^{*}\) denote its complex conjugate: \[k(\mathbf{x}-\mathbf{x}^{\prime})=\int_{\mathbb{R}^{D}}p(\mathbf{w})\exp(i\mathbf{w}^{\top}(\mathbf{x}-\mathbf{x}^{\prime}))\mathrm{d}\mathbf{w}=\mathbb{E}_{p(\mathbf{w})}[h(\mathbf{x})h(\mathbf{x}^{\prime})^{*}]. \tag{17}\] So \(h(\mathbf{x})h(\mathbf{x}^{\prime})^{*}\) is an unbiased estimate of \(k(\mathbf{x}-\mathbf{x}^{\prime})\).
Dropping the imaginary part (the kernel is real-valued), Euler's formula gives \(h(\mathbf{x})\triangleq\cos(\mathbf{w}^{\top}\mathbf{x})\). Then, by Monte Carlo approximation, we have \(k(\mathbf{x},\mathbf{x}^{\prime})\approx\varphi_{w}(\mathbf{x})^{\top}\varphi_{w}(\mathbf{x}^{\prime})\), where

\[\varphi_{w}(\mathbf{x})\triangleq\sqrt{\frac{2}{M}}\begin{bmatrix}\sin(\mathbf{w}_{1}^{\top}\mathbf{x}),\,\cos(\mathbf{w}_{1}^{\top}\mathbf{x})\\ \vdots\\ \sin(\mathbf{w}_{M/2}^{\top}\mathbf{x}),\,\cos(\mathbf{w}_{M/2}^{\top}\mathbf{x})\end{bmatrix},\quad\mathbf{w}_{m}\sim p(\mathbf{w}). \tag{18}\]

We draw \(M/2\) random frequencies from \(p(\mathbf{w})\) to approximate the kernel function. Because the optimal solution to the objective function of a kernel method, \(f^{*}(\mathbf{x})\), is linear in pairwise evaluations of the kernel (Kimeldorf and Wahba, 1971), we can represent \(f^{*}(\mathbf{x})\), given a reproducing kernel Hilbert space \(\mathcal{H}\), as

\[\begin{split} f^{*}(\mathbf{x})&=\sum_{n=1}^{N}\alpha_{n}k(\mathbf{x}_{n},\mathbf{x})=\sum_{n=1}^{N}\alpha_{n}\langle\phi(\mathbf{x}_{n}),\phi(\mathbf{x})\rangle\\ &\approx\sum_{n=1}^{N}\alpha_{n}\varphi_{w}(\mathbf{x}_{n})^{\top}\varphi_{w}(\mathbf{x})=\boldsymbol{\beta}^{\top}\varphi_{w}(\mathbf{x}).\end{split} \tag{19}\]

The randomized approximation of this inner product lets us replace expensive calculations involving the kernel with an \(M\)-dimensional inner product. Previous work has primarily used RFFs to speed up computation (Lazaro-Gredilla et al., 2010; Hensman et al., 2017). Our critical insight regarding RFFs, however, is that by using the random projection of the input space to approximate the kernel function, as in Equation (19), we can evaluate the likelihood density function in closed form. This allows us to use gradient-based optimization to obtain a MAP estimate, or to draw Markov chain Monte Carlo samples directly. Given the representer theorem in Equation (19), we can approximate the GPLVM using random features as:

\[\mathbf{y}_{j}\sim\mathcal{N}_{N}(\varphi_{w}(\mathbf{X})\boldsymbol{\beta}_{j},\sigma_{j}^{2}\mathbf{I}),\quad\boldsymbol{\beta}_{j}\sim\mathcal{N}_{M}(\mathbf{b}_{0},\mathbf{B}_{0}),\quad\mathbf{x}_{i}\sim\mathcal{N}_{D}(\mathbf{0},\mathbf{I}), \tag{20}\]

which we use as the foundation for our proposed latent variable model.

### Kernel learning

In kernel methods, the choice of kernel function, \(k(\cdot,\cdot)\), is assumed to be known _a priori_. However, the choice of kernel is usually crucial for modeling the behavior of the mapping function, \(f(\cdot)\), and is rarely known in most statistical modeling applications. In multiple kernel learning, the kernel function can be estimated through a linear combination of kernels by solving a semidefinite optimization problem (Lanckriet et al., 2004; Bach et al., 2004). Yang et al. (2015) proposed a method for estimating a mixture of kernels using a fast approximation of the kernel based on Hadamard matrices, which reduces the computational complexity of the kernel machine to \(\mathcal{O}(N\log J)\). Wilson and Adams (2013) proposed a Bayesian variant of kernel learning by introducing a prior over the space of stationary covariance functions in the form of the spectral mixture kernel. Lastly, Oliva et al. (2016) developed an RFF variant of the spectral mixture kernel by placing a Dirichlet process mixture prior on the random frequencies, thereby allowing for a Bayesian non-parametric procedure for kernel learning.
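As a sanity check on Equations (16)-(18), the following sketch compares an exact RBF kernel matrix with its random Fourier feature approximation; the number of features \(M\) and the lengthscale are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, M = 100, 3, 1000                 # M/2 frequency draws yield M features
lengthscale = 1.0
X = rng.normal(size=(N, D))

# Exact RBF kernel: k(x, x') = exp(-||x - x'||^2 / (2 lengthscale^2)).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dists / (2 * lengthscale ** 2))

# RFF approximation per Eq. (18): for the RBF kernel, Bochner's theorem
# gives p(w) = N(0, lengthscale^{-2} I).
W = rng.normal(scale=1.0 / lengthscale, size=(M // 2, D))
proj = X @ W.T                                              # (N, M/2)
Phi = np.sqrt(2.0 / M) * np.hstack([np.sin(proj), np.cos(proj)])
K_rff = Phi @ Phi.T

print("max abs error:", np.abs(K_exact - K_rff).max())      # shrinks as M grows
```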
One of the key tools we use to obtain computational tractability in our model is a basis function representation for GPs based on a low-rank approximation of the kernel function. The behavior of a GP is typically defined by the choice of kernel used as the covariance function, and in the typical implementation of GP-based models, inverting the covariance kernel incurs a cubic computational cost in the number of observations. Using random Fourier features, we form a Monte Carlo approximation of the kernel from \(M\) random frequencies and can then model a GP-distributed function as a linear function of the random features. This RFF approximation of a GP-distributed function induces a closed-form expression for the gradient of the posterior with respect to the latent variables, which is the key insight we need to generalize the GPLVM to the non-Gaussian setting.

## 3 Method

We first introduce the Bayesian model with the following data generating process1:

Footnote 1: In Appendix A, we provide a glossary for the definition of the variables used in this paper.

\[\begin{split}\mathbf{y}_{j}&\sim\mathcal{L}\big(g\big(\varphi_{w}(\mathbf{X})\boldsymbol{\beta}_{j}\big),\boldsymbol{\theta}\big),\quad\boldsymbol{\theta}\sim p(\boldsymbol{\theta}),\quad\boldsymbol{\beta}_{j}\sim\mathcal{N}_{M}(\boldsymbol{\beta}_{0},\mathbf{B}_{0}),\\ \mathbf{x}_{n}&\sim\mathcal{N}_{D}(\mathbf{0},\mathbf{I}),\quad\mathbf{w}_{m}\sim\sum_{k=1}^{\infty}\pi_{k}\cdot\mathcal{N}_{D}(\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k}),\quad\boldsymbol{\pi}\sim\mathcal{DP}(\alpha,\mathcal{H}).\end{split} \tag{21}\]

Here, \(\mathcal{L}(\cdot)\) is a likelihood function, \(g(\cdot)\) is an invertible link function that maps the real numbers onto the likelihood parameters' support, and \(\boldsymbol{\theta}\) are other likelihood-specific parameters, if they exist (e.g., the dispersion parameter in the negative binomial likelihood). We assume \(p(\mathbf{w})\) is drawn from a Dirichlet process mixture model (Ferguson, 1973; Antoniak, 1974). For computational tractability, we assign each \(\mathbf{w}_{m}\) in \(\mathbf{W}=[\mathbf{w}_{1}\ldots\mathbf{w}_{M/2}]^{\top}\) to a mixture component using the variable \(z_{m}\), which is distributed according to a Chinese restaurant process (CRP, Aldous, 1985) with concentration parameter \(\alpha\). This prior introduces additional random variables: the mixture means \(\{\boldsymbol{\mu}_{k}\}_{k=1}^{K}\) and the mixture covariance matrices \(\{\boldsymbol{\Sigma}_{k}\}_{k=1}^{K}\), where \(K\) is the number of instantiated clusters in the current sampling iteration. By placing a Dirichlet process mixture of Gaussian-inverse Wisharts prior on the random frequencies, we are able to tractably explore the space of stationary kernels (Oliva et al., 2016; Wilson and Adams, 2013). This choice is in contrast to typical approaches in kernel methods, where the kernel is chosen from a small set of common and tractable kernel functions, such as the radial basis function (RBF), whose Fourier transform corresponds to a Gaussian prior on \(\mathbf{w}_{m}\).
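To make the data generating process in Equation (21) concrete, the following sketch draws a synthetic data set under a Poisson likelihood with an exponential link; a finite stick-breaking truncation stands in for the Dirichlet process mixture, and all constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, J, M, K_trunc, alpha = 200, 2, 20, 100, 10, 1.0

# Truncated stick-breaking stand-in for pi ~ DP(alpha, H).
v = rng.beta(1.0, alpha, size=K_trunc)
pi = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
pi /= pi.sum()
mu = rng.normal(scale=2.0, size=(K_trunc, D))     # component means drawn from H
z = rng.choice(K_trunc, size=M // 2, p=pi)        # cluster indicators for frequencies
W = mu[z] + rng.normal(size=(M // 2, D))          # w_m ~ N(mu_{z_m}, I)

X = rng.normal(size=(N, D))                       # x_n ~ N(0, I)
proj = X @ W.T
Phi = np.sqrt(2.0 / M) * np.hstack([np.sin(proj), np.cos(proj)])
beta = rng.normal(scale=0.5, size=(M, J))         # beta_j ~ N(beta_0, B_0)
Y = rng.poisson(np.exp(Phi @ beta))               # y_j ~ L(g(phi_w(X) beta_j)), g = exp
```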
However, the true kernel function in the data generating process is rarely known and should be estimated from the data; hence the flexible choice of the DPMM prior on the kernel. As seen in Equation (19), the optimal approximation of the non-linear function using RFFs is linear with respect to the random features. We use \(\boldsymbol{\beta}_{j},j\in\{1,\ldots,J\}\), to represent the linear mapping parameters and place a Gaussian prior on \(\boldsymbol{\beta}_{j}\); this allows us to use standard Bayesian linear regression Gibbs updates to take posterior samples of \(\boldsymbol{\beta}\).

A typical choice of prior on \(\mathbf{X}\) in LVMs assumes _a priori_ that there is no dependency structure in the latent space (Gundersen et al., 2021). In realistic modeling scenarios, we may have more information about the latent space than a standard Gaussian prior can convey. For example, the discriminative GPLVM incorporates labeled data for the observations to train the model (Urtasun and Darrell, 2007). Dynamic GPLVMs incorporate a temporal dependency by placing a dynamic prior on the latent space, \(\mathbf{X}\) (Damianou et al., 2011; Lawrence and Moore, 2007; Wang et al., 2007). Following these dynamic latent GP models, we can place a GP prior on the latent space, \(\mathbf{x}_{d}(t)\sim\mathcal{N}(0,\mathbf{K}_{T})\), for the dynamic RFLVM, where \(\mathbf{K}_{T}\) is the covariance kernel evaluated on the time indices, \(t=1,\ldots,T\). Previous work in dynamic GPLVMs only achieves tractability by taking advantage of the conjugacy between the GP prior on \(\mathbf{f}_{j}\) and the Gaussian likelihood on \(\mathbf{Y}_{j}\), or through tailor-made approximate inference strategies for specific likelihoods. Our random feature-based model, in contrast, flexibly allows for modifications to the prior on the latent variables, \(\mathbf{X}\), within a general inference approach.

### Inference for the random feature latent variable model

In this section, we present the MCMC sampling steps used to perform posterior inference for our proposed latent variable model. We sample the posterior of \(\mathbf{W}\) using a Metropolis-Hastings (MH) sampler, where our proposal distribution, \(q(\mathbf{W})\), is set to the prior. This simplifies the acceptance probability to the ratio of likelihoods:

\[\mathbf{w}_{m}^{\star}\sim q(\mathbf{W})\triangleq p(\mathbf{W}\mid\mathbf{z},\boldsymbol{\mu},\boldsymbol{\Sigma}),\quad\rho_{\texttt{MH}}=\min\Bigg\{1,\frac{p(\mathbf{Y}\mid\mathbf{X},\mathbf{w}_{m}^{\star},\boldsymbol{\theta})}{p(\mathbf{Y}\mid\mathbf{X},\mathbf{w}_{m},\boldsymbol{\theta})}\Bigg\}, \tag{22}\]
and we sample the latent indicators of the DP mixture, \(\mathbf{z}=[z_{1}\ldots z_{M}]\), using the standard "Algorithm 8" for sampling Dirichlet process mixture models (Neal, 2000):

\[p(z_{m}=k\mid\boldsymbol{\mu},\boldsymbol{\Sigma},\mathbf{W},\alpha)\propto\begin{cases}\frac{n_{k}^{-m}}{M-1+\alpha}\,\mathcal{N}(\mathbf{w}_{m}\mid\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})&n_{k}^{-m}>0\\ \frac{\alpha}{M-1+\alpha}\int\mathcal{N}(\mathbf{w}_{m}\mid\boldsymbol{\mu},\boldsymbol{\Sigma})\,\mathrm{NIW}(\boldsymbol{\mu},\boldsymbol{\Sigma})\,\mathrm{d}\boldsymbol{\mu}\,\mathrm{d}\boldsymbol{\Sigma}&n_{k}^{-m}=0.\end{cases} \tag{23}\]

Given the indicator assignments \(\mathbf{z}\) and the random frequencies \(\mathbf{W}\), we take advantage of the conjugacy of the Gaussian-inverse Wishart prior on the location and scale parameters of the frequencies and Gibbs sample those parameters (Gelman et al., 2013):

\[\boldsymbol{\Sigma}_{k}\sim\mathcal{W}^{-1}(\boldsymbol{\Psi}_{k},\nu_{k}),\quad\boldsymbol{\mu}_{k}\sim\mathcal{N}\Big(\mathbf{m}_{k},\frac{1}{\lambda_{k}}\boldsymbol{\Sigma}_{k}\Big), \tag{24}\]
\[\boldsymbol{\Psi}_{k}=\boldsymbol{\Psi}_{0}+\sum_{m:z_{m}=k}(\mathbf{w}_{m}-\bar{\mathbf{w}}^{(k)})(\mathbf{w}_{m}-\bar{\mathbf{w}}^{(k)})^{\top}+\frac{\lambda_{0}n_{k}}{\lambda_{0}+n_{k}}(\bar{\mathbf{w}}^{(k)}-\boldsymbol{\mu}_{0})(\bar{\mathbf{w}}^{(k)}-\boldsymbol{\mu}_{0})^{\top},\]
\[\bar{\mathbf{w}}^{(k)}=\frac{1}{n_{k}}\sum_{m:z_{m}=k}\mathbf{w}_{m},\quad\nu_{k}=\nu_{0}+n_{k},\quad\mathbf{m}_{k}=\frac{\lambda_{0}\boldsymbol{\mu}_{0}+n_{k}\bar{\mathbf{w}}^{(k)}}{\lambda_{0}+n_{k}},\quad\lambda_{k}=\lambda_{0}+n_{k}.\]

Finally, we sample the DPMM concentration parameter \(\alpha\) using an augmentation scheme that makes sampling \(\alpha\) conditionally conjugate with a gamma prior (Escobar and West, 1995):

\[\eta\sim\mathrm{Beta}(\alpha+1,M),\quad\frac{\pi_{\eta}}{1-\pi_{\eta}}=\frac{a_{\alpha}+K-1}{M(b_{\alpha}-\log(\eta))},\quad K=\left|\{k:n_{k}>0\}\right|, \tag{25}\]
\[\alpha\sim\pi_{\eta}\,\mathrm{Ga}(a_{\alpha}+K,b_{\alpha}-\log(\eta))+(1-\pi_{\eta})\,\mathrm{Ga}(a_{\alpha}+K-1,b_{\alpha}-\log(\eta)).\]

Sampling the posterior of the other likelihood-specific parameters (if they exist), \(\boldsymbol{\theta}\), is the only part of our proposed model that requires a likelihood-specific approach. In some special cases, we have a Gibbs sampling update for the parameter, such as an inverse gamma prior on the variance parameter of a Gaussian likelihood; in general, \(\boldsymbol{\theta}\) can be sampled with gradient-based MCMC algorithms like the Hamiltonian Monte Carlo sampler (Duane et al., 1987).

We cannot obtain a closed-form optimal value of \(\mathbf{X}\) analytically, even in the GPLVM, but various approximations have been proposed. Previous work on GPLVMs infers the latent space by either taking a MAP estimate (Lawrence, 2005) or using a variational approximation (Damianou et al., 2016). However, such methods ignore or underestimate the uncertainty in the latent space and can be vulnerable to overfitting. Following prior work in deep Gaussian processes and GPLVMs, we instead take exact posterior samples of \(\mathbf{X}\) using the elliptical slice sampler (ESS; Gadd et al., 2021; Damianou and Lawrence, 2013; Sauer et al., 2020; Murray et al., 2010). In the ESS, we may take posterior draws of any parameter with a Gaussian prior, regardless of the likelihood.
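Concretely, a single ESS transition can be sketched as follows for a parameter with a zero-mean Gaussian prior; `log_lik` is an arbitrary user-supplied log-likelihood function, and the bracket-shrinking mechanics are described next.

```python
import numpy as np

rng = np.random.default_rng(2)

def ess_step(x, log_lik, prior_sample):
    """One elliptical slice sampling transition (Murray et al., 2010) for a
    parameter x with a zero-mean Gaussian prior and an arbitrary likelihood."""
    nu = prior_sample()                          # auxiliary draw from the prior
    log_y = log_lik(x) + np.log(rng.uniform())   # slice height
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        x_prop = x * np.cos(theta) + nu * np.sin(theta)  # point on the ellipse
        if log_lik(x_prop) > log_y:
            return x_prop                        # a new state is always reached
        if theta < 0.0:                          # shrink bracket toward current state
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)
```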
The elliptical slice sampler proposes posterior transitions by sampling new states along an ellipse passing through the current parameter state and a draw from the parameter's prior. If the new state is not accepted, the proposal bracket is shrunk until a new state is accepted. Moreover, the ESS does not require any tuning parameters, unlike typical Hamiltonian Monte Carlo samplers, and always transitions to a new state, unlike Metropolis-Hastings samplers. Therefore, we can use the same sampling procedure for the latent space regardless of the choice of likelihood. In the prior on the latent variable \(\mathbf{X}\), there are additional parameters governing the dynamic behavior. If we use the Gaussian process prior on \(\mathbf{X}\), then we use an inducing point approximation for the covariance kernel \(\mathbf{K}_{xx}\) to reduce the computational complexity of sampling from the prior from \(\mathcal{O}(N^{3})\) to \(\mathcal{O}(NM^{2})\) (Snelson and Ghahramani, 2005). To sample the posteriors of the GP hyperparameters and the locations of the inducing points, we again use the elliptical slice sampler on the Gaussian covariance parameters (Murray and Adams, 2010).

The Gaussian-distributed mapping weights, \(\boldsymbol{\beta}_{j}\), in the RFLVM are sampled as in a Bayesian linear model given \(\varphi_{w}(\mathbf{X})\). Only the posterior sampling of \(\boldsymbol{\beta}_{j}\) is likelihood dependent: when we cannot obtain a closed-form sample of the full conditional of \(\boldsymbol{\beta}_{j}\) (for example, when the likelihood is Poisson-distributed), we use the general elliptical slice sampler to take posterior draws, and when the likelihood is Gaussian, we analytically integrate out the weights. For certain count data likelihoods, the mapping weights enter the likelihood exactly as in linear logistic models, for which we may use Pólya-gamma augmentation to sample \(\boldsymbol{\beta}_{j}\) in closed form for binomial (Polson et al., 2013), negative binomial (Zhou et al., 2012), or multinomial likelihoods (Chen et al., 2013; Linderman et al., 2015). For these cases, writing \(\psi_{ij}\triangleq\varphi_{w}(\mathbf{x}_{i})^{\top}\boldsymbol{\beta}_{j}\), we may represent the likelihood as equal to

\[\prod_{i=1}^{N}c(y_{ij})\frac{(\exp(\psi_{ij}))^{a(y_{ij})}}{(1+\exp(\psi_{ij}))^{b(y_{ij})}}=\prod_{i=1}^{N}c(y_{ij})\,2^{-b_{ij}}e^{\kappa_{ij}\psi_{ij}}\int_{0}^{\infty}e^{-\omega\psi_{ij}^{2}/2}p(\omega)\,\mathrm{d}\omega, \tag{26}\]

where \(\kappa_{ij}=a_{ij}-b_{ij}/2\) and \(p(\omega)=\mathrm{PG}(\omega\mid b_{ij},0)\). A random variable \(\omega\) is Pólya-gamma distributed with parameters \(b>0\) and \(c\in\mathbb{R}\), denoted \(\omega\sim\mathrm{PG}(b,c)\), if

\[\omega\overset{d}{=}\frac{1}{2\pi^{2}}\sum_{k=1}^{\infty}\frac{g_{k}}{(k-1/2)^{2}+c^{2}/(4\pi^{2})}, \tag{27}\]

where \(\overset{d}{=}\) denotes equality in distribution and \(g_{k}\sim\mathrm{Ga}(b,1)\) are independent gamma random variables. Equation (26) allows us to rewrite the likelihood as proportional to a Gaussian, and we can sample \(\omega\) conditioned on \(\psi_{ij}\) as \(p(\omega\mid\psi_{ij})=\mathrm{PG}(b_{ij},\psi_{ij})\).
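For intuition, Equation (27) suggests a naive way to draw approximate PG variates by truncating the infinite sum; in practice one would use an exact sampler (e.g., the method of Polson et al., 2013), so the following is only a sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_pg_truncated(b, c, n_terms=200):
    """Approximate draw from PG(b, c) by truncating the sum in Eq. (27)."""
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(shape=b, scale=1.0, size=n_terms)   # g_k ~ Ga(b, 1)
    return np.sum(g / ((k - 0.5) ** 2 + c ** 2 / (4.0 * np.pi ** 2))) / (2.0 * np.pi ** 2)

omega = sample_pg_truncated(b=1.0, c=0.5)             # one augmentation draw
```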
The augmentation enables convenient, closed-form Gibbs sampling steps for \(\boldsymbol{\beta}_{j}\), conditioned on the Pólya-gamma augmentation variables \(\omega_{ij}\):

\[\begin{split}\omega_{ij}\mid\boldsymbol{\beta}_{j}&\sim\mathrm{PG}(b_{ij},\varphi_{w}(\mathbf{x}_{i})^{\top}\boldsymbol{\beta}_{j}),\quad\mathbf{V}_{\boldsymbol{\omega}_{j}}=(\varphi_{w}(\mathbf{X})^{\top}\boldsymbol{\Omega}_{j}\varphi_{w}(\mathbf{X})+\mathbf{B}_{0}^{-1})^{-1},\\ \boldsymbol{\beta}_{j}\mid\boldsymbol{\Omega}_{j}&\sim\mathcal{N}(\mathbf{m}_{\boldsymbol{\omega}_{j}},\mathbf{V}_{\boldsymbol{\omega}_{j}}),\quad\quad\mathbf{m}_{\boldsymbol{\omega}_{j}}=\mathbf{V}_{\boldsymbol{\omega}_{j}}(\varphi_{w}(\mathbf{X})^{\top}\boldsymbol{\kappa}_{j}+\mathbf{B}_{0}^{-1}\boldsymbol{\beta}_{0}),\end{split} \tag{28}\]

where \(\boldsymbol{\Omega}_{j}=\mathrm{diag}([\omega_{1j}\ldots\omega_{Nj}])\) and \(\boldsymbol{\kappa}_{j}=[\kappa_{1j}\ldots\kappa_{Nj}]^{\top}\). If we set \(a_{ij}=y_{ij}\) and \(b_{ij}=1\), then we have a sampler for binomial observations. Alternatively, if we set \(a_{ij}=y_{ij}\) and \(b_{ij}=y_{ij}+r_{j}\), then we have a sampler for negative binomial observations with dispersion parameter \(r_{j}\). Consider the negative binomial hierarchical model:

\[y_{nj}\sim\mathrm{NB}(r_{j},p_{nj}),\qquad r_{j}\sim\mathrm{Ga}(a_{0},1/h),\qquad h\sim\mathrm{Ga}(b_{0},1/g_{0}). \tag{29}\]

Then, Zhou and Carin (2012) showed that we can sample \(r_{j}\) as follows:

\[r_{j}\sim\mathrm{Ga}\left(L_{j},\frac{1}{-\sum_{n=1}^{N}\log(1-p_{nj})}\right), \tag{30}\]

where

\[L_{j}=\sum_{n=1}^{N}\sum_{\ell=1}^{\ell_{nj}}u_{n\ell},\qquad u_{n\ell}\sim\mathrm{Log}(p_{nj}),\qquad\ell_{nj}\sim\mathrm{Poisson}(-r_{j}\ln(1-p_{nj})), \tag{31}\]

and \(\mathrm{Log}(\cdot)\) denotes the logarithmic distribution.

In order to derive a Gibbs sampler for the multinomial likelihood, we first use a reparameterization of the likelihood (Holmes et al., 2006). We may rewrite the likelihood as

\[\begin{split} p(\mathbf{Y}\mid\mathbf{X},\boldsymbol{\beta},\mathbf{W})&=\prod_{i=1}^{N}\frac{\Gamma\left(\sum_{j=1}^{J}y_{ij}+1\right)}{\prod_{j=1}^{J}\Gamma\left(y_{ij}+1\right)}\prod_{j=1}^{J}\left(\frac{\exp\left\{\varphi_{\mathbf{W}}(\mathbf{x}_{i})\boldsymbol{\beta}_{j}\right\}}{\sum_{j^{\prime}=1}^{J}\exp\left\{\varphi_{\mathbf{W}}(\mathbf{x}_{i})\boldsymbol{\beta}_{j^{\prime}}\right\}}\right)^{y_{ij}}\\ &\propto\prod_{i=1}^{N}\prod_{j=1}^{J}\frac{\left(\exp\left\{\varphi_{\mathbf{W}}(\mathbf{x}_{i})\boldsymbol{\beta}_{j}-\xi_{ij}\right\}\right)^{y_{ij}}}{\left(1+\exp\left\{\varphi_{\mathbf{W}}(\mathbf{x}_{i})\boldsymbol{\beta}_{j}-\xi_{ij}\right\}\right)^{\sum_{j^{\prime}=1}^{J}y_{ij^{\prime}}}}\end{split} \tag{32}\]

where \(\xi_{ij}=\log\sum_{j^{\prime}\neq j}\exp\{\varphi_{\mathbf{W}}(\mathbf{x}_{i})\boldsymbol{\beta}_{j^{\prime}}\}\). By convention, and for identifiability, we set \(\boldsymbol{\beta}_{J}=\mathbf{0}\). We let \(\kappa_{ij}=y_{ij}-\sum_{j^{\prime}=1}^{J}y_{ij^{\prime}}/2\). Having written the likelihood in this form, we may use the Pólya-gamma augmentation trick again, so that the likelihood is proportional to:

\[p(\mathbf{Y}\mid\mathbf{X},\boldsymbol{\beta},\mathbf{W},\boldsymbol{\Omega})\propto\prod_{i=1}^{N}\prod_{j=1}^{J}\exp\Big\{\kappa_{ij}\Big(\varphi_{\mathbf{W}}(\mathbf{x}_{i})\boldsymbol{\beta}_{j}-\xi_{ij}\Big)-\frac{\omega_{ij}}{2}\Big(\varphi_{\mathbf{W}}(\mathbf{x}_{i})\boldsymbol{\beta}_{j}-\xi_{ij}\Big)^{2}\Big\}. \tag{33}\]
This gives the conditional posterior for \(\boldsymbol{\beta}_{j}\),

\[p(\boldsymbol{\beta}_{j}\mid\mathbf{y}_{j},\mathbf{X},\boldsymbol{\Omega}_{j})\propto p(\boldsymbol{\beta}_{j})\prod_{i=1}^{N}\exp\Big\{\kappa_{ij}\Big(\varphi_{\mathbf{W}}(\mathbf{x}_{i})\boldsymbol{\beta}_{j}-\xi_{ij}\Big)-\frac{\omega_{ij}}{2}\Big(\varphi_{\mathbf{W}}(\mathbf{x}_{i})\boldsymbol{\beta}_{j}-\xi_{ij}\Big)^{2}\Big\}, \tag{34}\]

which we rewrite as the closed-form update

\[\boldsymbol{\beta}_{j}\mid\boldsymbol{\omega}_{j}\sim\mathcal{N}(\mathbf{m}_{\boldsymbol{\omega}_{j}},\mathbf{V}_{\boldsymbol{\omega}_{j}}), \tag{35}\]

where

\[\begin{split}\boldsymbol{\Omega}_{j}&=\text{diag}([\omega_{1j},\ldots,\omega_{Nj}]),\\ \mathbf{V}_{\boldsymbol{\omega}_{j}}&=(\varphi_{w}(\mathbf{X})^{\top}\boldsymbol{\Omega}_{j}\varphi_{w}(\mathbf{X})+\mathbf{B}_{0}^{-1})^{-1},\\ \mathbf{m}_{\boldsymbol{\omega}_{j}}&=\mathbf{V}_{\boldsymbol{\omega}_{j}}(\varphi_{w}(\mathbf{X})^{\top}(\boldsymbol{\kappa}_{j}+\boldsymbol{\Omega}_{j}\boldsymbol{\xi}_{j})+\mathbf{B}_{0}^{-1}\boldsymbol{\beta}_{0}),\\ \kappa_{ij}&=y_{ij}-\frac{1}{2}\sum_{j^{\prime}=1}^{J}y_{ij^{\prime}}.\end{split} \tag{36}\]

We sample \(\boldsymbol{\omega}_{j}\) elementwise with

\[\omega_{ij}\mid\boldsymbol{\beta}_{j}\sim\text{PG}\left(\sum_{j^{\prime}=1}^{J}y_{ij^{\prime}},\,\varphi_{w}(\mathbf{x}_{i})\boldsymbol{\beta}_{j}-\xi_{ij}\right). \tag{37}\]

## 4 Experiments

To evaluate our proposed model, we first examine the ability of the RFLVM to recover, via MAP estimation, an S-shaped latent space underlying a synthetic data set generated from the prior generating process. Next, we evaluate the capacity of our latent variable model to place similar observations close together in the latent space, given ground-truth labels, for a variety of empirical data sets. Then, we apply our dynamic RFLVM to a collection of time series data sets, where we examine how well various static latent variable and dynamic state space models compete at imputing held-out data through exact posterior sampling2.

Footnote 2: Our code is available at [https://github.com/michaelzhang01/bayesian-rflvm](https://github.com/michaelzhang01/bayesian-rflvm).

We first evaluate the RFLVM on a simulated data set where the latent space, \(\mathbf{X}\), is set to be a 2-D S-shaped manifold, and the parameters and data are simulated from the prior generating process of a GPLVM with an RBF covariance kernel. For all simulations, we set \(N=500\), \(J=100\), and \(D=2\). In these experiments, we computed the mean-squared error (MSE) between test set observations, \(\mathbf{Y}_{*}\), and predicted observations \(\hat{\mathbf{Y}}_{*}\), where we held out 20% of the observations for the test set. We evaluated our model's ability to estimate the GP outputs \(f_{j}(\mathbf{X})\approx\varphi_{w}(\mathbf{X})\boldsymbol{\beta}_{j}\) by comparing the MSE between the estimated \(\varphi_{w}(\hat{\mathbf{X}})\boldsymbol{\beta}_{j}\) and the true generating \(f_{j}(\mathbf{X})\). Lastly, we computed the mean and standard deviation of the MSE by running each experiment five times. We compared the performance of a Gaussian RFLVM to the inducing point Bayesian GPLVM, which we will refer to simply as the "GPLVM" (Titsias and Lawrence, 2010)3.
We ran these experiments across multiple values of \(M\), where \(M\) denotes the number of random features for the RFLVM and the number of inducing points for the GPLVM. Both models accurately recovered the true latent variable \(\mathbf{X}\) and the non-linear maps, \(\mathbf{F}\) (Fig. 1, upper middle). The GPLVM shows better performance for estimating \(\mathbf{Y}_{*}\) than the RFLVM (Fig. 1, lower middle). We hypothesize that this could be because Nyström's method has better generalization error bounds than RFFs when there is a large gap in the eigenspectrum (Yang et al., 2012), which is the case for \(\mathbf{K}_{X}\). Moreover, we would expect variational methods like the GPLVM to produce more accurate point estimates for predictions than sampling-based methods like the RFLVM. Nonetheless, we see that the RFLVM approximates the true \(\mathbf{K}_{X}\) given enough random features (Fig. 1, right), even though it may be less accurate than the GPLVM (Fig. 1, lower middle).

Next, we evaluate our model's performance on count-valued data. We first compared results for simulated count data, sampled from the Poisson RFLVM's prior data generating process, against the following benchmarks: PCA, nonnegative matrix factorization (NMF, Lee and Seung, 1999), hierarchical Poisson factorization (HPF, Gopalan et al., 2015), latent Dirichlet allocation (LDA, Blei et al., 2003), variational autoencoder (VAE, Kingma and Welling, 2013), deep count autoencoder (DCA, Eraslan et al., 2019), negative binomial VAE (NBVAE, Zhao et al., 2020), and Isomap (Balasubramanian et al., 2002). We refer to the Poisson-distributed GPLVM using a double Laplace approximation as _DLA-GPLVM_ (Wu et al., 2017). The DLA-GPLVM is designed to model multi-neuron spike train data; its reference code4 initializes the latent space using the output of a Poisson linear dynamical system (Macke et al., 2011) and places a GP prior on \(\mathbf{X}\). To make all GPLVM experiments comparable, we initialize the DLA-GPLVM with PCA and assume \(\mathbf{x}_{n}\sim\mathcal{N}_{D}(\mathbf{0},\mathbf{I})\).

Footnote 4: [https://github.com/waq1129/LMT](https://github.com/waq1129/LMT)

We refer to our GPLVM with random Fourier features as _RFLVM_ and explicitly state the assumed distribution. In Sec. 4, we use a Gaussian RFLVM with the linear coefficients \(\{\boldsymbol{\beta}_{j}\}_{j=1}^{J}\) marginalized out for a fairer comparison with the GPLVM. Since tuning the hyperparameters of our model on each dataset would be both time-consuming and unfair without also tuning the baselines, we fixed the hyperparameters across experiments. We used 2000 Gibbs sampling iterations with 1000 burn-in steps, \(M=100\) random features, and a latent dimensionality fixed to \(D=2\). We initialized the number of mixture components to \(K=20\) and the concentration parameter to \(\alpha=1\). Additionally, we compared results to our own naive implementation of the Poisson GPLVM, which performs coordinate ascent on \(\mathbf{X}\) and \(\mathbf{F}\) by iteratively taking MAP estimates without using RFFs. We refer to this method as _MAP-GPLVM_.

We found that the Poisson RFLVM infers a latent variable that is more similar to the true latent structure than other methods (Fig. 2). Linear methods such as PCA and NMF lack the flexibility to capture this non-linear space, while non-linear but Gaussian methods such as Isomap and VAEs recover smooth latent spaces that lack the original structure.
The MAP-GPLVM appears to get stuck in poor local modes (see Wu et al., 2017) because we do not have gradients of the posterior in closed form. Both the DLA-GPLVM and the RFLVM, however, do have closed-form gradients and approximate the true manifold with similar results.

Next, we turn to a qualitative analysis of the inferred latent space for a wide class of state space models, compared with our dynamic RFLVMs, on the synthetic S-curve data sampled from the aforementioned Poisson GPLVM data generating process. In this analysis, we compare our dynamic RFLVMs with PCA, the Gaussian process dynamical model (Wang et al., 2007), a recurrent neural network (Hochreiter and Schmidhuber, 1997), an unscented Kalman filter (Wan and van der Merwe, 2000), and a deep GP (Damianou and Lawrence, 2013)5. We see in Fig. 6 that the RFLVMs and the DLA-GPLVM accurately estimate the S-curve in the latent space, whereas the other competing dynamic methods (GPDM, RNN, UKF, and deep GP) and PCA cannot. Again, we see that correct model specification is important for properly estimating the latent space.

### Text, image, and time series data

Next, we examine whether an RFLVM captures the latent space of text, image, and empirical data sets. We hold out the labels and use them to evaluate the estimated latent space using \(K\)-nearest neighbors (KNN) classification on \(\hat{\mathbf{X}}\) with \(K=1\). We ran this classification five times using 5-fold cross validation and report the mean and standard deviation of KNN accuracy across the five experiments. Across all eight data sets, the Poisson and negative binomial RFLVMs infer a low-dimensional latent variable \(\hat{\mathbf{X}}\) that generally captures the latent structure as well as or better than linear methods like PCA and NMF (Lee and Seung, 1999). Moreover, models that add non-linearity but retain a Gaussian data likelihood, such as the real-valued Isomap (Tenenbaum et al., 2000), a variational autoencoder (VAE, Kingma and Welling, 2013), and the Gaussian RFLVM, or even the Poisson-likelihood DLA-GPLVM, perform worse than the Poisson and negative binomial RFLVMs (Tab. 1, Figs. 3, 4, 5). The point of these results is not that RFLVMs are the best method for every dataset, a spurious claim given "no free lunch" theorems (Wolpert and Macready, 1997), but rather that our framework allows for the easy implementation of a large number of non-conjugate GPLVMs. Thus, RFLVMs are useful as a first tool for non-linear dimension reduction on non-Gaussian data. We posit that our improved performance arises because the generating process from the latent space to the observations for these data sets is (in part) non-linear, non-RBF, and integer-valued.
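The evaluation protocol used for Table 1 can be sketched as follows, assuming scikit-learn; `X_hat` is the estimated latent space, `labels` the held-out class labels, and the function name is ours, not from the paper's codebase.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_accuracy(X_hat, labels, n_runs=5):
    """Mean/sd of 1-NN classification accuracy over repeated 5-fold CV."""
    means = []
    for run in range(n_runs):
        clf = KNeighborsClassifier(n_neighbors=1)
        cv = KFold(n_splits=5, shuffle=True, random_state=run)
        means.append(cross_val_score(clf, X_hat, labels, cv=cv).mean())
    return np.mean(means), np.std(means)
```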
As in Sec. 4, we turn to a qualitative analysis of the latent space estimated from human motion capture data. We compare the latent spaces of dynamical models on recordings from the CMU Graphics Lab Motion Capture Database, which consist of real-valued data recorded from people performing a variety of actions. In particular, we look at a recording of someone jumping forward for several leaps, rotating 180 degrees, and then jumping several leaps again; in the recorded data that we analyze, the person makes four total laps (see Fig. 7).

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & PCA & NMF & HPF & LDA & VAE & DCA \\ \hline Bridges & \(0.8469\pm 0.0067\) & \(\mathbf{0.8664\pm 0.0164}\) & \(0.7860\pm 0.0328\) & \(0.6747\pm 0.0412\) & \(0.8141\pm 0.0301\) & \(0.7093\pm 0.0317\) \\ CIFAR-10 & \(0.2651\pm 0.0019\) & \(0.2450\pm 0.0028\) & \(0.2516\pm 0.0074\) & \(0.2248\pm 0.0040\) & \(0.2711\pm 0.0083\) & \(0.2538\pm 0.0178\) \\ Congress & \(0.5558\pm 0.0098\) & \(0.5263\pm 0.0108\) & \(0.6941\pm 0.0537\) & \(0.7354\pm 0.1018\) & \(0.6563\pm 0.0314\) & \(0.5017\pm 0.0674\) \\ MNIST & \(0.3794\pm 0.0146\) & \(0.2764\pm 0.0197\) & \(0.3832\pm 0.0370\) & \(0.2176\pm 0.0387\) & \(0.6512\pm 0.0228\) & \(0.1620\pm 0.0976\) \\ Montreal & \(0.6802\pm 0.0099\) & \(0.6878\pm 0.0207\) & \(0.6144\pm 0.1662\) & \(0.6238\pm 0.0271\) & \(0.6702\pm 0.0325\) & \(0.6601\pm 0.0997\) \\ Newsgroups & \(0.3896\pm 0.0043\) & \(0.3892\pm 0.0042\) & \(0.3921\pm 0.0122\) & \(0.3261\pm 0.0193\) & \(0.3926\pm 0.0113\) & \(0.4000\pm 0.0153\) \\ Spam & \(0.8454\pm 0.0037\) & \(0.8237\pm 0.0040\) & \(0.8719\pm 0.0353\) & \(0.8699\pm 0.0236\) & \(0.9028\pm 0.0128\) & \(0.8920\pm 0.0414\) \\ Yale & \(0.5442\pm 0.0129\) & \(0.4739\pm 0.0135\) & \(0.5200\pm 0.0071\) & \(0.3261\pm 0.0193\) & \(0.6327\pm 0.0209\) & \(0.2861\pm 0.0659\) \\ \hline & NBVAE & Isomap & DLA-GPLVM & Poisson RFLVM & Neg. binom. RFLVM & Multinomial RFLVM \\ \hline Bridges & \(0.7485\pm 0.0613\) & \(0.8375\pm 0.0240\) & \(0.8578\pm 0.0101\) & \(0.8440\pm 0.0165\) & \(\mathbf{0.8664\pm 0.0191}\) & \(0.7984\pm 0.0102\) \\ CIFAR-10 & \(0.2671\pm 0.0048\) & \(0.2716\pm 0.0056\) & \(0.2641\pm 0.0063\) & \(\mathbf{0.2789\pm 0.0080}\) & \(0.2656\pm 0.0048\) & \(0.2652\pm 0.0024\) \\ Congress & \(\mathbf{0.8541\pm 0.0074}\) & \(0.5239\pm 0.0178\) & \(0.7815\pm 0.0185\) & \(0.7673\pm 0.0109\) & \(0.8093\pm 0.0154\) & \(0.6516\pm 0.0385\) \\ MNIST & \(0.2918\pm 0.0174\) & \(0.4408\pm 0.0192\) & \(0.3820\pm 0.0121\) & \(0.6491\pm 0.0210\) & \(0.4463\pm 0.0313\) & \(0.3794\pm 0.0153\) \\ Montreal & \(0.7246\pm 0.0131\) & \(0.7049\pm 0.0098\) & \(0.2885\pm 0.0001\) & \(\mathbf{0.8158\pm 0.0210}\) & \(0.7530\pm 0.0478\) & \(0.7555\pm 0.0784\) \\ Newsgroups & \(0.4079\pm 0.0080\) & \(0.4021\pm 0.0098\) & \(0.3687\pm 0.0077\) & \(\mathbf{0.4144\pm 0.0029}\) & \(0.4045\pm 0.0044\) & \(0.4076\pm 0.0039\) \\ Spam & \(\mathbf{0.9570\pm 0.0045}\) & \(0.8272\pm 0.0047\) & \(0.9521\pm 0.0069\) & \(0.9515\pm 0.0023\) & \(0.9443\pm 0.0035\) & \(0.9397\pm 0.0015\) \\ Yale & \(0.5261\pm 0.0344\) & \(0.5891\pm 0.0155\) & \(0.4788\pm 0.0991\) & \(\mathbf{0.6894\pm 0.0295}\) & \(0.5394\pm 0.0117\) & \(0.5441\pm 0.0059\) \\ \hline \hline \end{tabular} \end{table}
Table 1: Classification accuracy evaluated by fitting a KNN classifier (\(K=1\)) with five-fold cross validation. Mean accuracy and standard deviation were computed by running each experiment five times.

The three-dimensional latent spaces estimated for the motion capture data across a variety of models show that the GPDM's latent space looks similar to the PCA result, which is what we use to initialize each of the models (Figure 8). The RNN, UKF, and deep GP produce latent spaces that are not particularly interpretable and fail to reflect the dynamics of the observed data.
However, results from the dynamic Gaussian RFLVM indeed reflect the four laps of jumps observed in the motion capture data.

### Missing data imputation

Next, we evaluate the performance of our dynamic RFLVM against other popular dynamical models in a missing data imputation setting. Here, we randomly hold out 20% of the entries \(y_{ij}\) of the observed data, \(\mathbf{Y}\), and impute each missing value, \(y_{ij}^{\text{m}}\), using its posterior expected value, \(\mathbb{E}\left[y_{ij}^{\text{m}}\mid\mathbf{Y}^{\text{obs}},\mathbf{X},-\right]\), where \(\mathbf{Y}^{\text{obs}}=\{y_{ij}:y_{ij}\text{ is observed}\}\). In these experiments, we used the missing data imputation function for the Bayesian GPLVM from GPy to predict the posterior predictive mean of the missing data, and we implemented a missing value imputer for probabilistic PCA (Tipping and Bishop, 1999), the GPDM, a linear dynamical model (essentially a GPDM with a linear kernel, \(\mathbf{K}_{X}=\mathbf{X}\mathbf{X}^{T}\)), a static RFLVM, and the dynamic RFLVM. In Table 3 we report the mean squared error for missing data imputation averaged over five trials, with one standard error. The data sets that we examine in this experiment include the S-curve data set used previously, the synthetic Lorenz attractor typically used in state space model evaluation, the aforementioned CMU data set, and a data set of foreign currency values across time6. We use the same experimental settings as in the previous sections. With the exception of the CMU data set, our dynamic RFLVM performs best on this task in terms of mean squared error. Thus, the RFLVM is capable of performing well in a predictive setting, as well as capturing the latent space, for a wide variety of synthetic and empirical data sets.

Footnote 6: [https://www.kaggle.com/datasets/brunotly/foreign-exchange-rates-per-dollar-20002019](https://www.kaggle.com/datasets/brunotly/foreign-exchange-rates-per-dollar-20002019).

### Scalability

To assess the scalability of RFLVMs, we computed the wall-time in minutes required to fit both the RFLVMs and the benchmarks (Table 2). For both the VAE and the deep count autoencoder, we trained the neural networks for 2000 iterations (the default in the software package7). For the DLA-GPLVM, we ran the optimizer for 50 iterations (the default in the software package8). For RFLVMs, we ran the Gibbs samplers for 100 iterations. While the results in Table 1 were run for 2000 Gibbs sampling iterations to ensure convergence for all datasets, we found empirically that reducing the number of iterations to 100 did not significantly change the results. All experiments in this section were run on the Princeton University computing cluster, using only CPUs for computation. We find that RFLVMs are indeed slower than most methods, but not substantially so. For example, on the CIFAR-10 dataset, a VAE takes 23.7 minutes, while a Poisson RFLVM takes 22.9 minutes and a negative binomial RFLVM takes 55.7 minutes. The DLA-GPLVM is the slowest, taking 69.8 minutes.

Footnote 7: [https://github.com/theislab/dca](https://github.com/theislab/dca)

Footnote 8: [https://github.com/waq1129/LMT](https://github.com/waq1129/LMT)

## 5 Conclusion

The GPLVM is popular for non-linear latent variable modeling due to its elegant probabilistic formulation, but inference under a non-Gaussian data likelihood has proven challenging because posterior inference for the latent variables is doubly intractable.
In this paper, we introduced a Bayesian method for non-linear latent variable modeling that is designed to accommodate a wide spectrum of data likelihoods. Our approach emulates the GPLVM using a random Fourier feature approximation of the covariance kernel. Using random Fourier features allows us to sample from the posterior distribution of the covariance kernel and the posterior of the latent space for a wide variety of data likelihoods. We show that our posterior samples effectively learn the latent dynamics in synthetic and empirical time-series data, as well as accurately predict held-out missing data compared to popular latent variable and state space models. Currently, we are extending the RFLVM to incorporate sparsity in the latent space as well as to automatically select the latent dimensionality using an Indian buffet process prior (Zhang, 2022). In future work, we would like to extend our dynamic random feature latent variable model to accommodate non-stationary behavior in the dynamics, and to further investigate the problem of modeling neural spike train data, where the observations are sparse time-series counts.

\begin{table} \begin{tabular}{l c c c c c c} \hline & PCA & NMF & HPF & LDA & VAE & DCA \\ \hline Bridges & \(0.0186\pm 0.0005\) & \(0.0182\pm 0.0012\) & \(0.0273\pm 0.0002\) & \(0.0528\pm 0.0067\) & \(1.8193\pm 0.0708\) & \(0.5740\pm 0.0255\) \\ CIFAR-10 & \(0.4398\pm 0.0743\) & \(0.4151\pm 0.0123\) & \(1.0894\pm 0.0500\) & \(0.8674\pm 0.0199\) & \(23.6707\pm 0.3789\) & \(1.1341\pm 0.0540\) \\ Congress & \(0.0244\pm 0.0002\) & \(0.0245\pm 0.0007\) & \(0.7296\pm 0.0824\) & \(0.0846\pm 0.0221\) & \(4.2919\pm 0.0539\) & \(0.5448\pm 0.0134\) \\ MNIST & \(0.2368\pm 0.0064\) & \(0.2522\pm 0.0273\) & \(1.0004\pm 0.1880\) & \(0.3264\pm 0.0237\) & \(15.3385\pm 1.8402\) & \(0.8719\pm 0.0018\) \\ Montreal & \(0.0171\pm 0.0008\) & \(0.0164\pm 0.0001\) & \(0.0523\pm 0.0350\) & \(0.0632\pm 0.0065\) & \(2.0585\pm 0.0947\) & \(0.5028\pm 0.0120\) \\ Newsgroups & \(0.0219\pm 0.0006\) & \(0.0227\pm 0.0000\) & \(0.1757\pm 0.0215\) & \(0.1163\pm 0.0344\) & \(6.8089\pm 0.7869\) & \(0.8551\pm 0.0527\) \\ Spam & \(0.0230\pm 0.0004\) & \(0.0235\pm 0.0012\) & \(0.3039\pm 0.0149\) & \(0.1262\pm 0.0381\) & \(6.8484\pm 0.7796\) & \(0.7146\pm 0.0453\) \\ Yale & \(0.0884\pm 0.0003\) & \(0.0984\pm 0.0064\) & \(0.3774\pm 0.0181\) & \(0.1381\pm 0.0072\) & \(5.5177\pm 0.1645\) & \(0.6410\pm 0.0223\) \\ \hline & NBVAE & Isomap & DLA-GPLVM & Poisson RFLVM & Neg. binom. RFLVM & Multinomial RFLVM \\ \hline Bridges & \(0.0867\pm 0.0157\) & \(0.0098\pm 0.0018\) & \(0.5182\pm 0.0206\) & \(0.3318\pm 0.0135\) & \(0.4915\pm 0.0502\) & \(0.5715\pm 0.0473\) \\ CIFAR-10 & \(2.1002\pm 0.0594\) & \(0.4366\pm 0.0034\) & \(69.7898\pm 2.4\) & \(22.9299\pm 1.2624\) & \(55.6701\pm 2.6837\) & \(59.8926\pm 9.9910\) \\ Congress & \(1.5888\pm 0.0725\) & \(0.0226\pm 0.0005\) & \(45.8584\pm 2.9771\) & \(1.9835\pm 0.1041\) & \(20.4514\pm 0.3995\) & \(0.94656\pm 2.7319\) \\ MNIST & \(2.1104\pm 0.1020\) & \(0.2148\pm 0.0019\) & \(24.6745\pm 1.5429\) & \(17.8148\pm 0.0493\) & \(33.8967\pm 1.4385\) & \(74.3100\pm 2.1778\) \\ Montreal & \(0.0819\pm 0.0009\) & \(0.0080\pm 0.0001\) & \(0.8723\pm 0.0237\) & \(0.5006\pm 0.0143\) & \(0.9291\pm 0.0434\) & \(0.8769\pm 0.0376\) \\ Newsgroups & \(0.7432\pm 0.0248\) & \(0.0721\pm 0.0008\) & \(1088.2659\pm 35.5089\) & \(2.6302\pm 0.0463\) & \(3.2600\pm 0.0892\) & \(2.8393\pm 0.1525\) \\ Spam & \(1.8411\pm 0.0283\) & \(0.0795\pm 0.0036\) & \(440.5968\pm 26.7441\) & \(1.6039\pm 0.4018\) & \(17.9958\pm 2.8573\) & \(19.0018\pm 2.4612\) \\ Yale & \(0.7931\pm 0.0589\) & \(0.0402\pm 0.0026\) & \(6.7210\pm 0.1193\) & \(9.8992\pm 0.5530\) & \(21.6030\pm 0.8839\) & \(45.4209\pm 4.4139\) \\ \hline \end{tabular} \end{table}
Table 2: Wall-time in minutes for model fitting. Mean and standard error were computed by running each experiment five times.

\begin{table} \begin{tabular}{l|c c c c} & CMU & S-Curve & Lorenz & Forex \\ \hline GPLVM & \(\mathbf{0.071\pm 0.014}\) & \(0.479\pm 0.094\) & \(1.641\pm 0.205\) & \(1.0019\pm 0.0038\) \\ PPCA & \(0.084\pm 0.001\) & \(0.761\pm 0.013\) & \(1.227\pm 0.012\) & \(0.3237\pm 0.0019\) \\ GPDM & \(0.183\pm 0.008\) & \(0.390\pm 0.009\) & \(1.316\pm 0.016\) & \(0.2327\pm 0.0035\) \\ LDM & \(0.088\pm 0.001\) & \(0.768\pm 0.013\) & \(1.256\pm 0.005\) & \(0.3605\pm 0.0041\) \\ RFLVM & \(0.093\pm 0.001\) & \(0.367\pm 0.006\) & \(1.221\pm 0.010\) & \(0.3289\pm 0.0023\) \\ DRFLVM & \(0.075\pm 0.001\) & \(\mathbf{0.344\pm 0.007}\) & \(\mathbf{1.198\pm 0.008}\) & \(\mathbf{0.1317\pm 0.0021}\) \\ \end{tabular} \end{table}
Table 3: Mean squared error of imputed held-out missing data for time series data. Mean and standard error were computed by running each experiment five times.

Figure 1: **Simulated data with Gaussian emissions.** (Left) Inferred latent variables for both a GPLVM and a Gaussian RFLVM. (Upper middle) Comparison of estimated \(f_{j}(\mathbf{X})\) for a single feature as estimated by the GPLVM and the RFLVM. (Lower middle) Comparison of MSE reconstruction error on held-out \(\mathbf{Y}_{*}\) for increasing \(M\), where \(M\) is the number of inducing points for the GPLVM and the number of random Fourier features for the RFLVM. (Right) Ground truth covariance matrix \(\mathbf{K}_{X}\) compared with the RFLVM estimate for increasing \(M\).

Figure 2: **Simulated data with Poisson emissions.** (Top) True latent variable \(\mathbf{X}\) compared with inferred latent variables \(\hat{\mathbf{X}}\) from benchmarks (see text for abbreviations) and a Poisson RFLVM. (Bottom) Distance matrices between the true \(\mathbf{X}\) and \(\hat{\mathbf{X}}\) from the above benchmarks (darker is farther away).

Figure 3: **MNIST digits.** Digits visualized in the 2-D latent space inferred from the DLA-GPLVM (left) and the Poisson RFLVM (right). Following Lawrence (2004), we plotted images in a random order while not plotting any images that result in an overlap. The RFLVM's latent space is visualized as a histogram of 1000 draws after burn-in. The plotted points are the posterior sample mean.

Figure 4: **Yale face data set.** Face data visualized in the 2-D latent space using a Poisson RFLVM (left). Synthetic faces for the Yale dataset sampled from the posterior data generating process using a Poisson RFLVM (right).

Figure 5: **CIFAR-10 and MNIST images.** CIFAR-10 image data set visualized in the 2-D latent space using a Poisson RFLVM (left). Synthetic digits for MNIST sampled from the posterior data generating process using a Poisson RFLVM (right).

Figure 6: **Simulated data with Poisson emissions.** Latent dynamic spaces for the S-curve across nine methods, labeled in the figure. The color axis refers to the time index.

Figure 7: **CMU motion capture data.** Selected observations from the motion capture data.

Figure 8: **CMU motion capture data.** Latent dynamic space for the motion capture data across six methods, labeled in the figure. The color axis refers to the time index.
2307.07348
Mott-Enhanced Exciton Condensation in a Hubbard bilayer
We study the conditions to realize an excitonic condensed phase in an electron-hole bilayer system with local Hubbard-like interactions at half-filling, where we can address the interplay with Mott localization. Using Dynamical Mean-Field Theory, we find that an excitonic state is stable in a sizeable region of a phase diagram spanned by the intra-layer (U) and inter-layer (V) interactions. The latter term is expected to favour the excitonic phase which is indeed found in a slice of the phase diagram with V > U . Remarkably, we find that when U is large enough, the excitonic region extends also for U > V in contrast with naive expectations. The extended stability of the excitonic phase can be linked to in-layer Mott localization and inter-layer spin correlations. Using a mapping to a model with attractive inter-layer coupling, we fully characterize the condensate phase in terms of its superconducting counterpart, thereby addressing its coherence and correlation length.
Samuele Giuli, Adriano Amaricci, Massimo Capone
2023-07-14T13:57:22Z
http://arxiv.org/abs/2307.07348v1
# Mott-Enhanced Exciton Condensation in a Hubbard bilayer

###### Abstract

We study the conditions to realize an excitonic condensed phase in an electron-hole bilayer system with local Hubbard-like interactions at half-filling, where we can address the interplay with Mott localization. Using Dynamical Mean-Field Theory, we find that an excitonic state is stable in a sizeable region of a phase diagram spanned by the intra-layer (\(U\)) and inter-layer (\(V\)) interactions. The latter term is expected to favour the excitonic phase, which is indeed found in a slice of the phase diagram with \(V>U\). Remarkably, we find that when \(U\) is large enough, the excitonic region extends also for \(U>V\), in contrast with naive expectations. The extended stability of the excitonic phase can be linked to in-layer Mott localization and inter-layer spin correlations. Using a mapping to a model with attractive inter-layer coupling, we fully characterize the condensate phase in terms of its superconducting counterpart, thereby addressing its coherence and correlation length.

## I Introduction

The condensation of excitons in a macroscopic quantum state was proposed soon after the success of the BCS theory of superconductivity [1; 2], owing to the similarities between Cooper pairs, created by the binding of two electrons, and excitons, bound states formed by an electron and a hole. However, the observation of excitonic phases has long eluded experimental efforts, mainly because of the short lifetimes of excitons due to electron-hole recombination processes. Developments in the engineering of devices and heterostructures have provided ideal platforms to observe exciton condensation (EC), which has indeed been proposed and reported in quantum-Hall bilayers [3; 4], graphene double bilayers [5; 6; 7; 8] and semiconductor quantum wells [9; 10]. Excitonic ordering has also recently been reported in bulk solids [11; 12; 13; 14; 15; 16; 17; 18].

Bilayer structures are arguably ideal platforms to observe condensation of spatially indirect excitons composed of holes and electrons belonging to different layers, for which recombination is essentially inhibited by the presence of a dielectric material between the layers. Quantum Monte Carlo calculations for electron-hole gases coupled by the long-range Coulomb interaction [19; 20; 21] have indeed shown that an excitonic phase is stable at very low densities, a result which has been confirmed by simulations of double bilayer graphene [5; 6]. In an analogous lattice model with local interactions, some indication of exciton condensation has been found away from half-filling [22] and in the half-filled system when the inter-layer interaction is larger than the intra-layer repulsion [23; 24]. Similar models have been investigated using Dynamical Mean-Field Theory (DMFT). In Ref. [25] the competition between EC and s-wave superconductivity has been addressed in a model without intra-layer repulsion. A variety of two-orbital models including, e.g., energy splitting between bands, the Hund's coupling and non-trivial topology have also been found to host excitonic states in some regions of parameters [26; 27; 28; 29; 30; 31].

In this work we aim at identifying a generic mechanism connecting strong-correlation physics and excitonic phases, which can be used to gain a deeper insight into results for more involved and richer models of specific systems.
In particular, we address the interplay between the EC and Mott physics, the most direct fingerprint of correlations, in an idealized model for an electron-hole bilayer system with local Hubbard-like interactions. Our focus is on the relative role of the intra-layer (\(U\)) and inter-layer (\(V\)) interactions. We consider the system at half-filling, where a Mott transition can take place, so that our phase diagram will be characterized by the competition/interplay between Mott insulating and EC phases.

The paper is organized as follows: In Sec. II we introduce the model, our implementation of Dynamical Mean-Field Theory and the relevant observables we consider. In Sec. III we present the normal-phase results where we discard excitonic ordering, while Sec. IV is devoted to the results for the EC phase. Sec. V reports our concluding remarks.

## II Model and method

We consider a two-layer Hubbard model with a local interaction term:

\[\begin{split} H=&-\sum_{\langle ij\rangle\sigma m}t_{m}c^{\dagger}_{i\sigma m}c_{j\sigma m}+H.c.-\mu\sum_{i\sigma m}n_{i\sigma m}\\ &+U\sum_{im}n^{\prime}_{i\uparrow m}n^{\prime}_{i\downarrow m}+V\sum_{i\sigma\sigma^{\prime}}n^{\prime}_{i\sigma A}n^{\prime}_{i\sigma^{\prime}B}\end{split} \tag{1}\]

where \(c_{i\sigma m}\) (\(c^{\dagger}_{i\sigma m}\)) is the annihilation (creation) operator of an electron on site \(i\), layer \(m=A,B\) and with spin \(\sigma\), \(n_{i\sigma m}\) is the number operator, and \(n^{\prime}_{i\sigma m}=n_{i\sigma m}-1/2\) is introduced to write the model in a particle-hole symmetric form, which implies that both bands are half-filled for \(\mu=0\). We set \(t_{A}=t\) and \(t_{B}=\alpha t_{A}\). In our calculations we will consider \(\alpha=-1\) in order to describe an electron-like band (A) and a hole-like band (B). \(U\) and \(V\) are both positive and they measure the intra-layer and inter-layer local screened Coulomb repulsion.

We will study an excitonic state characterized by a uniform (\(q=0\)) spin-singlet excitonic order parameter (EOP)

\[\Delta_{0}=\frac{1}{N}\sum_{i\sigma}\langle c_{iA\sigma}^{\dagger}c_{iB\sigma}\rangle \tag{2}\]

which is expected to be degenerate with its spin-triplet counterparts due to the SU(2)\(\times\)SU(2) spin symmetry of our model. Models including other interaction terms and material-specific features may favour one or the other spin symmetry [28; 29; 31].

We solve the model at zero temperature using DMFT [32], a state-of-the-art method which treats different interactions non-perturbatively and is particularly well suited to study the Mott transition [32], strongly correlated metallic phases, as well as superconductivity and other broken-symmetry states. Within DMFT the lattice model is mapped onto an impurity model which has to be solved self-consistently, requiring that the impurity Green's function coincides with the local component of the lattice Green's function. We solve the impurity model at \(T=0\) using Lanczos/Arnoldi exact diagonalization (ED) [33; 34; 35]. As customary in the DMFT community, we consider a Bethe lattice with a semicircular density of states \(N_{m}(\epsilon)=\frac{2}{\pi D_{m}^{2}}\sqrt{D_{m}^{2}-\epsilon^{2}}\), where \(D_{m}\propto t_{m}\) is the half-bandwidth. In order to study the EC phase, the bath of the impurity model has to include an excitonic amplitude, analogously to the superconducting case.
Using a spinorial representation \(\Psi_{k\sigma}^{\dagger}=(c_{k\sigma A}^{\dagger},c_{k\sigma B}^{\dagger})\), where \(k=0\) identifies the impurity and \(k=1,\ldots,N_{bath}\) the bath levels, we can write the impurity Hamiltonian as

\[H_{imp}^{(0)}=\sum_{k\sigma}\begin{pmatrix}\Psi_{k\sigma}^{\dagger}&\Psi_{0\sigma}^{\dagger}\end{pmatrix}\begin{pmatrix}\mathcal{H}_{k\sigma}&V_{k}\cdot\mathbb{I}_{2}\\ V_{k}\cdot\mathbb{I}_{2}&0\end{pmatrix}\begin{pmatrix}\Psi_{k\sigma}\\ \Psi_{0\sigma}\end{pmatrix} \tag{3}\]

where \(\mathbb{I}_{2}\) is the \(2\times 2\) identity and

\[\mathcal{H}_{k\sigma}=\begin{pmatrix}\epsilon_{k}+M_{k}&P_{k}\\ P_{k}&\epsilon_{k}-M_{k}\end{pmatrix} \tag{4}\]

where \(P_{k}\) is the inter-orbital excitonic hybridization term in the bath Hamiltonian, \(\epsilon_{k}+(-)M_{k}\) is the bath energy on orbital \(A\) (\(B\)), and \(V_{k}\) is the hybridization between the impurity and bath site \(k\). Within ED-DMFT we have to limit the number of bath sites to be able to solve the impurity model, and we fix \(N_{bath}=4\). We fix the system at global half-filling, \(\langle\sum_{\sigma m}n_{\sigma m}\rangle=2\), by imposing \(\mu=0\). Since we focus on orbitals with opposite dispersion relations, we also fix \(\epsilon_{k}=0\;\;\forall k\), and since we focus on states with orbital half-filling, for each parameter \(M_{k}\) on bath site \(k\) there must be another bath site \(k^{\prime}\) with opposite energy \(M_{k^{\prime}}=-M_{k}\).

## III Normal state

We start our investigation from the normal state, where we inhibit excitonic ordering as well as any other broken-symmetry state like antiferromagnetism or staggered orbital ordering. This is a standard strategy which has helped to understand the Mott transition by disentangling Mott localization from magnetic ordering [32]. For our model, a normal-state phase diagram has been reported in Ref. [36], but we find it useful to present our results in order to emphasize the aspects which help to better address the excitonic phase.

The model is expected to feature two different Mott-insulating solutions, which we can easily understand from the atomic (\(t_{m}=0\)) limit. Among all configurations with two electrons per site, the four with one electron in each layer, \(|\uparrow,\downarrow\rangle\), \(|\downarrow,\uparrow\rangle\), \(|\uparrow,\uparrow\rangle\) and \(|\downarrow,\downarrow\rangle\), have energy \(E_{11}=-\frac{1}{2}U\), while the two configurations with two electrons in the same layer, \(|\uparrow\downarrow,0\rangle\) and \(|0,\uparrow\downarrow\rangle\), have energy \(E_{20}=\frac{1}{2}U-V\). Therefore the former set of states is favoured for \(U>V\) and the latter for \(U<V\). Hence, when \(U\) and \(V\) are much larger than the hopping and \(U>V\), we expect an insulator with one electron on every site of each layer. This state, which we label U-Mott (U-MI), is expected to be unstable towards antiferromagnetic ordering if we allow for symmetry breaking. On the other hand, for \(V>U\) we have an insulator where every site is in a mixture of the two solutions with one doubly occupied layer. This state, henceforth V-Mott (V-MI), would be naturally unstable towards a staggered orbital (layer) ordering.
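The atomic-limit energetics above can be checked directly by enumerating the two-particle configurations of a single site; the couplings in this minimal sketch are illustrative.

```python
import itertools

def atomic_energy(occ, U, V):
    """Atomic (t_m = 0) energy of a configuration of model (1).
    occ = (n_upA, n_dnA, n_upB, n_dnB); n' = n - 1/2."""
    nuA, ndA, nuB, ndB = (n - 0.5 for n in occ)
    e_U = U * (nuA * ndA + nuB * ndB)
    e_V = V * (nuA + ndA) * (nuB + ndB)   # the sum over sigma, sigma' factorizes
    return e_U + e_V

U, V = 3.0, 2.0                           # illustrative couplings with U > V
two_particle = [occ for occ in itertools.product((0, 1), repeat=4) if sum(occ) == 2]
for occ in sorted(two_particle, key=lambda o: atomic_energy(o, U, V)):
    print(occ, atomic_energy(occ, U, V))
# For U > V the four singly occupied-layer states win (E_11 = -U/2); for
# V > U the two doubly occupied-layer states win (E_20 = U/2 - V).
```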
In order to monitor Mott localization we compute the quasiparticle weight \(Z_{m}\), which measures the metallicity of the system [32]. The progressive destruction of the metallic state is described by a reduction of \(Z_{m}\) from 1 (non-interacting limit) to 0 (correlated insulator). The connected local density-density correlations \(C_{m,m^{\prime}}=\langle n_{m}n_{m^{\prime}}\rangle-\langle n_{m}\rangle\langle n_{m^{\prime}}\rangle\) can be used to study the competition between the two interaction terms and the approach to the atomic-limit insulators. The orbital symmetry implies \(C_{AA}=C_{BB}\) and \(C_{AB}=C_{BA}\). It is easy to see from the above discussion that the atomic U-MI has \(C_{AA}=0\) and \(C_{AB}=0\), while the atomic V-MI has \(C_{AA}=1\) and \(C_{AB}=-1\).

In Fig. 1 we show as dotted lines the evolution of \(Z_{A}=Z_{B}\) and of the inter- and intra-layer correlations \(C_{AA}\) and \(C_{AB}\) as functions of \(V/D\) for different values of \(U/D\). The boundaries of the U-MI and V-MI phases are marked by dotted lines with crosses in the phase diagram of Fig. 2. The cuts for \(U/D=1\) and 2 in Fig. 1 clearly show a metal-insulator transition towards the V-MI state with \(Z_{A}=0\), \(C_{AA}=1\) and \(C_{AB}=-1\). For \(U/D=3\), we find a U-MI for small \(V\), followed by a metallic region and the V-MI as \(V\) increases. For large \(U/D=4\) we have only a tiny slice of \(V\) with a metallic solution sandwiched between the two insulators. The main feature of the normal-state phase diagram, as already pointed out in Ref. [36], is the existence of a metallic region when \(U\) and \(V\) are comparable, even when they are individually large enough to drive a Mott transition (in the absence of the other). The region shrinks as we increase \(U\) and \(V\) but it does not close. In particular, for \(U=V\) we always find a metallic solution, similarly to other models where the competition between different atomic states leads to intermediate phases which can have either a metallic [37; 38] or an insulating [39] nature.

## IV Excitonic phase

We now turn to solutions where exciton condensation is allowed. The values of \(Z_{A}\), \(C_{AA}\) and \(C_{AB}\) are shown as solid lines in Fig. 1 and compared with their normal-state counterparts. Indeed, the excitonic state is stable in a wide region of parameters, and its onset makes the evolution from the U-MI to the V-MI smoother, thereby also increasing the quasiparticle weight. Reporting this information on the phase diagram of Fig. 2, where the boundaries of the excitonic region are black solid lines, we clearly see that the EC region is roughly centered around the normal-state transition towards the V-Mott state. The picture is simple: before the interaction is large enough to drive the system insulating, increasing \(V\) binds electrons and holes on different layers into excitons. However, the effect of \(U\) changes the position and the nature of the transition. For small and moderate \(U\) the EC establishes only when \(V\) prevails over \(U\) (above the \(V=U\) line, marked with a dashed grey line), in agreement with previous work [23; 24; 25]. A much less expected result emerges when we increase \(U\) and approach the boundary of the U-MI phase. Here we find that the stability region of the EC increases and, remarkably, extends into the region where \(U>V\), signaling a non-trivial intrinsic many-body effect due to the interplay of the two interactions. As a result, for \(U\gtrsim 3D\), the whole metallic region between the two Mott insulators is replaced by an excitonic state.
The positive effect of the Hubbard repulsion on the excitonic order is evident in Fig. 3(a), where we plot the order parameter \(\Delta\) as a function of \(V\) for the same cuts of Fig. 1. Here we show that the EC for large \(U\) is not only stable in a wider range of \(V\), but its amplitude is also larger. For instance, for \(U/D=4\) the maximum value of \(\Delta\) is more than twice the \(U=0\) maximum. For every value of \(U\), the transition from the metal to the EC appears of first order, while the transition from the EC to the V-MI state is associated with a continuously vanishing \(\Delta\).

Figure 1: Quasiparticle weight (top), intra-orbital density-density correlation (center) and inter-orbital density-density correlation (bottom), as functions of \(V/D\) for \(U/D=0.0\) (black), 2.0 (green), 3.0 (red) and 4.0 (blue). Dotted lines are data in the normal state, solid lines mark the same quantities in the excitonic phase.

Figure 2: \(V\) vs \(U\) ground-state phase diagram. In _yellow_ the region of the EC phase, in _orange_ the metallic phase, in _blue_ the U-Mott insulator and in _green_ the V-Mott one. The dashed lines with cross symbols indicate the two Mott-transition boundaries in the normal state, while the gray dashed line highlights the \(U=V\) line.

### Exciton Ordering and Mott physics

In this section we link the enhancement of the EC region for \(V<U\) and large \(U/D\) to the magnetic correlation between orbitals near the V-MI phase, which is enhanced by the nearby U-MI phase. The main effect of \(U\) is to drive a standard Mott localization within each layer. Hence the double occupation on each layer \(d_{m}\) is strongly reduced. For a half-filled non-magnetic system this reflects directly in the formation of local moments, as measured by \(\langle S_{m}^{z}S_{m}^{z}\rangle=\frac{1}{4}\langle(n_{m\uparrow}-n_{m\downarrow})^{2}\rangle=\frac{1}{2}(\frac{1}{2}-d_{m})\), which approaches 1/4. While the spins on the two layers are uncorrelated in the normal state, when we reach the EC region and \(U\gtrsim 3D\) the inter-layer spin correlations \(\langle S_{A}^{z}S_{B}^{z}\rangle\) become sizeable and negative, eventually approaching the limit -1/4 (see Fig. 4). The local quantum state (computed from the impurity model within DMFT) approaches, for large \(U\), \(|\psi\rangle\sim\frac{1}{\sqrt{2}}(|\uparrow_{A}\downarrow_{B}\rangle+|\uparrow_{B}\downarrow_{A}\rangle)\), for which \(\langle S_{A}^{z}S_{A}^{z}\rangle=\frac{1}{4}\) and \(\langle S_{A}^{z}S_{B}^{z}\rangle=-\frac{1}{4}\).

Note however that the interplay between Mott localization and exciton ordering is not trivial. The singlet atomic excitonic state is indeed a linear combination of \(|\uparrow_{A}\downarrow_{B}\rangle\) and \(|\uparrow_{B}\downarrow_{A}\rangle\), which are favoured by increasing \(U\), but also of the states \(|\uparrow_{A}\downarrow_{A},0\rangle\) and \(|0,\uparrow_{B}\downarrow_{B}\rangle\), which are instead depleted by \(U\). Hence, while the magnetic correlations develop approaching the U-Mott state, they first contribute to the onset of excitonic ordering, but as we exceed a given "optimal" distance from the Mott state, the excitonic order parameter (EOP) decreases, leading to a bell-shaped behavior of the order parameter. We finally notice that the spin-singlet correlations follow from our choice to study spin-singlet excitons, and we expect the same picture to hold for spin-triplet excitons.
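For the reader's convenience, the local-moment relation used above follows in one line from \(n_{m\sigma}^{2}=n_{m\sigma}\), half filling \(\langle n_{m\uparrow}+n_{m\downarrow}\rangle=1\) and \(d_{m}=\langle n_{m\uparrow}n_{m\downarrow}\rangle\): \[\langle(n_{m\uparrow}-n_{m\downarrow})^{2}\rangle=\langle n_{m\uparrow}\rangle+\langle n_{m\downarrow}\rangle-2\langle n_{m\uparrow}n_{m\downarrow}\rangle=1-2d_{m},\] so that \(\langle S_{m}^{z}S_{m}^{z}\rangle=\frac{1}{4}(1-2d_{m})=\frac{1}{2}(\frac{1}{2}-d_{m})\), which indeed saturates to 1/4 when \(U\) suppresses the double occupancy, \(d_{m}\to 0\).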
The key idea is that Mott localization within each layer leads to localized moments which are naturally prone to acquire any inter-layer correlation when exciton ordering is allowed. Finally, in the U-MI state the EOP vanishes and the \(SU(2)\times SU(2)\) spin symmetry with four independent ground states is recovered.

Figure 3: Excitonic order parameter \(\Delta_{0}\) (top), stiffness \(D_{s}\) (center) and coherence length \(\xi\) (bottom) for (from left to right) \(U/D=0.0\), 2.0, 3.0, 4.0, with the same color codes as Fig. 1. The vertical dashed lines indicate the first-order metal-EC phase transition.

Figure 4: Local magnetic moments (intra-orbital spin correlations) (top) and inter-orbital magnetic correlations (bottom). Dotted and solid lines indicate, respectively, the normal and the excitonic phase solution. Data are for \(U/D=0.0\) (black), 2.0 (green), 3.0 (red) and 4.0 (blue).

### Characterizing the Excitonic State via a Mapping onto Superconductivity

A particle-hole transformation on layer B, \[c^{\dagger}_{i\sigma B}\to c_{i\sigma B}(-1)^{\sigma} \tag{5}\] maps our model for \(\alpha=-1\) onto a two-orbital model with the same form of Eq. (1), in which the two orbitals share the same hopping \(t_{A}=t_{B}=t\) and the inter-orbital interaction becomes attractive (\(-V\)), while the intra-layer one remains repulsive. This model can indeed host an inter-orbital s-wave superconducting state, which maps onto our excitonic state via the same particle-hole transformation (5). We can exploit this mapping to compute some observables which characterize the superconducting state and allow us to better characterize the EC.

The superfluid stiffness \(D_{s}\) [40] is a crucial parameter that controls the critical temperature. It measures the coherence of the superconducting state and its rigidity against fluctuations of the phase of the order parameter. Indeed, a superconductor with small \(D_{s}\) has a small critical temperature even if the zero-temperature modulus of the order parameter is large, as happens in the strong-coupling limit of the single-orbital attractive Hubbard model [41]. In the effective model with inter-layer attraction \(-|V|\) obtained via the transformation (5), \(D_{s}\) reads \[\frac{D_{S}}{\pi e^{2}}=\langle-E_{kin}\rangle-\chi_{jj}({\bf q}\to 0, \omega=0) \tag{6}\] where \(j\) is the current operator and \(E_{kin}\) is the hopping part of the Hamiltonian. For a Bethe lattice we obtain [41] \[\frac{D_{S}^{ex}}{e^{2}\pi}=-\frac{4\alpha}{\beta}\sum_{i\omega_{n},\sigma}\int d\varepsilon\, V(\varepsilon)D(\varepsilon)|G_{AB}(\varepsilon,i\omega_{n})|^{2} \tag{7}\] where \(V(\varepsilon)=\frac{4t^{2}-\varepsilon^{2}}{2}\) is the square of the current vertex for orbital \(A\) and \(\alpha=t_{B}/t_{A}\) (see Appendix A for the derivation). We underline that the total current of the attractive model corresponds, in model (1), to the operator \[j_{ex}({\bf q},i\omega_{n})=j_{A}({\bf q},i\omega_{n})-j_{B}({\bf q},i\omega_{n}), \tag{8}\] which is clearly different from the current operator associated with the total charge. Hence, \(D_{s}\) can be considered a genuine superfluid stiffness only for the auxiliary attractive model. Yet, \(D_{s}\) also provides direct information about the coherence and stability properties, which translates into analogous information about the EC phase of our model (1).
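It is instructive to verify the sign structure generated by the transformation (5) explicitly; the short check below uses only \(n_{i\sigma B}\to 1-n_{i\sigma B}\). For the interactions, up to terms that renormalize the chemical potential, \[V\,n_{iA}n_{iB}\;\to\;V\,n_{iA}(2-n_{iB})=2V\,n_{iA}-V\,n_{iA}n_{iB},\qquad U\,n_{iB\uparrow}n_{iB\downarrow}\;\to\;U\,(1-n_{iB\uparrow})(1-n_{iB\downarrow}),\] so the inter-layer coupling becomes attractive while the intra-layer one stays repulsive. For the hopping, \(c^{\dagger}_{i\sigma B}c_{j\sigma B}\to c_{i\sigma B}c^{\dagger}_{j\sigma B}=-c^{\dagger}_{j\sigma B}c_{i\sigma B}\) for \(i\neq j\), i.e. \(t_{B}\to-t_{B}\): this is precisely why the mapping requires \(\alpha=t_{B}/t_{A}=-1\) in order to land on a model with equal hoppings \(t_{A}=t_{B}=t\).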
The coherence length \(\xi\) naturally has the same meaning in the two frameworks, namely it measures the length over which the constituents of the pair/exciton retain quantum coherence. It is given by [42; 43] \[\xi^{2}=\frac{\sum_{\bf k}|\nabla_{\bf k}F({\bf k})|^{2}}{\sum_{\bf k}|F({\bf k})|^{2}} \tag{9}\] where \[F({\bf k})=\sum_{i\omega_{n}}e^{i\omega_{n}0^{+}}G_{AB}(\epsilon_{\bf k},i\omega_{n}) \tag{10}\]

The results for \(D_{s}\) and \(\xi\) are reported in panels (b) and (c) of Fig. 3, in order to compare their behavior with the EOP. The results for \(U=0\) are qualitatively similar to those of an attractive model and reflect the BCS to Bose-Einstein condensate (BEC) crossover as a function of the coupling. Indeed both \(D_{s}\) and \(\xi\) are maximal on the weak-coupling side and decrease as the interaction grows. Increasing \(|V|\) we have a progressive reduction of the coherence length, associated with the more localized pairs/excitons characteristic of the BEC limit. \(D_{s}\) also decreases as a result of the smaller coherence of the pairs/excitons, and it actually vanishes at the continuous transition to the V-MI state. When we introduce and increase \(U\), we find an important difference on the "weak-coupling" side of the crossover. Indeed both \(D_{s}\) and \(\xi\) are depleted also close to the smallest values of \(V\) required to establish the EC. As a result, for large \(U\) the two quantities have a maximum around the \(U\sim V\) line. These results clearly confirm the \(U\)-induced localization of the excitons that we discussed above and the crucial role of the interplay between the two interactions in inducing an EC for \(V<U\).
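To make Eqs. (9)-(10) concrete, the following sketch evaluates \(\xi\) for a BCS-like stand-in \(G_{AB}(\epsilon_{\bf k},i\omega_{n})=\Delta/(\omega_{n}^{2}+\epsilon_{\bf k}^{2}+\Delta^{2})\) on a square lattice. It is a toy illustration of the formula, not our DMFT computation, and the values of \(t\), \(\Delta\), \(\beta\) and the grid sizes are arbitrary.

```python
import numpy as np

# Toy evaluation of the coherence length xi of Eqs. (9)-(10).
# Stand-in anomalous propagator (BCS-like), NOT the DMFT G_AB:
#   G_AB(eps_k, i w_n) = Delta / (w_n^2 + eps_k^2 + Delta^2)
# on a 2D square lattice with eps_k = -2 t (cos kx + cos ky).

t, Delta, beta = 0.5, 0.3, 100.0
Nk, Nw = 64, 500

k = np.linspace(-np.pi, np.pi, Nk, endpoint=False)
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))

# F(k) = T * sum_n G_AB(eps_k, i w_n); the overall T = 1/beta prefactor
# (and the e^{i w_n 0+} factor, irrelevant for this 1/w_n^2 summand)
# cancels in the ratio of Eq. (9).
F = np.zeros_like(eps)
for n in range(-Nw, Nw):
    wn = (2 * n + 1) * np.pi / beta  # fermionic Matsubara frequency
    F += Delta / (wn**2 + eps**2 + Delta**2)
F /= beta

dFx, dFy = np.gradient(F, k, k)  # finite-difference nabla_k F(k)
xi = np.sqrt((dFx**2 + dFy**2).sum() / (F**2).sum())
print(f"coherence length xi ~ {xi:.3f} (lattice units)")
```

Shrinking \(\Delta\) in this toy (the weak-coupling side) makes \(F({\bf k})\) sharper around the Fermi surface and \(\xi\) larger, mirroring the trend seen in Fig. 3.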
## V Conclusions

We used DMFT to assess the existence of an excitonic state in the zero-temperature phase diagram of a two-layer Hubbard model with intra-layer (\(U\)) and inter-layer (\(V\)) density-density repulsive interactions. Working at half filling, we can study how the excitonic long-range order is affected by Mott physics. We find a sizeable region of exciton ordering when the two interactions are comparable. The transition from the EC phase to the Mott insulating phase is continuous, while the transition from the metal to the EC is of first order. For small and intermediate \(U\), the excitonic state is present only if \(V>U\). On the other hand, for \(U\gtrsim 3D\), i.e., close to a standard Mott transition within each layer, we find an excitonic state also when \(V<U\), signaling a non-trivial interplay in which quantum fluctuations play an active role. We have indeed shown that the enlargement of the excitonic phase in the proximity of the intra-layer Mott transition can be connected with the \(U\)-driven development of local magnetic moments that, in turn, favour magnetic correlations between the two layers (singlets in our case). We expect this mechanism to be general and, in particular, to be present also for models where the exciton and the magnetic correlations have a triplet symmetry.

Exploiting a simple mapping onto a model with attractive inter-layer interactions, we have been able to further characterize the excitonic state. The coherence length, which has essentially the same interpretation as that of a superconductor, shows that the proximity to the V-driven Mott state leads to localized pairs with a very short coherence length. Analogously, the equivalent of the superconducting superfluid stiffness shows that the coherence of the EC state tends to vanish when the V-Mott insulator is reached. In other words, when we approach the Mott transition, the EC state is driven towards the strong-coupling limit, which in the superconducting language corresponds to the BEC limit [41]. We notice in passing that the BEC nature and its evolution from a BCS limit can be experimentally assessed via both thermodynamic [41] and spectral properties [44; 45]. These results further strengthen our picture, in which the charge localization induced by \(U\) is central in stabilizing the excitonic condensate for \(V<U\) and in determining its properties.

The existence of excitonic states for \(V<U\) is important because in a real bilayer system, or in a multi-orbital correlated material, we always expect \(V<U\). We notice, however, that an electron-phonon coupling of the Holstein type (coupled to the total local electron density) can effectively reduce \(U\), in principle making the effective \(U\) closer to, or even smaller than, \(V\) [46; 47; 39]. As we anticipated in the introduction, our model has been introduced as the minimal model for a bilayer system in which excitonic phases can be present and, at the same time, Mott physics is effective. The results we have obtained should be considered as a basis on which to build the understanding of richer and more involved models including, among others, different and more complex hopping structures, an energy difference and/or hybridization between the two bands, and a richer structure of the interactions.

## Acknowledgements

We acknowledge funding by MUR through the PRIN 2017 (Prot. 20172H2SC4 005) and PRIN 2020 (Prot. 2020JLZ52N 002) programs, the National Recovery and Resilience Plan (NRRP) MUR Project No. PE000023-NQSTI, and ICSC-Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, funded by the European Union - NextGenerationEU (Grant number CN00000013) - Mission 4 Component 2 Investments 1.3 and 1.4.